Dataset schema (column | type | min – max):

id | int64 | 39 – 79M
url | string lengths | 31 – 227
text | string lengths | 6 – 334k
source | string lengths | 1 – 150
categories | list lengths | 1 – 6
token_count | int64 | 3 – 71.8k
subcategories | list lengths | 0 – 30
162,017
https://en.wikipedia.org/wiki/Rayon
Rayon, also called viscose and commercialised in some countries as sabra silk or cactus silk, is a semi-synthetic fiber made from natural sources of regenerated cellulose, such as wood and related agricultural products. It has the same molecular structure as cellulose. Many types and grades of viscose fibers and films exist. Some imitate the feel and texture of natural fibers such as silk, wool, cotton, and linen. The types that resemble silk are often called artificial silk. It can be woven or knit to make textiles for clothing and other purposes. Rayon production involves solubilizing cellulose to allow turning the fibers into required form. Three common solubilization methods are: The cuprammonium process (not in use today), using ammoniacal solutions of copper salts The viscose process, the most common today, using alkali and carbon disulfide The Lyocell process, using amine oxide, which avoids producing neurotoxic carbon disulfide but is more expensive History French scientist and industrialist Hilaire de Chardonnet (1838–1924) invented the first artificial textile fiber, artificial silk. Swiss chemist Matthias Eduard Schweizer (1818–1860) discovered that cellulose dissolved in tetraamminecopper dihydroxide. Max Fremery and Johann Urban developed a method to produce carbon fibers for use in light bulbs in 1897. Production of cuprammonium rayon for textiles started in 1899 in the Vereinigte Glanzstoff Fabriken AG in Oberbruch (near Aachen). Improvement by J. P. Bemberg AG in 1904 made the artificial silk a product comparable to real silk. English chemist Charles Frederick Cross and his collaborators, Edward John Bevan and Clayton Beadle, patented their artificial silk in 1894. They named it "viscose" because its production involved the intermediacy of a highly viscous solution. Cross and Bevan took out British Patent No. 8,700, "Improvements in Dissolving Cellulose and Allied Compounds" in May, 1892. In 1893, they formed the Viscose Syndicate to grant licences and, in 1896, formed the British Viscoid Co. Ltd. The first commercial viscose rayon was produced by the UK company Courtaulds Fibres in November 1905. Courtaulds formed an American division, American Viscose (later known as Avtex Fibers), to produce their formulation in the US in 1910. The name "rayon" was adopted in 1924, with "viscose" being used for the viscous organic liquid used to make both rayon and cellophane. In Europe, though, the fabric itself became known as "viscose", which has been ruled an acceptable alternative term for rayon by the US Federal Trade Commission (FTC). Rayon was produced only as a filament fiber until the 1930s, when methods were developed to utilize "broken waste rayon" as staple fiber. Manufacturers' search for a less environmentally-harmful process for making Rayon led to the development of the lyocell method for producing Rayon. The lyocell process was developed in 1972 by a team at the now defunct American Enka fibers facility at Enka, North Carolina. In 2003, the American Association of Textile Chemists and Colorists (AATCC) awarded Neal E. Franks their Henry E. Millson Award for Invention for lyocell. In 1966–1968, D. L. Johnson of Eastman Kodak Inc. studied NMMO solutions. In the decade 1969 to 1979, American Enka tried unsuccessfully to commercialize the process. The operating name for the fibre inside the Enka organization was "Newcell", and the development was carried through pilot plant scale before the work was stopped. 
The basic process of dissolving cellulose in NMMO was first described in a 1981 patent by Mcorsley for Akzona Incorporated (the holding company of Akzo). In the 1980s the patent was licensed by Akzo to Courtaulds and Lenzing. The fibre was developed by Courtaulds Fibres under the brand name "Tencel" in the 1980s. In 1982, a 100 kg/week pilot plant was built in Coventry, UK, and production was increased tenfold (to a ton/week) in 1984. In 1988, a 25 ton/week semi-commercial production line opened at the Grimsby, UK, pilot plant. The process was first commercialized at Courtaulds' rayon factories at Mobile, Alabama (1990), and at the Grimsby plant (1998). In January 1993, the Mobile Tencel plant reached full production levels of 20,000 tons per year, by which time Courtaulds had spent £100 million and 10 years on Tencel development. Tencel revenues for 1993 were estimated as likely to be £50 million. A second plant in Mobile was planned. By 2004, production had quadrupled to 80,000 tons. Lenzing began a pilot plant in 1990, and commercial production in 1997, with 12 metric tonnes/year made in a plant in Heiligenkreuz im Lafnitztal, Austria. When an explosion hit the plant in 2003 it was producing 20,000 tonnes/year, and planning to double capacity by the end of the year. In 2004 Lenzing was producing 40,000 tons [sic, probably metric tonnes]. In 1998, Lenzing and Courtaulds reached a patent dispute settlement. In 1998 Courtaulds was acquired by competitor Akzo Nobel, which combined the Tencel division with other fibre divisions under the Accordis banner, then sold them to private equity firm CVC Partners. In 2000, CVC sold the Tencel division to Lenzing AG, which combined it with their "Lenzing Lyocell" business, but maintained the brand name Tencel. It took over the plants in Mobile and Grimsby, and by 2015 were the largest lyocell producer at 130,000 tonnes/year. Process Rayon is produced by dissolving cellulose, then converting this solution back to insoluble fibrous cellulose. Various processes have been developed for this regeneration. The most common methods for creating rayon are the cuprammonium method, the viscose method, and the lyocell process. The first two methods have been practiced for more than a century. Cuprammonium methods Cuprammonium rayon has properties similar to viscose; however, during its production, the cellulose is combined with copper and ammonia (Schweizer's reagent). Due to the detrimental environmental effects of this production method, cuprammonium rayon is no longer being produced in the United States. The process has been described as obsolete, but cuprammonium rayon is still made by one company in Japan. Tetraamminecopper(II) sulfate is also used as a solvent. Viscose method The viscose process builds on the reaction of cellulose with a strong base, followed by treatment of that solution with carbon disulfide to give a xanthate derivative. The xanthate is then converted back to a cellulose fiber in a subsequent step. The viscose method can use wood as a source of cellulose, whereas other routes to rayon require lignin-free cellulose as a starting material. The use of woody sources of cellulose makes viscose cheaper, so it was traditionally used on a larger scale than the other methods. On the other hand, the original viscose process generates large amounts of contaminated wastewater. Newer technologies use less water and have improved the quality of the wastewater. 
The raw material for viscose is primarily wood pulp (sometimes bamboo pulp), which is chemically converted into a soluble compound. It is then dissolved and forced through a spinneret to produce filaments, which are chemically solidified, resulting in fibers of nearly pure cellulose. Unless the chemicals are handled carefully, workers can be seriously harmed by the carbon disulfide used to manufacture most rayon. To prepare viscose, pulp is treated with aqueous sodium hydroxide (typically 16–19% by mass) to form "alkali cellulose", which has the approximate formula [C6H9O4−ONa]. This material is allowed to depolymerize to an extent. The rate of depolymerization (ripening or maturing) depends on temperature and is affected by the presence of various inorganic additives, such as metal oxides and hydroxides. Air also affects the ripening process, since oxygen causes depolymerization. The alkali cellulose is then treated with carbon disulfide to form sodium cellulose xanthate: Rayon fiber is produced from the ripened solutions by treatment with a mineral acid, such as sulfuric acid. In this step, the xanthate groups are hydrolyzed to regenerate cellulose and carbon disulfide: Aside from regenerated cellulose, acidification gives hydrogen sulfide (H2S), sulfur, and carbon disulfide. The thread made from the regenerated cellulose is washed to remove residual acid. The sulfur is then removed by the addition of sodium sulfide solution, and impurities are oxidized by bleaching with sodium hypochlorite solution or hydrogen peroxide solution. Production begins with processed cellulose obtained from wood pulp and plant fibers. The cellulose content in the pulp should be around 87–97%. The steps: Immersion: The cellulose is treated with caustic soda. Pressing. The treated cellulose is then pressed between rollers to remove excess liquid. The pressed sheets are crumbled or shredded to produce what is known as "white crumb". The "white crumb" is aged through exposure to oxygen. This is a depolymerization step and is avoided in the case of polynosics. The aged "white crumb" is mixed in vats with carbon disulfide to form the xanthate. This step produces "orange-yellow crumb". The "yellow crumb" is dissolved in a caustic solution to form viscose. The viscose is set to stand for a period of time, allowing it to "ripen". During this stage the molecular weight of the polymer changes. After ripening, the viscose is filtered, degassed, and then extruded through a spinneret into a bath of sulfuric acid, resulting in the formation of rayon filaments. The acid is used as a regenerating agent. It converts cellulose xanthate back to cellulose. The regeneration step is rapid, which does not allow proper orientation of cellulose molecules. So to delay the process of regeneration, zinc sulfate is used in the bath, which converts cellulose xanthate to zinc cellulose xanthate, thus providing time for proper orientation to take place before regeneration. Spinning. The spinning of viscose rayon fiber is done using a wet-spinning process. The filaments are allowed to pass through a coagulation bath after extrusion from the spinneret holes. The two-way mass transfer takes place. Drawing. The rayon filaments are stretched, in a procedure known as drawing, to straighten out the fibers. Washing. The fibers are then washed to remove any residual chemicals from them. Cutting. If filament fibers are desired, then the process ends here. The filaments are cut down when producing staple fibers. 
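In simplified form, the two reactions referred to above (xanthation of alkali cellulose, then acid regeneration) can be written as follows. This is an idealized sketch: the anhydroglucose repeating unit of alkali cellulose is written as [C6H9O4−ONa], and side reactions (which, as noted above, also yield hydrogen sulfide and elemental sulfur) are omitted, so the exact industrial stoichiometry differs.

```latex
% Simplified overall reactions for the viscose route (idealized; side reactions omitted).
\begin{align*}
  % Xanthation: alkali cellulose + carbon disulfide -> sodium cellulose xanthate
  [\mathrm{C_6H_9O_4{-}ONa}]_n + n\,\mathrm{CS_2}
    &\longrightarrow [\mathrm{C_6H_9O_4{-}O{-}CS{-}SNa}]_n \\
  % Regeneration: xanthate + sulfuric acid -> cellulose + carbon disulfide + sodium sulfate
  2\,[\mathrm{C_6H_9O_4{-}O{-}CS{-}SNa}]_n + n\,\mathrm{H_2SO_4}
    &\longrightarrow 2\,[\mathrm{C_6H_9O_4{-}OH}]_n + 2n\,\mathrm{CS_2} + n\,\mathrm{Na_2SO_4}
\end{align*}
```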
Lyocell method The lyocell process relies on dissolution of cellulose products in a solvent, N-methyl morpholine N-oxide (NMMO). The process starts with cellulose and involves dry jet-wet spinning. It was developed at the now defunct American Enka Company and Courtaulds Fibres. Lenzing's Tencel is an example of a lyocell fiber. Unlike the viscose process, the lyocell process does not use highly toxic carbon disulfide. "Lyocell" has become a genericized trademark, used to refer to the lyocell process for making cellulose fibers. The lyocell process is not yet widely used, because it is still more expensive than the viscose process. Properties Rayon is a versatile fiber and is widely claimed to have the same comfort properties as natural fibers, although the drape and slipperiness of rayon textiles are often more like nylon. It can imitate the feel and texture of silk, wool, cotton, and linen. The fibers are easily dyed in a wide range of colors. Rayon fabrics are soft, smooth, cool, comfortable, and highly absorbent, but they do not always insulate body heat, making them ideal for use in hot and humid climates, although also making their "hand" (feel) cool and sometimes almost slimy to the touch. The durability and appearance retention of regular viscose rayons are low, especially when wet; also, rayon has the lowest elastic recovery of any fiber. However, HWM rayon (high-wet-modulus rayon) is much stronger and exhibits higher durability and appearance retention. Recommended care for regular viscose rayon is dry-cleaning only. HWM rayon can be machine-washed. Regular rayon has lengthwise lines called striations and its cross-section is an indented circular shape. The cross-sections of HWM and cupra rayon are rounder. Filament rayon yarns vary from 80 to 980 filaments per yarn and vary in size from 40 to 5000 denier. Staple fibers range from 1.5 to 15 denier and are mechanically or chemically crimped. Rayon fibers are naturally very bright, but the addition of delustering pigments cuts down on this natural brightness. Structural modification The physical properties of rayon remained unchanged until the development of high-tenacity rayon in the 1940s. Further research and development led to high-wet-modulus rayon (HWM rayon) in the 1950s. Research in the UK was centred on the government-funded British Rayon Research Association. High-tenacity rayon is another modified version of viscose that has almost twice the strength of HWM. This type of rayon is typically used for industrial purposes such as tire cord. Industrial applications of rayon emerged around 1935. Substituting cotton fiber in tires and belts, industrial types of rayon developed a totally different set of properties, amongst which tensile strength and elastic modulus were paramount. Modal is a genericized trademark of Lenzing AG, used for (viscose) rayon which is stretched as it is made, aligning the molecules along the fibers. Two forms are available: "polynosics" and "high wet modulus" (HWM). High-wet-modulus rayon is a modified version of viscose that is stronger when wet. It can be mercerized like cotton. HWM rayons are also known as "polynosic". Polynosic fibers are dimensionally stable and do not shrink or get pulled out of shape when wet like many rayons. They are also wear-resistant and strong while maintaining a soft, silky feel. They are sometimes identified by the trade name Modal.
Modal is used alone or with other fibers (often cotton or spandex) in clothing and household items like pajamas, underwear, bathrobes, towels, and bedsheets. Modal can be tumble-dried without damage. The fabric has been known to pill less than cotton due to fiber properties and lower surface friction. The trademarked Modal is made by spinning beech-tree cellulose and is considered a more eco-friendly alternative to cotton, as the production process uses on average 10–20 times less water. Producers and brand names In 2018, viscose fiber production in the world was approximately 5.8 million tons, and China was the largest producer with about 65% of total global production. Trade names are used within the rayon industry to label the type of rayon in the product. Viscose rayon was first produced in Coventry, England in 1905 by Courtaulds. Bemberg is a trade name for cuprammonium rayon developed by J. P. Bemberg. Bemberg performs much like viscose but has a smaller diameter and comes closest to silk in feel. Bemberg is now only produced in Japan. The fibers are finer than viscose rayon. Modal and Tencel are widely used forms of rayon produced by Lenzing AG. Tencel, generic name lyocell, is made by a slightly different solvent recovery process, and is considered a different fiber by the US FTC. Tencel lyocell was first produced commercially by Courtaulds' Grimsby plant in England. The process, which dissolves cellulose without a chemical reaction, was developed by Courtaulds Research. Birla Cellulose is also a volume manufacturer of rayon. They have plants located in India, Indonesia and China. Accordis was a major manufacturer of cellulose-based fibers and yarns. Production facilities can be found throughout Europe, the U.S. and Brazil. Visil rayon and HOPE FR are flame retardant forms of viscose that have silica embedded in the fiber during manufacturing. North American Rayon Corporation of Tennessee produced viscose rayon until its closure in the year 2000. Indonesia is one of the largest producers of rayon in the world, and Asia Pacific Rayon (APR) of the country has an annual production capacity of 0.24 million tons. Environmental impact The biodegradability of various fibers in soil burial and sewage sludge was evaluated by Korean researchers. Rayon was found to be more biodegradable than cotton, and cotton more than acetate. The more water-repellent the rayon-based fabric, the more slowly it will decompose. Subsequent experiments have shown that wood-based fibres, like Lyocell, biodegrade much more readily than polyester. Silverfish—like the firebrat—can eat rayon, but damage was found to be minor, potentially due to the heavy, slick texture of the tested rayon. Another study states that "artificial silk [...] [was] readily eaten" by the grey silverfish. A 2014 ocean survey found that rayon contributed to 56.9% of the total fibers found in deep ocean areas, the rest being polyester, polyamides, acetate and acrylic. A 2016 study found a discrepancy in the ability to identify natural fibers in a marine environment via Fourier transform infrared spectroscopy. Later research of oceanic microfibers instead found cotton being the most frequent match (50% of all fibers), followed by other cellulosic fibers at 29.5% (e.g., rayon/viscose, linen, jute, kenaf, hemp, etc.). Further analysis of the specific contribution of rayon to ocean fibers was not performed due to the difficulty in distinguishing between natural and man-made cellulosic fibers using FTIR spectra. 
For several years, there have been concerns about links between rayon manufacturers and deforestation. As a result of these concerns, FSC and PEFC came on the same platform with CanopyPlanet to focus on these issues. CanopyPlanet subsequently started publishing a yearly Hot Button report, which puts all the man-made cellulosics manufacturers globally on the same scoring platform. The scoring from the 2020 report scores all such manufacturers on a scale of 35, the highest scores having been achieved by Birla Cellulose (33) and Lenzing (30.5). Carbon disulfide toxicity Carbon disulfide is highly toxic. It is well documented to have seriously harmed the health of rayon workers in developed countries, and emissions may also harm the health of people living near rayon plants and their livestock. Rates of disability in modern factories (mainly in China, Indonesia, and India) are unknown. This has raised ethical concerns over viscose rayon production. , production facilities located in developing countries generally do not provide environmental or worker safety data. Most global carbon disulfide emissions come from rayon production, as of 2008. , about 250 g of carbon disulfide is emitted per kilogram of rayon produced. Control technologies have enabled improved collection of carbon disulfide and reuse of it, resulting in a lower emissions of carbon disulfide. These have not always been implemented in places where it was not legally required and profitable. Carbon disulfide is volatile and is lost before the rayon gets to the consumer; the rayon itself is basically pure cellulose. Studies from the 1930s show that 30% of American rayon workers experienced significant health impacts due to carbon disulfide exposure. Courtaulds worked hard to prevent this information being published in Britain. During the Second World War, political prisoners in Nazi Germany were made to work in appalling conditions at the Phrix rayon factory in Krefeld. Nazis used forced labour to produce rayon across occupied Europe. In the 1990s, viscose rayon producers faced lawsuits for negligent environmental pollution. Emissions abatement technologies had been consistently used. Carbon-bed recovery, for instance, which reduces emissions by about 90%, was used in Europe, but not in the US, by Courtaulds. Pollution control and worker safety started to become cost-limiting factors in production. Japan has reduced carbon disulfide emissions per kilogram of viscose rayon produced (by about 16% per year), but in other rayon-producing countries, including China, emissions are uncontrolled. Rayon production is steady or decreasing except in China, where it is increasing, . Rayon production has largely moved to the developing world, especially China, Indonesia and India. Rates of disability in these factories are unknown, , and concerns for worker safety continue. Controversy Studies have found the production of rayon can be harmful to the health of factory workers. Workers in factories utilizing the viscose process may be exposed to high levels of carbon disulfide, which can cause coronary heart disease, retinal damage, behavioral changes, impaired motor function, and various fertility and hormonal effects. Related materials Related materials are not regenerated cellulose, but esters of cellulose. Nitrocellulose is a derivative of cellulose that is soluble in organic solvents. It is mainly used as an explosive or as a lacquer. Many early plastics, including celluloid, were made from nitrocellulose. 
Cellulose acetate shares many traits with viscose rayon and was formerly considered the same textile. However, rayon resists heat, while acetate is prone to melting. Acetate must be laundered with care either by hand-washing or dry cleaning, and acetate garments disintegrate when heated in a tumble dryer. The two fabrics are now required to be listed distinctly on USA garment labels. Cellophane is generally made by the viscose process, but dried into sheets instead of fibers.
Further reading
Gupta, V. B.; Kothari, V. K.; and Sengupta, A. K., eds. (1997). Manufactured Fibre Technology. Chapman & Hall, London.
For a review of all rayon production methods and markets, see Woodings, C. R., ed. (2001). Regenerated Cellulose Fibres (hardback). Woodhead Publishing Ltd.
For a description of the production method at a factory in Germany in World War II, see Agnès Humbert (tr. Barbara Mellor), Résistance: Memoirs of Occupied France, London, Bloomsbury Publishing PLC, 2008 (American title: Resistance: A Frenchwoman's Journal of the War, Bloomsbury, USA, 2008), pp. 152–155.
For a complete set of photographs of the process, see "The Story of Rayon", published by Courtaulds Ltd (1948).
Arnold Hard, the textile journalist, produced two books documenting the experiences of some of the pioneers in the early British rayon industry: Hard, Arnold H. (1933). The Romance of Rayon. Whittaker & Robinson, Manchester; and Hard, Arnold (1944). The Story of Rayon. United Trade Press Ltd, London.
Rayon
[ "Chemistry" ]
5,062
[ "Organic compounds", "Synthetic materials", "Organic polymers", "Synthetic fibers" ]
162,132
https://en.wikipedia.org/wiki/Derangement
In combinatorial mathematics, a derangement is a permutation of the elements of a set in which no element appears in its original position. In other words, a derangement is a permutation that has no fixed points. The number of derangements of a set of size n is known as the subfactorial of n, the derangement number, or the de Montmort number (after Pierre Remond de Montmort). Notations for subfactorials in common use include !n, D_n, d_n, or n¡. For n ≥ 1, the subfactorial !n equals the nearest integer to n!/e, where n! denotes the factorial of n and e is Euler's number. The problem of counting derangements was first considered by Pierre Raymond de Montmort in his Essay d'analyse sur les jeux de hazard in 1708; he solved it in 1713, as did Nicholas Bernoulli at about the same time. Example Suppose that a professor gave a test to 4 students – A, B, C, and D – and wants to let them grade each other's tests. Of course, no student should grade their own test. How many ways could the professor hand the tests back to the students for grading, such that no student receives their own test back? Out of the 24 possible permutations (4! = 24) for handing back the tests,

ABCD, ABDC, ACBD, ACDB, ADBC, ADCB,
BACD, BADC, BCAD, BCDA, BDAC, BDCA,
CABD, CADB, CBAD, CBDA, CDAB, CDBA,
DABC, DACB, DBAC, DBCA, DCAB, DCBA,

only the 9 permutations BADC, BCDA, BDAC, CADB, CDAB, CDBA, DABC, DCAB, and DCBA are derangements. In every other permutation of this 4-member set, at least one student gets their own test back. Another version of the problem arises when we ask for the number of ways n letters, each addressed to a different person, can be placed in n pre-addressed envelopes so that no letter appears in the correctly addressed envelope. Counting derangements Counting derangements of a set amounts to the hat-check problem, in which one considers the number of ways in which n hats (call them h1 through hn) can be returned to n people (P1 through Pn) such that no hat makes it back to its owner. Each person may receive any of the n − 1 hats that is not their own. Call the hat which the person P1 receives hi and consider hi's owner: Pi receives either P1's hat, h1, or some other. Accordingly, the problem splits into two possible cases: Pi receives a hat other than h1. This case is equivalent to solving the problem with n − 1 people and n − 1 hats because for each of the n − 1 people besides P1 there is exactly one hat from among the remaining n − 1 hats that they may not receive (for any Pj besides Pi, the unreceivable hat is hj, while for Pi it is h1). Another way to see this is to rename h1 to hi, where the derangement is more explicit: for any j from 2 to n, Pj cannot receive hj. Pi receives h1. In this case the problem reduces to n − 2 people and n − 2 hats, because P1 received hi and Pi received h1, effectively putting both out of further consideration. For each of the n − 1 hats that P1 may receive, the number of ways that P2, ..., Pn may all receive hats is the sum of the counts for the two cases. This gives us the solution to the hat-check problem: stated algebraically, the number !n of derangements of an n-element set is !n = (n − 1)(!(n − 1) + !(n − 2)) for n ≥ 2, where !0 = 1 and !1 = 0. The number of derangements of small lengths is given in the table below. There are various other expressions for !n, equivalent to the formula given above. These include !n = n! ∑_{i=0}^{n} (−1)^i/i! for n ≥ 0, and !n = [n!/e] = ⌊n!/e + 1/2⌋ for n ≥ 1, where [x] is the nearest integer function and ⌊x⌋ is the floor function. 
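As a numerical sanity check on the formulas just given, the short Python sketch below computes !n three ways: by brute-force enumeration of permutations, by the recurrence !n = (n − 1)(!(n − 1) + !(n − 2)), and by rounding n!/e. The function names are illustrative, not from any standard library.

```python
from itertools import permutations
from math import factorial, e

def derangements_brute_force(n):
    """Count permutations of 0..n-1 with no fixed point by direct enumeration."""
    return sum(
        all(p[i] != i for i in range(n))
        for p in permutations(range(n))
    )

def derangements_recurrence(n):
    """Compute !n from !n = (n - 1)(!(n-1) + !(n-2)), with !0 = 1 and !1 = 0."""
    if n == 0:
        return 1
    prev2, prev1 = 1, 0          # !0, !1
    for k in range(2, n + 1):
        prev2, prev1 = prev1, (k - 1) * (prev1 + prev2)
    return prev1

def derangements_nearest_integer(n):
    """Compute !n as the nearest integer to n!/e (valid for n >= 1)."""
    return round(factorial(n) / e)

# All three methods agree for small n (brute force kept small to stay fast).
for n in range(1, 10):
    assert (derangements_brute_force(n)
            == derangements_recurrence(n)
            == derangements_nearest_integer(n))
    print(n, derangements_recurrence(n))
```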
Other related formulas include !n = ⌊(n! + 1)/e⌋ for n ≥ 1, and !n = n! − ∑_{i=1}^{n} C(n, i) · !(n − i). The following recurrence also holds: !0 = 1 and !n = n · !(n − 1) + (−1)^n for n ≥ 1. Derivation by inclusion–exclusion principle One may derive a non-recursive formula for the number of derangements of an n-set, as well. For 1 ≤ k ≤ n we define S_k to be the set of permutations of n objects that fix the k-th object. Any intersection of a collection of i of these sets fixes a particular set of i objects and therefore contains (n − i)! permutations. There are C(n, i) such collections, so the inclusion–exclusion principle yields |S_1 ∪ S_2 ∪ ⋯ ∪ S_n| = ∑_{i=1}^{n} (−1)^{i+1} C(n, i) (n − i)! = ∑_{i=1}^{n} (−1)^{i+1} n!/i!, and since a derangement is a permutation that leaves none of the n objects fixed, this implies !n = n! − |S_1 ∪ S_2 ∪ ⋯ ∪ S_n| = n! ∑_{i=0}^{n} (−1)^i/i!. On the other hand, since we can choose n − i elements to be in their own place and derange the other i elements in just !i ways, by definition n! = ∑_{i=0}^{n} C(n, i) · !i. Growth of number of derangements as n approaches ∞ From !n = n! ∑_{i=0}^{n} (−1)^i/i! and the series e^x = ∑_{i=0}^{∞} x^i/i!, by substituting x = −1 one immediately obtains that lim_{n→∞} !n/n! = ∑_{i=0}^{∞} (−1)^i/i! = 1/e. This is the limit of the probability that a randomly selected permutation of a large number of objects is a derangement. The probability converges to this limit extremely quickly as n increases, which is why !n is the nearest integer to n!/e. A semi-log graph of n! and !n shows that the derangement curve lags the permutation curve by an almost constant value. More information about this calculation and the above limit may be found in the article on the statistics of random permutations. Asymptotic expansion in terms of Bell numbers An asymptotic expansion for the number of derangements in terms of Bell numbers is as follows: where is any fixed positive integer, and denotes the -th Bell number. Moreover, the constant implied by the big O-term does not exceed . Generalizations The problème des rencontres asks how many permutations of a size-n set have exactly k fixed points. Derangements are an example of the wider field of constrained permutations. For example, the ménage problem asks if n opposite-sex couples are seated man-woman-man-woman-... around a table, how many ways can they be seated so that nobody is seated next to his or her partner? More formally, given sets A and S, and some sets U and V of surjections A → S, we often wish to know the number of pairs of functions (f, g) such that f is in U and g is in V, and for all a in A, f(a) ≠ g(a); in other words, where for each f and g, there exists a derangement φ of S such that f(a) = φ(g(a)). Another generalization is the following problem: How many anagrams with no fixed letters of a given word are there? For instance, for a word made of only two different letters, say n letters A and m letters B, the answer is, of course, 1 or 0 according to whether n = m or not, for the only way to form an anagram without fixed letters is to exchange all the A with B, which is possible if and only if n = m. In the general case, for a word with n1 letters X1, n2 letters X2, ..., nr letters Xr, it turns out (after a proper use of the inclusion-exclusion formula) that the answer has the form ∫_0^∞ P_{n1}(x) P_{n2}(x) ⋯ P_{nr}(x) e^{−x} dx, for a certain sequence of polynomials Pn, where Pn has degree n. But the above answer for the case r = 2 gives an orthogonality relation, whence the Pn's are the Laguerre polynomials (up to a sign that is easily decided). In particular, for the classical derangements, one has that !n = ∫_0^∞ (x − 1)^n e^{−x} dx = Γ(n + 1, −1)/e, where Γ(s, x) is the upper incomplete gamma function. Computational complexity It is NP-complete to determine whether a given permutation group (described by a given set of permutations that generate it) contains any derangements.
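One short step makes the "converges extremely quickly" claim above precise: the partial sums of the alternating series for 1/e are exactly !n/n!, and the alternating-series error bound gives

```latex
\[
  \frac{!n}{n!} = \sum_{i=0}^{n} \frac{(-1)^i}{i!},
  \qquad
  \left|\frac{!n}{n!} - \frac{1}{e}\right| \le \frac{1}{(n+1)!},
  \qquad\text{so}\qquad
  \left| !n - \frac{n!}{e} \right| \le \frac{1}{n+1} < \frac{1}{2} \quad (n \ge 1),
\]
```

which is why !n is the nearest integer to n!/e for every n ≥ 1, and why the ratio column of the table below stabilizes at 1/e ≈ 0.36787 94411….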
scope="col" class="nowrap" | Derangements, ! scope="col" | |- | style="text-align: center" | 0 | 1 =1×100 | 1 =1×100 |  = 1 |- | style="text-align: center" | 1 | 1 =1×100 | 0 |  = 0 |- | style="text-align: center" | 2 | 2 =2×100 | 1 =1×100 |  = 0.5 |- | style="text-align: center" | 3 | 6 =6×100 | 2 =2×100 |align="right"| ≈0.33333 33333 |- | style="text-align: center" | 4 | 24 =2.4×101 | 9 =9×100 |  = 0.375 |-style="border-top:2px solid #aaaaaa;" | style="text-align: center" | 5 | 120 =1.20×102 | 44 =4.4×101 |align="right"| ≈0.36666 66667 |- | style="text-align: center" | 6 | 720 =7.20×102 | 265 =2.65×102 |align="right"| ≈0.36805 55556 |- | style="text-align: center" | 7 | 5,040 =5.04×103 | 1,854 ≈1.85×103 |align="right"| ≈0.36785,71429 |- | style="text-align: center" | 8 | 40,320 ≈4.03×104 | 14,833 ≈1.48×104 |align="right"| ≈0.36788 19444 |- | style="text-align: center" | 9 | 362,880 ≈3.63×105 | 133,496 ≈1.33×105 |align="right"| ≈0.36787 91887 |-style="border-top:2px solid #aaaaaa;" | style="text-align: center" | 10 | 3,628,800 ≈3.63×106 | 1,334,961 ≈1.33×106 |align="right"| ≈0.36787 94643 |- | style="text-align: center" | 11 | 39,916,800 ≈3.99×107 | 14,684,570 ≈1.47×107 |align="right"| ≈0.36787 94392 |- | style="text-align: center" | 12 | 479,001,600 ≈4.79×108 | 176,214,841 ≈1.76×108 |align="right"| ≈0.36787 94413 |- | style="text-align: center" | 13 | 6,227,020,800 ≈6.23×109 | 2,290,792,932 ≈2.29×109 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 14 | 87,178,291,200 ≈8.72×1010 | 32,071,101,049 ≈3.21×1010 |align="right"| ≈0.36787 94412 |-style="border-top:2px solid #aaaaaa;" | style="text-align: center" | 15 |style="font-size:80%;"| 1,307,674,368,000 ≈1.31×1012 |style="font-size:80%;"| 481,066,515,734 ≈4.81×1011 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 16 |style="font-size:80%;"| 20,922,789,888,000 ≈2.09×1013 |style="font-size:80%;"| 7,697,064,251,745 ≈7.70×1012 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 17 |style="font-size:80%;"| 355,687,428,096,000 ≈3.56×1014 |style="font-size:80%;"| 130,850,092,279,664 ≈1.31×1014 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 18 |style="font-size:80%;"| 6,402,373,705,728,000 ≈6.40×1015 |style="font-size:80%;"| 2,355,301,661,033,953 ≈2.36×1015 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 19 |style="font-size:80%;"| 121,645,100,408,832,000 ≈1.22×1017 |style="font-size:80%;"| 44,750,731,559,645,106 ≈4.48×1016 |align="right"| ≈0.36787 94412 |-style="border-top:2px solid #aaaaaa;" | style="text-align: center" | 20 |style="font-size:80%;"| 2,432,902,008,176,640,000 ≈2.43×1018 |style="font-size:80%;"| 895,014,631,192,902,121 ≈8.95×1017 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 21 |style="font-size:80%;"| 51,090,942,171,709,440,000 ≈5.11×1019 |style="font-size:80%;"| 18,795,307,255,050,944,540 ≈1.88×1019 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 22 |style="font-size:80%;"| 1,124,000,727,777,607,680,000 ≈1.12×1021 |style="font-size:80%;"| 413,496,759,611,120,779,881 ≈4.13×1020 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 23 |style="font-size:80%;"| 25,852,016,738,884,976,640,000 ≈2.59×1022 |style="font-size:80%;"| 9,510,425,471,055,777,937,262 ≈9.51×1021 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 24 |style="font-size:80%;"| 620,448,401,733,239,439,360,000 ≈6.20×1023 |style="font-size:80%;"| 228,250,211,305,338,670,494,289 ≈2.28×1023 |align="right"| ≈0.36787 94412 
|-style="border-top:2px solid #aaaaaa;" | style="text-align: center" | 25 |style="font-size:80%;"| 15,511,210,043,330,985,984,000,000 ≈1.55×1025 |style="font-size:80%;"| 5,706,255,282,633,466,762,357,224 ≈5.71×1024 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 26 |style="font-size:80%;"| 403,291,461,126,605,635,584,000,000 ≈4.03×1026 |style="font-size:80%;"| 148,362,637,348,470,135,821,287,825 ≈1.48×1026 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 27 |style="font-size:80%;"| 10,888,869,450,418,352,160,768,000,000 ≈1.09×1028 |style="font-size:80%;"| 4,005,791,208,408,693,667,174,771,274 ≈4.01×1027 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 28 |style="font-size:80%;"| 304,888,344,611,713,860,501,504,000,000 ≈3.05×1029 |style="font-size:80%;"| 112,162,153,835,443,422,680,893,595,673 ≈1.12×1029 |align="right"| ≈0.36787 94412 |- | style="text-align: center" | 29 |style="font-size:80%;"| 8,841,761,993,739,701,954,543,616,000,000 ≈8.84×1030 |style="font-size:80%;"| 3,252,702,461,227,859,257,745,914,274,516 ≈3.25×1030 |align="right"| ≈0.36787 94412 |-style="border-top:2px solid #aaaaaa;" | style="text-align: center" | 30 |style="font-size:80%;"| 265,252,859,812,191,058,636,308,480,000,000 ≈2.65×1032 |style="font-size:80%;"| 97,581,073,836,835,777,732,377,428,235,481 ≈9.76×1031 |align="right"| ≈0.36787 94412 |} Footnotes References External links Permutations Fixed points (mathematics) Integer sequences es:Subfactorial
Derangement
[ "Mathematics" ]
4,358
[ "Sequences and series", "Functions and mappings", "Integer sequences", "Mathematical structures", "Mathematical analysis", "Permutations", "Recreational mathematics", "Fixed points (mathematics)", "Mathematical objects", "Combinatorics", "Topology", "Mathematical relations", "Numbers", "Nu...
162,143
https://en.wikipedia.org/wiki/Keloid
Keloid, also known as keloid disorder and keloidal scar, is the formation of a type of scar which, depending on its maturity, is composed mainly of either type III (early) or type I (late) collagen. It is a result of an overgrowth of granulation tissue (collagen type III) at the site of a healed skin injury which is then slowly replaced by collagen type I. Keloids are firm, rubbery lesions or shiny, fibrous nodules, and can vary from pink to the color of the person's skin or red to dark brown in color. A keloid scar is benign and not contagious, but sometimes accompanied by severe itchiness, pain, and changes in texture. In severe cases, it can affect movement of skin. In the United States, keloid scars are seen 15 times more frequently in people of sub-Saharan African descent than in people of European descent. There is a higher tendency to develop a keloid among those with a family history of keloids and people between the ages of 10 and 30 years. Keloids should not be confused with hypertrophic scars, which are raised scars that do not grow beyond the boundaries of the original wound. Signs and symptoms Keloids expand in claw-like growths over normal skin. They have the capability to hurt with a needle-like pain or to itch, the degree of sensation varying from person to person. Keloids form within scar tissue. Collagen, used in wound repair, tends to overgrow in this area, sometimes producing a lump many times larger than that of the original scar. They can also range in color from pink to red. Although they usually occur at the site of an injury, keloids can also arise spontaneously. They can occur at the site of a piercing and even from something as simple as a pimple or scratch. They can occur as a result of severe acne or chickenpox scarring, infection at a wound site, repeated trauma to an area, excessive skin tension during wound closure or a foreign body in a wound. Keloids can sometimes be sensitive to chlorine. If a keloid appears when someone is still growing, the keloid can continue to grow as well. Location Keloids can develop in any place where skin trauma has occurred. They can be the result of pimples, insect bites, scratching, burns, or other skin injury. Keloid scars can develop after surgery. They are more common in some sites, such as the central chest (from a sternotomy), the back and shoulders (usually resulting from acne), and the ear lobes (from ear piercings). They can also occur on body piercings. The most common spots are earlobes, arms, pelvic region, and over the collar bone. Cause Most skin injury types can contribute to scarring. This includes burns, acne scars, chickenpox scars, ear piercing, scratches, surgical incisions, and vaccination sites. According to the US National Center for Biotechnology Information, keloid scarring is common in young people between the ages of 10 and 20. Studies have shown that those with darker complexions are at a higher risk of keloid scarring as a result of skin trauma. They occur in 15–20% of individuals with sub-Saharan African, Asian or Latino ancestry, significantly less in those of a Caucasian background. Although it was previously believed that people with albinism did not get keloids, a recent report described the incidence of keloids in Africans with albinism. Keloids tend to have a genetic component, which means one is more likely to have keloids if one or both of their parents has them. 
However, no single gene that causes keloid scarring has yet been identified, but several susceptibility loci have been discovered, most notably on chromosome 15. Genetics People who have ancestry from Sub-Saharan Africa, Asia, or Latin America are more likely to develop a keloid. Among ethnic Chinese in Asia, the keloid is the most common skin condition. In the United States, keloids are more common in African Americans and Hispanic Americans than European Americans. Those who have a family history of keloids are also susceptible since about 1/3 of people who get keloids have a first-degree blood relative (mother, father, sister, brother, or child) who also gets keloids. This family trait is most common in people of African and/or Asian descent. Development of keloids among twins also lends credibility to the existence of a genetic susceptibility to develop keloids. Marneros et al. reported four sets of identical twins with keloids; Ramakrishnan et al. also described a pair of twins who developed keloids at the same time after vaccination. Case series have reported clinically severe forms of keloids in individuals with a positive family history and black African ethnic origin. Pathology Histologically, keloids are fibrotic tumors characterized by a collection of atypical fibroblasts with excessive deposition of extracellular matrix components, especially collagen, fibronectin, elastin, and proteoglycans. Generally, they contain relatively acellular centers and thick, abundant collagen bundles that form nodules in the deep dermal portion of the lesion. Keloids present a therapeutic challenge that must be addressed, as these lesions can cause significant pain, pruritus (itching), and physical disfigurement. They may not improve in appearance over time and can limit mobility if located over a joint. Keloids affect all sexes equally, although the incidence in young female patients has been reported to be higher than in young males, probably reflecting the greater frequency of earlobe piercing among women. The frequency of occurrence is 15 times higher in highly pigmented people. People of African descent have increased risk of keloid occurrences. Treatments Prevention of keloid scars in patients with a known predisposition to them includes preventing unnecessary trauma or surgery (such as ear piercing and elective mole removal) whenever possible. Any skin problems in predisposed individuals (e.g., acne, infections) should be treated as early as possible to minimize areas of inflammation. Treatments (both preventive and therapeutic) available are pressure therapy, silicone gel sheeting, intra-lesional triamcinolone acetonide (TAC), cryosurgery (freezing), radiation, laser therapy (pulsed dye laser), interferon (IFN), fluorouracil (5-FU) and surgical excision as well as a multitude of extracts and topical agents. Appropriate treatment of a keloid scar is age-dependent: radiotherapy, anti-metabolites and corticosteroids would not be recommended to be used in children, in order to avoid harmful side effects, like growth abnormalities. In adults, corticosteroids combined with 5-FU and PDL in a triple therapy enhance results and diminish side effects. Cryotherapy (or cryosurgery) refers to the application of extreme cold to treat keloids. This treatment method is easy to perform, effective, safe, and has the least chance of recurrence. Surgical excision is currently still the most common treatment for a significant number of keloid lesions. 
However, when used as the solitary form of treatment there is a large recurrence rate of between 70 and 100%. It has also been known to cause a larger lesion formation on recurrence. While not always successful alone, surgical excision when combined with other therapies dramatically decreases the recurrence rate. Examples of these therapies include but are not limited to radiation therapy, pressure therapy and laser ablation. Pressure therapy following surgical excision has shown promising results, especially in keloids of the ear and earlobe. The mechanism of how exactly pressure therapy works is unknown at present, but many patients with keloid scars and lesions have benefited from it. Intralesional injection with a corticosteroid such as triamcinolone acetonide (Kenalog) does appear to aid in the reduction of fibroblast activity, inflammation and pruritus. Tea tree oil, salt or other topical oil has no effect on keloid lesions. A 2022 systematic review included multiple studies on laser therapy for treating keloid scars. There was not enough evidence for the review authors to determine if laser therapy was more effective than other treatments. They were also unable to conclude if laser therapy leads to more harm than benefits compared with no treatment or different kinds of treatment. Another 2022 systematic review compared silicone gel sheeting with no treatment, treatment with non-silicone gel sheeting and treatment with intralesional injections of triamcinolone acetonide. The authors only found two small studies (36 participants in total) that compared these treatment options so were unable to determine which (if any) was more effective. Epidemiology Persons of any age can develop a keloid. Children under 10 are less likely to develop keloids, even from ear piercing. Keloids may also develop from pseudofolliculitis barbae; continued shaving when one has razor bumps will cause irritation to the bumps, infection, and over time keloids will form. Persons with razor bumps are advised to stop shaving in order for the skin to repair itself before undertaking any form of hair removal. The tendency to form keloids is speculated to be hereditary. Keloids can tend to appear to grow over time without even piercing the skin, almost acting out a slow tumorous growth; the reason for this tendency is unknown. Extensive burns, either thermal or radiological, can lead to unusually large keloids; these are especially common in firebombing casualties, and were a signature effect of the atomic bombings of Hiroshima and Nagasaki. The true incidence and prevalence of keloid in the United States is not known. Indeed, there has never been a population study to assess the epidemiology of this disorder. In his 2001 publication, Marneros stated that “reported incidence of keloids in the general population ranges from a high of 16% among the adults in the Democratic Republic of the Congo to a low of 0.09% in England,” quoting from Bloom's 1956 publication on heredity of keloids. Clinical observations show that the disorder is more common among sub-Saharan Africans, African Americans and Asians, with unreliable and very wide estimated prevalence rates ranging from 4.5 to 16%. History Keloids were described by Egyptian surgeons around 1700 BCE, recorded in the Smith papyrus, regarding surgical techniques. Baron Jean-Louis Alibert (1768–1837) identified the keloid as an entity in 1806. He called them , later changing the name to to avoid confusion with cancer. 
The word is derived from the Ancient Greek χηλή (chēlē), meaning "crab pincers", and the suffix -oid, meaning "like". In the 19th century it was known as the "Keloid of Alibert" as opposed to "Addison's keloid" (Morphea). The famous American Civil War-era photograph "Whipped Peter" depicts an escaped former slave with extensive keloid scarring as a result of numerous brutal beatings from his former overseer. Intralesional corticosteroid injections were introduced as a treatment in the mid-1960s as a method to attenuate scarring. Pressure therapy has been used for prophylaxis and treatment of keloids since the 1970s. Topical silicone gel sheeting was introduced as a treatment in the early 1980s.
Keloid
[ "Chemistry", "Materials_science" ]
2,447
[ "Radiation effects", "Radiation health effects", "Radioactivity" ]
162,183
https://en.wikipedia.org/wiki/GPIB
IEEE 488, also known as HP-IB (Hewlett-Packard Interface Bus) and generically as GPIB (General Purpose Interface Bus), is a short-range digital communications 8-bit parallel multi-master interface bus specification developed by Hewlett-Packard. It subsequently became the subject of several standards. Although the bus was originally created to connect together automated test equipment, it also had some success as a peripheral bus for early microcomputers, notably the Commodore PET. Newer standards have largely replaced IEEE 488 for computer use, but it is still used by test equipment. History In the 1960s, Hewlett-Packard (HP) manufactured various automated test and measurement instruments, such as digital multimeters and logic analyzers. They developed the HP Interface Bus (HP-IB) to enable easier interconnection between instruments and controllers (computers and other instruments). This part of HP was later (c. 1999) spun off as Agilent Technologies, and in 2014 Agilent's test and measurement division was spun off as Keysight Technologies. The bus was relatively easy to implement using the technology at the time, using a simple parallel bus and several individual control lines. For example, the HP 59501 Power Supply Programmer and HP 59306A Relay Actuator were both relatively simple HP-IB peripherals implemented in TTL, without the need for a microprocessor. HP licensed the HP-IB patents for a nominal fee to other manufacturers. It became known as the General Purpose Interface Bus (GPIB), and became a de facto standard for automated and industrial instrument control. As GPIB became popular, it was formalized by various standards organizations. In 1975, the IEEE standardized the bus as Standard Digital Interface for Programmable Instrumentation, IEEE 488; it was revised in 1978 (producing IEEE 488-1978). The standard was revised in 1987, and redesignated as IEEE 488.1 (IEEE 488.1-1987). These standards formalized the mechanical, electrical, and basic protocol parameters of GPIB, but said nothing about the format of commands or data. In 1987, IEEE introduced Standard Codes, Formats, Protocols, and Common Commands, IEEE 488.2. It was revised in 1992. IEEE 488.2 provided for basic syntax and format conventions, as well as device-independent commands, data structures, error protocols, and the like. IEEE 488.2 built on IEEE 488.1 without superseding it; equipment can conform to IEEE 488.1 without following IEEE 488.2. While IEEE 488.1 defined the hardware and IEEE 488.2 defined the protocol, there was still no standard for instrument-specific commands. Commands to control the same class of instrument, e.g., multimeters, varied between manufacturers and even models. The United States Air Force, and later Hewlett-Packard, recognized this as a problem. In 1989, HP developed their Test Measurement Language (TML) or Test and Measurement Systems Language (TMSL) which was the forerunner to Standard Commands for Programmable Instrumentation (SCPI), introduced as an industry standard in 1990. SCPI added standard generic commands, and a series of instrument classes with corresponding class-specific commands. SCPI mandated the IEEE 488.2 syntax, but allowed other (non-IEEE 488.1) physical transports. The IEC developed their own standards in parallel with the IEEE, with IEC 60625-1 and IEC 60625-2 (IEC 625), later replaced by IEC 60488-2. National Instruments introduced a backward-compatible extension to IEEE 488.1, originally known as HS-488. 
It increased the maximum data rate to 8 Mbyte/s, although the rate decreases as more devices are connected to the bus. This was incorporated into the standard in 2003 (IEEE 488.1-2003), over HP's objections. In 2004, the IEEE and IEC combined their respective standards into "Dual Logo" IEEE/IEC standards: IEC 60488-1, Standard for Higher Performance Protocol for the Standard Digital Interface for Programmable Instrumentation – Part 1: General, which replaces IEEE 488.1/IEC 60625-1, and IEC 60488-2, Part 2: Codes, Formats, Protocols and Common Commands, which replaces IEEE 488.2/IEC 60625-2. Characteristics IEEE 488 is an 8-bit, electrically parallel bus which employs sixteen signal lines — eight used for bi-directional data transfer, three for handshake, and five for bus management — plus eight ground return lines. The bus supports 31 five-bit primary device addresses numbered from 0 to 30, allocating a unique address to each device on the bus. The standard allows up to 15 devices to share a single physical bus of up to 20 meters total cable length. The physical topology can be linear or star (forked). Active extenders allow longer buses, with up to 31 devices theoretically possible on a logical bus. Control and data transfer functions are logically separated; a controller can address one device as a "talker" and one or more devices as "listeners" without having to participate in the data transfer. It is possible for multiple controllers to share the same bus, but only one can be the "Controller In Charge" at a time. In the original protocol, transfers use an interlocked, three-wire ready–valid–accepted handshake. The maximum data rate is about one megabyte per second. The later HS-488 extension relaxes the handshake requirements, allowing up to 8 Mbyte/s. The slowest participating device determines the speed of the bus. Connectors IEEE 488 specifies a 24-pin Amphenol-designed micro ribbon connector. Micro ribbon connectors have a D-shaped metal shell, but are larger than D-subminiature connectors. They are sometimes called "Centronics connectors" after the 36-pin micro ribbon connector Centronics used for their printers. One unusual feature of IEEE 488 connectors is they commonly use a "double-headed" design, with male on one side, and female on the other. This allows stacking connectors for easy daisy-chaining. Mechanical considerations limit the number of stacked connectors to four or fewer, although a workaround involving physically supporting the connectors may be able to get around this. They are held in place by screws, either 6-32 UNC (now largely obsolete) or metric M3.5×0.6 threads. Early versions of the standard suggested that metric screws should be blackened to avoid confusion with the incompatible UTS threads. However, by the 1987 revision this was no longer considered necessary because of the prevalence of metric threads. The IEC 60625 standard prescribes the use of 25-pin D-subminiature connectors (the same as used for the parallel port on IBM PC compatibles). This connector did not gain significant market acceptance against the established 24-pin connector. Use as a computer interface HP's designers did not specifically plan for IEEE 488 to be a peripheral interface for general-purpose computers; the focus was on instrumentation. But when HP's early microcomputers needed an interface for peripherals (disk drives, tape drives, printers, plotters, etc.), HP-IB was readily available and easily adapted to the purpose. 
HP computer products which used HP-IB included the HP Series 80, HP 9800 series, the HP 2100 series, and the HP 3000 series. HP computer peripherals which did not utilize the RS-232 communication interface often used HP-IB including disc systems like the HP 7935. Some of HP's advanced pocket calculators of the 1980s, such as the HP-41 and HP-71B series, also had IEEE 488 capabilities, via an optional HP-IL/HP-IB interface module. Other manufacturers adopted GPIB for their computers as well, such as with the Tektronix 405x line. The Commodore PET (introduced 1977) range of personal computers connected their peripherals using the IEEE 488 bus, but with a non-standard card edge connector. Commodore's following 8-bit machines utilized a serial bus whose protocol was based on IEEE 488. Commodore marketed an IEEE 488 cartridge for the VIC-20 and the Commodore 64. Several third party suppliers of Commodore 64 peripherals made a cartridge for the C64 that provided an IEEE 488-derived interface on a card edge connector similar to that of the PET series. Eventually, faster, more complete standards such as SCSI superseded IEEE 488 for peripheral access. Comparison with other interface standards Electrically, IEEE 488 used a hardware interface that could be implemented with some discrete logic or with a microcontroller. The hardware interface enabled devices made by different manufacturers to communicate with a single host. Since each device generated the asynchronous handshaking signals required by the bus protocol, slow and fast devices could be mixed on one bus. The data transfer is relatively slow, so transmission line issues such as impedance matching and line termination are ignored. There was no requirement for galvanic isolation between the bus and devices, which created the possibility of ground loops causing extra noise and loss of data. Physically, the IEEE 488 connectors and cabling were rugged and held in place by screws. While physically large and sturdy connectors were an advantage in industrial or laboratory set ups, the size and cost of the connectors was a liability in applications such as personal computers. Although the electrical and physical interfaces were well defined, there was not an initial standard command set. Devices from different manufacturers might use different commands for the same function. Some aspects of the command protocol standards were not standardized until Standard Commands for Programmable Instruments (SCPI) in 1990. Implementation options (e.g. end of transmission handling) can complicate interoperability in pre-IEEE 488.2 devices. More recent standards such as USB, FireWire, and Ethernet take advantage of declining costs of interface electronics to implement more complex standards providing higher bandwidth. The multi-conductor (parallel data) connectors and shielded cable were inherently more costly than the connectors and cabling that could be used with serial data transfer standards such as RS-232, RS-485, USB, FireWire or Ethernet. Very few mass-market personal computers or peripherals (such as printers or scanners) implemented IEEE 488.
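The layering described above (IEEE 488.1 for the electrical and byte-transport layer, IEEE 488.2 for message syntax and common commands such as *IDN?, SCPI for instrument-class commands) is what a host program typically sees today through a VISA library. Below is a minimal sketch using the third-party PyVISA package; the GPIB address (board 0, primary address 22) and the measurement command are illustrative assumptions, since the actual command set depends on the instrument.

```python
# Minimal sketch of talking to a GPIB instrument through the VISA layer.
# Assumes the PyVISA package and a VISA backend are installed, and that an
# instrument is listening at primary address 22 on GPIB board 0 (both made up here).
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("GPIB0::22::INSTR")   # 5-bit primary address in the range 0-30

# IEEE 488.2 common command: every conforming device answers *IDN? with an ID string.
print(inst.query("*IDN?"))

# SCPI-style, instrument-class command (here: asking a DMM for a DC voltage reading).
# The exact command is instrument-dependent; ":MEAS:VOLT:DC?" is only an example.
print(inst.query(":MEAS:VOLT:DC?"))

inst.close()
rm.close()
```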
GPIB
[ "Technology", "Engineering" ]
2,273
[ "Computer standards", "IEEE standards", "Electronic test equipment", "Measuring instruments" ]
162,190
https://en.wikipedia.org/wiki/Computer%20telephony%20integration
Computer telephony integration, also called computer–telephone integration or CTI, is a common name for any technology that allows interactions on a telephone and a computer to be coordinated. The term is predominantly used to describe desktop-based interaction for helping users be more efficient, though it can also refer to server-based functionality such as automatic call routing. See also Automatic number identification (ANI) Automatic call distributor Dialed Number Identification Service (DNIS) PhoneValet Message Center Predictive dialer Screen pop Telephony Application Programming Interface (TAPI) Telephony Server Application Programming Interface (TSAPI) Computer-supported telecommunications applications (CSTA) Multi-Vendor Integration Protocol References External links User Agent CSTA (uaCSTA) - TR/87 - ECMA International Telephone service enhanced features
Computer telephony integration
[ "Technology" ]
163
[ "Information technology", "Computer telephony integration" ]
162,197
https://en.wikipedia.org/wiki/Duralumin
Duralumin (also called duraluminum, duraluminium, duralum, dural(l)ium, or dural) is a trade name for one of the earliest types of age-hardenable aluminium–copper alloys. The term is a combination of Düren and aluminium. Its use as a trade name is obsolete. Today the term mainly refers to aluminium–copper alloys, designated as the 2000 series by the international alloy designation system (IADS), as with 2014 and 2024 alloys used in airframe fabrication. Duralumin was developed in 1909 in Germany. Duralumin is known for its strength and hardness, making it suitable for various applications, especially in the aviation and aerospace industry. However, it is susceptible to corrosion, which can be mitigated by using alclad-duralum materials. History Duralumin was developed by the German metallurgist Alfred Wilm at a private military-industrial laboratory (Center for Scientific-Technical Research) in Neubabelsberg. In 1903, Wilm discovered that after quenching, an aluminium alloy containing 4% copper would harden when left at room temperature for several days. Further improvements led to the introduction of duralumin in 1909. The name, originally a trade mark of Dürener Metallwerke AG, which acquired Wilm's patents and commercialized the material, is mainly used in popular science to describe all alloys of the Al–Cu system, or '2000' series, as designated through the international alloy designation system originally created in 1970 by the Aluminum Association. Composition In addition to aluminium, the main materials in duralumin are copper, manganese and magnesium. For instance, duralumin 2024 consists of 91-95% aluminium, 3.8-4.9% copper, 1.2-1.8% magnesium, 0.3-0.9% manganese, <0.5% iron, <0.5% silicon, <0.25% zinc, <0.15% titanium, <0.10% chromium and no more than 0.15% of other elements together. Although the addition of copper improves strength, it also makes these alloys susceptible to corrosion. Corrosion resistance can be greatly enhanced by the metallurgical bonding of a high-purity aluminium surface layer, referred to as alclad-duralum. Alclad materials are commonly used in the aircraft industry to this day. Microstructure Duralumin's remarkable strength and durability stem from its unique microstructure, which is significantly influenced by heat treatment processes. Initial Microstructure Solid Solution: After initial solidification, duralumin exists as a single-phase solid solution, primarily composed of aluminium atoms with dispersed copper, magnesium, and other alloying elements. This initial state is relatively soft and ductile. Heat Treatment and Microstructural Changes Solution Annealing: Duralumin undergoes solution annealing, a high-temperature heat treatment process that dissolves the alloying elements into the aluminium matrix, creating a homogeneous solid solution. Quenching: Rapid cooling (quenching) after solution annealing freezes the high-temperature solid solution, preventing the precipitation of strengthening phases. Aging (Precipitation Hardening): During aging, the supersaturated solid solution becomes unstable. Fine precipitates, such as CuAl2 and Mg2Si, form within the aluminium matrix. These precipitates act as obstacles to dislocation movement, significantly increasing the alloy's strength and hardness. Final Microstructure The final microstructure of duralumin consists of a predominantly aluminium matrix, finely dispersed precipitates (CuAl2, Mg2Si), and grain boundaries.
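As a small worked illustration of the 2024 composition ranges quoted in the Composition section above, the following sketch checks a measured composition in weight per cent against those limits; the sample values and helper names are invented for the example, and this is not a substitute for the actual alloy specification.

```python
# Illustrative only: verify a composition (wt%) against the 2024-type ranges
# quoted in the text.
SPEC_RANGES = {"Cu": (3.8, 4.9), "Mg": (1.2, 1.8), "Mn": (0.3, 0.9)}   # (min, max)
SPEC_MAXIMA = {"Fe": 0.5, "Si": 0.5, "Zn": 0.25, "Ti": 0.15, "Cr": 0.10}

def within_2024_ranges(sample: dict) -> bool:
    in_ranges = all(lo <= sample.get(el, 0.0) <= hi
                    for el, (lo, hi) in SPEC_RANGES.items())
    under_max = all(sample.get(el, 0.0) <= mx
                    for el, mx in SPEC_MAXIMA.items())
    return in_ranges and under_max

# A hypothetical measurement; the balance (roughly 93%) is aluminium.
print(within_2024_ranges({"Cu": 4.4, "Mg": 1.5, "Mn": 0.6, "Fe": 0.3, "Si": 0.2}))  # True
```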
The size, distribution, and type of precipitates play a crucial role in determining the mechanical properties of duralumin. Optimal aging conditions lead to the formation of finely dispersed precipitates, resulting in peak strength and hardness. Applications Aluminium alloyed with copper (Al-Cu alloys), which can be precipitation hardened, are designated by the International Alloy Designation System as the 2000 series. Typical uses for wrought Al-Cu alloys include: 2011: Wire, rod, and bar for screw machine products. Applications where good machinability and good strength are required. 2014: Heavy-duty forgings, plate, and extrusions for aircraft fittings, wheels, and major structural components, space booster tankage and structure, truck frame and suspension components. Applications requiring high strength and hardness including service at elevated temperatures. 2017 or Avional (France): Around 1% Si. Good machinability. Acceptable resistance to corrosion in air and mechanical properties. Also called AU4G in France. Used for aircraft applications between the wars in France and Italy. Also saw some use in motor-racing applications from the 1960s, as it is a tolerant alloy that could be press-formed with relatively unsophisticated equipment. 2024: Aircraft structures, rivets, hardware, truck wheels, screw machine products, and other structural applications. 2036: Sheet for auto body panels 2048: Sheet and plate in structural components for aerospace application and military equipment Aviation German scientific literature openly published information about duralumin, its composition and heat treatment, before the outbreak of World War I in 1914. Despite this, use of the alloy outside Germany did not occur until after fighting ended in 1918. Reports of German use during World War I, even in technical journals such as Flight, could still mis-identify its key alloying component as magnesium rather than copper. Engineers in the UK showed little interest in duralumin until after the war. The earliest known attempt to use duralumin for a heavier-than-air aircraft structure occurred in 1916, when Hugo Junkers first introduced its use in the airframe of the Junkers J 3, a single-engined monoplane "technology demonstrator" that marked the first use of the Junkers trademark duralumin corrugated skinning. The Junkers company completed only the covered wings and tubular fuselage framework of the J 3 before abandoning its development. The slightly later, solely IdFlieg-designated Junkers J.I armoured sesquiplane of 1917, known to the factory as the Junkers J 4, had its all-metal wings and horizontal stabilizer made in the same manner as the J 3's wings had been, like the experimental and airworthy all-duralumin Junkers J 7 single-seat fighter design, which led to the Junkers D.I low-wing monoplane fighter, introducing all-duralumin aircraft structural technology to German military aviation in 1918. Its first use in aerostatic airframes came in rigid airship frames, eventually including all those of the "Great Airship" era of the 1920s and 1930s: the British-built R100, the German passenger Zeppelins LZ 127 Graf Zeppelin, LZ 129 Hindenburg, LZ 130 Graf Zeppelin II, and the U.S. Navy airships USS Los Angeles (ZR-3, ex-LZ 126), USS Akron (ZRS-4) and USS Macon (ZRS-5). Bicycles Duralumin was used to manufacture bicycle components and framesets from the 1930s to 1990s. 
Several companies in Saint-Étienne, France stood out for their early, innovative adoption of duralumin: in 1932, Verot et Perrin developed the first light alloy crank arms; in 1934, Haubtmann released a complete crankset; from 1935 on, Duralumin freewheels, derailleurs, pedals, brakes and handlebars were manufactured by several companies. Complete framesets followed quickly, including those manufactured by: Mercier (and Aviac and other licensees) with their popular Meca Dural family of models, the Pelissier brothers and their race-worthy La Perle models, and Nicolas Barra and his exquisite mid-twentieth century “Barralumin” creations. Other names that come up here also included: Pierre Caminade, with his beautiful Caminargent creations and their exotic octagonal tubing, and also Gnome et Rhône, with its deep heritage as an aircraft engine manufacturer that also diversified into motorcycles, velomotors and bicycles after World War Two. Mitsubishi Heavy Industries, which was prohibited from producing aircraft during the American occupation of Japan, manufactured the “cross” bicycle out of surplus wartime duralumin in 1946. The “cross” was designed by Kiro Honjo, a former aircraft designer responsible for the Mitsubishi G4M. Duralumin use in bicycle manufacturing faded in the 1970s and 1980s. Vitus nonetheless released the venerable “979” frameset in 1979, a “Duralinox” model that became an instant classic among cyclists. The Vitus 979 was the first production aluminium frameset whose thin-wall 5083/5086 tubing was slip-fit and then glued together using a dry heat-activated epoxy. The result was an extremely lightweight but very durable frameset. Production of the Vitus 979 continued until 1992. Automotive In 2011, BBS Automotive made the RI-D, the world's first production automobile wheel made of duralumin. The company has since made other wheels of duralumin also, such as the RZ-D. References Aluminium–copper alloys Products introduced in 1909 Aerospace materials
Duralumin
[ "Chemistry", "Engineering" ]
1,950
[ "Aerospace engineering", "Aerospace materials", "Alloys", "Aluminium alloys" ]
162,263
https://en.wikipedia.org/wiki/Telephone%20directory
A telephone directory, commonly called a telephone book, telephone address book, phonebook, or the white and yellow pages, is a listing of telephone subscribers in a geographical area or subscribers to services provided by the organization that publishes the directory. Its purpose is to allow the telephone number of a subscriber identified by name and address to be found. The advent of the Internet, search engines, and smartphones in the 21st century greatly reduced the need for a paper phone book. Some communities, such as Seattle and San Francisco, sought to ban their unsolicited distribution as wasteful, unwanted and harmful to the environment. The slogan "Let Your Fingers Do the Walking" refers to use of phone books. Content Subscriber names are generally listed in alphabetical order, together with their postal or street address and telephone number. In principle every subscriber in the geographical coverage area is listed, but subscribers may request the exclusion of their number from the directory, often for a fee; their number is then said to be "unlisted" (US and Canada), "ex-directory" (British English), or "private" (Australia and New Zealand). A telephone directory may also provide instructions: how to use the telephone service, how to dial a particular number, be it local or international, what numbers to dial for important and emergency services, utilities, hospitals, doctors, and organizations who can provide support in times of crisis. It may also have civil defense, emergency management, or first aid information. There may be transit maps, postal code/zip code guides, international dialing codes or stadium seating charts, as well as advertising. In the US, under current rules and practices, mobile phone and voice over IP listings are not included in telephone directories. Efforts to create cellular directories have met stiff opposition from several fronts, including those who seek to avoid telemarketers. Types A telephone directory and its content may be known by the colour of the paper it is printed on. White pages generally indicate personal or alphabetic listings. Yellow pages, golden pages, A2Z, or classified directory is usually a "business directory", where businesses are listed alphabetically within each of many classifications (e.g., "lawyers"), almost always with paid advertising. Grey pages, sometimes called a "reverse telephone directory", allow subscriber details to be found for a given number. Not available in all jurisdictions. (These listings are often published separately, in a city directory, or under another name, for a price, and made available to commercial and government agencies.) Other colors may have other meanings; for example, information on government agencies is often printed on blue pages or green pages. Publication Telephone directories can be published in hard copy or in electronic form. In the latter case, the directory can be on physical media such as CD-ROM, or using an online service through proprietary terminals or over the Internet. In many countries, directories are both published in book form and also available over the Internet. Printed directories were usually supplied free of charge. CD ROM SelectPhone (Pro CD Inc.) and PhoneDisc (Digital Directory Assistance Inc.) were among the earliest such products. These were not a matter of a single click: PhoneDisc, depending on the mix of Residential, Business or both, involved up to eight CD-ROMs. SelectPhone used fewer CD-ROMs: five.
Both provide a reverse lookup feature (by phone number or by address), albeit involving up to five CD-ROMs. Internet The combination of phone number lookups, along with Internet access, was offered by some service providers; VoIP (Voice over IP) was an additional feature. History Telephone directories are a type of city directory. Books listing the inhabitants of an entire city were widely published starting in the 18th century, before the invention of the telephone. The first telephone directory, consisting of a single piece of cardboard, was issued on 21 February 1878; it listed 50 individuals, businesses, and other offices in New Haven, Connecticut, that had telephones. The directory was not alphabetized and no numbers were included with the people listed in it. In 1879, Dr. Moses Greeley Parker suggested the format of the telephone directory be changed so that subscribers appeared in alphabetical order and each telephone be identified with a number. Parker came to this idea out of fear that Lowell, Massachusetts's four operators would contract measles and be unable to connect telephone subscribers to one another. The first British telephone directory was published on 15 January 1880 by The Telephone Company. It contained 248 names and addresses of individuals and businesses in London; telephone numbers were not used at the time as subscribers were asked for by name at the exchange. The directory is preserved as part of the British phone book collection by BT Archives. The Reuben H. Donnelly company asserts that it published the first classified directory, or yellow pages, for Chicago, Illinois, in 1886. In 1938, AT&T commissioned the creation of a new typeface, known as Bell Gothic, the purpose of which was to be readable at very small font sizes when printed on newsprint where small imperfections were common. In 1981, France became the first country to have an electronic directory on a system called Minitel. The directory is called "11" after its telephone access number. In 1991, the U.S. Supreme Court ruled (in Feist v. Rural) that telephone companies do not have a copyright on telephone listings, because copyright protects creativity and not the mere labor of collecting existing information. In late July 1995 Kapitol launched the Infobel.be website. Infobel was then the first telephone directory website launched on the then-nascent Internet. In 1996, in the US the first telephone directories went online. Yellowpages.com and Whitepages.com both saw their start in April. In 1999, the first online telephone directories and people-finding sites such as LookupUK.com went online in the UK. In 2003, more advanced UK searching including Electoral Roll became available on LocateFirst.com. With online directories, and with many people giving up landlines for cell phones whose numbers are not listed in telephone directories, printed directories are no longer as necessary as they once were. Regulators no longer required that residential listings be printed, starting with New York in 2010. Yellow pages continued to be printed because some advertisers still reached consumers that way. In the 21st century, printed telephone directories are increasingly criticized as waste. In 2012, after some North American cities passed laws banning the distribution of telephone books, an industry group sued and obtained a court ruling permitting the distribution to continue. 
In 2010, manufacture and distribution of telephone directories produced over 1,400,000 metric tons of greenhouse gases and consumed over 600,000 tons of paper annually. Reverse directories A reverse telephone directory is sorted by phone number, so the name and address of a subscriber is looked up by phone number. See also City directory References Further reading External links Phone Book of the World.com 1878 introductions American inventions Directories History of the telephone Telephone numbers
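To illustrate the data-structure idea behind a reverse directory just described (lookup keyed by number rather than by name), here is a toy sketch; the entries are invented for the example.

```python
# Toy illustration: a forward listing maps name -> number; inverting it gives a
# reverse directory keyed by number, so a lookup by number is a single step.
forward = {
    "Ada Lovelace": "555-0101",
    "Charles Babbage": "555-0102",
}
reverse = {number: name for name, number in forward.items()}

print(reverse["555-0102"])   # Charles Babbage
```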
Telephone directory
[ "Mathematics" ]
1,458
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
162,267
https://en.wikipedia.org/wiki/Freiling%27s%20axiom%20of%20symmetry
Freiling's axiom of symmetry () is a set-theoretic axiom proposed by Chris Freiling. It is based on intuition of Stuart Davidson but the mathematics behind it goes back to Wacław Sierpiński. Let denote the set of all functions from to countable subsets of . (In other words, where is the collection of subsets of of cardinality at most .) The axiom then states: For every , there exist such that and . A theorem of Sierpiński says that under the assumptions of ZFC set theory, is equivalent to the negation of the continuum hypothesis (CH). Sierpiński's theorem answered a question of Hugo Steinhaus and was proved long before the independence of CH had been established by Kurt Gödel and Paul Cohen. Freiling claims that probabilistic intuition strongly supports this proposition while others disagree. There are several versions of the axiom, some of which are discussed below. Freiling's argument Fix a function f in A. We will consider a thought experiment that involves throwing two darts at the unit interval. We are not able to physically determine with infinite accuracy the actual values of the numbers x and y that are hit. Likewise, the question of whether "y is in f(x)" cannot actually be physically computed. Nevertheless, if f really is a function, then this question is a meaningful one and will have a definite "yes" or "no" answer. Now wait until after the first dart, x, is thrown and then assess the chances that the second dart y will be in f(x). Since x is now fixed, f(x) is a fixed countable set and has Lebesgue measure zero. Therefore, this event, with x fixed, has probability zero. Freiling now makes two generalizations: Since we can predict with virtual certainty that "y is not in f(x)" after the first dart is thrown, and since this prediction is valid no matter what the first dart does, we should be able to make this prediction before the first dart is thrown. This is not to say that we still have a measurable event, rather it is an intuition about the nature of being predictable. Since "y is not in f(x)" is predictably true, by the symmetry of the order in which the darts were thrown (hence the name "axiom of symmetry") we should also be able to predict with virtual certainty that "x is not in f(y)". The axiom is now justified based on the principle that what will predictably happen every time this experiment is performed, should at the very least be possible. Hence there should exist two real numbers x, y such that x is not in f(y) and y is not in f(x). Relation to the (Generalised) Continuum Hypothesis Fix an infinite cardinal (e.g. ). Let be the statement: there is no map from sets to sets of size for which either or . Claim: . Proof: Part I (): Suppose . Then there exists a bijection . Setting defined via , it is easy to see that this demonstrates the failure of Freiling's axiom. Part II (): Suppose that Freiling's axiom fails. Then fix some to verify this fact. Define an order relation on by iff . This relation is total and every point has many predecessors. Define now a strictly increasing chain as follows: at each stage choose . This process can be carried out since for every ordinal , is a union of many sets of size ; thus is of size and so is a strict subset of . We also have that this sequence is cofinal in the order defined, i.e. every member of is some . (For otherwise if is not some , then since the order is total ; implying has many predecessors; a contradiction.) Thus we may well-define a map by . So which is union of many sets each of size . Hence . 
Note that so we can easily rearrange things to obtain that the above-mentioned form of Freiling's axiom. The above can be made more precise: . This shows (together with the fact that the continuum hypothesis is independent of choice) a precise way in which the (generalised) continuum hypothesis is an extension of the axiom of choice. Objections to Freiling's argument Freiling's argument is not widely accepted because of the following two problems with it (which Freiling was well aware of and discussed in his paper). The naive probabilistic intuition used by Freiling tacitly assumes that there is a well-behaved way to associate a probability to any subset of the reals. But the mathematical formalization of the notion of probability uses the notion of measure, yet the axiom of choice implies the existence of non-measurable subsets, even of the unit interval. Some examples of this are the Banach–Tarski paradox and the existence of Vitali sets. A minor variation of his argument gives a contradiction with the axiom of choice whether or not one accepts the continuum hypothesis, if one replaces countable additivity of probability by additivity for cardinals less than the continuum. (Freiling used a similar argument to claim that Martin's axiom is false.) It is not clear why Freiling's intuition should be any less applicable in this instance, if it applies at all. So Freiling's argument seems to be more an argument against the possibility of well ordering the reals than against the continuum hypothesis. Connection to graph theory Using the fact that in ZFC, we have (see above), it is not hard to see that the failure of the axiom of symmetry — and thus the success of  — is equivalent to the following combinatorial principle for graphs: The complete graph on can be so directed, that every node leads to at most -many nodes. In the case of , this translates to: The complete graph on the unit circle (or any set of the same size as the reals) can be so directed, that every node has a path to at most countably-many nodes. Thus in the context of ZFC, the failure of a Freiling axiom is equivalent to the existence of a specific kind of choice function. References Axioms of set theory Thought experiments
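For reference, the axiom discussed above is commonly written as follows, with [0,1] the unit interval and the bracket notation denoting the collection of countable subsets (standard notation, not specific to this article):

```latex
% A is the set of functions assigning to each real in [0,1] a countable set of reals.
\[
A = \Big\{\, f : [0,1] \to [\,[0,1]\,]^{\le \aleph_0} \,\Big\},
\qquad
\mathrm{AX}:\ \forall f \in A\ \ \exists x, y \in [0,1]\ \
\big( x \notin f(y) \ \wedge\ y \notin f(x) \big).
\]
```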
Freiling's axiom of symmetry
[ "Mathematics" ]
1,321
[ "Axioms of set theory", "Mathematical axioms" ]
162,269
https://en.wikipedia.org/wiki/Magnesium%20oxide
Magnesium oxide (MgO), or magnesia, is a white hygroscopic solid mineral that occurs naturally as periclase and is a source of magnesium (see also oxide). It has an empirical formula of MgO and consists of a lattice of Mg2+ ions and O2− ions held together by ionic bonding. Magnesium hydroxide forms in the presence of water (MgO + H2O → Mg(OH)2), but the reaction can be reversed by heating to remove moisture. Magnesium oxide was historically known as magnesia alba (literally, the white mineral from Magnesia), to differentiate it from magnesia nigra, a black mineral containing what is now known as manganese. Related oxides While "magnesium oxide" normally refers to MgO, the compound magnesium peroxide MgO2 is also known. According to evolutionary crystal structure prediction, MgO2 is thermodynamically stable at pressures above 116 GPa (gigapascals), and a semiconducting suboxide Mg3O2 is thermodynamically stable above 500 GPa. Because of its stability, MgO is used as a model system for investigating vibrational properties of crystals. Electric properties Pure MgO is not conductive and has a high resistance to electric current at room temperature. Pure MgO powder has a relative permittivity between 3.2 and 9.9, with an approximate dielectric loss of tan(δ) > 2.16×10−3 at 1 kHz. Production Magnesium oxide is produced by the calcination of magnesium carbonate or magnesium hydroxide. The latter is obtained by the treatment of magnesium chloride solutions, typically seawater, with limewater or milk of lime. Mg2+ + Ca(OH)2 → Mg(OH)2 + Ca2+ Calcining at different temperatures produces magnesium oxide of different reactivity. High temperatures (1500–2000 °C) diminish the available surface area and produce dead-burned (often called dead burnt) magnesia, an unreactive form used as a refractory. Calcining at 1000–1500 °C produces hard-burned magnesia, which has limited reactivity, and calcining at lower temperatures (700–1000 °C) produces light-burned magnesia, a reactive form, also known as caustic calcined magnesia. Although some decomposition of the carbonate to oxide occurs at temperatures below 700 °C, the resulting materials appear to reabsorb carbon dioxide from the air. Applications Refractory insulator MgO is prized as a refractory material, i.e. a solid that is physically and chemically stable at high temperatures. It has the useful attributes of high thermal conductivity and low electrical conductivity. According to a 2006 reference book: MgO is used as a refractory material for crucibles. It is also used as an insulator in heat-resistant electrical cable. Biomedical Among metal oxide nanoparticles, magnesium oxide nanoparticles (MgO NPs) have distinct physicochemical and biological properties, including biocompatibility, biodegradability, high bioactivity, significant antibacterial properties, and good mechanical properties, which make them a good choice as a reinforcement in composites. Heating elements It is used extensively as an electrical insulator in tubular construction heating elements, as in electric stove and cooktop heating elements. There are several mesh sizes available, and the most commonly used are 40 and 80 mesh, per the American Foundry Society. The extensive use is due to its high dielectric strength and average thermal conductivity. MgO is usually crushed and compacted with minimal airgaps or voids. Cement MgO is one of the components in Portland cement in dry process plants. Sorel cement uses MgO as the main component in combination with MgCl2 and water.
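As a compact restatement of the calcination grades described under Production above, the following sketch classifies magnesia by calcination temperature; the boundaries are the approximate ranges quoted in the text, and the function name is invented.

```python
# Sketch: classify calcined magnesia using the approximate temperature ranges
# given in the text (real boundaries overlap in practice; illustrative only).
def magnesia_grade(temp_c: float) -> str:
    if 700 <= temp_c < 1000:
        return "light-burned (caustic calcined, reactive)"
    if 1000 <= temp_c < 1500:
        return "hard-burned (limited reactivity)"
    if 1500 <= temp_c <= 2000:
        return "dead-burned (refractory grade)"
    return "outside the ranges discussed in the text"

print(magnesia_grade(900))    # light-burned (caustic calcined, reactive)
print(magnesia_grade(1750))   # dead-burned (refractory grade)
```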
Fertilizer MgO has an important place as a commercial plant fertilizer and as animal feed. Fireproofing It is a principal fireproofing ingredient in construction materials. As a construction material, magnesium oxide wallboards have several attractive characteristics: fire resistance, termite resistance, moisture resistance, mold and mildew resistance, and strength, but also a severe downside: the material attracts moisture and can cause moisture damage to surrounding materials. Medical Magnesium oxide is used for relief of heartburn and indigestion, as an antacid, magnesium supplement, and as a short-term laxative. It is also used to improve symptoms of indigestion. Side effects of magnesium oxide may include nausea and cramping. In quantities sufficient to obtain a laxative effect, long-term use may rarely cause enteroliths to form, resulting in bowel obstruction. Waste treatment Magnesium oxide is used extensively in the soil and groundwater remediation, wastewater treatment, drinking water treatment, air emissions treatment, and waste treatment industries for its acid buffering capacity and related effectiveness in stabilizing dissolved heavy metal species. Many heavy metal species, such as lead and cadmium, are least soluble in water at mildly basic conditions (pH in the range 8–11). Solubility of metals increases their undesired bioavailability and mobility in soil and groundwater. Granular MgO is often blended into metals-contaminated soil or waste material, which is also commonly of a low pH (acidic), in order to drive the pH into the 8–10 range. Metal-hydroxide complexes tend to precipitate out of aqueous solution in the pH range of 8–10. MgO is packed in bags around transuranic waste in the disposal cells (panels) at the Waste Isolation Pilot Plant, as a getter to minimize the complexation of uranium and other actinides by carbonate ions and so to limit the solubility of radionuclides. The use of MgO is preferred over CaO since the resulting hydration product (Mg(OH)2) is less soluble and releases less hydration heat. Another advantage is to impose a lower pH value (about 10.5) in case of accidental water ingress into the dry salt layers, in contrast to the more soluble Ca(OH)2, which would create a higher pH of 12.5 (strongly alkaline conditions). The Mg2+ cation being the second most abundant cation in seawater and in rock salt, the potential release of magnesium ions dissolving in brines intruding into the deep geological repository is also expected to minimize the geochemical disruption. Niche uses As a food additive, it is used as an anticaking agent. It is known to the US Food and Drug Administration for cacao products; canned peas; and frozen dessert. It has an E number of E530. As a reagent in the installation of the carboxybenzyl (Cbz) group using benzyl chloroformate in EtOAc for the N-protection of amines and amides. Doping MgO (about 1–5% by weight) into hydroxyapatite, a bioceramic mineral, increases the fracture toughness by migrating to grain boundaries, where it reduces grain size and changes the fracture mode from intergranular to transgranular. Pressed MgO is used as an optical material. It is transparent from 0.3 to 7 μm. The refractive index is 1.72 at 1 μm and the Abbe number is 53.58. It is sometimes known by the Eastman Kodak trademarked name Irtran-5, although this designation is obsolete. Crystalline pure MgO is available commercially and has a small use in infrared optics.
An aerosolized solution of MgO is used in library science and collections management for the deacidification of at-risk paper items. In this process, the alkalinity of MgO (and similar compounds) neutralizes the relatively high acidity characteristic of low-quality paper, thus slowing the rate of deterioration. Magnesium oxide is used as an oxide barrier in spin-tunneling devices. Owing to the crystalline structure of its thin films, which can be deposited by magnetron sputtering, for example, it shows characteristics superior to those of the commonly used amorphous Al2O3. In particular, spin polarization of about 85% has been achieved with MgO versus 40–60 % with aluminium oxide. The value of tunnel magnetoresistance is also significantly higher for MgO (600% at room temperature and 1,100 % at 4.2 K) than Al2O3 (ca. 70% at room temperature). MgO is a common pressure transmitting medium used in high pressure apparatuses like the multi-anvil press. Brake lining Magnesia is used in brake linings for its heat conductivity and intermediate hardness. It helps dissipate heat from friction surfaces, preventing overheating, while minimizing wear on metal components. Its stability under high temperatures ensures reliable and durable braking performance in automotive and industrial applications. Thin film transistors In thin film transistors(TFTs), MgO is often used as a dielectric material or an insulator due to its high thermal stability, excellent insulating properties, and wide bandgap. Optimized IGZO/MgO TFTs demonstrated an electron mobility of 1.63 cm²/Vs, an on/off current ratio of 10⁶, and a subthreshold swing of 0.50 V/decade at −0.11 V. These TFTs are integral to low-power applications, wearable devices, and radiation-hardened electronics, contributing to enhanced efficiency and durability across diverse domains. Historical uses It was historically used as a reference white color in colorimetry, owing to its good diffusing and reflectivity properties. It may be smoked onto the surface of an opaque material to form an integrating sphere. Early gas mantle designs for lighting, such as the Clamond basket, consisted mainly of magnesium oxide. Precautions Inhalation of magnesium oxide fumes can cause metal fume fever. See also Notes References External links Data page at UCL Ceramic data page at NIST NIOSH Pocket Guide to Chemical Hazards at CDC Magnesium minerals Magnesium compounds Oxides Refractory materials Optical materials Ceramic materials Antacids E-number additives Rock salt crystal structure
Magnesium oxide
[ "Physics", "Chemistry", "Engineering" ]
2,119
[ "Refractory materials", "Oxides", "Salts", "Materials", "Optical materials", "Ceramic materials", "Ceramic engineering", "Matter" ]
162,289
https://en.wikipedia.org/wiki/Computer-aided%20manufacturing
Computer-aided manufacturing (CAM), also known as computer-aided modeling or computer-aided machining, is the use of software to control machine tools in the manufacturing of work pieces. This is not the only definition for CAM, but it is the most common. It may also refer to the use of a computer to assist in all operations of a manufacturing plant, including planning, management, transportation and storage. Its primary purpose is to create a faster production process and components and tooling with more precise dimensions and material consistency, which in some cases uses only the required amount of raw material (thus minimizing waste), while simultaneously reducing energy consumption. CAM is now also used in schools and for lower-level educational purposes. CAM is a subsequent computer-aided process after computer-aided design (CAD) and sometimes computer-aided engineering (CAE), as the model generated in CAD and verified in CAE can be input into CAM software, which then controls the machine tool. CAM is used in many schools alongside CAD to create objects. Overview Traditionally, CAM has been a numerical control (NC) programming tool, wherein two-dimensional (2-D) or three-dimensional (3-D) models of components are generated in CAD. As with other "computer-aided" technologies, CAM does not eliminate the need for skilled professionals such as manufacturing engineers, NC programmers, or machinists. CAM both leverages the value of the most skilled manufacturing professionals through advanced productivity tools and builds the skills of new professionals through visualization, simulation and optimization tools. A CAM tool generally converts a model to a language the target machine in question understands, typically G-code. The numerical control can be applied to machining tools, or more recently to 3D printers. History Early commercial applications of CAM were in large companies in the automotive and aerospace industries; for example, Pierre Bézier's work developing the CAD/CAM application UNISURF in the 1960s for car body design and tooling at Renault. Alexander Hammer at DeLaval Steam Turbine Company invented a technique to progressively drill turbine blades out of a solid block of metal, with the drill controlled by a punch card reader, in 1950. Boeing first obtained NC machines in 1956, made by companies such as Kearney and Trecker, Stromberg-Carlson and Thompson Ramo Wooldridge. Historically, CAM software was seen to have several shortcomings that necessitated an overly high level of involvement by skilled CNC machinists. Fallows created the first CAD software, but this had severe shortcomings and was promptly taken back into development. CAM software would output code for the least capable machine, as each machine tool control added on to the standard G-code set for increased flexibility. In some cases, such as improperly set up CAM software or specific tools, the CNC machine required manual editing before the program would run properly. None of these issues were so insurmountable that a thoughtful engineer or skilled machine operator could not overcome them for prototyping or small production runs; G-Code is a simple language. In high production or high precision shops, a different set of problems was encountered, where an experienced CNC machinist had to both hand-code programs and run CAM software. The integration of CAD with other components of the CAD/CAM/CAE product lifecycle management (PLM) environment requires effective CAD data exchange.
Usually it had been necessary for the CAD operator to export the data in one of the common data formats, such as IGES, STL or Parasolid, that are supported by a wide variety of software. The output from the CAM software is usually a simple text file of G-code/M-codes, sometimes many thousands of commands long, that is then transferred to a machine tool using a direct numerical control (DNC) program or, in modern controllers, using a common USB storage device. CAM packages could not, and still cannot, reason as a machinist can. They could not optimize toolpaths to the extent required of mass production. Users would select the type of tool, machining process and paths to be used. While an engineer may have a working knowledge of G-code programming, small optimization and wear issues compound over time. Mass-produced items that require machining are often initially created through casting or some other non-machine method. This enables hand-written, short, and highly optimized G-code that could not be produced in a CAM package. At least in the United States, there is a shortage of young, skilled machinists entering the workforce able to perform at the extremes of manufacturing: high precision and mass production. As CAM software and machines become more complicated, the skills required of a machinist or machine operator advance to approach those of a computer programmer and engineer, rather than eliminating the CNC machinist from the workforce. Typical areas of concern High-Speed Machining, including streamlining of tool paths Multi-function Machining 5 Axis Machining Feature recognition and machining Automation of Machining processes Ease of Use Overcoming historical shortcomings Over time, the historical shortcomings of CAM are being attenuated, both by providers of niche solutions and by providers of high-end solutions. This is occurring primarily in three arenas: Ease of usage Manufacturing complexity Integration with PLM and the extended enterprise Ease in use For the user who is just getting started as a CAM user, out-of-the-box capabilities providing Process Wizards, templates, libraries, machine tool kits, automated feature based machining and job function specific tailorable user interfaces build user confidence and speed the learning curve. User confidence is further built on 3D visualization through a closer integration with the 3D CAD environment, including error-avoiding simulations and optimizations. Manufacturing complexity The manufacturing environment is increasingly complex. The need for CAM and PLM tools by the manufacturing engineer, NC programmer or machinist is similar to the need for computer assistance by the pilot of modern aircraft systems. The modern machinery cannot be properly used without this assistance. Today's CAM systems support the full range of machine tools including: turning, 5 axis machining, waterjet, laser / plasma cutting, and wire EDM. Today's CAM user can easily generate streamlined tool paths, optimized tool axis tilt for higher feed rates, better tool life and surface finish, and ideal cutting depth. In addition to programming cutting operations, modern CAM software can also drive non-cutting operations such as machine tool probing. Integration with PLM and the extended enterprise CAM is integrated with PLM to connect manufacturing with enterprise operations from concept through field support of the finished product.
To ensure ease of use appropriate to user objectives, modern CAM solutions are scalable from a stand-alone CAM system to a fully integrated multi-CAD 3D solution-set. These solutions are created to meet the full needs of manufacturing personnel including part planning, shop documentation, resource management and data management and exchange. To keep these solutions from being burdened with detailed tool-specific information, a dedicated tool management system is used. Machining process Most machining progresses through many stages, each of which is implemented by a variety of basic and sophisticated strategies, depending on the part design, material, and software available. Roughing This process usually begins with raw stock, known as billet, or a rough casting which a CNC machine cuts roughly to the shape of the final model, ignoring the fine details. In milling, the result often gives the appearance of terraces or steps, because the strategy has taken multiple "steps" down the part as it removes material. This takes the best advantage of the machine's ability by cutting material horizontally. Common strategies are zig-zag clearing, offset clearing, plunge roughing, rest-roughing, and trochoidal milling (adaptive clearing). The goal at this stage is to remove the most material in the least time, without much concern for overall dimensional accuracy. When roughing a part, a small amount of extra material is purposely left behind to be removed in subsequent finishing operation(s). Semi-finishing This process begins with a roughed part that unevenly approximates the model and cuts to within a fixed offset distance from the model. The semi-finishing pass must leave a small amount of material (called the scallop) so the tool can cut accurately, but not so little that the tool and material deflect away from the cutting surfaces. Common strategies are raster passes, waterline passes, constant step-over passes, pencil milling. Finishing Finishing involves many light passes across the material in fine steps to produce the finished part. When finishing a part, the steps between passes are minimal to prevent tool deflection and material spring back. In order to reduce the lateral tool load, tool engagement is reduced, while feed rates and spindle speeds are generally increased in order to maintain a target surface speed (SFM). A light chip load at high feed and RPM is often referred to as High Speed Machining (HSM), and can provide quick machining times with high quality results. The result of these lighter passes is a highly accurate part, with a uniformly high surface finish. In addition to modifying speeds and feeds, machinists will often have finishing-specific endmills, which are never used as roughing endmills. This is done to protect the endmill from developing chips and flaws in the cutting surface, which would leave streaks and blemishes on the final part. Contour milling In milling applications on hardware with rotary table and/or rotary head axes, a separate finishing process called contouring can be performed. Instead of stepping down in fine-grained increments to approximate a surface, the work piece or tool is rotated to make the cutting surfaces of the tool tangent to the ideal part features. This produces an excellent surface finish with high dimensional accuracy. This process is commonly used to machine complex organic shapes such as turbine and impeller blades, which, due to their complex curves and overlapping geometry, are impossible to machine with only three-axis machines.
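As an illustration of the kind of output a CAM system generates for one of the roughing strategies named above (zig-zag clearing), the sketch below emits a simple raster of G-code moves over a rectangular area; the coordinates, feed rate and step-over are invented, and real CAM output would also handle tool geometry, depth steps and lead-in moves.

```python
# Illustrative only: a zig-zag (raster) clearing pass over a rectangle,
# emitted as basic G-code linear moves (G1). All values are hypothetical.
def zigzag_gcode(width_mm, height_mm, step_over_mm, feed_mm_min):
    lines = ["G21 ; millimetre units", "G90 ; absolute coordinates"]
    y, direction = 0.0, 1
    while y <= height_mm:
        x_start, x_end = (0.0, width_mm) if direction > 0 else (width_mm, 0.0)
        lines.append(f"G1 X{x_start:.3f} Y{y:.3f} F{feed_mm_min}")  # step over in Y
        lines.append(f"G1 X{x_end:.3f} Y{y:.3f}")                   # cutting pass in X
        y += step_over_mm
        direction *= -1
    return "\n".join(lines)

print(zigzag_gcode(width_mm=40.0, height_mm=20.0, step_over_mm=5.0, feed_mm_min=300))
```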
Software: large vendors See also Computer-integrated manufacturing (CIM) Digital modeling and fabrication Direct numerical control (DNC) Flexible manufacturing system (FMS) Integrated Computer-Aided Manufacturing (ICAM) Manufacturing process management (MPM) STEP-NC Rapid prototyping and rapid manufacturing – solid freeform fabrication direct from CAD models CNC pocket milling References Further reading https://patents.google.com/patent/US5933353A/en External links CADSite.ru CAD Models Cimatron Brazil about Software CAD/CAM CimatronE Dragomatz and Mann reviewed toolpath algorithms in 1997. Pocket Machining Based on Offset Curves by Martin Held Purdue University Purdue Research and Education Centre for Information Systems in Engineering How to evaluate a CAM system Sheetmetalworld.com article Information technology management Product lifecycle management
Computer-aided manufacturing
[ "Technology" ]
2,196
[ "Information technology", "Information technology management" ]
162,312
https://en.wikipedia.org/wiki/Mechanical%20wave
In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a material medium. (Vacuum is, from the classical perspective, a non-material medium, where electromagnetic waves propagate.) While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves. Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one. Transverse wave A transverse wave is the form of a wave in which particles of the medium vibrate about their mean position perpendicular to the direction of the motion of the wave. To see an example, move an end of a Slinky (whose other end is fixed) to the left-and-right of the Slinky, as opposed to to-and-fro. Light also has properties of a transverse wave, although it is an electromagnetic wave. Longitudinal wave Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. They consist of multiple compressions and rarefactions. In a rarefaction the particles of the medium are farthest apart, while in a compression they are closest together. The speed of the longitudinal wave is increased in higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is a longitudinal wave. Surface waves This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean, lake, or any other type of water body. There are two types of surface waves, namely Rayleigh waves and Love waves. Rayleigh waves, also known as ground roll, are waves that travel as ripples with motion similar to those of waves on the surface of water. Such waves are much slower than body waves, at roughly 90% of the velocity of S-waves for a typical homogeneous elastic medium. Rayleigh waves have energy losses only in two dimensions and are hence more destructive in earthquakes than conventional bulk waves, such as P-waves and S-waves, which lose energy in all three directions. A Love wave is a surface wave having horizontal waves that are shear or transverse to the direction of propagation. They usually travel slightly faster than Rayleigh waves, at about 90% of the body wave velocity, and have the largest amplitude. Examples Seismic waves Sound waves Wind waves on seas and lakes Vibration See also Acoustics Ultrasound Underwater acoustics References Waves Mechanics
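As a standard textbook illustration of the transverse/longitudinal distinction described above (generic notation, not taken from this article), a sinusoidal plane wave travelling along x can be written with the displacement either perpendicular to or along the propagation direction:

```latex
% A is the amplitude, k the wavenumber, omega the angular frequency.
\[
u_y(x,t) = A \sin(kx - \omega t) \quad \text{(transverse: displacement perpendicular to } x\text{)},
\]
\[
s_x(x,t) = A \sin(kx - \omega t) \quad \text{(longitudinal: displacement along } x\text{)}.
\]
```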
Mechanical wave
[ "Physics", "Engineering" ]
617
[ "Physical phenomena", "Waves", "Motion (physics)", "Mechanics", "Mechanical engineering" ]
162,321
https://en.wikipedia.org/wiki/Invariant%20mass
The invariant mass, rest mass, intrinsic mass, proper mass, or in the case of bound systems simply mass, is the portion of the total mass of an object or system of objects that is independent of the overall motion of the system. More precisely, it is a characteristic of the system's total energy and momentum that is the same in all frames of reference related by Lorentz transformations. If a center-of-momentum frame exists for the system, then the invariant mass of a system is equal to its total mass in that "rest frame". In other reference frames, where the system's momentum is nonzero, the total mass (a.k.a. relativistic mass) of the system is greater than the invariant mass, but the invariant mass remains unchanged. Because of mass–energy equivalence, the rest energy of the system is simply the invariant mass times the speed of light squared. Similarly, the total energy of the system is its total (relativistic) mass times the speed of light squared. Systems whose four-momentum is a null vector (for example, a single photon or many photons moving in exactly the same direction) have zero invariant mass and are referred to as massless. A physical object or particle moving faster than the speed of light would have space-like four-momenta (such as the hypothesized tachyon), and these do not appear to exist. Any time-like four-momentum possesses a reference frame where the momentum (3-dimensional) is zero, which is a center of momentum frame. In this case, invariant mass is positive and is referred to as the rest mass. If objects within a system are in relative motion, then the invariant mass of the whole system will differ from the sum of the objects' rest masses. This is also equal to the total energy of the system divided by c2. See mass–energy equivalence for a discussion of definitions of mass. Since the mass of systems must be measured with a weight or mass scale in a center of momentum frame in which the entire system has zero momentum, such a scale always measures the system's invariant mass. For example, a scale would measure the kinetic energy of the molecules in a bottle of gas to be part of the invariant mass of the bottle, and thus also its rest mass. The same is true for massless particles in such a system, which add invariant mass and also rest mass to systems, according to their energy. For an isolated massive system, the center of mass of the system moves in a straight line with a steady subluminal velocity (with a velocity depending on the reference frame used to view it). Thus, an observer can always be placed to move along with it. In this frame, which is the center-of-momentum frame, the total momentum is zero, and the system as a whole may be thought of as being "at rest" if it is a bound system (like a bottle of gas). In this frame, which exists under these assumptions, the invariant mass of the system is equal to the total system energy (in the zero-momentum frame) divided by c2. This total energy in the center of momentum frame is the minimum energy which the system may be observed to have, when seen by various observers from various inertial frames. Note that for reasons above, such a rest frame does not exist for single photons, or rays of light moving in one direction. When two or more photons move in different directions, however, a center of mass frame (or "rest frame" if the system is bound) exists.
Thus, the mass of a system of several photons moving in different directions is positive, which means that an invariant mass exists for this system even though it does not exist for each photon. Sum of rest masses The invariant mass of a system includes the mass of any kinetic energy of the system constituents that remains in the center of momentum frame, so the invariant mass of a system may be greater than the sum of the invariant masses (rest masses) of its separate constituents. For example, rest mass and invariant mass are zero for individual photons even though they may add mass to the invariant mass of systems. For this reason, invariant mass is in general not an additive quantity (although there are a few rare situations where it may be, as is the case when massive particles in a system without potential or kinetic energy can be added to a total mass). Consider the simple case of a two-body system, where object A is moving towards another object B which is initially at rest (in any particular frame of reference). The magnitude of the invariant mass of this two-body system (see definition below) is different from the sum of their rest masses (i.e. their respective masses when stationary). Even if we consider the same system from the center-of-momentum frame, where net momentum is zero, the magnitude of the system's invariant mass is not equal to the sum of the rest masses of the particles within it. The kinetic energy of such particles and the potential energy of the force fields increase the total energy above the sum of the particle rest masses, and both terms contribute to the invariant mass of the system. The sum of the particle kinetic energies as calculated by an observer is smallest in the center of momentum frame (again, called the "rest frame" if the system is bound). They will often also interact through one or more of the fundamental forces, giving them a potential energy of interaction, possibly negative. As defined in particle physics In particle physics, the invariant mass is equal to the mass in the rest frame of the particle, and can be calculated by the particle's energy $E$ and its momentum $\mathbf{p}$ as measured in any frame, by the energy–momentum relation: $m_0 c^2 = \sqrt{E^2 - \left(\|\mathbf{p}\| c\right)^2}$, or in natural units, where $c = 1$, $m_0^2 = E^2 - \|\mathbf{p}\|^2$. This invariant mass is the same in all frames of reference (see also special relativity). This equation says that the invariant mass is the pseudo-Euclidean length of the four-vector $(E, \mathbf{p})$, calculated using the relativistic version of the Pythagorean theorem which has a different sign for the space and time dimensions. This length is preserved under any Lorentz boost or rotation in four dimensions, just like the ordinary length of a vector is preserved under rotations. In quantum theory the invariant mass is a parameter in the relativistic Dirac equation for an elementary particle. The Dirac quantum operator corresponds to the particle four-momentum vector. Since the invariant mass is determined from quantities which are conserved during a decay, the invariant mass calculated using the energy and momentum of the decay products of a single particle is equal to the mass of the particle that decayed. The mass of a system of particles can be calculated from the general formula: $W = \frac{1}{c^{2}}\sqrt{\left(\sum E\right)^{2} - \left\|\sum \mathbf{p}\,c\right\|^{2}}$, where $W$ is the invariant mass of the system of particles, equal to the mass of the decay particle; $\sum E$ is the sum of the energies of the particles; $\sum \mathbf{p}$ is the vector sum of the momenta of the particles (includes both magnitude and direction of the momenta). The term invariant mass is also used in inelastic scattering experiments.
Given an inelastic reaction with total incoming energy larger than the total detected energy (i.e. not all outgoing particles are detected in the experiment), the invariant mass (also known as the "missing mass") of the reaction is defined as follows (in natural units): $W_{\text{miss}}^2 = \left(\sum E_{\text{in}} - \sum E_{\text{det}}\right)^2 - \left\|\sum \mathbf{p}_{\text{in}} - \sum \mathbf{p}_{\text{det}}\right\|^2$. If there is one dominant particle which was not detected during an experiment, a plot of the invariant mass will show a sharp peak at the mass of the missing particle. In those cases when the momentum along one direction cannot be measured (i.e. in the case of a neutrino, whose presence is only inferred from the missing energy) the transverse mass is used. Example: two-particle collision In a two-particle collision (or a two-particle decay) the square of the invariant mass (in natural units) is $M^2 = (E_1 + E_2)^2 - \|\mathbf{p}_1 + \mathbf{p}_2\|^2 = m_1^2 + m_2^2 + 2\left(E_1 E_2 - \mathbf{p}_1 \cdot \mathbf{p}_2\right)$. Massless particles The invariant mass of a system made of two massless particles whose momenta form an angle $\theta$ has a convenient expression: $M^2 = 2 E_1 E_2 \left(1 - \cos\theta\right)$. Collider experiments In particle collider experiments, one often defines the angular position of a particle in terms of an azimuthal angle $\phi$ and pseudorapidity $\eta$. Additionally the transverse momentum, $p_{\mathrm{T}}$, is usually measured. In this case if the particles are massless, or highly relativistic ($E \gg m$), then the invariant mass becomes: $M^2 = 2 p_{\mathrm{T}1} p_{\mathrm{T}2} \left(\cosh(\eta_1 - \eta_2) - \cos(\phi_1 - \phi_2)\right)$. Rest energy Rest energy (also called rest mass energy) is the energy associated with a particle's invariant mass. The rest energy of a particle is defined as: $E_0 = m_0 c^2$, where $c$ is the speed of light in vacuum. In general, only differences in energy have physical significance. The concept of rest energy follows from the special theory of relativity that leads to Einstein's famous conclusion about equivalence of energy and mass. See mass–energy equivalence. See also Mass in special relativity Invariant (physics) Transverse mass References Citations Theory of relativity Mass Energy (physics) Physical quantities
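As a numerical illustration of the system formula above (in natural units with c = 1), the sketch below computes the invariant mass of a set of four-momenta; the example values are invented.

```python
# Sketch, natural units (c = 1): M^2 = (sum E)^2 - |sum p|^2 for a list of
# four-momenta given as (E, px, py, pz). Example values are hypothetical.
import math

def invariant_mass(four_momenta):
    E  = sum(p[0] for p in four_momenta)
    px = sum(p[1] for p in four_momenta)
    py = sum(p[2] for p in four_momenta)
    pz = sum(p[3] for p in four_momenta)
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# Two back-to-back 0.25 GeV photons: each is massless, but the pair has
# invariant mass 0.5 GeV, as discussed in the text.
print(invariant_mass([(0.25, 0.25, 0.0, 0.0), (0.25, -0.25, 0.0, 0.0)]))  # 0.5
```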
Invariant mass
[ "Physics", "Mathematics" ]
1,800
[ "Scalar physical quantities", "Physical phenomena", "Physical quantities", "Quantity", "Mass", "Size", "Energy (physics)", "Theory of relativity", "Wikipedia categories named after physical quantities", "Physical properties", "Matter" ]
162,400
https://en.wikipedia.org/wiki/Force%20spectroscopy
Force spectroscopy is a set of techniques for the study of the interactions and the binding forces between individual molecules. These methods can be used to measure the mechanical properties of single polymer molecules or proteins, or individual chemical bonds. The name "force spectroscopy", although widely used in the scientific community, is somewhat misleading, because there is no true matter-radiation interaction. Techniques that can be used to perform force spectroscopy include atomic force microscopy, optical tweezers, magnetic tweezers, acoustic force spectroscopy, microneedles, and biomembrane force probes. Force spectroscopy measures the behavior of a molecule under stretching or torsional mechanical force. In this way a great deal has been learned in recent years about the mechanochemical coupling in the enzymes responsible for muscle contraction, transport in the cell, energy generation (F1-ATPase), DNA replication and transcription (polymerases), and DNA unknotting and unwinding (topoisomerases and helicases). As a single-molecule technique, as opposed to typical ensemble spectroscopies, it allows a researcher to determine properties of the particular molecule under study. In particular, rare events such as conformational changes, which are masked in an ensemble, may be observed. Experimental techniques There are many ways to accurately manipulate single molecules. Prominent among these are optical or magnetic tweezers, atomic-force-microscope (AFM) cantilevers, and acoustic force spectroscopy. In all of these techniques, a biomolecule, such as a protein or DNA, or some other biopolymer has one end bound to a surface or a micrometre-sized bead and the other to a force sensor. The force sensor is usually a micrometre-sized bead or a cantilever, whose displacement can be measured to determine the force. Atomic force microscope cantilevers Molecules adsorbed on a surface are picked up by a microscopic tip (nanometres wide) that is located on the end of an elastic cantilever. In a more sophisticated version of this experiment (Chemical Force Microscopy) the tips are covalently functionalized with the molecules of interest. A piezoelectric controller then pulls up the cantilever. If some force is acting on the elastic cantilever (for example because some molecule is being stretched between the surface and the tip), the cantilever will deflect upward (repulsive force) or downward (attractive force). According to Hooke's law, this deflection will be proportional to the force acting on the cantilever. Deflection is measured by the position of a laser beam reflected by the cantilever. This kind of set-up can measure forces as low as 10 pN (10⁻¹¹ N); the fundamental resolution limit is set by the cantilever's thermal noise. The so-called force curve is the graph of force (or, more precisely, of cantilever deflection) versus the piezoelectric position on the Z axis. An ideal Hookean spring, for example, would display a straight diagonal force curve. Typically, the force curves observed in force spectroscopy experiments consist of a contact (diagonal) region, where the probe is in contact with the sample surface, and a non-contact region, where the probe is off the sample surface. When the restoring force of the cantilever exceeds the tip-sample adhesion force, the probe jumps out of contact, and the magnitude of this jump is often used as a measure of the adhesion force or rupture force. 
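As a rough numerical illustration of the Hooke's-law conversion just described, the following Python sketch turns a measured cantilever deflection into a force; the spring constant and deflection values are typical, assumed figures rather than numbers taken from this article.

```python
# Hooke's law: F = k * x, where k is the cantilever spring constant
# and x the measured deflection. Values below are illustrative only.
spring_constant = 0.05    # N/m, a soft AFM cantilever (assumed)
deflection      = 0.2e-9  # m, a 0.2 nm deflection read from the laser signal (assumed)

force = spring_constant * deflection
print(f"Force on cantilever: {force * 1e12:.1f} pN")  # 10.0 pN, around the resolution floor quoted above
```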
In general the rupture of a tip-surface bond is a stochastic process; therefore reliable quantification of the adhesion force requires taking multiple individual force curves. The histogram of the adhesion forces obtained in these multiple measurements provides the main data output for a force spectroscopy measurement. In biophysics, single-molecule force spectroscopy can be used to study the energy landscape underlying the interaction between two bio-molecules, like proteins. Here, one binding partner can be attached to a cantilever tip via a flexible linker molecule (a PEG chain), while the other one is immobilized on a substrate surface. In a typical approach, the cantilever is repeatedly approached to and retracted from the sample at a constant speed. In some cases, binding between the two partners will occur, which will become visible in the force curve, as the use of a flexible linker gives rise to a characteristic curve shape (see Worm-like chain model) distinct from adhesion. The collected rupture forces can then be analysed as a function of the bond loading rate. The resulting graph of the average rupture force as a function of the loading rate is called the force spectrum and forms the basic dataset for dynamic force spectroscopy. In the ideal case of a single sharp energy barrier for the tip-sample interactions, the dynamic force spectrum will show a linear increase of the rupture force as a function of the logarithm of the loading rate, as described by a model proposed by Bell et al. Here, the slope of the rupture force spectrum is equal to $k_\mathrm{B}T/x_\beta$, where $x_\beta$ is the distance from the energy minimum to the transition state. So far, a number of theoretical models exist describing the relationship between loading rate and rupture force, based upon different assumptions and predicting distinct curve shapes. For example, Ma X., Gosai A., et al. utilized dynamic force spectroscopy along with molecular dynamics simulations to determine the binding force between thrombin, a blood coagulation protein, and its DNA aptamer. Acoustic force spectroscopy A recently developed technique, acoustic force spectroscopy (AFS), allows the force manipulation of hundreds of single molecules and single cells in parallel, providing high experimental throughput. In this technique, a piezo element resonantly excites planar acoustic waves over a microfluidic chip. The generated acoustic waves are capable of exerting forces on microspheres with a density different from that of the surrounding medium. Biomolecules, such as DNA, RNA or proteins, can be individually tethered between the microspheres and a surface and then probed by the acoustic forces exerted by the piezo sensor. With AFS devices it is possible to apply forces ranging from 0 to several hundred piconewtons on hundreds of microspheres and obtain force-extension curves or histograms of rupture forces of many individual events in parallel. This technique is mostly utilized to study DNA-binding proteins. For example, AFS was used to examine bacterial transcription in the presence of antibacterial agents. Viral proteins can also be studied by AFS; for instance, this technique was used to explore DNA compaction along with other single-molecule approaches. Cells can also be manipulated by the acoustic forces directly, or by using microspheres as handles. Optical tweezers Another technique that has been gaining ground for single-molecule experiments is the use of optical tweezers for applying mechanical forces on molecules. 
A strongly focused laser beam has the ability to catch and hold particles (of dielectric material) in a size range from nanometers to micrometers. The trapping action of optical tweezers results from the dipole or optical gradient force on the dielectric sphere. The technique of using a focused laser beam as an atom trap was first applied in 1984 at Bell Laboratories. Until then, experiments had been carried out using oppositely directed lasers as a means to trap particles. Later experiments, in the same project at Bell Laboratories and in others since, showed damage-free manipulation of cells using an infrared laser. Thus, the ground was laid for biological experiments with optical trapping. Each technique has its own advantages and disadvantages. For example, AFM cantilevers can measure angstrom-scale, millisecond events and forces larger than 10 pN. While glass microfibers cannot achieve such fine spatial and temporal resolution, they can measure piconewton forces. Optical tweezers allow the measurement of piconewton forces and nanometer displacements, which is an ideal range for many biological experiments. Magnetic tweezers can measure femtonewton forces, and they can also be used to apply torsion. AFS devices allow the statistical analysis of the mechanical properties of biological systems by applying piconewton forces to hundreds of individual particles in parallel, with sub-millisecond response time. Applications Common applications of force spectroscopy are measurements of polymer elasticity, especially of biopolymers such as RNA and DNA. Another biophysical application of polymer force spectroscopy is protein unfolding. Modular proteins can be adsorbed to a gold or (more rarely) mica surface and then stretched. The sequential unfolding of modules is observed as a very characteristic sawtooth pattern in the graph of force versus elongation; every tooth corresponds to the unfolding of a single protein module (apart from the last, which is generally the detachment of the protein molecule from the tip). Much information about protein elasticity and protein unfolding can be obtained with this technique. Many proteins in the living cell must face mechanical stress. Moreover, force spectroscopy can be used to investigate the enzymatic activity of proteins involved in DNA replication, transcription, organization and repair. This is achieved by measuring the position of a bead attached to a DNA-protein complex stalled on a DNA tether that has one end attached to a surface, while keeping the force constant. This technique has been used, for example, to study transcription elongation inhibition by Klebsidin and Acinetodin. The other main application of force spectroscopy is the study of the mechanical resistance of chemical bonds. In this case, generally the tip is functionalized with a ligand that binds to another molecule bound to the surface. The tip is pushed onto the surface, allowing contact between the two molecules, and then retracted until the newly formed bond breaks up. The force at which the bond breaks up is measured. Since mechanical breaking is a kinetic, stochastic process, the breaking force is not an absolute parameter, but a function of both temperature and pulling speed. Low temperatures and high pulling speeds correspond to higher breaking forces. By careful analysis of the breaking force at various pulling speeds, it is possible to map the energy landscape of the chemical bond under mechanical force. 
This is leading to interesting results in the study of antibody-antigen, protein-protein, and protein-living cell interactions, as well as catch bonds. Recently this technique has been used in cell biology to measure the aggregative stochastic forces created by motor proteins that influence the motion of particles within the cytoplasm. In this way, force spectrum microscopy may be used to better understand the many cellular processes that require the motion of particles within the cytoplasm. References Further reading Spectroscopy Scanning probe microscopy
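The loading-rate analysis described above (the Bell-type model, in which the most probable rupture force grows linearly with the logarithm of the loading rate) can be sketched numerically. The following Python snippet uses synthetic, made-up rupture forces, and the thermal energy value is an assumption; it only illustrates how the barrier distance $x_\beta$ would be extracted from the slope of a measured force spectrum.

```python
import numpy as np

kBT = 4.11e-21  # J, thermal energy near room temperature (assumed)

# Illustrative (synthetic) most-probable rupture forces measured at
# several loading rates r (N/s); real values would come from force-curve histograms.
loading_rates  = np.array([1e-10, 1e-9, 1e-8, 1e-7])          # N/s
rupture_forces = np.array([35e-12, 45e-12, 55e-12, 65e-12])   # N

# Bell-type model: F* = (kBT / x_beta) * ln(r) + const, so the slope of
# F* versus ln(r) gives kBT / x_beta.
slope, intercept = np.polyfit(np.log(loading_rates), rupture_forces, 1)
x_beta = kBT / slope
print(f"slope = {slope:.2e} N, x_beta ≈ {x_beta * 1e9:.2f} nm")
```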
Force spectroscopy
[ "Physics", "Chemistry", "Materials_science" ]
2,210
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Scanning probe microscopy", "Microscopy", "Nanotechnology", "Spectroscopy" ]
162,435
https://en.wikipedia.org/wiki/Mind%20uploading
Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind. Substantial mainstream research in related areas is being conducted in neuroscience and computer science, including animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are under active development; however, they will admit that others are, as yet, very speculative, but say they are still in the realm of engineering possibility. Mind uploading may potentially be accomplished by either of two methods: copy-and-upload or copy-and-delete by gradual replacement of neurons (which can be considered as a gradual destructive uploading), until the original organic brain no longer exists and a computer program emulating the brain takes control of the body. In the case of the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then by storing and copying that information state into a computer system or another computational device. The biological brain may not survive the copying process or may be deliberately destroyed during it in some variants of uploading. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer inside—or either connected to or remotely controlled by—a (not necessarily humanoid) robot, biological, or cybernetic body. Among some futurists and within part of transhumanist movement, mind uploading is treated as an important proposed life extension or immortality technology (known as "digital immortality"). Some believe mind uploading is humanity's current best option for preserving the identity of the species, as opposed to cryonics. Another aim of mind uploading is to provide a permanent backup to our "mind-file", to enable interstellar space travel, and a means for human culture to survive a global disaster by making a functional copy of a human society in a computing device. Whole-brain emulation is discussed by some futurists as a "logical endpoint" of the topical computational neuroscience and neuroinformatics fields, both about brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI (artificial general intelligence) and to at least weak superintelligence. Another approach is seed AI, which would not be based on existing brains. Computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity, meaning a sudden time constant decrease in the exponential development of technology. Mind uploading is a central conceptual feature of numerous science fiction novels, films, and games. Overview Many neuroscientists believe that the human mind is largely an emergent property of the information processing of its neuronal network. 
Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum: Eminent computer scientists and neuroscientists have predicted that advanced computers will be capable of thought and even attain consciousness, including Koch and Tononi, Douglas Hofstadter, Jeff Hawkins, Marvin Minsky, Randal A. Koene, and Rodolfo Llinás. Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations. Using these models, some have estimated that uploading may become possible within decades if trends such as Moore's law continue. As of December 2022, this kind of technology is almost entirely theoretical. Theoretical benefits and applications "Immortality" or backup In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby—from a purely mechanistic perspective—reducing or eliminating "mortality risk" of such information. This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington. This questions the concept of identity. From the perspective of the biological brain, the simulated brain may just be a copy, even if it is conscious and has an indistinguishable character. As such, the original biological being, before the uploading, might consider the digital twin to be a new and independent being rather than the future self. Space exploration An "uploaded astronaut" could be used instead of a "live" astronaut in human spaceflight, avoiding the perils of zero gravity, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances. Mind editing While some researchers believe editing human brains to be physically possible in theory, for example by performing neurosurgery with nanobots, it would require particularly advanced technology. Editing an uploaded mind would be much easier, as long as the exact edits to be made are known. This would facilitate cognitive enhancement and the precise control of the well-being, motivations or personality of the emulated beings. Speed Although the number of neuronal connections in the human brain is very significant (around 100 trillions), the frequency of activation of biological neurons is limited to around 200 Hz, whereas electronic hardware can easily operate at multiple GHz. With sufficient hardware parallelism, a simulated brain could thus in theory be made to run faster than a biological brain. Uploaded beings may therefore not only be more efficient, but also supposedly have a faster rate of subjective experience than biological brains (e.g. experiencing an hour of lifetime in a single second of real time). Relevant technologies and techniques The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition, rather than data maintenance of the brain. 
A set of approaches known as loosely coupled off-loading (LCOL) may be used in the attempt to characterize and copy the mental contents of a brain. The LCOL approach may take advantage of self-reports, life-logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may focus on the specific resolution and morphology of neurons and on the spike times of neurons, that is, the times at which neurons produce action potential responses. Computational complexity Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious. Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron. Required computational capacity strongly depends on the chosen level of simulation model scale: Scanning and mapping scale of an individual When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain including synaptic details is possible as of 2010. However, if short-term memory and working memory include prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the time of brain scanning. A full brain map has been estimated to occupy less than 2 × 10¹⁶ bytes (20,000 TB) and would store the addresses of the connected neurons, the synapse type and the synapse "weight" for each of the brain's 10¹⁵ synapses. However, the biological complexities of true brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind. Serial sectioning A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, which for frozen samples at nano-scale requires a cryo-ultramicrotome, thus capturing the structure of the neurons and their interconnections. The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is underway to automate the collection and microscopy of serial sections. The scans would then be analyzed, and a model of the neural net recreated in the system into which the mind was being uploaded. There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique. 
However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of 'mind' is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity. Brain imaging It may be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. Today, fMRI is often combined with MEG for creating functional maps of human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information needed for such a scan, important recent and future developments are predicted to substantially improve both spatial and temporal resolutions of existing technologies. Brain simulation Ongoing work in the field of brain simulation includes partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees. The Blue Brain Project, initiated by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne in Switzerland, is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry, in order to accelerate experimental research on the brain. In 2009, after a successful simulation of part of a rat brain, the director Henry Markram claimed that "A detailed, functional artificial human brain can be built within the next 10 years". In 2013, Markram became the director of the new decade-long Human Brain Project. But less than two years into it, the project was recognized to be mismanaged and its claims overblown, and Markram was asked to step down. Issues Philosophical issues The main philosophical problem faced by "mind uploading" or mind copying is the hard problem of consciousness: the difficulty of explaining how a physical entity such as a human can have qualia, phenomenal consciousness, or subjective experience. Many philosophical responses to the hard problem entail that mind uploading is fundamentally or practically impossible, while others are compatible with at least some formulations of mind uploading. Many proponents of mind uploading defend the possibility of mind uploading by recourse to physicalism, which includes the philosophical belief that consciousness is an emergent feature that arises from large neural network high-level patterns of organization, which could be realized in other processing devices. Mind uploading relies on the idea that the human mind (the "self" and the long-term memory) reduces to the current neural network paths and the weights of synapses in the brain. 
In contrast, many dualistic and idealistic accounts seek to avoid the hard problem of consciousness by explaining it in terms of immaterial (and presumably inaccessible) substances like soul, which would pose a fundamental or at least practical challenge to the feasibility of artificial consciousness in general. Assuming physicalism is true, the mind can be defined as the information state of the brain, so it is immaterial only in the same sense as the information content of a data file, or the state of software residing in a computer's memory. In this case, data specifying the information state of the neural network could be captured and copied as a "computer file" from the brain and re-implemented into a different physical form. This is not to deny that minds are richly adapted to their substrates. An analogy to mind uploading is to copy the information state of a computer program from the memory of the computer on which it is executing to another computer and then continue its execution on the second computer. The second computer may perhaps have different hardware architecture, but it emulates the hardware of the first computer. These philosophical issues have a long history. In 1775, Thomas Reid wrote: “I would be glad to know... whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being.” Although the name of the hard problem of consciousness was coined in 1994, debate surrounding the problem itself is ancient. Augustine of Hippo argued against physicalist "Academians" in the 5th century, writing that consciousness cannot be an illusion because only a conscious being can be deceived or experience an illusion. René Descartes, the founder of mind-body dualism, made a similar objection in the 17th century, coining the popular phrase "Je pense, donc je suis" ("I think, therefore I am"). Although physicalism is known to have been proposed in ancient times, Thomas Huxley was among the first to describe mental experience as merely an epiphenomenon of interactions within the brain, having no causal power of its own and being entirely downstream from the brain's activity. A considerable portion of transhumanists and singularitarians place great hope in the belief that they may become immortal, by creating one or many non-biological functional copies of their brains, thereby leaving their "biological shell". However, the philosopher and transhumanist Susan Schneider claims that at best, uploading would create a copy of the original person's mind. Schneider agrees that consciousness has a computational basis, but this does not mean we can upload and survive. According to her views, "uploading" would probably result in the death of the original person's brain, while only outside observers can maintain the illusion of the original person still being alive. For it is implausible to think that one's consciousness would leave one's brain and travel to a remote location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here, and elsewhere. At best, a copy of the original mind is created. 
Neural correlates of consciousness, a sub-branch of neuroscience, states that consciousness may be thought of as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system. Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading, and Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position. Some have also asserted that consciousness is a part of an extra-biological system that is yet to be discovered; therefore it cannot be fully understood under the present constraints of neurobiology. Without the transference of consciousness, true mind-upload or perpetual immortality cannot be practically achieved. Another potential consequence of mind uploading is that the decision to "upload" may then create a mindless symbol manipulator instead of a conscious mind (see philosophical zombie). If a computer could process sensory inputs to generate the same outputs that a human mind does (speech, muscle movements, etc.) without necessarily having any experience of consciousness, then it may be impossible to determine whether the uploaded mind is truly conscious, and not merely an automaton that externally behaves the way a human would. Thought experiments like the Chinese room raise fundamental questions about mind uploading: If an upload displays behaviors that are highly indicative of consciousness, or even verbally insists that it is conscious, does that prove it is conscious? There might also be an absolute upper limit in processing speed, above which consciousness cannot be sustained. The subjectivity of consciousness precludes a definitive answer to this question. Numerous scientists, including Ray Kurzweil, believe that whether a separate entity is conscious is impossible to know with confidence, since consciousness is inherently subjective (see also: solipsism). Regardless, some scientists believe consciousness is the consequence of computational processes which are substrate-neutral. Still other scientists, prominent among them Roger Penrose, believe consciousness may emerge from some form of quantum computation that is dependent on the organic substrate (see quantum mind). In light of uncertainty about whether mind uploads are conscious, Sandberg proposes a cautious approach: Ethical and legal implications The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness. The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals before moving on to humans. Sometimes the animals would just need to be euthanized in order to extract, slice, and scan their brains, but sometimes behavioral and in vivo measures would be required, which might cause pain to living animals. In addition, the resulting animal emulations themselves might suffer, depending on one's views about consciousness. Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the "fading qualia" thought experiment of David Chalmers. 
He then concludes: “If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations.” Chalmers himself has argued that such virtual realities would be genuine realities. However, if mind uploading occurs and the uploads are not conscious, there may be a significant opportunity cost. In the book Superintelligence, Nick Bostrom expresses concern that we could build a "Disneyland without children." It might help reduce emulation suffering to develop virtual equivalents of anaesthesia, as well as to omit processing related to pain and/or consciousness. However, some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering. Questions also arise regarding the moral status of partial brain emulations, as well as creating neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently. Brain emulations could be erased by computer viruses or malware, without the need to destroy the underlying hardware. This may make assassination easier than for physical humans. The attacker might take the computing power for its own use. Many questions arise regarding the legal personhood of emulations. Would they be given the rights of biological humans? If a person makes an emulated copy of themselves and then dies, does the emulation inherit their property and official positions? Could the emulation ask to "pull the plug" when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of "rehabilitation"? Could an upload have marriage and child-care rights? If simulated minds would come true and if they were assigned rights of their own, it may be difficult to ensure the protection of "digital human rights". For example, social science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions. Research led by cognitive scientist Michael Laakasuo has shown that attitudes towards mind uploading are predicted by an individual's belief in an afterlife; the existence of mind uploading technology may threaten religious and spiritual notions of immortality and divinity. Political and economic implications Emulations might be preceded by a technological arms race driven by first-strike advantages. Their emergence and existence may lead to increased risk of war, including inequality, power struggles, strong loyalty and willingness to die among emulations, and new forms of racism, xenophobia, and religious prejudice. If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. It is possible that humans would react violently against the growing power of emulations, especially if that depresses human wages. Emulations may not trust each other, and even well-intentioned defensive measures might be interpreted as offense. 
The book The Age of Em by Robin Hanson poses many hypotheses on the nature of a society of mind uploads, including that the most common minds would be copies of adults with personalities conducive to long hours of productive specialized work. Emulation timelines and AI risk Kenneth D. Miller, a professor of neuroscience at Columbia University and a co-director of the Center for Theoretical Neuroscience, raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is in itself a formidable task, but it is far from being sufficient. Operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons; therefore, capturing them in a single "frozen" state may prove insufficient. In addition, the nature of these signals may require modeling at the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of the "absolute" duplication of an individual mind is insurmountable for the nearest hundreds of years. There are very few feasible technologies that humans have refrained from developing. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, and logically their development will continue into the future. We may also have brain emulations for a brief but significant period on the way to non-emulation based human-level AI. Assuming that emulation technology will arrive, a question becomes whether we should accelerate or slow its advance. Arguments for speeding up brain-emulation research: If neuroscience is the bottleneck on brain emulation rather than computing power, emulation advances may be more erratic and unpredictable based on when new scientific discoveries happen. Limited computing power would mean the first emulations would run slower and so would be easier to adapt to, and there would be more time for the technology to transition through society. Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production, which could increase the "computing overhang" from excess hardware relative to neuroscience. If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks. Arguments for slowing brain-emulation research: Greater investment in brain emulation and associated cognitive science might enhance the ability of artificial intelligence (AI) researchers to create "neuromorphic" (brain-inspired) algorithms, such as neural networks, reinforcement learning, and hierarchical perception. This could accelerate risks from uncontrolled AI. Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation. This was based on the idea that brain emulation would require understanding of the workings and functions of the different brain components, along with the technological know-how to emulate neurons. To counter this idea, reverse engineering the Microsoft Windows code base is already hard, so reverse engineering the brain would likely be much harder. By a very narrow margin, the participants on balance leaned toward the view that accelerating brain emulation would increase expected AI risk. 
Waiting might give society more time to think about the consequences of brain emulation and develop institutions to improve cooperation. Emulation research would also accelerate neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and capability for psychological manipulation. Emulations might be easier to control than de novo AI because: Human abilities, behavioral tendencies, and vulnerabilities are more thoroughly understood, thus control measures might be more intuitive and easier to plan. Emulations could more easily inherit human motivations. Emulations are harder to manipulate than de novo AI, because brains are messy and complicated; this could reduce risks of their rapid takeoff. Also, emulations may be bulkier and require more hardware than AI, which would also slow the speed of a transition. Unlike AI, an emulation would not be able to rapidly expand beyond the size of a human brain. Emulations running at digital speeds would have less intelligence differential vis-à-vis AI and so might more easily control AI. As counterpoint to these considerations, Bostrom notes some downsides: Even if we better understand human behavior, the evolution of emulation behavior under self-improvement might be much less predictable than the evolution of safe de novo AI under self-improvement. Emulations may not inherit all human motivations. Perhaps they would inherit our darker motivations or would behave abnormally in the unfamiliar environment of cyberspace. Even if there is a slow takeoff toward emulations, there would still be a second transition to de novo AI later on. Two intelligence explosions may mean more total risk. Because of the postulated difficulties that a whole brain emulation-generated superintelligence would pose for the control problem, computer scientist Stuart J. Russell in his book Human Compatible rejects creating one, simply calling it "so obviously a bad idea". Advocates In 1979, Hans Moravec (1979) described and endorsed mind uploading using a brain surgeon. Moravec used a similar description in 1988, calling it "transmigration". Ray Kurzweil, director of engineering at Google, has long predicted that people will be able to "upload" their entire brains to computers and become "digitally immortal" by 2045. Kurzweil made this claim for many years, e.g. during his speech in 2013 at the Global Futures 2045 International Congress in New York, which claims to subscribe to a similar set of beliefs. Mind uploading has also been advocated by a number of researchers in neuroscience and artificial intelligence, such as Marvin Minsky. In 1993, Joe Strout created a small web site called the Mind Uploading Home Page, and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives. Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore's law. Michio Kaku, in collaboration with Science, hosted a documentary, Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. 
Episode four, titled "How to Teleport", mentions that mind uploading via techniques such as quantum entanglement and whole brain emulation using an advanced MRI machine may enable people to be transported vast distances at near light-speed. The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle's Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the "artificial life phenotype". Doyle's vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy. In fiction Mind uploading—transferring an individual's personality to a computer—appears in several works of science fiction. It is distinct from the concept of transferring a consciousness from one human body to another. It is sometimes applied to a single person and other times to an entire society. Recurring themes in these stories include whether the computerized mind is truly conscious, and if so, whether identity is preserved. It is a common feature of the cyberpunk subgenre, sometimes taking the form of digital immortality. See also BRAIN Initiative Brain transplant Brain-reading Cyborg Cylon (reimagining) Democratic transhumanism Human Brain Project Isolated brain Neuralink Open individualism Posthumanization Robotoid Ship of Theseus—thought experiment asking if objects having all parts replaced fundamentally remain the same object Simulation hypothesis Technologically enabled telepathy Teletransportation paradox Thought recording and reproduction device Turing test The Future of Work and Death Vertiginous question Chinese room 2045 Initiative Dmitry Itskov Miguel Nicolelis Neural network (machine learning) References Fictional technology Hypothetical technology Immortality Neurotechnology Posthumanism Transhumanism
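The round numbers quoted earlier in this article — on the order of 10¹⁴ neuronal connections firing at up to roughly 200 Hz (the "Speed" section) and about 10¹⁵ synapses stored in some 2 × 10¹⁶ bytes (the "Computational complexity" section) — can be combined into back-of-the-envelope arithmetic. The Python sketch below merely restates those figures; the 20-byte-per-synapse record size is an assumption chosen to reproduce the article's storage estimate, not an established value.

```python
# Back-of-the-envelope arithmetic using the round numbers quoted in the article.
connections        = 1e14   # ~100 trillion neuronal connections ("Speed" section)
max_firing_rate_hz = 200    # rough upper bound on biological neuron firing rate

# Naive upper bound on synaptic events per second a whole-brain simulation would have to handle.
events_per_second = connections * max_firing_rate_hz
print(f"~{events_per_second:.0e} synaptic events/s")  # ~2e16

# Storage for a static connectome map ("Computational complexity" section).
synapses          = 1e15
bytes_per_synapse = 20      # assumed record: target address, synapse type, weight
total_bytes = synapses * bytes_per_synapse
print(f"~{total_bytes:.0e} bytes (~{total_bytes / 1e12:,.0f} TB)")  # ~2e16 bytes, about 20,000 TB
```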
Mind uploading
[ "Technology", "Engineering", "Biology" ]
6,422
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
162,438
https://en.wikipedia.org/wiki/Queensboro%20Bridge
The Queensboro Bridge, officially the Ed Koch Queensboro Bridge, is a cantilever bridge over the East River in New York City. Completed in 1909, it connects the Long Island City neighborhood in the borough of Queens with the East Midtown and Upper East Side neighborhoods in Manhattan, passing over Roosevelt Island. Because the western end of the bridge connects to 59th Street in Manhattan, it is also called the 59th Street Bridge. The bridge consists of five steel spans measuring long; including approaches, its total length is . The Queensboro Bridge carries New York State Route 25 (NY 25), which terminates at the bridge's western end in Manhattan. The bridge has two levels: an upper level with a pair of two-lane roadways, and a lower level with five vehicular lanes and a walkway/bike lane. The western leg of the Queensboro Bridge is flanked on its northern side by the Roosevelt Island Tramway. The bridge is one of four vehicular bridges directly connecting Manhattan Island and Long Island, along with the Williamsburg, Manhattan, and Brooklyn bridges to the south. It lies along the courses of the New York City Marathon and the Five Boro Bike Tour. Serious proposals for a bridge linking Manhattan to Long Island City were first made as early as 1838, but various 19th-century plans to erect such a bridge, including two proposals by Queens doctor Thomas Rainey, never came to fruition. After the creation of the City of Greater New York in 1898, plans for a city-operated bridge were finalized in 1901. The bridge opened for public use on March 30, 1909, and was initially used by pedestrians, horse-drawn and motor vehicles, elevated trains, and trolleys. Elevated service ceased in 1942, followed by trolley service in 1957. The upper-level roadways were built in the early 1930s and the late 1950s. Designated as a New York City landmark in 1973, the bridge was renovated extensively from the late 1970s to the 1990s. The bridge was officially renamed in 2011 in honor of former New York City mayor Ed Koch, and another renovation occurred in the early 2020s. Name The Queensboro Bridge was originally named for the borough of Queens and was the third bridge across the East River to be named after a New York City borough, after the Brooklyn Bridge and the Manhattan Bridge. By the late 20th century, the Queensboro Bridge was also known as the 59th Street Bridge because its Manhattan end is located between 59th and 60th streets. This name caused controversy among Queens residents who felt that the 59th Street Bridge name did not honor the borough of Queens. In December 2010, mayor Michael Bloomberg announced that the bridge would be renamed in honor of former mayor Ed Koch; the bridge had been renovated extensively in the 1980s, when he was mayor. The Ed Koch Queensboro Bridge name was formalized on March 23, 2011. The renaming was unpopular among Queens residents and business leaders; The Los Angeles Times wrote that Queens residents found the renaming disrespectful to their borough. The general public continued to call it the Queensboro Bridge years after the renaming. New York City Council member Peter Vallone Jr. of Queens proposed removing Koch's name from the bridge in 2013. Description The Queensboro Bridge is a two-level double cantilever bridge, with separate cantilevered spans over channels on each side of Roosevelt Island joined by a fixed central truss. In all, it has five steel truss spans, as well as approach viaducts on either side. 
The total length of the five spans, between the anchorages on the Manhattan and Queens sides, are approximately , of which are above water. In addition, there is a approach viaduct in Manhattan and a approach viaduct in Queens, connecting the anchorages on either side to street level. This brings the bridge's total length to . The bridge carries New York State Route 25, which ends at the span's western terminus. Spans The lengths of the steel spans are as follows, from the westernmost span to the easternmost: The bridge was intended to carry a dead load of . Each span includes two parallel lines of trusses, one each on the north and south sides of the bridge; the centers of these trusses are spaced apart. The bottom chord of each set of trusses is composed of box girders, while the top chord is composed of eyebars measuring deep. The trusses range in height from between the bottom and top chords; the steel towers atop each pier measure tall. Unlike other large bridges, the trusses are not suspended; instead, the spans are directly connected to each other. In addition, there are transverse floor beams, which protrude from the trusses on either side of the deck. Atop the bridge's topmost chords were originally galvanized steel ropes, which acted as handrails for bridge painters. Five hand-operated scaffolds were also placed on the bridge. The spans are cantilevered from steel towers that rise above four central piers. Each cantilevered section measures long. The two spans above the East River's channels are composed of cantilever arms, which extend outward from the towers on either side of the channel. Each pair of cantilever arms meets at a set of bents above the middle of each channel. The bents allowed the cantilever arms to move horizontally due to temperature changes, and it allowed structural loads to be distributed between the two arms. The bridge uses nickel-steel bars that were intended to be 40 to 50 percent stronger than regular structural-steel bars of the same weight. The beams could withstand loads of up to each, while the nickel-steel eyebars were intended to withstand loads of up to . The decks themselves were designed to carry as much as . The steel spans between the anchorages weigh a total of and have a maximum grade of 3.41 percent. The spans were intended to be at least above mean high water; the bridge reaches a maximum height of or above high mean water. Until it was surpassed by the Quebec Bridge in 1917, the span between Manhattan and Roosevelt Island was the longest cantilever in North America; it was also the second-longest worldwide, after the Forth Bridge in Scotland. Levels The upper level is wide. The upper level originally contained two pedestrian walkways and two elevated railway tracks, which connected a spur of the IRT Second Avenue elevated line in Manhattan to the Queensboro Plaza station in Queens. There were also provisions for two additional tracks between the trusses (taking up the space occupied by the walkways), as well as walkways cantilevered outside the trusses. , the upper level has four lanes of automobile traffic, consisting of a pair of two-lane roadways. Although both roadways end at Thomson Avenue in Queens, they diverge in Manhattan. The two northern lanes, normally used by westbound traffic, lead to 62nd and 63rd Streets. The two southern lanes, normally used by eastbound traffic, lead to 57th and 58th Streets. 
The southern roadway is used as a westbound high-occupancy vehicle lane during morning rush hours, when all eastbound traffic uses the lower level. The lower level is wide and is divided into three sections: a northern, central, and southern roadway. The center roadway is wide and was originally composed of a general-purpose road in the middle, flanked by a pair of trolley tracks. The northern and southern lower-level roadways each had one additional trolley track, for a total of four trolley tracks. The central roadway originally had a wood block pavement. , the lower level has five vehicular lanes: two in each direction within the center roadway and one eastbound lane on the southern roadway. The northern lower-level roadway was converted into a permanent pedestrian walk and bicycle path in September 2000. Piers The five spans are supported by six piers; the westernmost and easternmost piers act as anchorages. Each of the piers consists of two columns supported by an elliptical arch measuring wide. The piers each measure across at their bases (including the arched openings). They range from tall, with the piers on Roosevelt Island being the tallest. The foundations of the Roosevelt Island piers are shallow, since there is bedrock just below the surface of the island. By comparison, the piers in Manhattan and Queens extend over deep. The piers are faced with Maine granite and are attached to a backing made of concrete and Mohawk Valley limestone. In total, workers used of limestone, of concrete, and of granite to build the bridges. Above the piers rise the bridge's towers, which contain domed decorations and Art Nouveau-inspired spires. The towers extend above the bridge's lower chords. The tops of the towers are made of 225 granite blocks, which were part of the original design but not added until 1937. The spires were removed at some point in the 20th century after deteriorating. The two anchorages, one each at the Manhattan and Queens ends, are about inland of the shore. Each anchorage was built with spiral staircases and elevators. The anchorage in Manhattan is between First Avenue and York Avenue, while the Queens anchorage is near Vernon Boulevard. The anchorages are topped by small rooms with arched openings. Approaches The approaches on both sides of the bridge are composed of stiffened steel frames, but the Manhattan approach is the only one that is ornately decorated. The Queens approach consists of a series of elevated concrete-and-steel ramps, which were never formally decorated. Manhattan approach The Manhattan approach to the bridge is supported on a series of Guastavino tile vaults. The vaults are composed of three layers of tiles, which support themselves and measure thick in total. A layer of glazing and small lights were installed in 1918. The space under the Manhattan approach measures across. It is divided into a series of tiled vaults measuring across. As the bridge ascends to the east, the floor slopes down and the ceiling slopes up; as such, the ceiling measures high at its highest point. The Guastavino tiles cover the steel superstructure of the approach ramp. Originally, the vaults were intended as storage space. From the bridge's 1909 opening, the space under the Manhattan approach was used as a food market. The food market was renovated in 1933 and was later converted to a sign shop and garage. By the 1970s, the space under the Manhattan approach was used by the Department of Highways. 
New York City Center's Cinematheque leased space under the Queensboro Bridge in 1973, although the Cinematheque never opened due to a lack of money. A developer proposed the open-air Bridgemarket under the bridge in 1976, which local residents significantly opposed, and Bridgemarket was not approved until 1996. Bridgemarket, covering , opened in 1999 at a cost of $24 million. The store operated until the end of 2015. In February 2020, it was announced that Trader Joe's was planning to open a supermarket in this space, which opened in December 2021. There is a massive bronze lamppost at the end of the Manhattan approach, near the intersection of Second Avenue and 59th Street. Formerly, there was a second lamppost near 60th Street. Both lampposts consisted of thick piers, which were topped by four stanchions (each with a globe-shaped lamp) and a larger spherical lamp in the center. Each lamppost had five tiers of decorations, and the sides of each lamppost were inscribed with the names of four of the city's five boroughs. The lampposts were both removed in 1974 when the Roosevelt Island Tramway was developed, but the 59th Street lamppost was restored two years later. Parts of the other lamppost were found in a Queens warehouse in 2012 and rededicated on Roosevelt Island in 2015. Use during races The Queensboro Bridge has been part of the New York City Marathon course since 1976, when the marathon course traversed all five boroughs for the first time. During the marathon, which happens every November, runners cross the Queensboro Bridge westbound toward Manhattan, then pass under the bridge at First Avenue. The bridge is approximately from the beginning of the course on the Verrazzano-Narrows Bridge. The deck of the bridge was initially covered with carpeting for the 1976 marathon; the carpeting was not used after 1977, when the bridge was repaved. The bridge is also part of the course of the Five Boro Bike Tour, which occurs every April; contestants traverse the bridge eastbound toward Queens. The Five Boro Bike Tour uses the northern upper-level roadway. Development Planning Prior to the construction of the Queensboro Bridge, two ferries connected modern-day Manhattan and Queens, neither of which was near the modern-day bridge. One such ferry connected Borden Avenue in Hunters Point, Queens, to 34th Street in Kips Bay, Manhattan, while the other ferry connected Astoria Boulevard in Astoria, Queens, with 92nd Street on Manhattan's Upper East Side. Benjamin Henry Latrobe first proposed a masonry bridge between Manhattan and Queens in 1804. The Family Magazine published an article in 1833, suggesting a bridge between Manhattan and Queens over Roosevelt Island (which then was known as Blackwell's Island). An architect named R. Graves proposed a three-span suspension bridge linking Manhattan to Long Island City, Queens, in the late 1830s. John A. Roebling, who would later design the Brooklyn Bridge, proposed suspension bridges at the site in 1847 and 1856. Rainey attempts An attempt to finance a fixed East River crossing was made in 1867 by wealthy Long Island City residents, who established the New-York and Long Island Bridge Company to erect the crossing. This group was led by Thomas Rainey, a doctor from Astoria. The crossing would have connected 77th Street in Manhattan and 34th Avenue in Queens, passing over the center of Blackwell's Island. 
The New-York and Long Island Bridge Company appointed commissioners for the proposed bridge in 1875 and hosted an architectural design competition for the bridge in 1876. A cantilever design by Charles Macdonald and the Delaware Bridge Company was selected in early 1877, but no action had been taken by 1878, a year after the plans were approved. Media sources reported in May 1881 that work was to commence shortly, and a cofferdam for one of the bridge's piers was installed that month. By the time the United States Congress approved plans for the bridge in 1887, Rainey's bridge had been relocated southward. A state justice found in 1890 that the bridge's charter was invalid. Nonetheless, Rainey's efforts to build the bridge made his name "a household word in western Long Island". By the 1890s, Long Island Rail Road (LIRR) president Austin Corbin had merged Rainey's plan and a competing plan. Rainey resubmitted plans for the bridge in early 1890. The state legislature gave Rainey a charter for the Blackwell's Island Bridge in mid-1892. Corbin received an option to buy out Rainey's charter, and a groundbreaking ceremony for the bridge was held at 64th Street in Manhattan on August 19, 1894. The span was planned as a cantilever bridge carrying four LIRR tracks, as well as roadways and footpaths. By that November, two cofferdams were being sunk for the bridge's piers. Laborers began constructing foundations for another pier on the eastern shore of Blackwell's Island in April 1895. Stone and steel contracts had been awarded by the following year, and two of the piers had been built above the water line. Construction was halted after the piers were built, first due to lawsuits, then because of Corbin's death.
Post-unification approval
Manhattan and Queens were merged into the City of Greater New York in 1898, spurring alternate plans for a bridge between Manhattan and Queens. New York Assembly members proposed separate bills in early 1898 to revoke Rainey's franchise for the bridge and to have the city purchase Rainey's franchise. Rainey vowed not to sell his franchise, but the state legislature passed a bill in March 1900 allowing the city to take over Rainey's franchise. Although Rainey himself eventually consented to the city's takeover of his franchise, mayor Robert Anderson Van Wyck wanted to build a new bridge in a slightly different location. A New York state senator introduced legislation in early 1897 to permit the development of a bridge between Manhattan and Queens; the unified city government was to pay for the bridge. At a meeting in Long Island City in February 1898, a group of men from both boroughs were appointed to consider plans for the bridge. By late 1898, Queens residents were threatening not to vote for the Democratic Party (of which Van Wyck was part) if the construction of the bridge did not begin shortly. The city allocated $100,000 for preliminary surveys and borings for the Blackwell's Island Bridge, as well as the Williamsburg Bridge between Manhattan and Brooklyn, at the end of 1898. In early 1899, R. S. Buck published plans for an asymmetrical cantilever bridge connecting Queens with Manhattan; the early plans called for a utilitarian design. The New York City Bridge Department's chief engineer finalized plans for the bridge in October 1899. Coler drew up a plan for a tunnel between Queens and Manhattan via Blackwell's Island; he claimed that the tunnel would cost $1.9 million, while the bridge would cost $13 million. 
The Board of Aldermen appropriated $1 million for the bridge at the end of 1899. State assemblyman Edward C. Brennan proposed a bill in January 1900 to appoint commissioners for a bridge or tunnel between Manhattan and Queens. The city's Municipal Assembly initially failed to authorize the bridge's construction due to opposition from Tammany Hall politicians. The bridge was approved that November; the bridge was relocated southward so its Manhattan end was near 60th Street. The United States Department of War, which had to certify the plans for the bridge before any work could begin, approved the span's construction in February 1901. Initially, the crossing was referred to as East River Bridge No. 4; the Board of Aldermen voted to officially rename it the Blackwell's Island Bridge in March 1902. Construction Pier construction and proposed modifications R. S. Buck and his assistants were directed to prepare plans for the sites of the bridge's piers, anchorages, and foundations. The Department of Bridges received bids for the foundations in June 1901, with Ryan & Parker as the low bidder. Groundbreaking took place that September. After Seth Low was elected as the city's mayor in late 1901, he promised that work would continue, even though the city's new bridge commissioner, Gustav Lindenthal, wanted to temporarily halt construction. Lindenthal narrowed the bridge from . The modifications would allow the city to save $850,000 while allowing the city to build toll booths, as well as stairs and elevators to Blackwell's Island, within these piers. To compensate for the reduced width, a upper deck would be built. By January 1902, only $42,000 had been spent on the project. In June 1902, a subcommittee of the New York City Board of Estimate requested another $5 million for construction. The same month, Lindenthal ordered Ryan & Parker to stop working on the bridge, but the firm refused to comply with his order, saying they would lose large amounts of money if work were halted. Lindenthal submitted the modified plans to the Municipal Art Society for approval but withdrew them that July, and he also allowed Ryan & Parker to continue constructing the piers. Lindenthal decided to significantly modify his plans. Queens residents strongly protested any design changes, and Lindenthal finally agreed not to change the bridge's width. By mid-1902, Lindenthal was requesting an additional $3.78 million for the bridge's completion. In October, a special committee recommended that Lindenthal's plans be rejected, saying that it would cost the city more if construction were halted and that two other East River bridges were also about 120 feet wide. City comptroller Edward M. Grout, meanwhile, wanted workers to divert their efforts to the Manhattan Bridge. Low appointed a group of engineering experts that November to review Lindenthal's revised plans. The experts concluded that neither the original proposal nor Lindenthal's revision were sufficient and suggested that the bridge instead be wide. The approaches retained their original 120-foot width, as did the piers themselves. Henry Hornbostel was directed in early 1903 to prepare drawings of the bridge's towers and roadway, though no architectural contract had been awarded yet. By mid-1903, the piers were two-thirds completed. The bedrock under the Queens side of the bridge was very close to the ground, so work on the piers in Queens was able to proceed more rapidly than work on the other piers. 
The Board of Estimate appropriated an additional $3.86 million for the bridge's construction in July 1903. Low rejected a plan for widening 59th Street to serve as the bridge's Manhattan approach, and Queens residents disagreed over plans for the Queens approach. The final plans called for the Queens approach to end at Crescent Street; a new boulevard, Queens Plaza, would connect the approach to Jackson Avenue and Queens Boulevard. All of the piers were finished by May 1904, and city officials inspected the bridge's piers that July.
Initial work on superstructure
The Pennsylvania Steel Company submitted a bid to construct the bridge's superstructure for $5.3 million in September 1903; Lindenthal rejected the bid, suspecting that the company was engaging in collusion. The city requested further bids for the superstructure the next month, but an injunction prevented Lindenthal from awarding a steel contract. The Pennsylvania Steel Company received the steel contract that November, and the Art Commission approved plans for the bridge's spires the same month. Just before Lindenthal left office, the city received bids for four elevator towers and two powerhouses for the bridge at the end of 1903; the powerhouses were to supply power to the elevators. These elevators were to be positioned within the ends of the piers, which would make it impossible to widen the piers at a later date. City corrections commissioner Francis J. Lantry opposed the elevators because they would allow prisoners on Blackwell's Island to escape. In early 1904, Lindenthal's successor George Best canceled plans for ornamentation on the bridge. The Pennsylvania Steel Company was obligated to complete the superstructure by the beginning of 1907, and it submitted drawings for the construction of the superstructure in mid-1904. Later that year, Best postponed construction of the bridge's elevators and powerhouses, and the city authorized another $400,000 for the bridge's construction. Local merchants protested the postponement of the elevators, saying it would not save money. Before work on the superstructure began, workers erected seventeen temporary bents between the two piers on Blackwell's Island. When the bents were almost complete, ironworkers organized a sympathetic strike in June 1905, in solidarity with striking workers at the Pennsylvania Steel Company's Harrisburg factory. The work stoppage lasted a month, during which workers were not allowed to complete steel castings for the bridge. By that August, over of steel castings had been completed, and another of castings were being fabricated. There was not enough material to begin constructing the superstructure, and there were so few workers on site that a local group estimated the bridge would not be completed for fifty years. Work on the superstructure began later in 1905. By that November, workers had erected part of a steel tower atop the pier on the western side of Blackwell's Island; at the time, the media anticipated that of steel would be erected every month. The first steel span, that above Blackwell's Island, was completed at the beginning of 1906. After the Blackwell's Island span was finished, the falsework was moved to Manhattan and Queens, and the westernmost and easternmost spans were built atop the falsework. At that point, the city government had acquired much of the land for the approaches. The bridge's construction was delayed when the Housesmiths' Union went on strike that January. 
Unions representing other trades refused to join the strike, and the Pennsylvania Steel Company had replaced the striking workers by that May. The strike delayed construction by four months. City officials condemned a strip of land for the Queens approach viaduct in October 1906. Progress on superstructure and approaches The city's Bridge Commission received bids for the construction of a steel approach viaduct in Queens in December 1906, and the Buckley Realty Construction Company submitted a low bid of $798,000. Work on the Queens approach began in February 1907. By then, about of steel for the bridge, representing nine-tenths of the steel contract, had been manufactured. Workers erected 512 tons of steel each day. To erect the two spans across the East River's west and east channels, they first built steel towers above each pier, then constructed the cantilever arms from each tower toward the center of the river. As such, the bridge was essentially built in three sections in Manhattan, Blackwell's Island, and Queens. By early 1907, the cost of acquiring land for the approaches had increased to $6 million, double the original estimate, and the cost of the entire bridge had increased to as much as $18 million. Snare & Triest submitted a low bid of $1.577 million for the construction of the Manhattan approach that May, and work on that approach began that July. After the collapse of the similarly designed Quebec Bridge in mid-1907, engineers said they had no concerns about the Blackwell's Island Bridge. The steel towers above both of the Blackwell's Island piers had been completed and were being painted. That September, some beams at the eastern end of the bridge were blown into the river during a heavy windstorm. The same month, Maryland Steel Company submitted a low bid of $758,000 for a steel-and-masonry approach in Queens. Several buildings in Long Island City, including rowhouses and an old homestead, were demolished for the Queens approach. The easternmost steel span was well underway by the end of 1907, and work on the steel towers on the Manhattan and Queens waterfronts began that December. At the time, the bridge was more than 70 percent complete. Although Manhattan residents supported widening 59th Street to serve as the bridge's Manhattan approach, the city's controller was opposed. The project continued to experience labor disputes, such as in early 1908, when disgruntled workers tried to destroy the Blackwell's Island span with dynamite. Completion The Manhattan and Blackwell's Island sections of the bridge were riveted together on March 13, 1908, and the Blackwell's Island and Queens sections were linked on March 18. The Board of Aldermen appropriated another $1.2 million for the bridge's completion shortly afterward; the project had cost $6.2 million up to that point. The New York City Department of Finance's chief engineer began investigating the bridge in May 1908 in response to concerns over its structural integrity, as the bridge was similar to the collapsed Quebec Bridge, and the plans had been modified after the contract for the superstructure had been awarded. That June, the Board of Estimate authorized $30,000 for two investigations into the bridge's safety. The Pennsylvania Steel Company formally completed the superstructure on June 16, 1908, eighteen months behind schedule. The Department of Bridges began receiving bids that July for paving and electrical equipment, and the approach viaducts were completed on August 17. 
The city refused to pay Pennsylvania Steel until 1912, when a judge forced them to do so. Businessmen proposed renaming the crossing as the Queensboro Bridge in September 1908, saying the Blackwell Island name was too closely associated with the island's hospitals and asylums. Despite several Irish-American groups' objections that the Queensboro name resembled a British name, it stuck. The structural engineers tasked with studying the bridge concluded that it was structurally sound, although the bridge was altered to carry two elevated tracks rather than four. There was still skepticism over the bridge's structural integrity, and the Bridge Department planned to remove some heavy stringers from the upper deck to reduce the bridge's dead load. Paving of the bridge's decks was completed in January 1909. In total, the crossing had cost about $20 million, including $12.6 million for spans and over $5 million for land acquisition. One newspaper had estimated that 55 workers had been killed during construction. Operational history Opening and 1910s In February 1909, the Celebration Committee set June 12 as the bridge's official opening date, and two grand parades were planned for the bridge's official opening. The lights on the bridge were first turned on March 28, and the bridge opened to the public two days later on March 30, 1909. The upper deck's tracks were not in service because engineers had deemed them unsafe for use. The Queensboro Bridge formally opened as scheduled on June 12, 1909; at the time, it was the fourth-longest bridge in the world. The grand opening included a fireworks display, a parade lasting several hours, a "Queen of the Queensboro Bridge" beauty pageant in a local newspaper, and a week of carnivals. During late 1909, the Williams Engineering and Contracting Company sued the city for damages relating to the unbuilt elevators on Blackwell's Island, and there was another lawsuit over its safety. There was a ten-cent toll to drive over the bridge, although pedestrians walked across for free. Shortly after the Queensboro Bridge opened, the city government conducted a study and found that it had no authority to charge tolls on the Queensboro and Manhattan bridges. Tolls on the Queensboro Bridge, as well as the Williamsburg, Manhattan, and Brooklyn bridges to the south, were abolished in July 1911 as part of a populist policy initiative headed by New York City mayor William Jay Gaynor. A bridge approach between Second and Third avenues in Manhattan was proposed in 1913, and plans for elevated rapid transit on the upper level were approved at the same time. By that year, the bridge carried 29 million people a year (compared to 3.6 million during 1909). Horse-drawn vehicles made up almost 30 percent of the bridge's total vehicular traffic in the early 1910s, which dropped to less than 2 percent within a decade. In mid-1914, engineers devised plans to add two subway tracks to the lower level and replace the existing roadway with a pair of roadways on the upper and lower levels. The upper roadway would have connected to Van Alst Avenue (21st Street) in Queens; one company proposed constructing the deck in 18 months. The subway plans were ultimately dropped in favor of the 60th Street Tunnel. In early 1916, the New York City government allocated $144,000 for repairs to the roadway, as it had never been repaved and was full of holes and ruts. A new foundation was installed to slow down the decay of the wooden pavement. 
Simultaneously, the city's Public Service Commission had approved the construction of connections between the bridge's upper-level tracks and the elevated lines at either end. Elevated service across the bridge commenced in July 1917, and the entire repaving project was nearly done later that year. 1920s to 1940s By the early 1920s, one hundred thousand people a day used the span, and the Queensboro Bridge and the other East River bridges were rapidly reaching their vehicular capacity. One count in 1920 found that an estimated 18,000 motor vehicles used the bridge daily, while another count in 1925 found that 45,000 vehicles used the span in 24 hours. Proposals to relieve traffic on the bridge included a ferry from Manhattan to Queens; larger signs pointing to existing ferries; a parallel bridge; and a parallel tunnel (later the Queens–Midtown Tunnel). Traffic on the bridge more than doubled from 1924 to 1932, though the opening of new vehicular crossings caused congestion to increase less rapidly after 1932. By the mid-1930s, the bridge handled an average of 110,000 vehicles daily. When the Queens–Midtown Tunnel opened in 1940, The New York Times predicted it would relieve congestion on the Queensboro Bridge. 1920s modifications and new roadway The Manhattan approach viaduct was repaired in 1920, and city officials began adding a concrete pavement to the bridge in mid-1924. Engineers determined at the time that a hard-surfaced roadway would be too heavy for the bridge. Queens borough president Maurice E. Connolly said the weight of trucks had caused the steel buckle plates under the pavement to break, though the commissioner of the city's Plant and Structure Department said the bridge was still safe and that stronger plates were being installed. In addition, Manhattan borough president Julius Miller proposed a plaza and a new approach road at the Manhattan end in 1924, and he submitted plans to acquire property for the plaza and road later the same year. Miller revised his plans in 1925, calling for a tunnel under Second Avenue and a new street east of the avenue between 57th and 63rd streets. To alleviate congestion, one of the bridge's lanes was used as a reversible lane during peak hours. In late 1926, Plant and Structure commissioner Albert Goldman proposed adding three vehicular lanes and removing the bridge's footpaths; the proposal also called for new approaches at either end and relocation of the elevated tracks. The Merchants Association and the Fifth Avenue Association endorsed this plan. The Board of Estimate allocated $150,000 for improvements to the bridge in April 1927, and the board approved the $3 million plan that June. The project was delayed due to difficulties in acquiring property, and the city controller's office contemplated abandoning plans for the new approaches. In late 1928, the Board of Estimate allowed construction to commence on both the new lanes and the approach viaducts at either end. To reduce congestion, the Manhattan ends of the upper and lower roadways were apart, while the Queens ends of these roadways were about apart. Real-estate developers supported the project because it would encourage real-estate and business activity in Queens. Fire extinguishers and chemical carts, for fighting small fires, were also installed on the bridge in 1928. Goldman publicized his plans for the southern upper roadway in April 1929, and the T. H. Reynolds Company had been hired to move the elevated tracks by the next month. 
The Bersin Construction Company received a contract for the new roadway in August 1929 and started construction the same month. A contract for the Queens approach viaduct was awarded to Bersin-Ronn Engineering Corporation in April 1930. The upper roadway was substantially completed by early 1931; it opened that June and carried only eastbound cars. By then, the bridge was carrying almost 100,000 vehicles a day. A new footpath was also constructed on the south side of the upper level but was not opened with the upper roadway. Initially, the upper deck had a wood, granite, and asphalt pavement. It contained grooves for motorists' tires, preventing them from changing lanes; after drivers complained about damaged tires, the grooves were first widened, then infilled by September.
1930s and 1940s modifications
To reduce congestion, one civic group suggested a plaza at the bridge's Manhattan end in the early 1930s, while Manhattan's borough president Samuel Levy proposed building an underpass to carry traffic on Second Avenue beneath the Manhattan end of the bridge. Precipitation had begun to corrode the bridge's steel supports, as the masonry work had never been completed; this prompted a grand jury investigation into the bridge's safety in 1934. There were also proposals to charge tolls on the bridge in the 1930s, though local groups widely opposed these plans. In 1934, westbound motorists began using the upper southern roadway during weekday mornings, Sundays, and holiday evenings; the upper roadway continued to carry eastbound traffic at all other times. To reduce congestion, traffic agents began controlling traffic at each end of the bridge in July 1935, and lane control lights for the lower level's reversible lanes were installed later the same year. The bridge's wooden pavement also posed a hazard during rainy weather and made the bridge one of the city's most dangerous roadways by the mid-1930s. This prompted local groups to call for the installation of a non-skid pavement. Workers repaved the upper level in early 1935 and began installing an experimental concrete-and-steel pavement on the lower level that April. City officials also contemplated adding an asphalt-plank pavement to the bridge. Works Progress Administration (WPA) laborers began repaving the lower level in March 1936. The city government also planned to add lane markings to the lower roadway and convert the upper roadway permanently into a one-way road. After delays caused by material and labor shortages, the repaving of the lower level was completed in June 1937. WPA laborers also completed the tops of the bridge's towers. WPA workers began rebuilding the upper-level pavement in July 1938, and the upper roadway closed that October, reopening two months later. By 1942, the city government was planning to shutter and dismantle the Second Avenue Elevated tracks across the Queensboro Bridge; the line closed in June 1942, and it was demolished by the end of the year. There were also plans in the mid-1940s to connect the bridge's Queens terminal with an expressway running to John F. Kennedy International Airport. The City Planning Commission proposed rebuilding the Manhattan end of the bridge in late 1946 and adding an eight-story parking garage above the approach viaduct. This proposal was postponed due to a lack of money. The bridge was repainted in 1948, and a $12 million renovation of the bridge was announced the next year. 
The plan included two extra lanes on the upper level, new pavement, a bus terminal in Manhattan, and cloverleaf ramps at the Manhattan approach. The city government was concurrently planning the Welfare Island Bridge, which would allow people to access Welfare Island without needing to use the Queensboro Bridge's elevator. 1950s and 1960s Officials installed fences in 1951 to prevent jaywalking at the Manhattan approach, and the city's parking authority contemplated erecting a parking garage west of the bridge's Manhattan terminus the same year. Another proposal to toll the bridge was rejected as overly expensive. Public Works commissioner Frederick H. Zurmuhlen announced that October that his office was preparing plans for the northern upper roadway, and he petitioned the city government for $6.5 million for the new roadway. By the next year, plans for the roadway and its Manhattan approach were complete, and workers were demolishing buildings to make way for the roadway's Manhattan approach. Zurmuhlen requested $8.2 million from the city in 1953 for the construction of the roadway; in exchange, he dropped plans for a bus terminal at the Manhattan end of the bridge. The bridge's approaches were repaved in 1954. The Board of Estimate allocated $7.7 million in June 1955 for the construction of the northern upper roadway and approach ramps. With the opening of the Welfare Island Bridge that year, the city shuttered the trolley lanes, mid-bridge station, and stairs to Roosevelt Island, and it also planned to close down the bridge's elevators. The last trolley traversed the bridge in April 1957, and the elevators and stairs on the Queens side of the bridge were closed the same month, although the elevator in Roosevelt Island would not be demolished for 13 years. The Queens approach ramps were also rebuilt, accounting for over two-thirds of the project's cost. The Thomson Avenue ramp was completed first, followed by the ramp to 21st Street in late 1957. The northern upper roadway opened in September 1958, and the bridge was formally rededicated in April 1959 for its 50th anniversary. In 1958, Consolidated Edison proposed converting the lower-level trolley tracks into vehicular lanes in exchange for permission to install power cables under the bridge. Consolidated Edison spent $4 million in 1960 to install power cables, convert the trolley tracks, and construct slip roads between the lower-level roadways. The new lanes, on the northern and southern sides of the bridge, opened on September 15, 1960. The same year, Manhattan borough president Louis A. Cioffi proposed a $2.06 million ramp at the Manhattan end of the bridge. Also during the early 1960s, the city's Department of Public Works requested funding for a feasibility study of additional roadways, and the city's traffic commissioner Henry Barnes studied the feasibility of a computer-controlled traffic monitoring system for the bridge. In 1964, mayor Robert F. Wagner Jr. approved the demolition of several buildings for a proposed underpass connecting the bridge's westbound lanes with Second Avenue in Manhattan. Had the underpass been built, a bus terminal and landscaped plaza would also have been erected at the Manhattan end of the bridge. These plans were scrapped due to a lack of funding. City planner Robert Moses proposed a 1,000-space parking garage at the bridge's Manhattan end in 1965, though Barnes objected to the plan. Instead, Barnes proposed a 1,100-spot garage on the Queens side, which was approved in June 1966. 
The bridge was repainted for seven months starting in November 1966 at a cost of $240,000. Between 1968 and 1970, officials commissioned five studies of Queensboro Bridge traffic, but no changes were made as a result. 1970s to 1990s Landmark status, toll plan, and deterioration In 1970, the federal government enacted the Clean Air Act, a series of federal air pollution regulations. As part of a plan by mayor John Lindsay and the federal Environmental Protection Agency, the city government considered implementing tolls on the four East River bridges, including the Queensboro, in the early 1970s. The plan would have raised money for New York City's transit system and allowed the city to meet the Clean Air Act. Had the tolls been implemented, a tollbooth would have been installed on the bridge's Manhattan approach. A small terminal for express buses was also proposed for the Manhattan end of the bridge, but it was not built. On November 23, 1973, the New York City Landmarks Preservation Commission (LPC) designated the Queensboro Bridge as a city landmark, preventing any modifications without the LPC's approval. It was the second East River bridge to be so designated, after the Brooklyn Bridge. While there were concerns that the landmark status could prevent tollbooths from being installed, planners said the tollbooths could just be installed on the bridge's approaches. The Board of Estimate delayed ratification of the landmark designation because some space under the bridge's approaches was used for commercial purposes. The tolling proposal was opposed by figures such as Queens borough president Donald Manes, who encouraged the state government to take over the bridge so tolls could not be charged. According to Manes, the tolls would merely increase pollution around Queens Plaza. Abraham Beame, who became mayor in 1974, refused to implement the tolls, and the U.S. Congress subsequently moved to forbid tolls on the East River bridges. The northern lower-level roadway was closed in 1976 while the wires underneath the deck were being replaced. By the mid-1970s, as the city government considered an open-air market under the bridge, a city engineer described the bridge as severely deteriorated. Among the issues cited were extensive rusting, faulty expansion joints, clogged drains, potholes, and dirt. New York State Department of Transportation (NYSDOT) engineering director George Zaimes described the bridge's frame as being rusty, with some holes that were as large as a person's head. According to Zaimes, the upper roadway was only attached to the bridge "by its own weight and memory". 1970s and 1980s renovations The state government started inspecting the Queensboro Bridge and five others in 1978, allocating $1.1 million for a study. That year, the city government also repainted the bridge in a brown and tan color scheme. To reduce congestion, a contraflow lane for express buses was installed at the Manhattan end of the bridge in 1979. That year, the lower deck's outer lanes were closed to vehicles; parts of the outer roadways had weakened to the point that they could barely carry the weight of a passenger car. Repairs to the outer lanes were expected to last for three years and cost $50 million. The southern outer roadway was converted into a pedestrian and bicycle path, which opened in July 1979. The city received $18.6 million in federal funds for the Queensboro Bridge's restoration in 1980. By then, an estimated 175,000 vehicles daily used the bridge. 
An extensive renovation commenced on February 25, 1981, and was completed in six phases. That December, the United States Department of Transportation gave $28.8 million for the bridge's renovation. The pedestrian and bike path closed in May 1983. The NYSDOT announced that July that the southern upper roadway, which carried eastbound traffic, would be closed for repairs, which were expected to take 18 months. The northern upper roadway, normally used by westbound traffic, was converted to eastbound-only operation, except during weekday mornings when it carried westbound traffic. The ramp leading from 57th and 58th streets to the southern upper roadway was temporarily closed for reconstruction in early 1984. By the beginning of 1985, the southern upper roadway had reopened after being rebuilt for $31 million. The outer lanes of the lower level had also reopened, but state officials estimated that the project would not be complete until 1992. The Queensboro Bridge's pedestrian path reopened in July 1985; the same year, the city received another $60 million in federal funds for the renovations of the Queensboro, Manhattan, and Brooklyn bridges. In February 1987, the New York City Department of Transportation (NYCDOT) announced that parts of the northern upper roadway would be closed for two years. As part of the $42 million project, a new concrete deck would be installed, and the steel structure would be restored. The ramps to 62nd and 63rd streets closed in October 1987 and reopened twelve months later. This closure coincided with the renovations of other East River bridges. The lower-level bike path was opened to vehicular traffic at peak times, and flatbed trucks carried bicycles across the bridge. The lower deck's southern outer roadway was closed for emergency repairs in 1988 after workers discovered severe corrosion. The reconstruction of the upper deck was completed in 1989 for $100 million. The bridge was still in poor condition: during a tour of the bridge in 1988, transportation engineer Sam Schwartz peeled off part of one of the bridge's beams with one hand.
1990s renovations
The Metropolitan Transportation Authority (MTA) proposed a rail link to LaGuardia and JFK airports in 1990; the line, which would have used the Queensboro Bridge, was canceled in 1995. A renovation of the Queensboro Bridge's lower level began in June 1990, when two Manhattan-bound lanes were closed. This phase of construction was supposed to cost $120 million. The lower deck's partial closure caused severe congestion in Queens, since part of the nearby Long Island Expressway was also closed for renovation. By 1993, the renovation was slated to be completed the next year. At that time, officials announced plans for a Manhattan-bound high-occupancy vehicle (HOV) lane on the bridge during morning rush hours. A Queens-bound HOV lane during the afternoon was deemed infeasible due to heavy congestion in Manhattan. The Manhattan-bound HOV lane opened in April 1994, and all lower-level lanes had reopened by that October. The NYCDOT announced in 1995 that it would spend another $161 million to renovate the outer lower-level roadways starting the following year. Two lanes were again closed for maintenance from April to September 1996, causing severe congestion. Following complaints from residents near 57th Street, starting in October 1996, traffic on the upper level traveled on the left during rush hours to reduce noise pollution and traffic congestion. 
Vehicles headed for Queens had to enter at 62nd and 63rd Streets, which caused widespread confusion. After protests from Upper East Side residents, the original right-hand traffic pattern was reinstated on the upper level, and the southern lower roadway (used by pedestrians) was converted to an eastbound vehicular lane during the afternoon rush hour. Some pedestrians and bikers opposed the conversion of the southern lower roadway, as they would have to wait for a van to take them across the bridge during weekday afternoons, but the new traffic pattern was implemented anyway. In the late 1990s, the NYCDOT hired architect Walter Melvin to renovate the vaults under the Manhattan approach. During the renovation of the main span, a scaffold collapsed in 1997, killing a worker. The renovation of the northern lower roadway was completed in mid-1998. That August, the NYCDOT implemented a new traffic pattern during evening rush hours, where the northern upper roadway carried eastbound traffic, giving the bridge six eastbound and three westbound lanes during that time. The northern lower roadway, which carried pedestrians and cyclists during mornings and off-peak hours, was converted into a westbound lane during the evening rush hour. The NYCDOT's commissioner called the changes an "interim fix for nine to 14 months". By then, about 184,000 vehicles used the bridge daily, with slightly more eastbound than westbound vehicles using the bridge. 2000s to present Following the completion of additional renovations in September 2000, the northern upper roadway was converted back to a westbound road at all times. The northern lower roadway was converted into a bike and pedestrian path, while the southern lower roadway became an eastbound lane. After the September 11 attacks on the World Trade Center in 2001, drivers without passengers were temporarily banned from using the bridge during rush hours. The city announced plans in 2002 to restore six masonry piers supporting the bridge. The same year, mayor Michael Bloomberg again proposed tolling the four free East River bridges, including the Queensboro Bridge; many local residents opposed his plan, and Bloomberg postponed the tolling plan in 2003. As part of a $168 million project that began in 2004, workers repainted the bridge. They also added fences and lighting, restored a trolley kiosk on the Manhattan end of the bridge, and restored the Manhattan approach in a separate project between 2003 and 2006. The renovation was temporarily halted in October 2005 after a small fire. A group of Roosevelt Island residents requested in 2007 that the city government install an elevator or stairway from the bridge, but city officials expressed multiple concerns with the proposal, including security vulnerabilities, the need to close a lane of traffic, and the bridge's landmark designation. In March 2009, the New York City Bridge Centennial Commission sponsored events marking the centennial of the bridge's opening. The American Society of Civil Engineers designated the bridge as a National Historic Civil Engineering Landmark the same year. The bridge was renamed after Ed Koch in 2011. After a series of fatal crashes in 2013, officials closed the southern lower roadway at night. By the middle of the decade, the bridge carried 175,000 daily vehicles, making it the East River's busiest bridge. Mayor Bill de Blasio announced plans in April 2016 to allocate $244 million for repairs to the Queensboro Bridge's upper deck. 
Concurrently, elected officials proposed adding tolls to the bridge yet again. In January 2021, the city decided to install a two-way protected bike path on the northern lower roadway and convert the southern lower roadway to a pedestrian path; the conversion was delayed because of a renovation of the upper deck. The renovation commenced in February 2022. A plan for congestion pricing in New York City was approved in mid-2023, allowing the MTA to toll drivers who use the Queensboro Bridge and then travel south of 60th Street. Congestion pricing was implemented in January 2025. Drivers on the northern upper roadway are exempt from the toll, but all other Manhattan-bound drivers pay a toll, which varies based on the time of day. Although no toll is charged upon exiting the congestion zone, all Queens-bound drivers must pay a toll to access streets leading to the bridge, even if they drive only one or two blocks within the congestion zone.
Public transportation
Rail service
Rapid transit
The bridge, built with two elevated railway tracks on its upper level, had space for two more tracks. A connection from the Interborough Rapid Transit Company's Second Avenue Elevated to the bridge was first proposed in 1910; early plans called for a line extending to Malba. The elevated tracks were approved in 1913, and the connection opened in 1917, allowing Second Avenue trains to access the Astoria and Flushing lines. The tracks carried elevated trains until service was discontinued in 1942. There were also plans to run a New York City Subway line across the bridge in September 1909; in a report submitted to the New York City Board of Estimate in June 1911, the Brooklyn Rapid Transit Company was to extend its Broadway Line onto the bridge. By December 1914, the Board of Estimate had abandoned the proposal, which would have required $2.6 million in modifications to the bridge and would have caused serious congestion. Instead, the board proposed the double-tracked 60th Street Tunnel under the East River, which would allow the city to save $500,000. The New York Public Service Commission approved the tunnel in July 1915. In 1990, the MTA proposed an airport rail link running via the bridge to JFK and LaGuardia airports. This plan was scaled down in 1995, becoming the AirTrain JFK, which serves a small part of Queens.
Streetcars
The bridge had streetcar tracks occupying the northern and southern lower roadways. On the Manhattan side, there were two ramps from each of the outer lower-level roadways to a set of platforms under Second Avenue. On the Queens side, the tracks split into multiple branches. Six streetcar companies had applied for franchises to use the bridge by late 1908, before its official opening. The first trolleys traveled on the bridge in September 1909, and passenger service began the next month. In the bridge's first decade, the tracks were used by the New York and Queens County Railway, Manhattan and Queens Traction Company, Steinway Lines, and Third Avenue Bridge Company. When the Third Avenue Railway started using the bridge in 1913, it built power infrastructure under the roadway, as its streetcars received power from underground. The South Shore Traction Company also applied for permission to use the bridge but was denied. A streetcar stop was constructed at the middle of the bridge in 1919 to serve the elevator to Roosevelt Island. The tracks connecting the Third Avenue Railway with the Queensboro Bridge were removed in 1922, after the company stopped using the bridge. 
Although almost all streetcar service had been withdrawn by 1939, the Queensboro Bridge Local route ran across the bridge until April 7, 1957; it was the last trolley route in New York state. On the Manhattan end of the Queensboro Bridge were originally five trolley kiosks, which contained stairs leading to a trolley terminal underground. Lindenthal and Hornbostel designed the structures, which had terracotta-paneled facades, cast-iron columns, and a copper roof with cast-iron fascias. There were arched, glazed-tile ceilings inside each of the kiosks. The kiosks also had Greek key motifs; shields with garlands; and ornamental brackets. The locations of three kiosks are unknown. Another kiosk was sent to the Brooklyn Children's Museum in 1974, then was relocated to Roosevelt Island and renovated into a visitor center. The Roosevelt Island kiosk, which reopened in July 2007, measures across and weighs . Yet another kiosk remains in place in Manhattan but is used as storage space. The remaining kiosk in Manhattan was planned to be removed in 2002 but was instead restored. Buses The bridge carries three local bus routes operated by MTA Regional Bus Operations: the , and . The bridge also carries 20 express bus routes in the eastbound direction only: the , and , which all use the Queens-Midtown Tunnel for westbound travel. Elevator to Roosevelt Island An elevator from the bridge to Roosevelt Island (then known as Blackwell's Island) was proposed in October 1912. Although various groups opposed an elevator in the middle of the bridge's deck because it would block traffic, an elevator next to the deck was tested the next month. The Board of Estimate provided $366,000 in 1916 for an elevator building connecting the bridge to Roosevelt Island. The building, on the bridge's north side, was finished in 1918 or 1919. The building was nine or ten stories tall and had two passenger and three freight elevators. The structure was set back from the bridge to reduce damage in a fire. The top floor was connected to the bridge by a roadway measuring wide; there was also a stair and a guard's booth. The other nine floors contained various food storage rooms. After the trolley lines across the bridge were largely replaced by buses in the 1930s, Steinway Transit retained one of the bridge's trolley tracks and established the Queensboro Bridge Railway, a shuttle streetcar route connecting with the elevator to Roosevelt Island. The elevator was demolished in 1970, having been replaced by the Roosevelt Island Bridge. A separate passenger elevator ran during weekdays to Welfare Island, via a storehouse described as "clean but gloomy", until mid-1973. Impact Reception When plans for the bridge were being finalized in 1901, there was commentary on its cantilevered design; all of the other bridges across the East River at the time were suspension bridges. The city's bridge commissioner at the time, John L. Shea, said that the Queensboro Bridge would not be as "picturesque" compared to a suspension bridge but that it could look as attractive as either the Williamsburg or Brooklyn bridges. Buck said that the U.S. had some "homely" cantilever bridges but hoped the Queensboro Bridge was not ugly. The chief engineer of the city's Bridge Department said in 1904 that he believed the cantilever design was "a mistake" and that a suspension bridge on the same site, supported by three towers, would have been a novelty. 
When the bridge was finished in 1908, The Christian Science Monitor wrote that the Queensboro was "one of the greatest bridges in the world, and one of the most beautiful of its type", despite having received relatively little media attention during construction. Two decades after the bridge opened, The New York Times said the "Brooklyn Bridge has the reputation but Queensboro Bridge has the traffic". The New York Daily News wrote in 1981 that the Queensboro Bridge "reminds people of the bridges they built with erector sets as children". Nonetheless, the bridge was not as widely appreciated as the Brooklyn Bridge further south, especially in the late 20th century, and The Los Angeles Times wrote in 2010 that "the Queensboro appears far grittier than the romantic Brooklyn Bridge or the soaring Verrazano-Narrows Bridge to the south". Impact on development The New-York Tribune wrote in 1904 that the Queensboro Bridge's construction would cause Blackwell's Island to "lose at least a share of its sinister reputation". Even before the bridge was completed, real-estate values in Queens had been increasing several times over, and its construction also spurred the sale of property along 59th Street in Manhattan. Its development allowed various parts of Queens to be served by direct train and streetcar lines to Manhattan. The Brooklyn Daily Eagle predicted in 1908 that the bridge's completion would draw investors toward Long Island and away from New Jersey to the west. The same newspaper predicted that the bridge, along with the Steinway Tunnel and East River Tunnels, would change Long Island from a sparsely populated rural outpost to a densely packed suburb of New York City. A New York Times article from 1923 wrote that the bridge's opening "marked the first step in eliminating the East River as a barrier to the spread of population eastward". The opening of the bridge encouraged development of vacant land in Queens, where tracts were resold for residential and commercial use. Many industrial firms began operating in western Queens, including vehicle-manufacturing plants in Long Island City. By the early 1910s, numerous industrial structures and loft buildings had been built around the bridge's Queens end, particularly on Queens Plaza. Further east, neighborhoods such as Jackson Heights were built on former farmland. The Queensboro Chamber of Commerce's spokesperson said in 1924 that real estate values in Queens had tripled within 15 years of the bridge's opening, while the population grew from 284,000 to 736,000. At the bridge's 50th anniversary, The New York Times credited the bridge with encouraging industrial and residential development in Queens. Newsday wrote in the 1990s: "More than any other development, the Queensboro Bridge created the modern urban borough of Queens." The completion of the Queensboro Bridge inspired what became Queens Boulevard, although the thoroughfare was not finished until 1936. Media Because of its design and location, the Queensboro Bridge has appeared in numerous media works, including films and TV shows, set in New York City. For example, the title of Simon & Garfunkel's 1966 song "The 59th Street Bridge Song (Feelin' Groovy)" refers to the Queensboro Bridge, and it has been mentioned in media such as F. Scott Fitzgerald's 1925 novel The Great Gatsby. The bridge has been the setting or filming location for several movies, such as Manhattan (1979), Spider-Man (2002) and The Dark Knight Rises (2012). 
Queensboro Bridge
[ "Engineering" ]
12,779
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
162,498
https://en.wikipedia.org/wiki/Nuclear%20meltdown
A nuclear meltdown (core meltdown, core melt accident, meltdown or partial core melt) is a severe nuclear reactor accident that results in core damage from overheating. The term nuclear meltdown is not officially defined by the International Atomic Energy Agency or by the United States Nuclear Regulatory Commission. It has, however, been defined to mean the accidental melting of the core of a nuclear reactor, and in common usage it refers to the complete or partial collapse of the core. A core meltdown accident occurs when the heat generated by a nuclear reactor exceeds the heat removed by the cooling systems to the point where at least one nuclear fuel element exceeds its melting point. This differs from a fuel element failure, which is not caused by high temperatures. A meltdown may be caused by a loss of coolant, a loss of coolant pressure, or a low coolant flow rate, or it may be the result of a criticality excursion in which the reactor is operated at a power level that exceeds its design limits. Once the fuel elements of a reactor begin to melt, the fuel cladding has been breached, and the nuclear fuel (such as uranium, plutonium, or thorium) and fission products (such as caesium-137, krypton-85, or iodine-131) within the fuel elements can leach out into the coolant. Subsequent failures can permit these radioisotopes to breach further layers of containment. Superheated steam and hot metal inside the core can lead to fuel–coolant interactions, hydrogen explosions, or steam hammer, any of which could destroy parts of the containment. A meltdown is considered very serious because of the potential for radioactive materials to breach all containment and escape (or be released) into the environment, resulting in radioactive contamination and fallout, and potentially leading to radiation poisoning of people and animals nearby.
Causes
Nuclear power plants generate electricity by heating a fluid via a nuclear reaction to run a generator. If the heat from that reaction is not removed adequately, the fuel assemblies in a reactor core can melt. A core damage incident can occur even after a reactor is shut down because the fuel continues to produce decay heat. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss-of-pressure-control accident, a loss-of-coolant accident (LOCA), or an uncontrolled power excursion. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely. The containment building is the last of several safeguards that prevent the release of radioactivity to the environment. Many commercial reactors are contained within a thick pre-stressed, steel-reinforced, air-tight concrete structure that can withstand hurricane-force winds and severe earthquakes. In a loss-of-coolant accident, either the physical loss of coolant (which is typically deionized water, an inert gas, NaK, or liquid sodium) or the loss of a method to ensure a sufficient flow rate of the coolant occurs. A loss-of-coolant accident and a loss-of-pressure-control accident are closely related in some reactors. In a pressurized water reactor, a LOCA can also cause a "steam bubble" to form in the core due to excessive heating of stalled coolant or as a result of a subsequent loss-of-pressure-control accident caused by a rapid loss of coolant. 
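As noted above, a core can be damaged even after shutdown because the fuel continues to produce decay heat. The magnitudes involved can be illustrated with a rough, hedged calculation: the sketch below is not part of the original article, uses the commonly cited Way–Wigner approximation for fission-product decay heat, and assumes the pre-shutdown power, operating time, and residual cooling capacity as illustrative round numbers.

```python
# Illustrative sketch only: decay heat after shutdown versus an assumed
# residual heat-removal capacity. All numeric values are rough assumptions.

def decay_heat_fraction(t_after_shutdown_s, operating_time_s):
    """Way-Wigner approximation: fraction of pre-shutdown thermal power still
    produced by fission-product decay, t seconds after shutdown."""
    return 0.066 * (t_after_shutdown_s ** -0.2
                    - (t_after_shutdown_s + operating_time_s) ** -0.2)

P0_MW = 3000.0            # assumed pre-shutdown thermal power (MW)
T_OP_S = 365 * 24 * 3600  # assumed prior operation: one year at full power
COOLING_MW = 10.0         # assumed heat-removal capacity after main cooling is lost

for t in (10, 60, 3600, 24 * 3600, 7 * 24 * 3600):
    p = P0_MW * decay_heat_fraction(t, T_OP_S)
    status = "exceeds" if p > COOLING_MW else "is within"
    print(f"{t:>7d} s after shutdown: ~{p:6.1f} MW decay heat ({status} assumed cooling)")
```

Even with these rough assumptions, decay heat remains in the tens of megawatts for hours after shutdown, which is why a shut-down core still requires continuous cooling.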
In a loss-of-forced-circulation accident, a gas cooled reactor's circulators (generally motor or steam driven turbines) fail to circulate the gas coolant within the core, and heat transfer is impeded by this loss of forced circulation, though natural circulation through convection will keep the fuel cool as long as the reactor is not depressurized. In a loss-of-pressure-control accident, the pressure of the confined coolant falls below specification without the means to restore it. In some cases, this may reduce the heat transfer efficiency (when using an inert gas as a coolant), and in others may form an insulating "bubble" of steam surrounding the fuel assemblies (for pressurized water reactors). In the latter case, due to localized heating of the "steam bubble" due to decay heat, the pressure required to collapse the "steam bubble" may exceed reactor design specifications until the reactor has had time to cool down. (This event is less likely to occur in boiling water reactors, where the core may be deliberately depressurized so that the emergency core cooling system may be turned on). In a depressurization fault, a gas-cooled reactor loses gas pressure within the core, reducing heat transfer efficiency and posing a challenge to the cooling of fuel; as long as at least one gas circulator is available, however, the fuel will be kept cool. Light-water reactors (LWRs) Before the core of a light-water nuclear reactor can be damaged, two precursor events must have already occurred: A limiting fault (or a set of compounded emergency conditions) that leads to the failure of heat removal within the core (the loss of cooling). Low water level uncovers the core, allowing it to heat up. Failure of the emergency core cooling system (ECCS). The ECCS is designed to rapidly cool the core and make it safe in the event of the maximum fault (the design basis accident) that nuclear regulators and plant engineers could imagine. There are at least two copies of the ECCS built for every reactor. Each division (copy) of the ECCS is capable, by itself, of responding to the design basis accident. The latest reactors have as many as four divisions of the ECCS. This is the principle of redundancy, or duplication. As long as at least one ECCS division functions, no core damage can occur. Each of the several divisions of the ECCS has several internal "trains" of components. Thus the ECCS divisions themselves have internal redundancy – and can withstand failures of components within them. The Three Mile Island accident was a compounded group of emergencies that led to core damage. What led to this was an erroneous decision by operators to shut down the ECCS during an emergency condition due to gauge readings that were either incorrect or misinterpreted; this caused another emergency condition that, several hours after the fact, led to core exposure and a core damage incident. If the ECCS had been allowed to function, it would have prevented both exposure and core damage. During the Fukushima incident the emergency cooling system had also been manually shut down several minutes after it started. 
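The redundancy principle described above, under which core damage requires every ECCS division to fail, can be illustrated with a minimal probability sketch. This is not from the article: the per-division failure probability and division counts are assumptions, and real probabilistic risk assessments must also model common-cause failures (such as the shared loss of power at Fukushima), which the independence assumption below ignores.

```python
# Minimal sketch, assuming independent ECCS divisions with an assumed
# per-division failure-on-demand probability. Not from the article.

def probability_all_divisions_fail(p_single: float, divisions: int) -> float:
    """Probability that every division fails, assuming independent failures."""
    return p_single ** divisions

P_SINGLE = 1e-2  # assumed chance that one division fails when demanded

for n in (1, 2, 4):
    print(f"{n} division(s): P(all fail) = {probability_all_divisions_fail(P_SINGLE, n):.0e}")

# Common-cause events that disable every division at once break the
# independence assumption, which is why defense in depth relies on diverse
# safeguards rather than simple duplication alone.
```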
If such a limiting fault were to occur, and a complete failure of all ECCS divisions were to occur, both Kuan, et al and Haskin, et al describe six stages between the start of the limiting fault (the loss of cooling) and the potential escape of molten corium into the containment (a so-called "full meltdown"): Uncovering of the Core – In the event of a transient, upset, emergency, or limiting fault, LWRs are designed to automatically SCRAM (a SCRAM being the immediate and full insertion of all control rods) and spin up the ECCS. This greatly reduces reactor thermal power (but does not remove it completely); this delays core becoming uncovered, which is defined as the point when the fuel rods are no longer covered by coolant and can begin to heat up. As Kuan states: "In a small-break LOCA with no emergency core coolant injection, core uncovery [sic] generally begins approximately an hour after the initiation of the break. If the reactor coolant pumps are not running, the upper part of the core will be exposed to a steam environment and heatup of the core will begin. However, if the coolant pumps are running, the core will be cooled by a two-phase mixture of steam and water, and heatup of the fuel rods will be delayed until almost all of the water in the two-phase mixture is vaporized. The TMI-2 accident showed that operation of reactor coolant pumps may be sustained for up to approximately two hours to deliver a two phase mixture that can prevent core heatup." Pre-damage heat up – "In the absence of a two-phase mixture going through the core or of water addition to the core to compensate water boiloff, the fuel rods in a steam environment will heat up at a rate between 0.3 °C/s (0.5 °F/s) and 1 °C/s (1.8 °F/s) (3)." Fuel ballooning and bursting – "In less than half an hour, the peak core temperature would reach . At this temperature, the zircaloy cladding of the fuel rods may balloon and burst. This is the first stage of core damage. Cladding ballooning may block a substantial portion of the flow area of the core and restrict the flow of coolant. However, complete blockage of the core is unlikely because not all fuel rods balloon at the same axial location. In this case, sufficient water addition can cool the core and stop core damage progression." Rapid oxidation – "The next stage of core damage, beginning at approximately , is the rapid oxidation of the Zircaloy by steam. In the oxidation process, hydrogen is produced and a large amount of heat is released. Above , the power from oxidation exceeds that from decay heat (4,5) unless the oxidation rate is limited by the supply of either zircaloy or steam." Debris bed formation – "When the temperature in the core reaches about , molten control materials (1,6) will flow to and solidify in the space between the lower parts of the fuel rods where the temperature is comparatively low. Above , the core temperature may escalate in a few minutes to the melting point of zircaloy [] due to increased oxidation rate. When the oxidized cladding breaks, the molten zircaloy, along with dissolved UO2 (1,7) would flow downward and freeze in the cooler, lower region of the core. Together with solidified control materials from earlier down-flows, the relocated zircaloy and UO2 would form the lower crust of a developing cohesive debris bed." (Corium) Relocation to the lower plenum – "In scenarios of small-break LOCAs, there is generally a pool of water in the lower plenum of the vessel at the time of core relocation. 
The release of molten core materials into the water always generates large amounts of steam. If the molten stream of core materials breaks up rapidly in water, there is also a possibility of a steam explosion. During relocation, any unoxidized zirconium in the molten material may also be oxidized by steam, and in the process hydrogen is produced. Recriticality also may be a concern if the control materials are left behind in the core and the relocated material breaks up in unborated water in the lower plenum." Once the corium relocates to the lower plenum of the reactor pressure vessel ("RPV"), Haskin, et al relate that the possibility exists for an incident called a fuel–coolant interaction (FCI) to substantially stress or breach the primary pressure boundary. This is because the lower plenum of the RPV may have a substantial quantity of water - the reactor coolant - in it, and, assuming the primary system has not been depressurized, the water will likely be in the liquid phase, and consequently dense, and at a vastly lower temperature than the corium. Since corium is a liquid metal-ceramic eutectic at temperatures of , its fall into liquid water at may cause an extremely rapid evolution of steam that could cause a sudden extreme overpressure and consequent gross structural failure of the primary system or RPV. Though most modern studies hold that it is physically infeasible, or at least extraordinarily unlikely, Haskin, et al state that there exists a remote possibility of an extremely violent FCI leading to something referred to as an alpha-mode failure, or the gross failure of the RPV itself, and subsequent ejection of the upper plenum of the RPV as a missile against the inside of the containment, which would likely lead to the failure of the containment and release of the fission products of the core to the outside environment without any substantial decay having taken place. The American Nuclear Society has commented on the TMI-2 accident that, despite melting of about one-third of the fuel, the reactor vessel itself maintained its integrity and contained the damaged fuel. Breach of the primary pressure boundary There are several possibilities as to how the primary pressure boundary could be breached by corium. Steam explosion As previously described, an FCI could lead to an overpressure event causing failure of the RPV, and thus of the primary pressure boundary. Haskin et al report that in the event of a steam explosion, failure of the lower plenum is far more likely than ejection of the upper plenum in the alpha mode. In the event of lower plenum failure, debris at varied temperatures can be expected to be projected into the cavity below the core. The containment may be subject to overpressure, though this is not likely to fail the containment. The alpha-mode failure will lead to the consequences previously discussed. Pressurized melt ejection (PME) It is quite possible, especially in pressurized water reactors, that the primary loop will remain pressurized following corium relocation to the lower plenum. As such, pressure stresses on the RPV will be present in addition to the weight stress that the molten corium places on the lower plenum of the RPV; when the metal of the RPV weakens sufficiently due to the heat of the molten corium, it is likely that the liquid corium will be discharged under pressure out of the bottom of the RPV in a pressurized stream, together with entrained gases.
This mode of corium ejection may lead to direct containment heating (DCH). Severe accident ex-vessel interactions and challenges to containment Haskin et al identify six modes by which the containment could be credibly challenged; some of these modes are not applicable to core melt accidents. Overpressure Dynamic pressure (shockwaves) Internal missiles External missiles (not applicable to core melt accidents) Meltthrough Bypass Standard failure modes If the melted core penetrates the pressure vessel, there are theories and speculations as to what may then occur. In modern Russian plants, there is a "core catching device" in the bottom of the containment building. The melted core is supposed to hit a thick layer of a "sacrificial metal" that would melt, dilute the core and increase the heat conductivity, and finally the diluted core can be cooled down by water circulating in the floor. There has never been any full-scale testing of this device, however. In Western plants there is an airtight containment building. Though radiation would be at a high level within the containment, doses outside of it would be lower. Containment buildings are designed for the orderly release of pressure without releasing radionuclides, through a pressure release valve and filters. Hydrogen/oxygen recombiners also are installed within the containment to prevent gas explosions. In a melting event, one spot or area on the RPV will become hotter than other areas, and will eventually melt. When it melts, corium will pour into the cavity under the reactor. Though the cavity is designed to remain dry, several NUREG-class documents advise operators to flood the cavity in the event of a fuel melt incident. This water will become steam and pressurize the containment. Automatic water sprays will pump large quantities of water into the steamy environment to keep the pressure down. Catalytic recombiners will rapidly convert the hydrogen and oxygen back into water. One debated positive effect of the corium falling into water is that it is cooled and returns to a solid state. Extensive water spray systems within the containment along with the ECCS, when it is reactivated, will allow operators to spray water within the containment to cool the core on the floor and reduce it to a low temperature. These procedures are intended to prevent release of radioactivity. In the Three Mile Island event in 1979, a theoretical person standing at the plant property line during the entire event would have received a dose of approximately 2 millisieverts (200 millirem), between a chest X-ray's and a CT scan's worth of radiation. This was due to outgassing by an uncontrolled system that, today, would have been backfitted with activated carbon and HEPA filters to prevent radionuclide release. In the Fukushima incident, however, this design failed. Despite the efforts of the operators at the Fukushima Daiichi nuclear power plant to maintain control, the reactor cores in units 1–3 overheated, the nuclear fuel melted and the three containment vessels were breached. Hydrogen was released from the reactor pressure vessels, leading to explosions inside the reactor buildings in units 1, 3 and 4 that damaged structures and equipment and injured personnel. Radionuclides were released from the plant to the atmosphere and were deposited on land and on the ocean. There were also direct releases into the sea. 
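A back-of-envelope energy balance suggests why corium pouring into a flooded cavity (or into lower-plenum water, as described earlier) generates so much steam and pressure. Every number below (corium mass, effective heat capacity, temperature drop) is an assumed, illustrative value, and sensible heating of the water is ignored.

# Back-of-envelope sketch (all values assumed, illustrative only) of the
# steam generated when hot corium quenches in water: the heat given up by
# the corium as it cools is compared with water's latent heat of vaporization.

CORIUM_MASS_KG = 10_000.0       # assumed relocated corium mass
CORIUM_CP = 500.0               # J/(kg K), assumed effective heat capacity
CORIUM_DELTA_T = 2000.0         # K, assumed temperature drop while quenching
WATER_LATENT_HEAT = 2.26e6      # J/kg, latent heat of vaporization near 100 degC

heat_released_j = CORIUM_MASS_KG * CORIUM_CP * CORIUM_DELTA_T
steam_generated_kg = heat_released_j / WATER_LATENT_HEAT

print(f"Heat released: ~{heat_released_j/1e9:.0f} GJ")
print(f"Water flashed to steam: ~{steam_generated_kg/1000:.1f} tonnes")

Several tonnes of steam flashed in a short time is the kind of load that the containment volume and the water spray systems described above have to absorb.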
As the natural decay heat of the corium eventually reduces to an equilibrium with convection and conduction to the containment walls, it becomes cool enough for water spray systems to be shut down and the reactor to be put into safe storage. The containment can be sealed with release of extremely limited offsite radioactivity and release of pressure. After perhaps a decade for fission products to decay, the containment can be reopened for decontamination and demolition. Another scenario sees a buildup of potentially explosive hydrogen, but passive autocatalytic recombiners inside the containment are designed to prevent this. In Fukushima, the containments were filled with inert nitrogen, which prevented hydrogen from burning; the hydrogen leaked from the containment to the reactor building, however, where it mixed with air and exploded. During the 1979 Three Mile Island accident, a hydrogen bubble formed in the pressure vessel dome. There were initial concerns that the hydrogen might ignite and damage the pressure vessel or even the containment building; but it was soon realized that lack of oxygen prevented burning or explosion. Speculative failure modes One scenario consists of the reactor pressure vessel failing all at once, with the entire mass of corium dropping into a pool of water (for example, coolant or moderator) and causing extremely rapid generation of steam. The pressure rise within the containment could threaten integrity if rupture disks could not relieve the stress. Exposed flammable substances could burn, but there are few, if any, flammable substances within the containment. Another theory, called an "alpha mode" failure by the 1975 Rasmussen (WASH-1400) study, asserted steam could produce enough pressure to blow the head off the reactor pressure vessel (RPV). The containment could be threatened if the RPV head collided with it. (The WASH-1400 report was replaced by better-based newer studies, and now the Nuclear Regulatory Commission has disavowed them all and is preparing the overarching State-of-the-Art Reactor Consequence Analyses [SOARCA] study - see the Disclaimer in NUREG-1150.) By 1970, there were doubts about the ability of the emergency cooling systems of a nuclear reactor to prevent a loss-of-coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and the popular presses. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn through of the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment. The hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. Some fear that a molten reactor core could penetrate the reactor pressure vessel and containment structure and burn downwards to the level of the groundwater. It has not been determined to what extent a molten mass can melt through a structure (although that was tested in the loss-of-fluid-test reactor described in Test Area North's fact sheet). The Three Mile Island accident provided real-life experience with an actual molten core: the corium failed to melt through the reactor pressure vessel after over six hours of exposure due to dilution of the melt by the control rods and other reactor internals, validating the emphasis on defense in depth against core damage incidents. 
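The "perhaps a decade" figure reflects how differently the main fission products decay. The short sketch below uses standard published half-lives for iodine-131 and caesium-137; it is illustrative only and ignores every other nuclide in the inventory.

# Sketch of why short-lived fission products dominate the early hazard and
# long-lived ones the later hazard. Half-lives are standard published values;
# the 10-year wait is the article's "perhaps a decade" figure.

import math

HALF_LIVES_YEARS = {
    "iodine-131": 8.02 / 365.25,   # about 8 days
    "caesium-137": 30.1,
}

def fraction_remaining(half_life_years, elapsed_years):
    return math.exp(-math.log(2) * elapsed_years / half_life_years)

for nuclide, t_half in HALF_LIVES_YEARS.items():
    frac = fraction_remaining(t_half, 10.0)
    print(f"after 10 years, {nuclide}: {frac:.2e} of original activity remains")

After ten years essentially no iodine-131 remains, while roughly four-fifths of the caesium-137 is still present, which is why long-term contamination is dominated by the longer-lived species.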
Other reactor types Other types of reactors have different capabilities and safety profiles than the LWR does. Advanced varieties of several of these reactors have the potential to be inherently safe. CANDU reactors CANDU reactors, a Canadian-invented deuterium-uranium design, are designed with at least one, and generally two, large low-temperature and low-pressure water reservoirs around their fuel/coolant channels. The first is the bulk heavy-water moderator (a separate system from the coolant), and the second is the light-water-filled shield tank (or calandria vault). These backup heat sinks are sufficient to prevent either the fuel meltdown in the first place (using the moderator heat sink), or the breaching of the core vessel should the moderator eventually boil off (using the shield tank heat sink). In a CANDU, failure modes other than a fuel melt, such as deformation of the calandria into a non-critical configuration, are more probable than a meltdown. All CANDU reactors are located within standard Western containments as well. Gas-cooled reactors One type of Western reactor, known as the advanced gas-cooled reactor (or AGR), built by the United Kingdom, is not very vulnerable to loss-of-cooling accidents or to core damage except in the most extreme of circumstances. By virtue of the relatively inert coolant (carbon dioxide), the large volume and high pressure of the coolant, and the relatively high heat transfer efficiency of the reactor, the time frame for core damage in the event of a limiting fault is measured in days. Restoration of some means of coolant flow will prevent core damage from occurring. Lead and lead-bismuth-cooled reactors Recently, heavy liquid metals (HLM), such as lead or lead-bismuth, have been proposed as reactor coolants. Because of the similar densities of the fuel and the HLM, buoyancy forces provide an inherent passive self-removal feedback mechanism: when a certain temperature threshold is attained and the packed debris bed becomes lighter than the surrounding coolant, the bed is propelled away from the vessel wall, preventing temperatures that could jeopardize the vessel’s structural integrity and reducing the recriticality potential by limiting the allowable bed depth. Experimental or conceptual designs Some design concepts for nuclear reactors emphasize resistance to meltdown and operating safety. The PIUS (process inherent ultimate safety) designs, originally engineered by the Swedes in the late 1970s and early 1980s, are LWRs that by virtue of their design are resistant to core damage. No units have ever been built. Power reactors, including the Deployable Electrical Energy Reactor, a larger-scale mobile version of the TRIGA for power generation in disaster areas and on military missions, and the TRIGA Power System, a small power plant and heat source for small and remote community use, have been put forward by interested engineers, and share the safety characteristics of the TRIGA due to the uranium zirconium hydride fuel used. The Hydrogen Moderated Self-regulating Nuclear Power Module, a reactor that uses uranium hydride as a moderator and fuel, similar in chemistry and safety to the TRIGA, also possesses these extreme safety and stability characteristics, and has attracted a good deal of interest in recent times. The liquid fluoride thorium reactor is designed to naturally have its core in a molten state, as a eutectic mix of thorium and fluorine salts.
As such, a molten core is reflective of the normal and safe state of operation of this reactor type. In the event the core overheats, a metal plug will melt, and the molten salt core will drain into tanks where it will cool in a non-critical configuration. Since the core is liquid, and already melted, it cannot be damaged. Advanced liquid metal reactors, such as the U.S. Integral Fast Reactor and the Russian BN-350, BN-600, and BN-800, all have a coolant with very high heat capacity, sodium metal. As such, they can withstand a loss of cooling without SCRAM and a loss of heat sink without SCRAM, qualifying them as inherently safe. Soviet Union–designed reactors RBMKs Soviet-designed RBMK reactors (Reaktor Bolshoy Moshchnosti Kanalnyy), found only in Russia and other post-Soviet states and now shut down everywhere except Russia, do not have containment buildings, are naturally unstable (tending to dangerous power fluctuations), and have emergency cooling systems (ECCS) considered grossly inadequate by Western safety standards. RBMK emergency core cooling systems only have one division and little redundancy within that division. Though the large core of the RBMK is less energy-dense than the smaller Western LWR core, it is harder to cool. The RBMK is moderated by graphite. In the presence of both steam and oxygen at high temperatures, graphite forms synthesis gas, and, via the water gas shift reaction, the resultant hydrogen can burn explosively. If oxygen contacts hot graphite, it will burn. Control rods used to be tipped with graphite, a material that slows neutrons and thus speeds up the chain reaction. Water is used as a coolant, but not a moderator. If the water boils away, cooling is lost, but moderation continues. This is termed a positive void coefficient of reactivity. The RBMK tends towards dangerous power fluctuations. Control rods can become stuck if the reactor suddenly heats up while they are moving. Xenon-135, a neutron-absorbing fission product, has a tendency to build up in the core and burn off unpredictably in the event of low power operation. This can lead to inaccurate neutronic and thermal power ratings. The RBMK does not have any containment above the core. The only substantial solid barrier above the fuel is the upper part of the core, called the upper biological shield, which is a piece of concrete interpenetrated with control rods and with access holes for refueling while online. Other parts of the RBMK were shielded better than the core itself. Rapid shutdown (SCRAM) takes 10 to 15 seconds; Western reactors take 1 to 2.5 seconds. Western aid has been given to provide certain real-time safety monitoring capacities to the operating staff. Whether this extends to automatic initiation of emergency cooling is not known. Training has been provided in safety assessment from Western sources, and Russian reactors have evolved in response to the weaknesses that were in the RBMK. Nonetheless, numerous RBMKs still operate. Though it might be possible to stop a loss-of-coolant event prior to core damage occurring, any core damage incidents will probably allow massive release of radioactive materials. Upon entering the EU in 2004, Lithuania was required to phase out its two RBMKs at Ignalina NPP, deemed totally incompatible with European nuclear safety standards. The country planned to replace them with safer reactors at Visaginas Nuclear Power Plant.
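The xenon-135 behaviour mentioned above can be illustrated with a minimal decay-chain sketch. Iodine-135 decays into xenon-135, and once the neutron flux is removed (so xenon is no longer being burned off by neutron capture) the xenon inventory rises for several hours before decaying away. The half-lives below are standard values; the initial inventories are assumed, relative numbers, so the sketch shows the shape of the transient rather than real reactor magnitudes.

# Minimal sketch (relative units, assumed initial inventories) of the
# iodine-135 -> xenon-135 transient after the neutron flux is removed:
# xenon builds up for several hours before decaying away.

import math

LAMBDA_I = math.log(2) / 6.57    # per hour, iodine-135 (half-life ~6.6 h)
LAMBDA_XE = math.log(2) / 9.14   # per hour, xenon-135 (half-life ~9.1 h)

iodine, xenon = 2.0, 1.0         # assumed relative inventories at shutdown
dt = 0.01                        # time step, hours
t, peak_xenon, peak_time = 0.0, xenon, 0.0

while t < 48.0:
    d_iodine = -LAMBDA_I * iodine
    d_xenon = LAMBDA_I * iodine - LAMBDA_XE * xenon
    iodine += d_iodine * dt
    xenon += d_xenon * dt
    t += dt
    if xenon > peak_xenon:
        peak_xenon, peak_time = xenon, t

print(f"xenon-135 peaks at ~{peak_xenon:.2f}x its initial level, "
      f"about {peak_time:.0f} hours after the flux is removed")

This delayed peak, often called the xenon pit, is part of what makes low-power operation of the RBMK hard to manage and its neutronic behaviour hard to predict.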
MKER The MKER is a modern Russian-engineered channel type reactor that is a distant descendant of the RBMK, designed to optimize the benefits and fix the serious flaws of the original. Several unique features of the MKER's design make it a credible and interesting option. The reactor remains online during refueling, requiring outages only occasionally for maintenance, with uptime up to 97-99%. The moderator design allows the use of less-enriched fuels, with a high burnup rate. Neutronics characteristics have been optimized for civilian use, for superior fuel fertilization and recycling; and graphite moderation achieves better neutronics than is possible with light water moderation. The lower power density of the core greatly enhances thermal regulation. An array of improvements makes the MKER's safety comparable to Western Generation III reactors: improved quality of parts, advanced computer controls, a comprehensive passive emergency core cooling system, and a very strong containment structure, along with a negative void coefficient and a fast-acting rapid shutdown system. The passive emergency cooling system uses reliable natural phenomena to cool the core, rather than depending on motor-driven pumps. The containment structure is designed to withstand severe stress and pressure. In the event of a pipe break of a cooling-water channel, the channel can be isolated from the water supply, preventing a general failure. The greatly enhanced safety and unique benefits of the MKER design enhance its competitiveness in countries considering full fuel-cycle options for nuclear development. VVER The VVER is a pressurized light-water reactor that is far more stable and safe than the RBMK. This is because it uses light water as a moderator (rather than graphite), has well-understood operating characteristics, and has a negative void coefficient of reactivity. In addition, some have been built with more than marginal containments, some have quality ECCS systems, and some have been upgraded to international standards of control and instrumentation. Present generations of VVERs (starting from the VVER-1000) are built to Western-equivalent levels of instrumentation, control, and containment systems. Even with these positive developments, however, certain older VVER models raise a high level of concern, especially the VVER-440 V230. The VVER-440 V230 has no containment building, but only has a structure capable of confining steam surrounding the RPV. This is a volume of thin steel, perhaps in thickness, grossly insufficient by Western standards. The design has no ECCS and can survive at most one pipe break (there are many pipes greater than that size within the design). It has six steam generator loops, adding unnecessary complexity. Apparently steam generator loops can be isolated, however, in the event that a break occurs in one of these loops. The plant can remain operating with one isolated loop—a feature found in few Western reactors. The interior of the pressure vessel is plain alloy steel, exposed to water. This can lead to rust if the reactor is exposed to water. One point of distinction in which the VVER surpasses the West is the reactor water cleanup facility—built, no doubt, to deal with the enormous volume of rust within the primary coolant loop—the product of the slow corrosion of the RPV. This model is viewed as having inadequate process control systems.
Bulgaria had a number of VVER-440 V230 models, but they opted to shut them down upon joining the EU rather than backfit them, and are instead building new VVER-1000 models. Many non-EU states maintain V230 models, including Russia and the CIS. Many of these states, rather than abandon the reactors entirely, have opted to install an ECCS, develop standard procedures, and install proper instrumentation and control systems. Though confinements cannot be transformed into containments, the risk of a limiting fault resulting in core damage can be greatly reduced. The VVER-440 V213 model was built to the first set of Soviet nuclear safety standards. It possesses a modest containment building, and the ECCS systems, though not completely to Western standards, are reasonably comprehensive. Many VVER-440 V213 models operated by former Soviet bloc countries have been upgraded to fully automated Western-style instrumentation and control systems, improving safety to Western levels for accident prevention—but not for accident containment, which is of a modest level compared to Western plants. These reactors are regarded as "safe enough" by Western standards to continue operation without major modifications, though most owners have performed major modifications to bring them up to generally equivalent levels of nuclear safety. During the 1970s, Finland built two VVER-440 V213 models to Western standards with a large-volume full containment and world-class instrumentation, control standards and an ECCS with multiple redundant and diversified components. In addition, passive safety features such as 900-tonne ice condensers have been installed, making these two units safety-wise the most advanced VVER-440s in the world. The VVER-1000 type has a definitely adequate Western-style containment, the ECCS is sufficient by Western standards, and instrumentation and control has been markedly improved to Western 1970s-era levels. Effects The effects of a nuclear meltdown depend on the safety features designed into a reactor. A modern reactor is designed both to make a meltdown unlikely, and to contain one should it occur. In a modern reactor, a nuclear meltdown, whether partial or total, should be contained inside the reactor's containment structure. Thus (assuming that no other major disasters occur) while the meltdown will severely damage the reactor itself, possibly contaminating the whole structure with highly radioactive material, a meltdown alone should not lead to significant radioactivity release or danger to the public. Reactor design Although pressurized water reactors are more susceptible to nuclear meltdown in the absence of active safety measures, this is not a universal feature of civilian nuclear reactors. Much of the research in civilian nuclear reactors is for designs with passive nuclear safety features that may be less susceptible to meltdown, even if all emergency systems failed. For example, pebble bed reactors are designed so that complete loss of coolant for an indefinite period does not result in the reactor overheating. The General Electric ESBWR and Westinghouse AP1000 have passively activated safety systems. The CANDU reactor has two low-temperature and low-pressure water systems surrounding the fuel (i.e. moderator and shield tank) that act as back-up heat sinks and preclude meltdowns and core-breaching scenarios. 
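The passive-safety idea in the preceding paragraph (for example, a pebble bed reactor riding out a complete loss of coolant) rests on negative temperature feedback: as the fuel heats up, reactivity and therefore fission power fall. The deliberately simplified lumped model below, with every parameter an assumed illustrative value, shows the qualitative behaviour of such a feedback loop; it is not a model of any real reactor.

# Toy sketch (deliberately simplified, all parameters assumed) of a
# negative-temperature-feedback loop: as the fuel heats up, fission power
# falls, so temperature settles at a bounded equilibrium instead of
# running away, even with degraded heat removal.

ALPHA = -0.002        # assumed fractional power change per degC above nominal
P_NOMINAL = 100.0     # arbitrary power units at the nominal temperature
T_NOMINAL = 600.0     # degC, assumed nominal fuel temperature
HEAT_CAPACITY = 50.0  # power-units * s per degC, assumed lumped core
LOSS_COEFF = 0.08     # power units lost per degC above a 300 degC heat sink

def fission_power(temp_c):
    return max(0.0, P_NOMINAL * (1.0 + ALPHA * (temp_c - T_NOMINAL)))

temp = T_NOMINAL
dt = 1.0  # seconds
for step in range(20000):                    # simulate a few hours of degraded cooling
    heat_in = fission_power(temp)
    heat_out = LOSS_COEFF * (temp - 300.0)   # weakened passive heat removal
    temp += (heat_in - heat_out) * dt / HEAT_CAPACITY

print(f"temperature settles near {temp:.0f} degC; "
      f"power falls to {fission_power(temp):.0f} units")

Instead of running away, the toy core settles at a higher but bounded temperature at which the reduced fission power matches the remaining heat removal; real designs must of course show that this equilibrium stays below fuel-damage limits.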
Liquid fueled reactors can be stopped by draining the fuel into tankage, which not only prevents further fission but draws decay heat away statically, and by drawing off the fission products (which are the source of post-shutdown heating) incrementally. The ideal is to have reactors that fail-safe through physics rather than through redundant safety systems or human intervention. Certain fast breeder reactor designs may be more susceptible to meltdown than other reactor types, due to their larger quantity of fissile material and the higher neutron flux inside the reactor core. Other reactor designs, such as the Integral Fast Reactor model EBR-II, have been explicitly engineered to be meltdown-immune. It was tested in April 1986, just before the Chernobyl failure, to simulate loss of coolant pumping power, by switching off the power to the primary pumps. As designed, it shut itself down, in about 300 seconds, as soon as the temperature rose to a point designed to be higher than proper operation would require. This was well below the boiling point of the unpressurised liquid metal coolant, which had entirely sufficient cooling ability to deal with the heat of fission product radioactivity, by simple convection. The second test, deliberate shut-off of the secondary coolant loop that supplies the generators, caused the primary circuit to undergo the same safe shutdown. This test simulated the case of a water-cooled reactor losing its steam turbine circuit, perhaps by a leak. United States The Westinghouse TR-2 suffered partial core damage in 1960 when a likely fuel cladding defect caused one fuel element (out of over 200) to overheat and melt. The reactor at EBR-I suffered a partial meltdown during a coolant flow test on 29 November 1955. The Sodium Reactor Experiment at the Santa Susana Field Laboratory was an experimental nuclear reactor that operated from 1957 to 1964; in July 1959 it became the first commercial power plant in the world to experience a core meltdown. The partial meltdown at the Fermi 1 experimental fast breeder reactor, in 1966, required the reactor to be repaired, though it never achieved full operation afterward. The SNAP8DR reactor at the Santa Susana Field Laboratory experienced damage to approximately a third of its fuel in an accident in 1969. The Three Mile Island accident, in 1979, referred to in the press as a "partial core melt", led to the total dismantlement and the permanent shutdown of reactor 2. Unit 1 continued to operate until 2019. Soviet Union A number of Soviet Navy nuclear submarines experienced nuclear meltdowns, including K-27, K-140, and K-431. Reactor 4 of Chernobyl experienced a full reactor meltdown after a failed safety test. Japan During the Fukushima Daiichi nuclear disaster following the earthquake and tsunami in March 2011, three of the power plant's six reactors suffered meltdowns; most of the fuel in the No. 1 reactor melted. Switzerland The Lucens reactor, Switzerland, in 1969.
Canada NRX (military), Ontario, Canada, in 1952 United Kingdom Windscale (military), Sellafield, England, in 1957 (see Windscale fire) Chapelcross nuclear power station (civilian), Scotland, in 1967 France Saint-Laurent Nuclear Power Plant (civilian), France, in 1969 China syndrome The China syndrome (loss-of-coolant accident) is a nuclear reactor operations accident characterized by the severe meltdown of the core components of the reactor, which then burn through the containment vessel and the housing building, then (figuratively) through the crust and body of the Earth until reaching the opposite end, presumed to be in "China". While the antipodes of China include Argentina with its Atucha Nuclear Power Plant the phrasing is metaphorical; there is no way a core could penetrate the several-kilometer thickness of the Earth's crust, and even if it did melt to the center of the Earth, it would not travel back upwards against the pull of gravity. Moreover, any tunnel behind the material would be closed by immense lithostatic pressure. History The system design of the nuclear power plants built in the late 1960s raised the concern that a severe reactor accident could release large quantities of radioactive materials into the atmosphere and environment. By 1970, there were doubts about the ability of the emergency core cooling system to cope with the effects of a loss of coolant accident and the consequent meltdown of the fuel core. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss of coolant accident, of the nuclear fuel rods and core components melting the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. In the event, Lapp’s hypothetical nuclear accident was cinematically adapted as The China Syndrome (1979). The real scare, however, came from a quote in the 1979 film The China Syndrome, which stated, "It melts right down through the bottom of the plant—theoretically to China, but of course, as soon as it hits ground water, it blasts into the atmosphere and sends out clouds of radioactivity. The number of people killed would depend on which way the wind was blowing, rendering an area the size of Pennsylvania permanently uninhabitable." The actual threat of this was coincidentally tested just 12 days after the release of the film when a meltdown at Pennsylvania's Three Mile Island Plant 2 (TMI-2) created a molten core that moved toward "China" before the core froze at the bottom of the reactor pressure vessel. Thus, the TMI-2 reactor fuel and fission products breached the fuel rods, but the melted core itself did not break the containment of the reactor vessel. A similar concern arose during the Chernobyl disaster. After the reactor was destroyed, a liquid corium mass from the melting core began to breach the concrete floor of the reactor vessel, which was situated above the bubbler pool (a large water reservoir for emergency pumps and to contain any steam pipe rupture). There was concern that a steam explosion would have occurred if the hot corium made contact with the water, resulting in more radioactive materials being released into the air. Due to damages from the accident, three station workers manually operated the valves necessary to drain this pool. 
However, this concern was proven to be unfounded as (unknown to those at the time) the corium already contacted the reservoir before it could be drained, where instead of creating a steam explosion it harmlessly cooled rapidly and created a light-brown ceramic pumice that floated on the water. See also Behavior of nuclear fuel during a reactor accident Comparison of Chernobyl and other radioactivity releases Effects of the Chernobyl disaster High-level radioactive waste management International Nuclear Event Scale List of civilian nuclear accidents Lists of nuclear disasters and radioactive incidents Nuclear safety and security Nuclear power Nuclear power debate Scram or SCRAM, an emergency shutdown of a nuclear reactor Notes References External links Annotated bibliography on civilian nuclear accidents from the Alsos Digital Library for Nuclear Issues Partial Fuel Meltdown Events Nuclear reactor safety
Nuclear meltdown
[ "Technology" ]
8,568
[ "Environmental impact of nuclear power", "Civilian nuclear power accidents" ]
162,549
https://en.wikipedia.org/wiki/MacOS%20version%20history
The history of macOS, Apple's current Mac operating system formerly named Mac OS X until 2011 and then OS X until 2016, began with the company's project to replace its "classic" Mac OS. That system, up to and including its final release Mac OS 9, was a direct descendant of the operating system Apple had used in its Mac computers since their introduction in 1984. However, the current macOS is a UNIX operating system built on technology that had been developed at NeXT from the 1980s until Apple purchased the company in early 1997. macOS components derived from BSD include multiuser access, TCP/IP networking, and memory protection. Although it was originally marketed as simply "version 10" of Mac OS (indicated by the Roman numeral "X"), it has a completely different codebase from Mac OS 9, as well as substantial changes to its user interface. The transition was a technologically and strategically significant one. To ease the transition for users and developers, versions 10.0 through 10.4 were able to run Mac OS 9 and its applications in the Classic Environment, a compatibility layer. macOS was first released in 1999 as Mac OS X Server 1.0. It was built using the technologies Apple acquired from NeXT, but did not include the signature Aqua user interface (UI). The desktop version aimed at regular users—Mac OS X 10.0—shipped in March 2001. Since then, several more distinct desktop and server editions of macOS have been released. Starting with Mac OS X 10.7 Lion, macOS Server is no longer offered as a standalone operating system; instead, server management tools are available for purchase as an add-on. The macOS Server app was discontinued on April 21, 2022, and will stop working on macOS 13 Ventura or later. Starting with the Intel build of Mac OS X 10.5 Leopard, most releases have been certified as Unix systems conforming to the Single UNIX Specification. Lion was referred to by Apple as "Mac OS X Lion" and sometimes as "OS X Lion"; Mountain Lion was officially referred to as just "OS X Mountain Lion", with the "Mac" being completely dropped. The operating system was further renamed to "macOS" starting with macOS Sierra. From the introduction of machines not supporting the classic Mac OS in 2003 until the introduction of iPhone OS in early 2007, Mac OS X was Apple's only software platform. macOS retained the major version number 10 throughout its development history until the release of macOS 11 Big Sur in 2020. Mac OS X 10.0 and 10.1 were given names of big cats as internal code names ("Cheetah" and "Puma"). Starting with Mac OS X 10.2 Jaguar, big-cat names were used as marketing names; starting with OS X 10.9 Mavericks, names of locations in California were used as marketing names instead. The current major version, macOS 15 Sequoia, was announced on June 10, 2024, at WWDC 2024 and released on September 16 of that year. Development Development outside Apple After Apple removed Steve Jobs from management in 1985, he left the company and attempted to create the "next big thing", with funding from Ross Perot and himself. The result was the NeXT Computer. As the first workstation to include a digital signal processor (DSP) and a high-capacity optical disc drive, NeXT hardware was advanced for its time, but was expensive relative to the rapidly commoditizing workstation market. The hardware was phased out in 1993; however, the company's object-oriented operating system NeXTSTEP had a more lasting legacy as it eventually became the basis for Mac OS X. 
NeXTSTEP was based on the Mach kernel developed at CMU (Carnegie Mellon University) and BSD, an implementation of Unix dating back to the 1970s. It featured an object-oriented programming framework based on the Objective-C language. This environment is known today in the Mac world as Cocoa. It also supported the innovative Enterprise Objects Framework database access layer and WebObjects application server development environment, among other notable features. All but abandoning the idea of an operating system, NeXT managed to maintain a business selling WebObjects and consulting services, only ever making modest profits in its last few quarters as an independent company. NeXTSTEP underwent an evolution into OPENSTEP which separated the object layers from the operating system below, allowing it to run with less modification on other platforms. OPENSTEP was, for a short time, adopted by Sun and HP. However, by this point, a number of other companies — notably Apple, IBM, Microsoft, and even Sun itself — were claiming they would soon be releasing similar object-oriented operating systems and development tools of their own. Some of these efforts, such as Taligent, did not fully come to fruition; others, like Java, gained widespread adoption. On February 4, 1997, Apple Computer acquired NeXT for $427 million, and used OPENSTEP as the basis for Mac OS X, as it was called at the time. Traces of the NeXT software heritage can still be seen in macOS. For example, in the Cocoa development environment, the Objective-C library classes have "NS" prefixes, and the HISTORY section of the manual page for the defaults command in macOS straightforwardly states that the command "First appeared in NeXTStep." Internal development Meanwhile, Apple was facing commercial difficulties of its own. The decade-old Macintosh System Software had reached the limits of its single-user, co-operative multitasking architecture, and its once-innovative user interface was looking increasingly outdated. A massive development effort to replace it, known as Copland, was started in 1994, but was generally perceived outside Apple to be a hopeless case due to political infighting and conflicting goals. By 1996, Copland was nowhere near ready for release, and the project was eventually cancelled. Some elements of Copland were incorporated into Mac OS 8, released on July 26, 1997. After considering the purchase of BeOS — a multimedia-enabled, multi-tasking OS designed for hardware similar to Apple's, the company decided instead to acquire NeXT and use OPENSTEP as the basis for their new OS. Avie Tevanian took over OS development, and Steve Jobs was brought on as a consultant. At first, the plan was to develop a new operating system based almost entirely on an updated version of OPENSTEP, with the addition of a virtual machine subsystem — known as the Blue Box — for running "classic" Macintosh applications. The result was known by the code name Rhapsody, slated for release in late 1998. Apple expected that developers would port their software to the considerably more powerful OPENSTEP libraries once they learned of its power and flexibility. Instead, several major developers such as Adobe told Apple that this would never occur, and that they would rather leave the platform entirely. 
This "rejection" of Apple's plan was largely the result of a string of previous broken promises from Apple; after watching one "next OS" after another disappear and Apple's market share dwindle, developers were not interested in doing much work on the platform at all, let alone a re-write. Changed direction under Jobs Apple's financial losses continued and the board of directors lost confidence in CEO Gil Amelio, asking him to resign. The board asked Steve Jobs to lead the company on an interim basis, essentially giving him carte blanche to make changes to return the company to profitability. When Jobs announced at the World Wide Developer's Conference that what developers really wanted was a modern version of the Mac OS, and Apple was going to deliver it, he was met with applause. Over the next two years, a major effort was applied to porting the original Macintosh API to Unix libraries known as Carbon. Mac OS applications could be ported to Carbon without the need for a complete re-write, making them operate as native applications on the new operating system. Meanwhile, applications written using the older toolkits would be supported using the "Classic" Mac OS 9 environment. Support for C, C++, Objective-C, Java, and Python were added, furthering developer comfort with the new platform. During this time, the lower layers of the operating system (the Mach kernel and the BSD layers on top of it) were re-packaged and released under the Apple Public Source License. They became known as Darwin. The Darwin kernel provides a stable and flexible operating system, which takes advantage of the contributions of programmers and independent open-source projects outside Apple; however, it sees little use outside the Macintosh community. During this period, the Java programming language had increased in popularity, and an effort was started to improve Mac Java support. This consisted of porting a high-speed Java virtual machine to the platform, and exposing macOS-specific "Cocoa" APIs to the Java language. The first release of the new OS — Mac OS X Server 1.0 — used a modified version of the Mac OS GUI, but all client versions starting with Mac OS X Developer Preview 3 used a new theme known as Aqua. Aqua was a substantial departure from the Mac OS 9 interface, which had evolved with little change from that of the original Macintosh operating system: it incorporated full color scalable graphics, anti-aliasing of text and graphics, simulated shading and highlights, transparency and shadows, and animation. A new feature was the Dock, an application launcher which took advantage of these capabilities. Despite this, Mac OS X maintained a substantial degree of consistency with the traditional Mac OS interface and Apple's own Apple Human Interface Guidelines, with its pull-down menu at the top of the screen, familiar keyboard shortcuts, and support for a single-button mouse. The development of Aqua was delayed somewhat by the switch from OpenStep's Display PostScript engine to one developed in-house that was free of any license restrictions, known as Quartz. Releases With the exception of Mac OS X Server 1.0 and the original public beta, the first several macOS versions were named after big cats. Prior to its release, version 10.0 was code named "Cheetah" internally at Apple, and version 10.1 was code named internally as "Puma". 
After the code name "Jaguar" for version 10.2 received publicity in the media, Apple began openly using the names to promote the operating system: 10.3 was marketed as "Panther", 10.4 as "Tiger", 10.5 as "Leopard", 10.6 as "Snow Leopard", 10.7 as "Lion", and 10.8 as "Mountain Lion". "Panther", "Tiger", and "Leopard" were registered as trademarks. Apple registered "Lynx" and "Cougar", but these were allowed to lapse. Apple started using the name of locations in California for subsequent releases: 10.9 Mavericks was named after Mavericks, a popular surfing destination; 10.10 Yosemite was named after Yosemite National Park; 10.11 El Capitan was named for the El Capitan rock formation in Yosemite National Park; 10.12 Sierra was named for the Sierra Nevada mountain range; and 10.13 High Sierra was named for the area around the High Sierra Camps. In 2016, OS X was renamed to macOS. A few years later, in 2020, with the release of macOS Big Sur, the first component of the version number was incremented from 10 to 11, so Big Sur's initial release's version number was 11.0 instead of 10.16, making the version numbers of macOS behave the way the version numbers of Apple's other operating systems do. All subsequent major releases also increased the first component of the version number. Mac OS X Public Beta On September 13, 2000, Apple released a $29.95 "preview" version of Mac OS X (internally codenamed Kodiak) in order to gain feedback from users. It marked the first public availability of the Aqua interface, and Apple made many changes to the UI based on customer feedback. Mac OS X Public Beta expired and ceased to function in spring 2001. Mac OS X 10.0 "Cheetah" On March 24, 2001, Apple released Mac OS X 10.0 (internally codenamed Cheetah). The initial version was slow, incomplete, and had very few applications available at the time of its launch, mostly from independent developers. Critics suggested that the operating system was not ready for mainstream adoption, but they recognized the importance of its initial launch as a base to improve upon. Simply releasing Mac OS X was received by the Macintosh community as a great accomplishment, for attempts to completely overhaul the Mac OS had been underway since 1996, and delayed by countless setbacks. Following some bug fixes, kernel panics became much less frequent. Mac OS X 10.1 "Puma" Mac OS X 10.1 (internally codenamed Puma) was released on September 25, 2001. It has better performance and provided missing features, such as DVD playback. Apple released 10.1 as a free upgrade CD for 10.0 users. Apple released a upgrade CD for Mac OS 9. On January 7, 2002, Apple announced that Mac OS X was to be the default operating system for all Macintosh products by the end of that month. Mac OS X 10.2 Jaguar On August 23, 2002, Apple followed up with Mac OS X 10.2 Jaguar, the first release to use its code name as part of the branding. It brought great raw performance improvements, a sleeker look, and many powerful user-interface enhancements (over 150, according to Apple ), including Quartz Extreme for compositing graphics directly on an ATI Radeon or Nvidia GeForce2 MX AGP-based video card with at least 16 MB of VRAM, a system-wide repository for contact information in the new Address Book, and an instant messaging client named iChat. The Happy Mac which had appeared during the Mac OS startup sequence for almost 18 years was replaced with a large grey Apple logo with the introduction of Mac OS X 10.2. 
Mac OS X 10.3 Panther Mac OS X Panther was released on October 24, 2003. In addition to providing much improved performance, it also incorporated the most extensive update yet to the user interface. Panther included as many or more new features as Jaguar had the year before, including an updated Finder, incorporating a brushed-metal interface, Fast user switching, Exposé (Window manager), FileVault, Safari, iChat AV (which added videoconferencing features to iChat), improved Portable Document Format (PDF) rendering and much greater Microsoft Windows interoperability. Support for some early G3 computers such as the Power Macintosh and PowerBook was discontinued. Mac OS X 10.4 Tiger Mac OS X Tiger was released on April 29, 2005. Apple stated that Tiger contained more than 200 new features. As with Panther, certain older machines were no longer supported; Tiger requires a Mac with a built-in FireWire port. Among the new features, Tiger introduced Spotlight, Dashboard, Smart Folders, updated Mail program with Smart Mailboxes, QuickTime 7, Safari 2, Automator, VoiceOver, Core Image and Core Video. The initial release of the Apple TV used a modified version of Tiger with a different graphical interface and fewer applications and services. On January 10, 2006, Apple released the first Intel x86-based Macs along with the 10.4.4 update to Tiger. This operating system functioned identically on the PowerPC-based Macs and the new Intel-based machines, with the exception of the Intel release dropping support for the Classic environment. 10.4.4 introduced Rosetta, which translated 32-bit PowerPC machine code to 32-bit x86 code, allowing applications for PowerPC to run on Intel-based Macs without modification. Only PowerPC Macs can be booted from retail copies of the Tiger client DVD, but there is a Universal DVD of Tiger Server 10.4.7 (8K1079) that can boot both PowerPC and Intel Macs. Mac OS X 10.5 Leopard Mac OS X Leopard was released on October 26, 2007. Apple called it "the largest update of Mac OS X". Leopard supports both PowerPC- and Intel x86-based Macintosh computers; support for Macs with the G3 processor was dropped, and Macs with the G4 processor required a minimum clock rate of 867 MHz and at least 512 MB of RAM to be installed. The single DVD works for all supported Macs (including 64-bit machines). New features include a new look, an updated Finder, Time Machine, Spaces, Boot Camp pre-installed, full support for 64-bit applications (including graphical applications), new features in Mail and iChat, and a number of new security features. Leopard is an Open Brand UNIX 03 registered product on the Intel platform. It was also the first BSD-based OS to receive UNIX 03 certification. Leopard dropped support for the Classic Environment and all Classic applications, and was the final version of Mac OS X to support the PowerPC architecture. Mac OS X 10.6 Snow Leopard Mac OS X Snow Leopard was released on August 28, 2009, the last version to be available on disc. Rather than delivering big changes to the appearance and end user functionality like the previous releases of , the development of Snow Leopard was deliberately focused on "under the hood" changes, increasing the performance, efficiency, and stability of the operating system. 
For most users, the most noticeable changes were a difference in the disk space that the operating system frees up after a clean installation when compared to Mac OS X 10.5 Leopard, a more responsive Finder rewritten in Cocoa, faster Time Machine backups, more reliable and user-friendly disk ejects, a more powerful version of the Preview application, and a faster Safari web browser. An update also introduced support for the Mac App Store, Apple's digital distribution platform for macOS applications and subsequent macOS upgrades. Snow Leopard only supports Macs with Intel CPUs, requires at least 1 GB of RAM, and drops default support for applications built for the PowerPC architecture. However, Rosetta can be installed as an additional component to retain support for PowerPC-only applications. Mac OS X 10.7 Lion Mac OS X Lion (also known as OS X Lion) was released on July 20, 2011. It brought developments made in Apple's iOS, such as an easily navigable display of installed applications (Launchpad) and (a greater use of) multi-touch gestures, to the Mac. This release removed Rosetta, making it incapable of running PowerPC applications. It dropped support for 32-bit Intel processors and requires 2 GB of memory. Changes made to the graphical user interface (GUI) include the Launchpad (similar to the home screen of iOS and iPadOS devices), auto-hiding scrollbars that only appear when they are being used, and Mission Control, which unifies Exposé, Spaces, Dashboard, and full-screen applications within a single interface. Apple also made changes to applications: they resume in the same state as they were before they were closed (similar to iOS). Documents auto-save by default. OS X 10.8 Mountain Lion OS X Mountain Lion was released on July 25, 2012. It incorporates some features seen in iOS 5, which include Game Center, support for iMessage in the new Messages messaging application, and Reminders as a to-do list app separate from iCal (which is renamed as Calendar, like the iOS app). It also includes support for storing iWork documents in iCloud. 2 GB of memory is required. Notification Center is added, providing an overview of alerts from applications; it is a desktop version similar to the one in iOS 5.0 and higher. Application notification pop-ups are now concentrated in the corner of the screen, and the Center itself is pulled in from the right side of the screen. Mountain Lion also includes more Chinese features, including support for Baidu as an option for the Safari search engine. Notes is added, as an application separate from Mail, syncing with its iOS counterpart through the iCloud service. Messages, an instant messaging software application, replaces iChat. OS X 10.9 Mavericks OS X Mavericks was released on October 22, 2013, as a free update through the Mac App Store worldwide. It placed emphasis on battery life, Finder enhancements, other enhancements for power users, and continued iCloud integration, as well as bringing more of Apple's iOS apps to the OS X platform. iBooks and Apple Maps applications were added. Mavericks requires 2 GB of memory to operate. It is the first version named under Apple's then-new theme of places in California, dubbed Mavericks after the surfing location. Unlike previous versions of OS X, which had progressively decreasing prices since 10.6, 10.9 was available at no charge to all users of compatible systems running Snow Leopard (10.6) or later, beginning Apple's policy of free upgrades for life on its operating system and business software.
OS X 10.10 Yosemite OS X Yosemite was released to the general public on October 16, 2014, as a free update through the Mac App Store worldwide. It featured a major overhaul of the user interface, replacing skeuomorphism with flat graphic design and blurred translucency effects, following the aesthetic introduced with iOS 7. It introduced features called Continuity and Handoff, which allow for tighter integration between paired OS X and iOS devices: the user can handle phone calls or text messages on either their Mac or their iPhone, and edit the same Pages document on either their Mac or their iPad. A later update of the OS included Photos as a replacement for iPhoto and Aperture. OS X 10.11 El Capitan OS X El Capitan was revealed on June 8, 2015, during the WWDC 2015 keynote speech. It was made available as a public beta in July and was made available publicly on September 30, 2015. Apple described this release as containing "Refinements to the Mac Experience" and "Improvements to System Performance" rather than new features. Refinements include public transport built into the Maps application, GUI improvements to the Notes application, as well as adopting San Francisco as the system font. The Metal API, a low-level graphics framework intended to improve application performance, debuted in this operating system and was available to "all Macs since 2012". macOS 10.12 Sierra macOS Sierra was announced on June 13, 2016, during the WWDC16 keynote speech. The update brought the Siri assistant to macOS, with several Mac-specific capabilities, such as searching for files. It also allowed websites to support Apple Pay as a method of transferring payment, using either a nearby iOS device or Touch ID to authenticate. iCloud also received several improvements, such as the ability to store a user's Desktop and Documents folders on iCloud so they could be synced with other Macs on the same Apple ID. It was released publicly on September 20, 2016. macOS 10.13 High Sierra macOS High Sierra was announced on June 5, 2017, during the WWDC17 keynote speech. It was released on September 25, 2017. The release includes many under-the-hood improvements, including a switch to Apple File System (APFS), the introduction of Metal 2, support for HEVC video, and improvements to virtual reality support. In addition, numerous changes were made to standard applications including Photos, Safari, Notes, and Spotlight. macOS 10.14 Mojave macOS Mojave was announced on June 4, 2018, during the WWDC18 keynote speech. It was released on September 24, 2018. Some of the key new features were a system-wide Dark Mode with dark wallpapers, Desktop Stacks, and Dynamic Desktop, which changes the desktop background image to correspond to the user's current time of day. macOS 10.15 Catalina macOS Catalina was announced on June 3, 2019, during the WWDC19 keynote speech. It was released on October 7, 2019. It primarily focuses on updates to built-in apps, such as replacing iTunes with separate Music, Podcasts, and TV apps, redesigned Reminders and Books apps, and a new Find My app. It also features Sidecar, which allows the user to use an iPad as a second screen for their computer, or even simulate a graphics tablet with an Apple Pencil. It is the first version of macOS not to support 32-bit applications. The Dashboard application was also removed in the update. Since macOS Catalina, iOS apps can run on macOS through Project Catalyst, but this requires the app to be made compatible, unlike ARM-powered Apple silicon Macs, which can run iOS apps by default.
macOS 11 Big Sur macOS Big Sur was announced on June 22, 2020, during the WWDC20 keynote speech. It was released on November 12, 2020. The major version number changed for the first time since "Mac OS X" was released, making it macOS 11. It brings ARM support, new icons, GUI changes to the system, and other bug fixes. Since macOS 11.2.3, iOS apps can no longer be installed on Apple silicon Macs from an IPA file outside the Mac App Store by default; third-party software is now required to unlock that functionality. Big Sur introduced Rosetta 2 to allow 64-bit Intel applications to run on Apple silicon Macs. However, Intel-based Macs are unable to run ARM-based applications, including iOS and iPadOS apps. macOS 12 Monterey macOS Monterey was announced on June 7, 2021, during the WWDC21 keynote speech. It was released on October 25, 2021. macOS Monterey introduces new features such as Universal Control, which allows users to use a single keyboard and mouse to move between devices; AirPlay, which now allows users to present and share almost anything; and the Shortcuts app, brought over from iOS, which gives users access to galleries of pre-built shortcuts designed for Macs and also lets them set up their own shortcuts, among other things. macOS Monterey is the final version of macOS that officially supports macOS Server. macOS 13 Ventura macOS Ventura was announced on June 6, 2022, during the WWDC22 keynote speech. It was released on October 24, 2022. macOS Ventura introduces Stage Manager, a new and optional window manager; a redesigned settings app; Continuity Camera, which allows Mac users to use their iPhone as a camera; and several other new features. It is also the first version of macOS without macOS Server support. macOS 14 Sonoma macOS Sonoma was announced on June 5, 2023, during the WWDC23 keynote speech. Key changes include a revamp of Widgets, the user lock screen, and a video wallpaper/screensaver feature using Apple TV's screen saver videos. It was released on September 26, 2023. macOS 15 Sequoia macOS Sequoia was announced on June 10, 2024, during the WWDC24 keynote speech. It was released on September 16, 2024. Timeline of Macintosh operating systems See also Mac operating systems Architecture of macOS List of macOS built-in apps iOS version history References External links MacOS
MacOS version history
[ "Technology" ]
5,667
[ "History of software", "History of computing" ]
162,557
https://en.wikipedia.org/wiki/Collusion
Collusion is a deceitful agreement or secret cooperation between two or more parties to limit open competition by deceiving, misleading or defrauding others of their legal right. Collusion is not always considered illegal. It can be used to attain objectives forbidden by law; for example, by defrauding or gaining an unfair market advantage. It is an agreement among firms or individuals to divide a market, set prices, limit production or limit opportunities. It can involve "unions, wage fixing, kickbacks, or misrepresenting the independence of the relationship between the colluding parties". In legal terms, all acts effected by collusion are considered void. Definition In the study of economics and market competition, collusion takes place within an industry when rival companies cooperate for their mutual benefit. Conspiracy usually involves an agreement between two or more sellers to take action to suppress competition between sellers in the market. Because competition among sellers can provide consumers with low prices, conspiracy agreements increase the price consumers pay for the goods. Because of this harm to consumers, it is against antitrust laws to fix prices by agreement between producers, so participants must keep it a secret. Collusion often takes place within an oligopoly market structure, where there are few firms and agreements that have significant impacts on the entire market or industry. To differentiate from a cartel, collusive agreements between parties may not be explicit; however, the implications of cartels and collusion are the same. Under competition law, there is an important distinction between direct and covert collusion. Direct collusion generally refers to a group of companies communicating directly with each other to coordinate and monitor their actions, such as cooperating through pricing, market allocation, sales quotas, etc. On the other hand, tacit collusion is where companies coordinate and monitor their behavior without direct communication. This type of collusion is generally not considered illegal, so companies guilty of tacit conspiracy should face no penalties even though their actions would have a similar economic impact as explicit conspiracy. Collusion results from less competition through mutual understanding, where competitors can independently set prices and market share. A core principle of antitrust policy is that companies must not communicate with each other. Even if conversations between multiple companies are illegal but not enforceable, the incentives to comply with collusive agreements are the same with and without communication. It is against competition law for companies to have explicit conversations in private. If evidence of conversations is accidentally left behind, it will become the most critical and conclusive evidence in antitrust litigation. Even without communication, businesses can coordinate prices by observation, but from a legal standpoint, this tacit handling leaves no evidence. Most companies cooperate through invisible collusion, so whether companies communicate is at the core of antitrust policy. Collusion is illegal in the United States, Canada, Australia and most of the EU due to antitrust laws, but implicit collusion in the form of price leadership and tacit understandings still takes place. Tacit Collusion Covert collusion is known as tacit collusion and is considered legal. 
Adam Smith in The Wealth of Nations explains that since the masters (business owners) are fewer in number, it is easier for them to collude to serve common interests, such as maintaining low wages, whilst it is difficult for labour to coordinate to protect its interests due to its vast numbers. Hence, business owners have a bigger advantage over the working class. Nevertheless, according to Adam Smith, the public rarely hears about the coordination and collaboration that occurs between business owners, as it takes place in informal settings. Some forms of explicit collusion are not considered impactful enough on an individual basis to be considered illegal, such as that carried out by the social media group WallStreetBets in the GameStop short squeeze. There are many ways that implicit collusion tends to develop: The practice of stock analyst conference calls and meetings of industry participants almost necessarily results in tremendous amounts of strategic and price transparency. This allows each firm to see how and why every other firm is pricing its products. If the practice of the industry causes more complicated pricing, which is hard for the consumer to understand (such as risk-based pricing, hidden taxes and fees in the wireless industry, negotiable pricing), this can cause competition based on price to be meaningless (because it would be too complicated to explain to the customer in a short advertisement). This causes industries to have essentially the same prices and compete on advertising and image, something theoretically as damaging to consumers as normal price fixing. Base model of (Price) Collusion For a cartel to work successfully, it must: Co-ordinate on the conspiracy agreement (bargaining, explicit or implicit communication). Monitor compliance. Punish non-compliance. Control the expansion of non-cartel supply. Avoid inspection by customers and competition authorities. Regarding stability within the cartel: Collusion on high prices means that members have an incentive to deviate. In a one-off situation, high prices are not sustainable; sustaining them requires long-term vision and repeated interactions. Companies need to choose between two approaches: Insist on collusion agreements (now) and promote cooperation (future). Turn away from the alliance (now) and face punishment (future). Two factors influence this choice: (1) deviations must be detectable, and (2) penalties for deviations must have a significant effect. Because collusion is illegal, contracts establishing a cartel are not protected by law and cannot be enforced by courts, so a cartel must rely on other forms of punishment. Variations Suppose this market has n firms. At the collusive price, the firms are symmetric, so they divide the industry profit equally, each earning π^M / n per period, where π^M is the total collusive (monopoly) profit. A firm will deviate if and only if the one-period profit from deviating, π^D, is greater than the discounted value of sticking to collusion; conversely, the cartel alliance will be stable when (1 / (1 − δ)) · (π^M / n) ≥ π^D, where δ is the firms' common discount factor and the punishment phase is assumed to yield zero profit (i.e. no firm has an incentive to deviate unilaterally). So as the number of firms increases, each firm's share of the collusive profit shrinks and it becomes more difficult for the cartel to maintain stability: as the number of firms in the market increases, so does the minimum discount factor required for collusion to succeed. According to neoclassical price-determination theory and game theory, the independence of suppliers forces prices to their minimum, increasing efficiency and decreasing the price-determining ability of each firm. 
However, if all firms collude to increase prices, the loss of sales will be minimized, as consumers lack choices at lower prices and must choose from what is available. This benefits the colluding firms, as they generate more sales at the cost of efficiency to society. However, depending on the assumptions made in the theoretical model about the information available to all firms, there are some outcomes, based on cooperative game theory, where collusion may have higher efficiency than if firms did not collude. One variation of this traditional theory is the theory of kinked demand. Firms face a kinked demand curve if, when one firm decreases its price, other firms are expected to follow suit to maintain sales. When one firm increases its price, its rivals are unlikely to follow, as they would lose the sales gains they would otherwise receive by holding prices at the previous level. Kinked demand potentially fosters supra-competitive prices because any one firm would receive a reduced benefit from cutting price, as opposed to the benefits accruing under neoclassical theory and certain game-theoretic models such as Bertrand competition. Collusion may also occur in auction markets, where independent firms coordinate their bids (bid rigging). Deviation For collusion to be sustainable, actions that generate returns in the future must matter to every company: the probability of continued interaction and the company's discount factor must be high enough. The sustainability of cooperation between companies also depends on the threat of punishment, which is also a matter of credibility. Firms that deviate from cooperative pricing face retaliation through multimarket contact (MMC), that is, punishment in every market in which they meet their rivals. MMC increases the loss from deviation, and the incremental loss matters more than the incremental gain when the firm's objective function is concave. Therefore, the purpose of MMC is to strengthen corporate compliance and deter deviation from the collusive agreement. The principle of collusion is that firms give up deviation gains in the short term in exchange for continued collusion in the future. Collusion occurs when companies place more emphasis on future profits. Collusion is easier to sustain when the additional profit to be gained by deviating is lower and the punishment that follows is harsher (i.e. the profit earned during the punishment phase is lower). If future collusive profits − future punishment profits ≥ current deviation profits − current collusive profits, collusion can be sustained. Scholars in economics and management have tried to identify factors explaining why some firms are more or less likely to be involved in collusion. Some have noted the role of the regulatory environment and the existence of leniency programs. 
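The repeated-game logic just described can be made concrete with a short sketch. The following Python example is not from the article; the firm counts, profit figures, and the zero-profit punishment phase are illustrative assumptions chosen to match the symmetric Bertrand-style benchmark, in which the minimum discount factor works out to 1 − 1/n.

```python
# Sketch of the collusion-sustainability condition described above:
# a firm keeps colluding as long as the discounted value of its share of the
# collusive profit is at least the value of deviating once and then earning
# the punishment profit forever after. All numbers are illustrative.

def collusion_sustainable(n_firms: int,
                          monopoly_profit: float,
                          deviation_profit: float,
                          punishment_profit: float,
                          discount_factor: float) -> bool:
    """True if no firm has an incentive to deviate unilaterally."""
    collusive_share = monopoly_profit / n_firms                # per-period collusive profit
    pv_collude = collusive_share / (1.0 - discount_factor)     # collude forever
    pv_deviate = deviation_profit + (discount_factor * punishment_profit
                                     / (1.0 - discount_factor))  # deviate once, then punished
    return pv_collude >= pv_deviate


def min_discount_factor(n_firms: int) -> float:
    """Bertrand-style benchmark: the deviator grabs the whole monopoly profit once
    and the punishment phase yields zero, so collusion needs delta >= 1 - 1/n."""
    return 1.0 - 1.0 / n_firms


if __name__ == "__main__":
    for n in (2, 5, 10):
        print(n, min_discount_factor(n),
              collusion_sustainable(n, monopoly_profit=100.0, deviation_profit=100.0,
                                    punishment_profit=0.0, discount_factor=0.8))
```

Running the example reproduces the pattern stated above: with a discount factor of 0.8, the hypothetical cartel is sustainable with two or five firms but not with ten, because each firm's share of the collusive profit shrinks as the number of firms grows.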
Indicators Some actions that may indicate collusion among competitors are: Charging uniform prices or setting prices that are either too high or too low without justification Paying or receiving kickbacks and agreeing to refer customers only to each other Dividing territories and horizontal territorial allocation of markets among themselves Tying agreements and anticompetitive Product bundling (although, not all product bundling is anticompetitive) Refusal to deal with certain customers or suppliers and exclusive dealing with certain customers or suppliers Selling products below cost in order to drive out competitors (also known as dumping) Restricting the distribution or supply of products along the supply chain through vertical restraints Bid rigging by fixing bids or agreeing not to bid for certain contracts Examples In the example in the picture, the dots in Pc and Q represent competitive industry prices. If firms collude, they can limit production to Q2 and raise the price to P2. Collusion usually involves some form of agreement to seek a higher price. When companies discriminate, price collusion is less likely, so the discount factor needed to ensure stability must be increased. In such price competition, competitors use delivered pricing to discriminate in space, but this does not mean that firms using delivered pricing to discriminate cannot collude. United States Market division and price-fixing among manufacturers of heavy electrical equipment in the 1960s, including General Electric. An attempt by Major League Baseball owners to restrict players' salaries in the mid-1980s. The sharing of potential contract terms by NBA free agents in an effort to help a targeted franchise circumvent the salary cap. Price fixing within food manufacturers providing cafeteria food to schools and the military in 1993. Market division and output determination of livestock feed additive, called lysine, by companies in the US, Japan and South Korea in 1996, Archer Daniels Midland being the most notable of these. Chip dumping in poker or any other card game played for money. Ben and Jerry's and Häagen-Dazs collusion of products in 2013: Ben and Jerry's makes chunkier flavors with more treats in them, while Häagen-Dazs sticks to smoother flavors. Google and Apple against employee poaching, a collusion case in 2015 wherein it was revealed that both companies agreed not to hire employees from one another in order to halt the rise in wages. Google has been hit with a series of antitrust lawsuits. In October 2020, the US Department of Justice filed a landmark lawsuit alleging that Google unlawfully boxed out competitors by reaching deals with phone makers, including Apple and Samsung, to be the default search engine on their devices. Another lawsuit filed by nearly 40 attorneys general on Dec. 17, 2020 alleges that Google’s search results favored its own services over those of more-specialized rivals, a tactic that harmed competitors. Europe The illegal collusion between the giant German automakers BMW, Daimler and Volkswagen, discovered by the European Commission in 2019, to hinder technological progress in improving the quality of vehicle emissions in order to reduce the cost of production and maximize profits. Australia Japanese shipping company Kawasaki Kisen Kaisha Ltd (K-Line) were fined $34.5 million by the Federal Court for engaging in criminal cartel conduct. 
The court found that K-Line participated in a cartel with other shipping companies to fix prices on the transportation of cars, trucks, and buses to Australia between 2009 and 2012. K-Line pleaded guilty in April 2018 and the fine is the largest ever imposed under the Competition and Consumer Act. The court noted that the penalty should serve as a strong warning to businesses that cartel conduct will not be tolerated and will result in serious consequences. Between 2004 and 2013, Dr Esra Ogru, the former CEO of an Australian biotech company called Phosphagenics, colluded with two colleagues by using false invoicing and credit card reimbursements to defraud her employer of more than $6.1 million. Barriers There can be significant barriers to collusion. In any given industry, these may include: The number of firms: As the number of firms in an industry increases, it is more difficult to successfully organize, collude and communicate. Cost and demand differences between firms: If costs vary significantly between firms, it may be impossible to establish a price at which to fix output. Firms generally prefer to produce at a level where marginal cost meets marginal revenue, if one firm can produce at a lower cost, it will prefer to produce more units, and would receive a larger share of profits than its partner in the agreement. Asymmetry of information: Colluding firms may not have all the correct information about all other firms, from a quantitative perspective (firms may not know all other firms' cost and demand conditions) or a qualitative perspective (moral hazard). In either situation, firms may not know each others' preferences or actions, and any discrepancy would incentive at least one actor to renege. Cheating: There is considerable incentive to cheat on collusion agreements; although lowering prices might trigger price wars, in the short term the defecting firm may gain considerably. This phenomenon is frequently referred to as "chiseling". Potential entry: New firms may enter the industry, establishing a new baseline price and eliminating collusion (though anti-dumping laws and tariffs can prevent foreign companies from entering the market). Economic recession: An increase in average total cost or a decrease in revenue provides the incentive to compete with rival firms in order to secure a larger market share and increased demand. Anti-collusion legal framework and collusive lawsuit. Many countries with anti-collusion laws outlaw side-payments, which are an indication of collusion as firms pay each other to incentivize the continuation of the collusive relationship, may see less collusion as firms will likely prefer situations where profits are distributed towards themselves rather than the combined venture. Leniency Programs: Leniency programs are policies that reduce sanctions against collusion if a participant voluntarily confesses their behavior or cooperates with the public authority’s investigation. One example of a leniency program is offering immunity to the first firm who comes clean and gives the government information about collusion. These programs are designed to destabilize collusion and increase deterrence by encouraging firms to report illegal behavior. Conditions Conducive to Collusion There are several industry traits that are thought to be conducive to collusion or empirically associated with collusion. 
These traits include: High market concentration: High market concentration refers to a market with few firms, which makes it easier for these firms to collude and coordinate their actions. Homogeneous products: Homogeneous products refer to products that are similar in nature, which makes it easier for firms to agree on prices and reduces the incentive for firms to compete on product differentiation Stable demand and/or excess capacity: Stable demand and capacity implies predictability and therefore demand and capacity does not fluctuate significantly, which makes it easier for firms to coordinate their actions and maintain a collusive agreement. This can also refer to a situation where firms have more production capacity than is needed to meet demand. Government Intervention Collusion often occurs within an oligopoly market structure, which is a type of market failure. Therefore, natural market forces alone may be insufficient to prevent or deter collusion, and government intervention is often necessary. Fortunately, various forms of government intervention can be taken to reduce collusion among firms and promote natural market competition. Fines and imprisonment to companies that collude and their executives who are personally liable. Detect collusion by screening markets for suspicious pricing activity and high profitability. Provide immunity (leniency) to the first company to confess and provide the government with information about the collusion. See also Conscious parallelism Corporate crime Competition law Further reading Chassang, Sylvain; Ortner, Juan (2023). "Regulating Collusion". Annual Review of Economics 15 (1) References General references Vives, X. (1999) Oligopoly pricing, MIT Press, Cambridge MA (readable; suitable for advanced undergraduates.) Tirole, J. (1988) The Theory of Industrial Organization, MIT Press, Cambridge MA (An organized introduction to industrial organization) Tirole, J. (1986), "Hierarchies and Bureaucracies", Journal of Law Economics and Organization, vol. 2, pp. 181–214. Tirole, J. (1992), "Collusion and the Theory of Organizations", Advances in Economic Theory: Proceedings of the Sixth World Congress of the Econometric Society, ed by J.-J. Laffont. Cambridge: Cambridge University Press, vol.2:151-206. Inline citations Anti-competitive practices Game theory Bidding strategy
Collusion
[ "Mathematics" ]
3,593
[ "Game theory", "Strategy (game theory)" ]
162,600
https://en.wikipedia.org/wiki/Hacktivism
Hacktivism (or hactivism; a portmanteau of hack and activism), is the use of computer-based techniques such as hacking as a form of civil disobedience to promote a political agenda or social change. A form of Internet activism with roots in hacker culture and hacker ethics, its ends are often related to free speech, human rights, or freedom of information movements. Hacktivist activities span many political ideals and issues. Freenet, a peer-to-peer platform for censorship-resistant communication, is a prime example of translating political thought and freedom of speech into code. Hacking as a form of activism can be carried out by a singular activist or through a network of activists, such as Anonymous and WikiLeaks, working in collaboration toward common goals without an overarching authority figure. For context, according to a statement by the U.S. Justice Department, Julian Assange, the founder of WikiLeaks, plotted with hackers connected to the "Anonymous" and "LulzSec" groups, who have been linked to multiple cyberattacks worldwide. In 2012, Assange, who was being held in the United Kingdom on a request for extradition from the United States, gave the head of LulzSec a list of targets to hack and informed him that the most significant leaks of compromised material would come from the National Security Agency, the Central Intelligence Agency, or the New York Times. "Hacktivism" is a controversial term with several meanings. The word was coined to characterize electronic direct action as working toward social change by combining programming skills with critical thinking. But just as hack can sometimes mean cyber crime, hacktivism can be used to mean activism that is malicious, destructive, and undermining the security of the Internet as a technical, economic, and political platform. In comparison to previous forms of social activism, hacktivism has had unprecedented success, bringing in more participants, using more tools, and having more influence in that it has the ability to alter elections, begin conflicts, and take down businesses. According to the United States 2020–2022 Counterintelligence Strategy, in addition to state adversaries and transnational criminal organizations, "ideologically motivated entities such as hacktivists, leaktivists, and public disclosure organizations, also pose significant threats". Origins and definitions Writer Jason Sack first used the term hacktivism in a 1995 article in conceptualizing New Media artist Shu Lea Cheang's film Fresh Kill. However, the term is frequently attributed to the Cult of the Dead Cow (cDc) member "Omega," who used it in a 1996 e-mail to the group. Due to the variety of meanings of its root words, the definition of hacktivism is nebulous and there exists significant disagreement over the kinds of activities and purposes it encompasses. Some definitions include acts of cyberterrorism while others simply reaffirm the use of technological hacking to effect social change. Forms and methods Self-proclaimed "hacktivists" often work anonymously, sometimes operating in groups while other times operating as a lone wolf with several cyber-personas all corresponding to one activist within the cyberactivism umbrella that has been gaining public interest and power in pop-culture. 
Hacktivists generally operate under apolitical ideals and express uninhibited ideas or abuse without being scrutinized by society while representing or defending themselves publicly under an anonymous identity giving them a sense of power in the cyberactivism community. In order to carry out their operations, hacktivists might create new tools; or integrate or use a variety of software tools readily available on the Internet. One class of hacktivist activities includes increasing the accessibility of others to take politically motivated action online. Repertoire of contention of hacktivism includes among others: Code: Software and websites can achieve political goals. For example, the encryption software PGP can be used to secure communications; PGP's author, Phil Zimmermann said he distributed it first to the peace movement. Jim Warren suggests PGP's wide dissemination was in response to Senate Bill 266, authored by Senators Biden and DeConcini, which demanded that "...communications systems permit the government to obtain the plain text contents of voice, data, and other communications...". WikiLeaks is an example of a politically motivated website: it seeks to "keep governments open". Mirroring: Website mirroring is used as a circumvention tool in order to bypass various censorship blocks on websites. This technique copies the contents of a censored website and disseminates it on other domains and sub-domains that are not censored. Document mirroring, similar to website mirroring, is a technique that focuses on backing up various documents and other works. RECAP is software that was written with the purpose to 'liberate US case law' and make it openly available online. The software project takes the form of distributed document collection and archival. Major mirroring projects include initiatives such as the Internet Archive and Wikisource. Anonymity: A method of speaking out to a wide audience about human rights issues, government oppression, etc. that utilizes various web tools such as free and/or disposable email accounts, IP masking, and blogging software to preserve a high level of anonymity. Doxing: The practice in which private and/or confidential documents and records are hacked into and made public. Hacktivists see this as a form of assured transparency, experts claim it is harassment. Denial-of-service attacks: These attacks, commonly referred to as DoS attacks, use large arrays of personal and public computers that hackers take control of via malware executable files usually transmitted through email attachments or website links. After taking control, these computers act like a herd of zombies, redirecting their network traffic to one website, with the intention of overloading servers and taking a website offline. Virtual sit-ins: Similar to DoS attacks but executed by individuals rather than software, a large number of protesters visit a targeted website and rapidly load pages to overwhelm the site with network traffic to slow the site or take it offline. Website defacements: Hackers infiltrate a web server to replace a specific web page with one of their own, usually to convey a specific message. Website redirects: This method involves changing the address of a website within the server so would-be visitors of the site are redirected to a site created by the perpetrator, typically to denounce the original site. Geo-bombing: A technique in which netizens add a geo-tag while editing YouTube videos so that the location of the video can be seen in Google Earth. 
Protestware: The use of malware to promote a social cause or protest. Protestware is self-inflicted by a project's maintainer in order to spread a message; most commonly in a disruptive manner. The term was popularized during the Russo-Ukrainian War after the peacenotwar supply chain attack on the npm ecosystem. Controversy Depending on who is using the term, hacktivism can be a politically motivated technology hack, a constructive form of anarchic civil disobedience, or an undefined anti-systemic gesture. It can signal anticapitalist or political protest; it can denote anti-spam activists, security experts, or open source advocates. Some people describing themselves as hacktivists have taken to defacing websites for political reasons, such as attacking and defacing websites of governments and those who oppose their ideology. Others, such as Oxblood Ruffin (the "foreign affairs minister" of Cult of the Dead Cow and Hacktivismo), have argued forcefully against definitions of hacktivism that include web defacements or denial-of-service attacks. Hacktivism is often seen as shadowy due to its anonymity, commonly attributed to the work of fringe groups and outlying members of society. The lack of responsible parties to be held accountable for the social-media attacks performed by hactivists has created implications in corporate and federal security measures both on and offline. While some self-described hacktivists have engaged in DoS attacks, critics suggest that DoS attacks are an attack on free speech and that they have unintended consequences. DoS attacks waste resources and they can lead to a "DoS war" that nobody will win. In 2006, Blue Security attempted to automate a DoS attack against spammers; this led to a massive DoS attack against Blue Security which knocked them, their old ISP and their DNS provider off the Internet, destroying their business. Following denial-of-service attacks by Anonymous on multiple sites, in reprisal for the apparent suppression of WikiLeaks, John Perry Barlow, a founding member of the EFF, said "I support freedom of expression, no matter whose, so I oppose DDoS attacks regardless of their target... they're the poison gas of cyberspace...". On the other hand, Jay Leiderman, an attorney for many hacktivists, argues that DDoS can be a legitimate form of protest speech in situations that are reasonably limited in time, place and manner. Notable hacktivist events In late 1990s, the Hong Kong Blondes helped Chinese citizens get access to blocked websites by targeting the Chinese computer networks. The group identified holes in the Chinese internet system, particularly in the area of satellite communications. The leader of the group, Blondie Wong, also described plans to attack American businesses that were partnering with China. In 1996, the title of the United States Department of Justice's homepage was changed to "Department of Injustice". Pornographic images were also added to the homepage to protest the Communications Decency Act. In 1998, members of the Electronic Disturbance Theater created FloodNet, a web tool that allowed users to participate in DDoS attacks (or what they called electronic civil disobedience) in support of Zapatista rebels in Chiapas. In December 1998, a hacktivist group from the US called Legions of the Underground emerged. They declared a cyberwar against Iraq and China and planned on disabling internet access in retaliation for the countries' human rights abuses. 
Opposing hackers criticized this move by Legions of the Underground, saying that by shutting down internet systems, the hacktivist group would have no impact on providing free access to information. In July 2001, Hacktivismo, a sect of the Cult of the Dead Cow, issued the "Hacktivismo Declaration". This served as a code of conduct for those participating in hacktivism, and declared the hacker community's goals of stopping "state-sponsored censorship of the Internet" as well as affirming the rights of those therein to "freedom of opinion and expression". During the 2009 Iranian election protests, Anonymous played a role in disseminating information to and from Iran by setting up the website Anonymous Iran; they also released a video manifesto to the Iranian government. Google worked with engineers from SayNow and Twitter to provide communications for the Egyptian people in response to the government sanctioned Internet blackout during the 2011 protests. The result, Speak To Tweet, was a service in which voicemail left by phone was then tweeted via Twitter with a link to the voice message on Google's SayNow. On Saturday 29 May 2010 a hacker calling himself 'Kaka Argentine' hacked into the Ugandan State House website and posted a conspicuous picture of Adolf Hitler with the swastika, a Nazi Party symbol. During the Egyptian Internet black out, January 28 – February 2, 2011, Telecomix provided dial up services, and technical support for the Egyptian people. Telecomix released a video stating their support of the Egyptian people, describing their efforts to provide dial-up connections, and offering methods to avoid internet filters and government surveillance. The hacktivist group also announced that they were closely tracking radio frequencies in the event that someone was sending out important messages. Project Chanology, also known as "Operation Chanology", was a hacktivist protest against the Church of Scientology to punish the church for participating in Internet censorship relating to the removal of material from a 2008 interview with Church of Scientology member Tom Cruise. Hacker group Anonymous attempted to "expel the church from the Internet" via DDoS attacks. In February 2008 the movement shifted toward legal methods of nonviolent protesting. Several protests were held as part of Project Chanology, beginning in 2008 and ending in 2009. On June 3, 2011, LulzSec took down a website of the FBI. This was the first time they had targeted a website that was not part of the private sector. That week, the FBI was able to track the leader of LulzSec, Hector Xavier Monsegur. On June 20, 2011, LulzSec targeted the Serious Organised Crime Agency of the United Kingdom, causing UK authorities to take down the website. In August 2011 a member of Anonymous working under the name "Oliver Tucket" took control of the Syrian Defense Ministry website and added an Israeli government web portal in addition to changing the mail server for the website to one belonging to the Chinese navy. Anonymous and New World Hackers claimed responsibility for the 2016 Dyn cyberattack in retaliation for Ecuador's rescinding Internet access to WikiLeaks founder Julian Assange at their embassy in London. WikiLeaks alluded to the attack. Subsequently, FlashPoint stated that the attack was most likely done by script kiddies. 
In 2013, as an online component to the Million Mask March, Anonymous in the Philippines crashed 30 government websites and posted a YouTube video to congregate people in front of the parliament house on November 5 to demonstrate their disdain toward the Filipino government. In 2014, Sony Pictures Entertainment was hacked by a group by the name of Guardians of Peace (GOP), which obtained over 100 terabytes of data including unreleased films, employee salary and social security data, passwords, and account information. GOP hacked various social media accounts and hijacked them by changing their passwords to diespe123 (die pictures entertainment) and posting threats on the pages. In 2016, Turkish programmer Azer Koçulu removed his software package, left-pad, from npm, causing a cascading failure of other software packages that contained left-pad as a dependency. This was done after Kik, a messaging application, threatened legal action against Koçulu after he refused to rename his kik package. npm ultimately sided with Kik, prompting Koçulu to unpublish all of his packages from npm in protest, including left-pad. British hacker Kane Gamble, who was sentenced to 2 years in youth detention, posed as John Brennan, the then director of the CIA, and Mark F. Giuliano, a former deputy director of the FBI, to access highly sensitive information. The judge said Gamble engaged in "politically motivated cyber-terrorism." In 2021, Anonymous hacked and leaked the databases of the American web hosting company Epik. In response to the 2022 Russian invasion of Ukraine, Anonymous performed multiple cyberattacks against Russian computer systems. Following the outbreak of the Israel-Hamas war in 2023, multiple cyberattacks were carried out by pro-Israel and pro-Palestine hacktivist groups. India's pro-Israel hacktivists took down the portals of the Palestinian National Bank and the National Telecommunications Company, as well as the website of Hamas. Multiple Israeli websites were flooded with malicious traffic by pro-Palestine hacktivists. The Israeli newspaper The Jerusalem Post reported that its website was down due to a series of cyberattacks initiated against it. Notable hacktivist people/groups WikiLeaks WikiLeaks is a media organisation and publisher founded in 2006. It operates as a non-profit and is funded by donations and media partnerships. It has published classified documents and other media provided by anonymous sources. It was founded by Julian Assange, an Australian editor, publisher, and activist, who is currently challenging extradition to the United States over his work with WikiLeaks. Since September 2018, Kristinn Hrafnsson has served as its editor-in-chief. Its website states that it has released more than ten million documents and associated analyses. WikiLeaks' most recent publication was in 2021, and its most recent publication of original documents was in 2019. Beginning in November 2022, many of the documents on the organisation's website could not be accessed. WikiLeaks has released document caches and media that exposed serious violations of human rights and civil liberties by various governments. It released footage, which it titled Collateral Murder, of the 12 July 2007 Baghdad airstrike, in which Iraqi Reuters journalists and several civilians were killed by a U.S. helicopter crew. WikiLeaks has also published leaks such as diplomatic cables from the United States and Saudi Arabia, emails from the governments of Syria and Turkey, and material documenting corruption in Kenya and at Samherji. 
WikiLeaks has also published documents exposing cyber warfare and surveillance tools created by the CIA, and surveillance of the French president by the National Security Agency. During the 2016 U.S. presidential election campaign, WikiLeaks released emails from the Democratic National Committee (DNC) and from Hillary Clinton's campaign manager, showing that the party's national committee had effectively acted as an arm of the Clinton campaign during the primaries, seeking to undercut the campaign of Bernie Sanders. These releases resulted in the resignation of the chairwoman of the DNC and caused significant harm to the Clinton campaign. During the campaign, WikiLeaks promoted false conspiracy theories about Hillary Clinton, the Democratic Party and the murder of Seth Rich. WikiLeaks has won a number of awards and has been commended for exposing state and corporate secrets, increasing transparency, assisting freedom of the press, and enhancing democratic discourse while challenging powerful institutions. WikiLeaks and some of its supporters say the organisation's publications have a perfect record of publishing authentic documents. The organisation has been the target of campaigns to discredit it, including aborted ones by Palantir and HBGary. WikiLeaks has also had its donation systems disrupted by problems with its payment processors. As a result, the Wau Holland Foundation helps process WikiLeaks' donations. The organisation has been criticised for inadequately curating some of its content and violating the personal privacy of individuals. WikiLeaks has, for instance, revealed Social Security numbers, medical information, credit card numbers and details of suicide attempts. News organisations, activists, journalists and former members have also criticised the organisation over allegations of anti-Clinton and pro-Trump bias, various associations with the Russian government, buying and selling of leaks, and a lack of internal transparency. Journalists have also criticised the organisation for promotion of false flag conspiracy theories, and what they describe as exaggerated and misleading descriptions of the contents of leaks. The CIA defined the organisation as a "non-state hostile intelligence service" after the release of Vault 7. Anonymous Perhaps the most prolific and well known hacktivist group, Anonymous has been prominent and prevalent in many major online hacks over the past decade. Anonymous is a decentralized group that originated on the forums of 4chan during 2003, but didn't rise to prominence until 2008 when they directly attacked the Church of Scientology in a massive DoS attack. Since then, Anonymous has participated in a great number of online projects such as Operation: Payback and Operation: Safe Winter. However, while a great number of their projects have been for a charitable cause, they have still gained notoriety from the media due to the nature of their work mostly consisting of illegal hacking. Following the Paris terror attacks in 2015, Anonymous posted a video declaring war on ISIS, the terror group that claimed responsibility for the attacks. Since declaring war on ISIS, Anonymous since identified several Twitter accounts associated with the movement in order to stop the distribution of ISIS propaganda. However, Anonymous fell under heavy criticism when Twitter issued a statement calling the lists Anonymous had compiled "wildly inaccurate," as it contained accounts of journalists and academics rather than members of ISIS. 
Anonymous has also been involved with the Black Lives Matter movement. Early in July 2016, a rumor circulated that Anonymous was calling for "Day of Rage" protests in retaliation for the shootings of Alton Sterling and Philando Castile, which would entail violent protests and riots. This rumor was based on a video that was not posted by the official Anonymous YouTube account. None of the Twitter accounts associated with Anonymous had tweeted anything in relation to a Day of Rage, and the rumors were identical to past rumors that had circulated in 2014 following the death of Mike Brown. Instead, on July 15, a Twitter account associated with Anonymous posted a series of tweets calling for a day of solidarity with the Black Lives Matter movement. The Twitter account used the hashtag "#FridayofSolidarity" to coordinate protests across the nation, and emphasized the fact that the Friday of Solidarity was intended for peaceful protests. The account also stated that the group was unaware of any Day of Rage plans. In February 2017, the group took down more than 10,000 sites on the dark web related to child pornography. DkD[|| DkD[||, a French cyberhacktivist, was arrested by the OCLCTIC (office central de lutte contre la criminalité liée aux technologies de l'information et de la communication) in March 2003. DkD[|| defaced more than 2,000 pages, many of them government and US military sites. Eric Voulleminot of the Regional Service of Judicial Police in Lille classified the young hacker as "the most wanted hacktivist in France". DkD[|| was a well-known defacer in the underground, carrying out his defacements for various political reasons. In response to his arrest, The Ghost Boys defaced many sites using the "Free DkD[||!!" slogan. LulzSec In May 2011, five members of Anonymous formed the hacktivist group Lulz Security, otherwise known as LulzSec. LulzSec's name originated from the conjunction of the internet slang term "lulz", meaning laughs, and "sec", meaning security. The group members used specific handles to identify themselves on Internet Relay Chat channels, the most notable being "Sabu," "Kayla," "T-Flow," "Topiary," "AVUnit," and "Pwnsauce." Though the members of LulzSec would spend up to 20 hours a day in communication, they did not know one another personally, nor did they share personal information. For example, once the members' identities were revealed, "T-Flow" was revealed to be 15 years old; other members, on the basis of his advanced coding ability, had thought he was around 30 years old. One of the first notable targets that LulzSec pursued was HBGary, attacked in response to a claim made by the technology security company that it had identified members of Anonymous. Following this, the members of LulzSec targeted an array of companies and entities, including but not limited to Fox Television, Tribune Company, PBS, Sony, Nintendo, and the Senate.gov website. The targeting of these entities typically involved gaining access to and downloading confidential user information, or defacing the website at hand. While LulzSec was not as strongly political as WikiLeaks or Anonymous, it shared similar sentiments regarding the freedom of information. One of its distinctly politically driven attacks involved targeting the Arizona State Police in response to new immigration laws. The group's first attack that garnered significant government attention was in 2011, when it collectively took down a website of the FBI. 
Following the incident, the leader of LulzSec, "Sabu," was identified as Hector Xavier Monsegur by the FBI, and he was the first of the group to be arrested. Immediately following his arrest, Monsegur admitted to criminal activity. He then began his cooperation with the US government, helping FBI authorities to arrest 8 of his co-conspirators, prevent 300 potential cyber attacks, and helped to identify vulnerabilities in existing computer systems. In August 2011, Monsegur pleaded guilty to "computer hacking conspiracy, computer hacking, computer hacking in furtherance of fraud, conspiracy to commit access device fraud, conspiracy to commit bank fraud, and aggravated identity theft pursuant to a cooperation agreement with the government." He served a total of one year and seven months and was charged a $1,200 fine. SiegedSec SiegedSec, short for Sieged Security and commonly self-referred to as the "Gay Furry Hackers", is a black-hat criminal hacktivist group that was formed in early 2022, that has committed a number of high-profile cyber attacks, including attacks on NATO, The Idaho National Laboratory, and Real America's Voice. On July 10, 2024, the group announced that they would be disbanding after attacking The Heritage Foundation. SiegedSec is led by an individual under the alias "vio". Short for "Sieged Security", SiegedSec's Telegram channel was first created in April 2022, and they commonly refer to themselves as "gay furry hackers". On multiple occasions, the group has targeted right-wing movements through breaching data, including The Heritage Foundation, Real America's Voice, as well as various U.S. states that have pursued legislative decisions against gender-affirming care. Related practices Culture jamming Hacking has been sometime described as a form of culture jamming. This term refers to the practice of subverting and criticizing political messages as well as media culture with the aim of challenging the status quo. It is often targeted toward subliminal thought processes taking place in the viewers with the goal of raising awareness as well as causing a paradigm shift. Culture jamming takes many forms including billboard hacking, broadcast signal intrusion, ad hoc art performances, simulated legal transgressions, memes, and artivism. The term "culture jamming" was first coined in 1984 by American musician Donald Joyce of the band Negativland. However, some speculation remains as to when the practice of culture jamming first began. Social researcher Vince Carducci believes culture jamming can be traced back to the 1950s with European social activist group Situationist International. Author and cultural critic Mark Dery believes medieval carnival is the earliest form of culture jamming as a way to subvert the social hierarchy at the time. Culture jamming is sometimes confused with acts of vandalism. However, unlike culture jamming, the main goal of vandalism is to cause destruction with any political themes being of lesser importance. Artivism usually has the most questionable nature as a form of culture jamming because defacement of property is usually involved. Media hacking Media hacking refers to the usage of various electronic media in an innovative or otherwise abnormal fashion for the purpose of conveying a message to as large a number of people as possible, primarily achieved via the World Wide Web. A popular and effective means of media hacking is posting on a blog, as one is usually controlled by one or more independent individuals, uninfluenced by outside parties. 
The concept of social bookmarking, as well as Web-based Internet forums, may cause such a message to be seen by users of other sites as well, increasing its total reach. Media hacking is commonly employed for political purposes, by both political parties and political dissidents. A good example of this is the 2008 US Election, in which both the Democratic and Republican parties used a wide variety of different media in order to convey relevant messages to an increasingly Internet-oriented audience. At the same time, political dissidents used blogs and other social media like Twitter in order to reply on an individual basis to the presidential candidates. In particular, sites like Twitter are proving important means in gauging popular support for the candidates, though the site is often used for dissident purposes rather than a show of positive support. Mobile technology has also become subject to media hacking for political purposes. SMS has been widely used by political dissidents as a means of quickly and effectively organising smart mobs for political action. This has been most effective in the Philippines, where SMS media hacking has twice had a significant impact on whether or not the country's Presidents are elected or removed from office. Reality hacking Reality hacking is any phenomenon that emerges from the nonviolent use of illegal or legally ambiguous digital tools in pursuit of politically, socially, or culturally subversive ends. These tools include website defacements, URL redirections, denial-of-service attacks, information theft, web-site parodies, virtual sit-ins, and virtual sabotage. Art movements such as Fluxus and Happenings in the 1970s created a climate of receptibility in regard to loose-knit organizations and group activities where spontaneity, a return to primitivist behavior, and an ethics where activities and socially engaged art practices became tantamount to aesthetic concerns. The conflation of these two histories in the mid-to-late 1990s resulted in cross-overs between virtual sit-ins, electronic civil disobedience, denial-of-service attacks, as well as mass protests in relation to groups like the International Monetary Fund and the World Bank. The rise of collectives, net.art groups, and those concerned with the fluid interchange of technology and real life (often from an environmental concern) gave birth to the practice of "reality hacking". Reality hacking relies on tweaking the everyday communications most easily available to individuals with the purpose of awakening the political and community conscience of the larger population. The term first came into use among New York and San Francisco artists, but has since been adopted by a school of political activists centered around culture jamming. In fiction The 1999 science fiction-action film The Matrix, among others, popularized the simulation hypothesis — the suggestion that reality is in fact a simulation of which those affected by the simulants are generally unaware. In this context, "reality hacking" is reading and understanding the code which represents the activity of the simulated reality environment (such as Matrix digital rain) and also modifying it in order to bend the laws of physics or otherwise modify the simulated reality. Reality hacking as a mystical practice is explored in the Gothic-Punk aesthetics-inspired White Wolf urban fantasy role-playing game Mage: The Ascension. 
In this game, the Reality Coders (also known as Reality Hackers or Reality Crackers) are a faction within the Virtual Adepts, a secret society of mages whose magick revolves around digital technology. They are dedicated to bringing the benefits of cyberspace to real space. To do this, they had to identify, for lack of a better term, the "source code" that allows our Universe to function. And that is what they have been doing ever since. Coders infiltrated a number of levels of society in order to gather the greatest compilation of knowledge ever seen. One of the Coders' more overt agendas is to acclimate the masses to the world that is to come. They spread Virtual Adept ideas through video games and a whole spate of "reality shows" that mimic virtual reality far more than "real" reality. The Reality Coders consider themselves the future of the Virtual Adepts, creating a world in the image of visionaries like Grant Morrison or Terence McKenna. In a location-based game (also known as a pervasive game), reality hacking refers to tapping into phenomena that exist in the real world, and tying them into the game story universe. Academic interpretations There have been various academic approaches to deal with hacktivism and urban hacking. In 2010, Günther Friesinger, Johannes Grenzfurthner and Thomas Ballhausen published an entire reader dedicated to the subject. They state: See also Crypto-anarchism Cyberterrorism E-democracy Open-source governance Patriotic hacking Tactical media 1984 Network Liberty Alliance Chaos Computer Club Cicada 3301 Decocidio Jester Internet vigilantism The Internet's Own Boy: The Story of Aaron Swartz – a documentary film milw0rm 2600: The Hacker Quarterly Citizen Lab HackThisSite Cypherpunk Jeremy Hammond Mr. Robot – a television series References Further reading Olson, Parmy. (05–14–2013). We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency. . Coleman, Gabriella. (2014–11–4). Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Verso Books. . Shantz, Jeff; Tomblin, Jordon (2014-11-28). Cyber Disobedience: Re://Presenting Online Anarchy. John Hunt Publishing. . Deseriis, Marco (2017). Hacktivism: On the Use of Botnets in Cyberattacks. Theory, Culture & Society 34(4): 131–152. External links Hacktivism and Politically Motivated Computer Crime History, types of activity and cases studies Activism by type Hacking (computer security) Politics and technology Internet terminology 2000s neologisms Culture jamming techniques Hacker culture Articles containing video clips
Hacktivism
[ "Technology" ]
6,848
[ "Computing terminology", "Internet terminology" ]
162,614
https://en.wikipedia.org/wiki/Decimal%20calendar
A decimal calendar is a calendar which includes units of time based on the decimal system. For example, a calendar with "decimal months" would divide the year into 10 months of 36.52422 days each. History Egyptian calendar The ancient Egyptian calendar consisted of twelve months, each divided into three weeks of ten days, with five intercalary days. Calendar of Romulus The original Roman calendar consisted of ten months; however, the calendar year only lasted 304 days, with 61 days during winter not assigned to any month. The months of Ianuarius and Februarius were added to the calendar by Numa Pompilius in 700 BCE. French Republican Calendar The French Republican Calendar was introduced (along with decimal time) in 1793, and was similar to the ancient Egyptian calendar. It consisted of twelve months, each divided into three décades of ten days, with five or six intercalary days called sansculottides. The calendar was abolished by Napoleon on January 1, 1806. Proposals The modern Gregorian calendar does not use decimal units of time; however, several proposed calendar systems do. None of these have achieved widespread use. See also Decimal time References Specific calendars Calendar
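As a rough illustration of the décade arithmetic described above, the following Python sketch maps a day of the year onto a French Republican-style layout of twelve 30-day months, each split into three ten-day décades, with the last five or six days treated as sansculottides. It is an illustrative example only, not a historically exact conversion routine (which would also need the Republican epoch and its leap-year rule).

```python
# A small sketch (not from the article) of a French Republican-style layout:
# twelve 30-day months, each split into three 10-day décades, with the
# remaining 5 or 6 days treated as intercalary "sansculottides".

def republican_position(day_of_year: int, leap: bool = False):
    """Map a 1-based day of the year to (month, décade, day), all 1-based,
    or to an intercalary sansculottide."""
    days_in_year = 366 if leap else 365
    if not 1 <= day_of_year <= days_in_year:
        raise ValueError("day_of_year out of range")
    if day_of_year > 360:                      # the 5 or 6 sansculottides
        return ("sansculottide", day_of_year - 360)
    month, rest = divmod(day_of_year - 1, 30)  # twelve 30-day months
    decade, day = divmod(rest, 10)             # three 10-day weeks per month
    return (month + 1, decade + 1, day + 1)

print(republican_position(1))    # (1, 1, 1)  - first day of the year
print(republican_position(361))  # ('sansculottide', 1)
```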
Decimal calendar
[ "Mathematics" ]
241
[ "Quantity", "Decimalisation", "Units of measurement" ]
162,654
https://en.wikipedia.org/wiki/Ren%C3%A9%20Antoine%20Ferchault%20de%20R%C3%A9aumur
René Antoine Ferchault de Réaumur (28 February 1683 – 17 October 1757) was a French entomologist and writer who contributed to many different fields, especially the study of insects. He introduced the Réaumur temperature scale. Life Réaumur was born into a prominent La Rochelle family and educated in Paris. He learned philosophy in the Jesuits' college at Poitiers, and in 1699 went to Bourges to study civil law and mathematics under the charge of an uncle, canon of La Sainte-Chapelle. In 1703 he went to Paris, where he continued the study of mathematics and physics. In 1708, at the age of 24, he was nominated by Pierre Varignon (who taught him mathematics) and elected a member of the Académie des Sciences. From this time onwards, for nearly half a century, hardly a year passed in which the Academy's Mémoires did not contain at least one paper by Réaumur. At first, his attention was occupied by mathematical studies, especially in geometry. In 1710, he was named the chief editor of the Descriptions of the Arts and Trades, a major government project which resulted in the establishment of manufactures new to France and the revival of neglected industries. For discoveries regarding iron and steel he was awarded a pension of 12,000 livres. Content with his ample private income, he requested that the money should go to the Académie des Sciences for the furtherance of experiments on improved industrial processes. In 1731 he became interested in meteorology, and invented the thermometer scale which bears his name: the Réaumur. In 1735, for family reasons, he accepted the post of commander and intendant of the royal and military Order of Saint Louis. He discharged his duties with scrupulous attention, but refused the pay. He took great delight in the systematic study of natural history. His friends often called him "the Pliny of the 18th century". He loved retirement and lived at his country residences, including his chateau La Bermondière, Saint-Julien-du-Terroux, Maine, where he had a serious fall from a horse, which led to his death. He bequeathed his manuscripts, which filled 138 portfolios, and his natural history collections to the Académie des Sciences. Réaumur's scientific papers deal with many branches of science. His first, in 1708, was on a general problem in geometry; his last, in 1756, was on the forms of birds' nests. He proved experimentally the fact that the strength of a rope is less than the sum of the strengths of its separate strands. He examined and reported on the auriferous (gold-bearing) rivers, the turquoise mines, the forests and the fossil beds of France. He devised the method of tinning iron that is still employed, and investigated the differences between iron and steel, correctly showing that the amount of carbon is greatest in cast iron, less in steel, and least in wrought iron. His book on this subject (1722) was translated into English and German. He was noted for a thermometer he constructed on the principle of taking the freezing point of water as 0°, and graduating the tube into degrees each of which was one-thousandth of the volume contained by the bulb and tube up to the zero mark. It was an accident dependent on the particular alcohol employed which made the boiling point of water 108°; mercurial thermometers graduated into 80 equal parts between the freezing and boiling points of water are named Réaumur thermometers but diverge from his design and intention. Réaumur wrote much on natural history. 
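Because the scale that came to bear Réaumur's name divides the interval between the freezing and boiling points of water into 80 equal degrees, conversion to and from Celsius is a fixed ratio. The short Python sketch below is illustrative only and reflects the conventional 80-degree Réaumur scale, not Réaumur's original alcohol thermometer, whose boiling point fell at 108°.

```python
# Minimal sketch (not from the source) of conversions for the conventional
# Réaumur scale described above: 0 °Ré at the freezing point of water and
# 80 °Ré at the boiling point, so 1 °C corresponds to 0.8 °Ré.

def celsius_to_reaumur(c: float) -> float:
    return c * 4.0 / 5.0

def reaumur_to_celsius(re: float) -> float:
    return re * 5.0 / 4.0

assert celsius_to_reaumur(100.0) == 80.0   # boiling point of water
assert reaumur_to_celsius(80.0) == 100.0
print(celsius_to_reaumur(37.0))            # about 29.6 °Ré
```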
Réaumur wrote much on natural history. Early in life he described the locomotor system of the Echinodermata, and showed that their supposed ability to regrow lost limbs was real. He has been considered as a founder of ethology. In 1710 he wrote a paper on the possibility of spiders being used to produce silk, which was so celebrated at the time that the Kangxi Emperor of China had it translated into Chinese. His observations of wasps making paper from wood fibres have led some to credit him with this change in paper-making techniques. It was over a century before wood pulp was used on any industrial scale in paper making. He studied the relationship between the growth of insects and temperature. He also computed the rate of growth of insect populations and noted that there must be natural checks since the theoretical population numbers achievable by geometric progression were not matched by observations of actual populations. He also studied botanical and agricultural matters, and devised processes for preserving birds and eggs. He elaborated a system of artificial incubation, and made important observations on the digestion of carnivorous and graminivorous (grass-eating) birds. One of his greatest works is the Mémoires pour servir à l'histoire des insectes, 6 vols., with 267 plates (Amsterdam, 1734–1742). It describes the appearance, habits and locality of all the known insects except the beetles, and is a marvel of patient and accurate observation. Among other important facts stated in this work are the experiments which enabled Réaumur to prove the correctness of Peyssonel's hypothesis, that corals are animals and not plants. He was elected a Fellow of the Royal Society in November 1738 by virtue of the fact that: "His Name hath been known for many years among the Learned by Several Curious disertations published in the Memoirs of the Royal Academy of Sciences at Paris & in particular by a very Learned and usefull book wrote in French entitled 'The Art of Converting Forged Iron into Steel' and 'the Art of Soft'ning Cast Iron' printed at Paris 1722 4to and lately by his 'Curious Memoires relating to the History of Insects' at Paris in 4to three Volumes of which work have been Laid before the Royal Society." He was elected a foreign member of the Royal Swedish Academy of Sciences in 1748. He is commemorated in numerous place names including the rue Réaumur and the Réaumur - Sébastopol metro station in Paris and the Place Réaumur, Le Havre. Selected works Réaumur, R.-A. F. de. 1722. L'art de convertir le fer forgé en acier, et l'art d'adoucir le fer fondu, ou de faire des ouvrages de fer fondu aussi finis que le fer forgé. Paris, France. Réaumur, R.-A. F. de. 1734–1742. Mémoires pour servir à l'histoire des insectes. Six volumes. Académie Royale des Sciences, Paris, France. Réaumur, R.-A. F. de. 1749. Art de faire éclorre et d'élever en tout saison des oiseaux Domestiques de toutes espèces. Two volumes. Imprimerie royale, Paris, France. Réaumur, R.-A. F. de. 1750. The art of hatching and bringing up domestic fowls. London, UK. Réaumur, R.-A. F. de. 1800. Short history of bees I. The natural history of bees . . . Printed for Vernor and Hood in the Poultry, by J. Cundee, London, UK. Réaumur, R.-A. F. de. 1926. The natural history of ants, from an unpublished manuscript. W. M. Wheeler, editor and translator. [Includes French text.] Knopf, New York City, USA. Reprinted 1977. Arno Press, New York City, USA. Réaumur, R.-A. F. de. 1939. Morceaux choisis. Jean Torlais, editor. Gallimard, Paris, France. Réaumur, R.-A. F. de. 1955. Histoire des scarabées. M.
Caullery, introduction. Volume 11 of Encyclopédie Entomologique. Paul Lechevalier, Paris, France. Réaumur, R.-A. F. de. 1956. Memoirs on steel and iron. A. G. Sisco, translator. C. S. Smith, introduction and notes. University of Chicago Press, Chicago, Illinois, USA. Publications Notes References External links Digitised text of Mémoires pour servir a l'histoire des insectes Website of the Manoir Des Sciences at Reaumur Gaedike, R.; Groll, E. K. & Taeger, A. 2012: Bibliography of the entomological literature from the beginning until 1863 : online database – version 1.0 – Senckenberg Deutsches Entomologisches Institut. 1683 births 1757 deaths 18th-century French writers 18th-century French male writers Fellows of the Royal Society French entomologists Creators of temperature scales French Roman Catholics Members of the French Academy of Sciences Members of the Royal Swedish Academy of Sciences Order of Saint Louis recipients People from La Rochelle
René Antoine Ferchault de Réaumur
[ "Physics" ]
1,830
[ "Scales of temperature", "Physical quantities", "Creators of temperature scales" ]
162,736
https://en.wikipedia.org/wiki/Heterogamy
Heterogamy is a term applied to a variety of distinct phenomena in different scientific domains, usually having to do with some kind of difference ("hetero-") in reproduction ("-gamy"). See below for more specific senses. Science Reproductive biology In reproductive biology, heterogamy is the alternation of differently organized generations, applied to the alternation between a parthenogenetic and a sexual generation. This type of heterogamy occurs for example in some aphids. Alternatively, heterogamy or heterogamous is often used as a synonym of heterogametic, meaning the presence of two unlike sex chromosomes in a sex. For example, XY males and ZW females are the heterogamous sex. Cell biology In cell biology, heterogamy is a synonym of anisogamy, the condition of having differently sized male and female gametes produced by different sexes or mating types in a species. Botany In botany, a plant is heterogamous when it carries at least two different types of flowers in regard to their reproductive structures, for example male and female flowers or bisexual and female flowers. Stamens and carpels are not regularly present in each flower or floret. Social science In sociology, heterogamy refers to a marriage between two individuals that differ in a certain criterion, and is contrasted with homogamy for a marriage or union between partners that match according to that criterion. For example, ethnic heterogamy refers to marriages involving individuals of different ethnic groups. Age heterogamy refers to marriages involving partners of significantly different ages. Heterogamy and homogamy are also used to describe marriage or union between people of unlike and like sex (or gender) respectively. See also Heterogametic Homogametic References Plant reproduction Genetics Exogamy
Heterogamy
[ "Biology" ]
376
[ "Behavior", "Plant reproduction", "Plants", "Genetics", "Reproduction" ]
162,757
https://en.wikipedia.org/wiki/Speed%20Graphic
The Speed Graphic was a press camera produced by Graflex in Rochester, New York. Although the first Speed Graphic cameras were produced in 1912, production of later versions continued until 1973; with significant improvements occurring in 1947 with the introduction of the Pacemaker Speed Graphic (and Pacemaker Crown Graphic, which was lighter and lacked the focal plane shutter). Description Despite the common appellation of Speed Graphic, various Graphic models were produced between 1912 and 1973. The authentic Speed Graphic has a focal plane shutter that the Crown Graphic and Century Graphic models lack. The eponymous name "speed" came from the maximum speed of 1/1000 sec. that could be achieved with the focal plane shutter. The Speed Graphic was available in 2¼ × 3¼ inch, 3¼ × 4¼ inch, 5 × 7 inch and the most common format 4 × 5 inch. Because of the focal plane shutter, the Speed Graphic can also use lenses that do not have shutters (known as barrel lenses). Using a Speed Graphic, especially with the rear shutter system, was a slow process. Setting the focal plane shutter speed required selecting both a slit width and a spring tension. Each exposure required the photographer to change the film holder, open the lens shutter, cock the focal plane shutter, remove the dark slide from the inserted film holder, focus the camera, and release the focal plane shutter. Conversely, if the lens shutter were used, the focal plane shutter (on the Speed Graphic and Pacemaker Graphic models with both shutters) had to be opened prior to cocking using the "T" or TIME setting, and then releasing the shutter in the lens. If indoors, the photographer also had to change the flashbulb. Each film holder contained one or two pieces of sheet film, which the photographer had to load in complete darkness. Faster shooting could be achieved with the Grafmatic film holder—a six sheet film "changer" that holds each sheet in a septum. Even faster exposures could be taken if the photographer was shooting film packs of 12 exposures, or later 16 exposures (discontinued in the late 1970s). With film packs one could shoot as fast as one could pull the tab and cock the shutter, and film packs could be loaded in daylight. A roll film adapter that used 120 or 220 film was available for 2.25 × 3.25, 3.25 × 4.25 and 4 × 5 inch cameras that permitted 8 to 20 exposures per roll, depending on the model of the adapter. Photographers had to be conservative and anticipate when the action was about to take place to take the right picture. The cry, "Just one more!" if a shot was missed was common. President Harry Truman introduced the White House photographers as the "Just One More Club." Operation of the focal plane shutter The focal plane shutter consists of a rubberized flexible curtain with slits of varying widths that cross the film plane at speeds determined by the tension setting of the spring mechanism. There are 4 slits with widths of 1/8 in, 3/8 in, 3/4 in, 1 1/2 in and “T” (T = “time” setting, used when lens diaphragm shutter is used to control exposure duration. The focal plane shutter is left completely open until manually released. The opening covers the entire area of the film for the size of the camera.) On Speed Graphic models, there are 6 tension settings, adjusted by a butterfly winding knob that increases the speed that the slit crosses the film plane. On Pacemaker Graphic models, there are only 2 settings (high and low). 
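To a first approximation, the effective exposure at any point on the film is the slit width divided by the speed at which the curtain crosses that point; the spring tension sets the curtain speed. The sketch below is illustrative only: both curtain speeds and the slit labels are assumed values, chosen so that the extremes land near the 1/10 to 1/1000 sec range noted next.

```python
# Illustrative model of a focal plane shutter: each point on the film is
# uncovered for (slit width) / (curtain speed) seconds.
# The curtain speeds below are ASSUMED for demonstration; actual Graflex
# spring-tension velocities are not given in the text.
SLIT_WIDTHS_IN = [0.125, 0.375, 0.75, 1.5]                                 # inches
ASSUMED_CURTAIN_SPEEDS_IPS = {"low tension": 15.0, "high tension": 125.0}  # inches per second

def effective_exposure_s(slit_width_in: float, curtain_speed_ips: float) -> float:
    """Time the moving slit leaves any given point of the film uncovered."""
    return slit_width_in / curtain_speed_ips

for tension, speed in ASSUMED_CURTAIN_SPEEDS_IPS.items():
    for width in SLIT_WIDTHS_IN:
        t = effective_exposure_s(width, speed)
        print(f'{width:5.3f} in slit, {tension:12s}: ~1/{round(1 / t):d} sec')
# With these assumed speeds the 1.5 in slit at low tension gives ~1/10 sec
# and the 1/8 in slit at high tension gives ~1/1000 sec.
```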
The combination of the slit width and the spring tension allows for exposure speeds varying from 1/10 to 1/1000 sec. Famous users A famous Speed Graphic user was New York City press photographer Arthur "Weegee" Fellig, who covered the city in the 1930s and 1940s. Barbara Morgan used a Speed Graphic to photograph Martha Graham's choreography. Recent auctions show Irving Klaw used one in his studio for his iconic pin up & bondage photos of models such as Bettie Page. In the 1950s and 1960s, the iconic photo-journalists of the Washington Post and the former Washington Evening Star shot on Speed Graphics exclusively. Some of the most famous photographs of this era were taken on the device by the twin brothers, Frank P. Hoy (for the Post) and Tom Hoy (for the Star). The Pulitzer Prize-winning photographs from 1942 to 1953 were taken with Speed Graphic cameras, including AP photographer Joe Rosenthal's image of Marines raising the American flag on Iwo Jima in 1945. A few winning photographs after 1954 were taken with Rolleiflex or Kodak cameras. The last Pulitzer Prize-winning photograph taken with a Speed Graphic was in 1961, when Yasushi Nagao captured Otoya Yamaguchi assassinating Inejiro Asanuma on stage. In 2004, American photojournalist David Burnett used his 4×5 inch Speed Graphic with a 178 mm f/2.5 Aero Ektar lens removed from a K-21 aerial camera to cover John Kerry's presidential campaign. Burnett also used a 4×5 inch Speed Graphic to shoot images at the Winter and Summer Olympics. Graflex manufacturing history The company name changed several times over the years as it was acquired and later spun off by the Eastman Kodak Company, finally becoming a division of the Singer Corporation and then dissolved in 1973. The award-winning Graflex plant in Pittsford, New York is still standing and is home to Veramark Technologies, Inc., formerly known as the MOSCOM Corporation. Graflex model history The Speed Graphic was manufactured in a number of sizes, 4×5" being the most common, but also in 2.25×3.25", 3.25×4.25" and 5×7". Notes See also Graflex Press camera References External links The Graflex Speed Graphic FAQ on Graflex.org Cameras
Speed Graphic
[ "Technology" ]
1,221
[ "Recording devices", "Cameras" ]
162,780
https://en.wikipedia.org/wiki/Megafauna
In zoology, megafauna (from Greek μέγας megas "large" and Neo-Latin fauna "animal life") are large animals. The precise definition of the term varies widely, though a common threshold is approximately , with other thresholds as low as or as high as . Large body size is generally associated with other traits, such as having a slow rate of reproduction and, in large herbivores, reduced or negligible adult mortality from being killed by predators. Megafauna species have considerable effects on their local environment, including the suppression of the growth of woody vegetation and a consequent reduction in wildfire frequency. Megafauna also play a role in regulating and stabilizing the abundance of smaller animals. During the Pleistocene, megafauna were diverse across the globe, with most continental ecosystems exhibiting similar or greater species richness in megafauna as compared to ecosystems in Africa today. During the Late Pleistocene, particularly from around 50,000 years ago onwards, most large mammal species became extinct, including 80% of all mammals greater than , while small animals were largely unaffected. This pronouncedly size-biased extinction is otherwise unprecedented in the geological record. Humans and climatic change have been implicated by most authors as the likely causes, though the relative importance of either factor has been the subject of significant controversy. History One of the earliest occurrences of the term "megafauna" is Alfred Russel Wallace's 1876 work The geographical distribution of animals. He described the animals as "the hugest, and fiercest, and strangest forms". In the 20th and 21st centuries, the term usually refers to large animals. There are variations in thresholds used to define megafauna as a whole or certain groups of megafauna. Much of the scientific literature adopts Paul S. Martin's proposed threshold of to classify animals as megafauna. However, for freshwater species, a separate threshold is preferred. Some scientists define herbivorous terrestrial megafauna as having a weight exceeding , and terrestrial carnivorous megafauna as more than . Additionally, Owen-Smith coined the term megaherbivore to describe herbivores that weighed over , which has seen some use by other researchers. Among living animals, the term megafauna is most commonly used for the largest extant terrestrial mammals, which includes (but is not limited to) elephants, giraffes, hippopotamuses, rhinoceroses, and larger bovines. Of these five categories of large herbivores, only bovines are presently found outside of Africa and Asia, but all the others were formerly more wide-ranging, with their ranges and populations continually shrinking and decreasing over time. Wild equines are another example of megafauna, but their current ranges are largely restricted to the Old World, specifically in Africa and Asia. Megafaunal species may be categorized according to their dietary type: megaherbivores (e.g., elephants), megacarnivores (e.g., lions), and megaomnivores (e.g., bears). Ecological strategy Megafauna animals – in the sense of the largest mammals and birds – are generally K-strategists, with high longevity, slow population growth rates, low mortality rates, and (at least for the largest) few or no natural predators capable of killing adults. These characteristics, although not exclusive to such megafauna, make them vulnerable to human overexploitation, in part because of their slow population recovery rates.
Evolution of large body size One observation that has been made about the evolution of larger body size is that rapid rates of increase that are often seen over relatively short time intervals are not sustainable over much longer time periods. In an examination of mammal body mass changes over time, the maximum increase possible in a given time interval was found to scale with the interval length raised to the 0.25 power. This is thought to reflect the emergence, during a trend of increasing maximum body size, of a series of anatomical, physiological, environmental, genetic and other constraints that must be overcome by evolutionary innovations before further size increases are possible. A strikingly faster rate of change was found for large decreases in body mass, such as may be associated with the phenomenon of insular dwarfism. When normalized to generation length, the maximum rate of body mass decrease was found to be over 30 times greater than the maximum rate of body mass increase for a ten-fold change. In terrestrial mammals Subsequent to the Cretaceous–Paleogene extinction event that eliminated the non-avian dinosaurs about Ma (million years) ago, terrestrial mammals underwent a nearly exponential increase in body size as they diversified to occupy the ecological niches left vacant. Starting from just a few kg before the event, maximum size had reached ~ a few million years later, and ~ by the end of the Paleocene. This trend of increasing body mass appears to level off about 40 Ma ago (in the late Eocene), suggesting that physiological or ecological constraints had been reached, after an increase in body mass of over three orders of magnitude. However, when considered from the standpoint of rate of size increase per generation, the exponential increase is found to have continued until the appearance of Indricotherium 30 Ma ago. (Since generation time scales with body mass^0.259, increasing generation times with increasing size cause the log mass vs. time plot to curve downward from a linear fit.) Megaherbivores eventually attained a body mass of over . The largest of these, indricotheres and proboscids, have been hindgut fermenters, which are believed to have an advantage over foregut fermenters in terms of being able to accelerate gastrointestinal transit in order to accommodate very large food intakes. A similar trend emerges when rates of increase of maximum body mass per generation for different mammalian clades are compared (using rates averaged over macroevolutionary time scales). Among terrestrial mammals, the fastest rates of increase of body mass^0.259 vs. time (in Ma) occurred in perissodactyls (a slope of 2.1), followed by rodents (1.2) and proboscids (1.1), all of which are hindgut fermenters. The rate of increase for artiodactyls (0.74) was about a third of the perissodactyls. The rate for carnivorans (0.65) was slightly lower yet, while primates, perhaps constrained by their arboreal habits, had the lowest rate (0.39) among the mammalian groups studied. Terrestrial mammalian carnivores from several eutherian groups (the artiodactyl Andrewsarchus – formerly considered a mesonychid, the oxyaenid Sarkastodon, and the carnivorans Amphicyon and Arctodus) all reached a maximum size of about (the carnivoran Arctotherium and the hyaenodontid Simbakubwa may have been somewhat larger). The largest known metatherian carnivore, Proborhyaena gigantea, apparently reached , also close to this limit.
A similar theoretical maximum size for mammalian carnivores has been predicted based on the metabolic rate of mammals, the energetic cost of obtaining prey, and the maximum estimated rate coefficient of prey intake. It has also been suggested that maximum size for mammalian carnivores is constrained by the stress the humerus can withstand at top running speed. Analysis of the variation of maximum body size over the last 40 Ma suggests that decreasing temperature and increasing continental land area are associated with increasing maximum body size. The former correlation would be consistent with Bergmann's rule, and might be related to the thermoregulatory advantage of large body mass in cool climates, better ability of larger organisms to cope with seasonality in food supply, or other factors; the latter correlation could be explained in terms of range and resource limitations. However, the two parameters are interrelated (due to sea level drops accompanying increased glaciation), making the driver of the trends in maximum size more difficult to identify. In marine mammals Since tetrapods (first reptiles, later mammals) returned to the sea in the Late Permian, they have dominated the top end of the marine body size range, due to the more efficient intake of oxygen possible using lungs. The ancestors of cetaceans are believed to have been the semiaquatic pakicetids, no larger than dogs, of about 53 million years (Ma) ago. By 40 Ma ago, cetaceans had attained a length of or more in Basilosaurus, an elongated, serpentine whale that differed from modern whales in many respects and was not ancestral to them. Following this, the evolution of large body size in cetaceans appears to have come to a temporary halt and then to have backtracked, although the available fossil records are limited. However, in the period from 31 Ma ago (in the Oligocene) to the present, cetaceans underwent a significantly more rapid sustained increase in body mass (a rate of increase in body mass^0.259 of a factor of 3.2 per million years) than achieved by any group of terrestrial mammals. This trend led to the largest animal of all time, the modern blue whale. Several reasons for the more rapid evolution of large body size in cetaceans are possible. Fewer biomechanical constraints on increases in body size may be associated with suspension in water as opposed to standing against the force of gravity, and with swimming movements as opposed to terrestrial locomotion. Also, the greater heat capacity and thermal conductivity of water compared to air may increase the thermoregulatory advantage of large body size in marine endotherms, although diminishing returns apply. Among the toothed whales, maximum body size appears to be limited by food availability. Larger size, as in sperm and beaked whales, facilitates deeper diving to access relatively easily-caught, large cephalopod prey in a less competitive environment. Compared to odontocetes, the efficiency of baleen whales' filter feeding scales more favorably with increasing size when planktonic food is dense, making larger sizes more advantageous. The lunge feeding technique of rorquals appears to be more energy efficient than the ram feeding of balaenid whales; the latter technique is used with less dense and patchy plankton. The cooling trend in Earth's recent history may have generated more localities of high plankton abundance via wind-driven upwellings, facilitating the evolution of gigantic whales. Cetaceans are not the only marine mammals to reach tremendous sizes.
The largest mammal carnivorans of all time are marine pinnipeds, the largest of which is the southern elephant seal, which can reach more than in length and weigh up to . Other large pinnipeds include the northern elephant seal at , walrus at , and Steller sea lion at . The sirenians are another group of marine mammals which adapted to fully aquatic life around the same time as the cetaceans did. Sirenians are closely related to elephants. The largest sirenian was the Steller's sea cow, which reached up to in length and weighed , and was hunted to extinction in the 18th century. In flightless birds Because of the small initial size of all mammals following the extinction of the non-avian dinosaurs, nonmammalian vertebrates had a roughly ten-million-year-long window of opportunity (during the Paleocene) for evolution of gigantism without much competition. During this interval, apex predator niches were often occupied by reptiles, such as terrestrial crocodilians (e.g. Pristichampsus), large snakes (e.g. Titanoboa) or varanid lizards, or by flightless birds (e.g. Paleopsilopterus in South America). This is also the period when megafaunal flightless herbivorous gastornithid birds evolved in the Northern Hemisphere, while flightless paleognaths evolved to large size on Gondwanan land masses and Europe. Gastornithids and at least one lineage of flightless paleognath birds originated in Europe, both lineages dominating niches for large herbivores while mammals remained below (in contrast with other landmasses like North America and Asia, which saw the earlier evolution of larger mammals) and were the largest European tetrapods in the Paleocene. Flightless paleognaths, termed ratites, have traditionally been viewed as representing a lineage separate from that of their small flighted relatives, the Neotropic tinamous. However, recent genetic studies have found that tinamous nest well within the ratite tree, and are the sister group of the extinct moa of New Zealand. Similarly, the small kiwi of New Zealand have been found to be the sister group of the extinct elephant birds of Madagascar. These findings indicate that flightlessness and gigantism arose independently multiple times among ratites via parallel evolution. Predatory megafaunal flightless birds were often able to compete with mammals in the early Cenozoic. Later in the Cenozoic, however, they were displaced by advanced carnivorans and died out. In North America, the bathornithids Paracrax and Bathornis were apex predators but became extinct by the Early Miocene. In South America, the related phorusrhacids shared the dominant predatory niches with metatherian sparassodonts during most of the Cenozoic but declined and ultimately went extinct after eutherian predators arrived from North America (as part of the Great American Interchange) during the Pliocene. In contrast, large herbivorous flightless ratites have survived to the present. However, none of the flightless birds of the Cenozoic, including the predatory Brontornis, possibly omnivorous Dromornis stirtoni or herbivorous Aepyornis, ever grew to masses much above , and thus never attained the size of the largest mammalian carnivores, let alone that of the largest mammalian herbivores. It has been suggested that the increasing thickness of avian eggshells in proportion to egg mass with increasing egg size places an upper limit on the size of birds. The largest species of Dromornis, D. 
stirtoni, may have gone extinct after it attained the maximum avian body mass and was then outcompeted by marsupial diprotodonts that evolved to sizes several times larger. In giant turtles Giant tortoises were important components of late Cenozoic megafaunas, being present in every nonpolar continent until the arrival of homininans. The largest known terrestrial tortoise was Megalochelys atlas, an animal that probably weighed about . Some earlier aquatic Testudines, e.g. the marine Archelon of the Cretaceous and freshwater Stupendemys of the Miocene, were considerably larger, weighing more than . Megafaunal mass extinctions Timing and possible causes Numerous extinctions occurred during the latter half of the Last Glacial Period when most large mammals went extinct in the Americas, Australia-New Guinea, and Eurasia, including over 80% of all terrestrial animals with a body mass greater than . Small animals and other organisms like plants were generally unaffected by the extinctions, a pattern unprecedented in previous extinctions during the last 30 million years. Various theories have attributed the wave of extinctions to human hunting, climate change, disease, extraterrestrial impact, competition from other animals or other causes. However, this extinction near the end of the Pleistocene was just one of a series of megafaunal extinction pulses that have occurred during the last 50,000 years over much of the Earth's surface, with Africa and Asia (where the local megafauna had a chance to evolve alongside modern humans) being comparatively less affected. The latter areas did suffer gradual attrition of megafauna, particularly of the slower-moving species (a class of vulnerable megafauna epitomized by giant tortoises), over the last several million years. Outside the mainland of Afro-Eurasia, these megafaunal extinctions followed a highly distinctive landmass-by-landmass pattern that closely parallels the spread of humans into previously uninhabited regions of the world, and which shows no overall correlation with climatic history (which can be visualized with plots over recent geological time periods of climate markers such as marine oxygen isotopes or atmospheric carbon dioxide levels). Australia and nearby islands (e.g., Flores) were struck first around 46,000 years ago, followed by Tasmania about 41,000 years ago (after formation of a land bridge to Australia about 43,000 years ago). The role of humans in the extinction of Australia and New Guinea's megafauna has been disputed, with multiple studies showing a decline in the number of species prior to the arrival of humans on the continent and the absence of any evidence of human predation; the impact of climate change has instead been cited for their decline. Similarly, Japan lost most of its megafauna apparently about 30,000 years ago, North America 13,000 years ago and South America about 500 years later, Cyprus 10,000 years ago, the Antilles 6,000 years ago, New Caledonia and nearby islands 3,000 years ago, Madagascar 2,000 years ago, New Zealand 700 years ago, the Mascarenes 400 years ago, and the Commander Islands 250 years ago. Nearly all of the world's isolated islands could furnish similar examples of extinctions occurring shortly after the arrival of humans, though most of these islands, such as the Hawaiian Islands, never had terrestrial megafauna, so their extinct fauna were smaller, but still displayed island gigantism.
An analysis of the timing of Holarctic megafaunal extinctions and extirpations over the last 56,000 years has revealed a tendency for such events to cluster within interstadials, periods of abrupt warming, but only when humans were also present. Humans may have impeded processes of migration and recolonization that would otherwise have allowed the megafaunal species to adapt to the climate shift. In at least some areas, interstadials were periods of expanding human populations. An analysis of Sporormiella fungal spores (which derive mainly from the dung of megaherbivores) in swamp sediment cores spanning the last 130,000 years from Lynch's Crater in Queensland, Australia, showed that the megafauna of that region virtually disappeared about 41,000 years ago, at a time when climate changes were minimal; the change was accompanied by an increase in charcoal, and was followed by a transition from rainforest to fire-tolerant sclerophyll vegetation. The high-resolution chronology of the changes supports the hypothesis that human hunting alone eliminated the megafauna, and that the subsequent change in flora was most likely a consequence of the elimination of browsers and an increase in fire. The increase in fire lagged the disappearance of megafauna by about a century, and most likely resulted from accumulation of fuel once browsing stopped. Over the next several centuries grass increased; sclerophyll vegetation increased with a lag of another century, and a sclerophyll forest developed after about another thousand years. During two periods of climate change about 120,000 and 75,000 years ago, sclerophyll vegetation had also increased at the site in response to a shift to cooler, drier conditions; neither of these episodes had a significant impact on megafaunal abundance. Similar conclusions regarding the culpability of human hunters in the disappearance of Pleistocene megafauna were derived from high-resolution chronologies obtained via an analysis of a large collection of eggshell fragments of the flightless Australian bird Genyornis newtoni, from analysis of Sporormiella fungal spores from a lake in eastern North America and from study of deposits of Shasta ground sloth dung left in over half a dozen caves in the American Southwest. Continuing human hunting and environmental disturbance has led to additional megafaunal extinctions in the recent past, and has created a serious danger of further extinctions in the near future (see examples below). Direct killing by humans, primarily for meat or other body parts, is the most significant factor in contemporary megafaunal decline. A number of other mass extinctions occurred earlier in Earth's geologic history, in which some or all of the megafauna of the time also died out. Famously, in the Cretaceous–Paleogene extinction event, the non-avian dinosaurs and most other giant reptiles were eliminated. However, the earlier mass extinctions were more global and not so selective for megafauna; i.e., many species of other types, including plants, marine invertebrates and plankton, went extinct as well. Thus, the earlier events must have been caused by more generalized types of disturbances to the biosphere. Consequences of depletion of megafauna Depletion of herbivorous megafauna results in increased growth of woody vegetation, and a consequent increase in wildfire frequency. Megafauna may help to suppress the growth of invasive plants. 
Large herbivores and carnivores can suppress the abundance of smaller animals, resulting in their population increase when megafauna are removed. Effect on nutrient transport Megafauna play a significant role in the lateral transport of mineral nutrients in an ecosystem, tending to translocate them from areas of high to those of lower abundance. They do so by their movement between the time they consume the nutrient and the time they release it through elimination (or, to a much lesser extent, through decomposition after death). In South America's Amazon Basin, it is estimated that such lateral diffusion was reduced over 98% following the megafaunal extinctions that occurred roughly 12,500 years ago. Given that phosphorus availability is thought to limit productivity in much of the region, the decrease in its transport from the western part of the basin and from floodplains (both of which derive their supply from the uplift of the Andes) to other areas is thought to have significantly impacted the region's ecology, and the effects may not yet have reached their limits. In the sea, cetaceans and pinnipeds that feed at depth are thought to translocate nitrogen from deep to shallow water, enhancing ocean productivity, and counteracting the activity of zooplankton, which tend to do the opposite. Effect on methane emissions Large populations of megaherbivores have the potential to contribute greatly to the atmospheric concentration of methane, which is an important greenhouse gas. Modern ruminant herbivores produce methane as a byproduct of foregut fermentation in digestion and release it through belching or flatulence. Today, around 20% of annual methane emissions come from livestock methane release. In the Mesozoic, it has been estimated that sauropods could have emitted 520 million tons of methane to the atmosphere annually, contributing to the warmer climate of the time (up to 10 °C (18 °F) warmer than at present). This large emission follows from the enormous estimated biomass of sauropods, and because methane production of individual herbivores is believed to be almost proportional to their mass. Recent studies have indicated that the extinction of megafaunal herbivores may have caused a reduction in atmospheric methane. This hypothesis is relatively new. One study examined the methane emissions from the bison that occupied the Great Plains of North America before contact with European settlers. The study estimated that the removal of the bison caused a decrease of as much as 2.2 million tons per year. Another study examined the change in the methane concentration in the atmosphere at the end of the Pleistocene epoch after the extinction of megafauna in the Americas. After early humans migrated to the Americas about 13,000 BP, their hunting and other associated ecological impacts led to the extinction of many megafaunal species there. Calculations suggest that this extinction decreased methane production by about 9.6 million tons per year. This suggests that the absence of megafaunal methane emissions may have contributed to the abrupt climatic cooling at the onset of the Younger Dryas. The decrease in atmospheric methane that occurred at that time, as recorded in ice cores, was 2 to 4 times more rapid than any other decrease in the last half million years, suggesting that an unusual mechanism was at work. 
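Since the estimates above treat per-animal methane output as roughly proportional to body mass, the population-level figures reduce to herbivore biomass multiplied by an emission factor. A minimal sketch of that arithmetic follows; the emission factor, herd size and body mass are hypothetical placeholders, not the values used in the cited studies:

```python
# Back-of-the-envelope methane estimate for a megaherbivore population,
# assuming per-animal emission roughly proportional to body mass.
# All three parameters below are ASSUMED for illustration.
ASSUMED_CH4_KG_PER_KG_BIOMASS_PER_YEAR = 0.1   # hypothetical emission factor

def annual_methane_megatonnes(population: int, mean_mass_kg: float,
                              factor: float = ASSUMED_CH4_KG_PER_KG_BIOMASS_PER_YEAR) -> float:
    """Total CH4 output (megatonnes per year) of a herd of given size and mean mass."""
    biomass_kg = population * mean_mass_kg
    return biomass_kg * factor / 1e9  # kg of CH4 -> megatonnes

# e.g. a hypothetical herd of 30 million half-tonne grazers
print(f"{annual_methane_megatonnes(30_000_000, 500.0):.1f} Mt CH4 per year")
# Removing such a population would cut emissions by the same amount, which is
# the shape of the bison and end-Pleistocene calculations described above.
```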
Gallery Pleistocene extinct megafauna Other extinct Cenozoic megafauna Extant See also Australian megafauna Bergmann's rule Charismatic megafauna Cope's rule Deep-sea gigantism Island gigantism Largest organisms Largest prehistoric animals List of heaviest land mammals List of largest mammals List of megafauna discovered in modern times Megafauna (mythology) Megafaunal wolf Megaflora Megaherb Quaternary extinction event Notes References External links Megafauna – "First Victims of the Human-Caused Extinction" Extinction Zoology Animal size
Megafauna
[ "Biology" ]
5,167
[ "Animal size", "Zoology", "Organism size" ]
162,791
https://en.wikipedia.org/wiki/Evolutionary%20economics
Evolutionary economics is a school of economic thought that is inspired by evolutionary biology. Although not defined by a strict set of principles and uniting various approaches, it treats economic development as a process rather than an equilibrium and emphasizes change (qualitative, organisational, and structural), innovation, complex interdependencies, self-evolving systems, and limited rationality as the drivers of economic evolution. The support for the evolutionary approach to economics in recent decades seems to have initially emerged as a criticism of the mainstream neoclassical economics, but by the beginning of the 21st century it had become part of the economic mainstream itself. Evolutionary economics does not take the characteristics of either the objects of choice or of the decision-maker as fixed. Rather, it focuses on the non-equilibrium processes that transform the economy from within and their implications, considering interdependencies and feedback. The processes in turn emerge from the actions of diverse agents with bounded rationality who may learn from experience and interactions and whose differences contribute to the change. Roots of evolutionary economics Early ideas The idea of human society and the world in general as subject to evolution has been following mankind throughout its existence. Hesiod, an ancient Greek poet thought to be the first Western written poet regarding himself as an individual, described five Ages of Man – the Golden Age, the Silver Age, the Bronze Age, the Heroic Age, and the Iron Age – following from divine existence to toil and misery. Modern scholars consider his works as one of the sources for early economic thought. The concept is also present in the Metamorphoses by an ancient Roman poet Ovid. His Four Ages include technological progress: in the Golden Age, men did not know arts and craft, whereas by the Iron Age people had learnt and discovered agriculture, architecture, mining, navigation, and national boundaries, but had also become violent and greedy. This concept was not exclusive to the Greek and Roman civilizations (see, for instance, Yuga Cycles in Hinduism, the Three Ages of Buddhism, Aztecs’ Five Suns), but a common feature is the path towards misery and destruction, with technological advancements accompanied by moral degradation. Medieval and early modern times Medieval views on society, economics and politics (at least in Europe and Pax Islamica) were influenced by religious norms and traditions. Catholic and Islamic scholars debated on the moral appropriateness of certain economic practices, such as interest. The subject of changes was thought of in existential terms. For instance, Augustine of Hippo regarded time as a phenomenon of the universe created by God and a measure of change, whereas God exists outside of time. A major contribution to the views on the evolution of society was Leviathan by Thomas Hobbes. A human, according to Hobbes, is a matter in motion with one's own appetites and desires. 
Due to these numerous desires and the scarcity of resources, the natural state of a human is a war of all against all: “In such condition there is no place for industry, because the fruit thereof is uncertain, and consequently no culture of the earth, no navigation nor the use of commodities that may be imported by sea, no commodious building, no instruments of moving and removing such things as require much force, no knowledge of the face of the earth, no account of time, no arts, no letters, no society, and which is worst of all, continual fear and danger of violent death, and the life of man, solitary, poor, nasty, brutish, and short.” In order to overcome this natural anarchy, Hobbes saw it as necessary to impose an ultimate restraint in the form of a sovereign. Economic development and socialism Further theoretical developments relate to the names of prominent socialists of the 19th century, who viewed economic and political systems as products of social evolution (in contrast to the notions of natural rights and morality). In his book What is Property?, Pierre-Joseph Proudhon noted: “Thus, in a given society, the authority of man over man is inversely proportional to the stage of intellectual development which that society has reached.” The approach was also employed by Karl Marx. In his view, over the course of history superior economic systems would replace inferior ones. Inferior systems were beset by internal contradictions and inefficiencies that made them impossible to survive in the long term. In Marx's scheme, feudalism was replaced by capitalism, which would eventually be superseded by socialism. Emergence and development The term "evolutionary economics" might have been first coined by Thorstein Veblen. Veblen saw the need for taking into account cultural variation in his economic approach; no universal "human nature" could possibly be invoked to explain the variety of norms and behaviours that the new science of anthropology showed to be the rule rather than an exception. He also argued that social institutions are subject to a selection process and that economic science should embrace the Darwinian theory. Veblen's followers quickly abandoned his evolutionary legacy. When they finally returned to the use of the term “evolutionary”, they referred to development and change in general, without its Darwinian meaning. Further researchers, such as Joseph Schumpeter, studied entrepreneurship and innovation using this term, but not in the Darwinian sense. Another prominent economist, Friedrich von Hayek, also employed elements of the evolutionary approach, especially criticizing “the fatal conceit” of socialists who believed they could and should design a new society while disregarding human nature. However, Hayek seemed to see Darwin's theory not as a revolution itself, but rather as an intermediary step in the line of evolutionary thinking. There were other notable contributors to the evolutionary approach in economics, such as Armen Alchian, who argued that, faced with uncertainty and incomplete information, firms adapt to the environment instead of pursuing profit maximization. An Evolutionary Theory of Economic Change and beyond The publication of An Evolutionary Theory of Economic Change by Richard R. Nelson and Sidney G. Winter in 1982 marked a turning point in the field of evolutionary economics.
Inspired by Alchian's work about the decision-making process of firms under uncertainty and the behavioural theory of the firm by Richard Cyert and James March, Nelson and Winter constructed a comprehensive evolutionary theory of business behavior using the concept of natural selection. In this framework, firms operate on the basis of organizational routines, which they evaluate and may change while functioning in a certain selection environment. Since then, evolutionary economics, as noted by Nicolai Foss, has been concerned with “the transformation of already existing structures and the emergence and possible spread of novelties.” Economies have been viewed as a complex system, a result of causal interactions (non-linear and chaotic) between different agents and entities with varied characteristics. Instead of perfect information and rationality, Herbert Simon's concept of bounded rationality has become prevailing. By the 1990s, as put by Geoffrey Hodgson, “it was possible to write of an international network or ‘invisible college’ of ‘evolutionary economists’ who, despite their analytical differences, were focusing on the problem of analyzing structural, technological, cultural and institutional change in economic systems… They were also united by their common dislike of the static and equilibrium approaches that dominated mainstream economics.” In 2020, Yoshinori Shiozawa published a paper "A new framework for analyzing technological change" Journal of Evolutionary Economics 30: 989-1034, in which the author proved that (1) technological change induces the economic growth in the sense that real wage rate increases for all workers and (2) it is the major source of economic growth. Evolutionary economics and the Unified Growth Theory The role of evolutionary forces in the process of economic development over the course of human history has been further explored during the past few decades, primarily by Oded Galor and his colleagues. A pioneer of the Unified Growth Theory, Galor depicts economic growth and development throughout human history as a continuous process driven by technological progress and the accumulation of human capital as well as by the accumulation of those biological, social and cultural features that favour further development. In Unified Growth Theory (2011), Galor presents a dynamic system capable of describing economic development in this way. According to Galor's model, technological advancements in the early eras of the mankind (during the Malthusian epoch, with limited resources and near-subsistence levels of income) would lead to increases in the size of population, which in turn would further accelerate technological progress due to the production of new ideas and the increase in demand for them. At some point technological advancements would require higher levels of education and generate the demand for educated labour force. After that, an economy would move into a new phase characterised by demographic transition (given that investment into less children, although more costly, would yield higher returns) and sustained economic growth. The process is accompanied by improvements in living standards, the position of the working class as necessary in order to complement technological progress (contrary to Marx and his followers, who predicted its further impoverishment), and the position of women, paving the way for further social and gender equality improvements. 
Interdependent, these elements facilitate each other, creating a unified process of growth and development, although the pace may be different for different societies. Galor's theory also refers to other fields of science, including evolutionary biology. He invokes, among other things, the sophisticated human brain and the anatomy of the human hand as key advantages that bolstered the development of humans (both as a species and as a society). In The Journey of Humanity: The Origins of Wealth and Inequality (2022) Galor provides some statements that exemplify his evolutionary approach: “Consider… two large clans: the Quanty and the Qualy… Suppose that Quanty households bear on average four children each, of whom only two reach adulthood and find a reproductive partner. Meanwhile, Qualy households bear on average only two children each, because their budget does not allow them to invest in the education and health of additional offspring [sic!], and yet, thanks to the investment that they do make, both children not only reach adulthood and find a reproductive partner but they also find jobs in commercial and skill-intensive occupations… Now suppose the society in which they live is one where technological development boosts the demand for the services of blacksmiths, carpenters and other trades who can manufacture tools and more efficient machines. This increase in earning capacity would place the Qualy clan at a distinct evolutionary advantage. Within a generation or two, its families are likely to enjoy higher incomes and amass greater resources.” Galor, his colleagues and contemporaries have also used the evolutionary approach in order to explain the origins of more particular elements of economic and social behavior. Using the genealogical record of half a million people in Quebec during the period 1608-1800, it was suggested that moderate fertility, and hence the tendency towards investment in child quality, was beneficial for the long-run reproductive success, reflecting the quality-quantity tradeoff observed and discussed in earlier works. A natural experiment regarding the expansion of the New World crops into the Old World and vice versa during the Columbian exchange led to the conclusion that beneficial pre-industrial agro-climatic characteristics may have positively affected the formation of a future-oriented mindset in corresponding contemporary societies. Key concepts related to behavioural economics, such as risk aversion and loss aversion, were also studied through evolutionary lenses. For instance, Galor and Savitsky (2018) provided empirical evidence that the intensity of loss aversion may be correlated with historical exposure to climatic shocks and their effects on reproductive success, with greater climatic volatility in some regions leading to more loss-neutrality among contemporary individuals and ethnic groups originating from there. As for risk aversion, Galor and Michalopoulos (2012) suggested there was a reversal in the course of human history, with risk-tolerance presenting an evolutionary advantage during early stages of development by promoting technological advancements, and with risk-aversion being an advantage during later stages, when risk-tolerant individuals channel less resources towards children and natural selection favours risk-averse individuals. 
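The clan example just quoted is, at bottom, arithmetic about fertility, survival and earnings. A toy sketch of that arithmetic (hypothetical incomes and skill premia, not Galor's formal model) makes the resource advantage of the low-fertility, high-investment lineage explicit:

```python
# Toy arithmetic for the Quanty/Qualy passage quoted above (NOT Galor's formal
# model). Incomes and premia are hypothetical; the point is only that once
# skills command a premium, the low-fertility, high-investment strategy leaves
# its lineage better resourced in the next generation.
from dataclasses import dataclass

@dataclass
class Clan:
    name: str
    children_born: int
    children_reproducing: int   # children who reach adulthood and find a partner
    skilled: bool               # do the surviving children enter skill-intensive trades?

def lineage_income(clan: Clan, base_income: float, skill_premium: float) -> float:
    """Combined earnings of a household's reproducing children."""
    child_income = base_income * ((1.0 + skill_premium) if clan.skilled else 1.0)
    return clan.children_reproducing * child_income

quanty = Clan("Quanty", children_born=4, children_reproducing=2, skilled=False)
qualy = Clan("Qualy", children_born=2, children_reproducing=2, skilled=True)

for premium in (0.0, 0.5, 1.0):   # hypothetical skill premia as technology advances
    print(f"skill premium {premium:.0%}: "
          f"Quanty lineage income {lineage_income(quanty, 1.0, premium):.1f}, "
          f"Qualy lineage income {lineage_income(qualy, 1.0, premium):.1f}")
```

With the same number of reproducing children per household, the assumed premium is what turns the Qualy clan into the better-resourced lineage, which is the "distinct evolutionary advantage" the quoted passage describes.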
Adaptive market hypothesis Andrew Lo proposed the adaptive market hypothesis, a view that financial systems may follow principles of the efficient market hypothesis as well as evolutionary principles such as adaptation and natural selection. Criticism The emergence of modern evolutionary economics was welcomed by the critics of the neoclassical mainstream. However, the field, especially the approach by Nelson and Winter, has also drawn critical attitude from other heterodox economists. A year after An Evolutionary Theory of Economic Change was published, Philip Mirowski expressed his doubts that this framework represented genuine evolutionary economics research (i.e., in the vein of Veblen) and not just a variant of neoclassical methodology, especially since the authors admitted their framework could include neoclassical orthodoxy. Some Veblenian institutionalists claim this framework is only a “protective modification of the neoclassical economics and is antithetical to Veblen's evolutionary economics.” Another possible shortcoming recognized by the proponents of modern evolutionary economics is that the field is heterogeneous, with no convergence on an integrated approach. Related fields Evolutionary psychology Evolutionary psychology is a theoretical approach in psychology that examines cognition and behaviour from a modern evolutionary perspective. It seeks to identify human psychological adaptations with regard to the ancestral problems they evolved to solve. In this framework, psychological traits and mechanisms are either functional products of natural and sexual selection or non-adaptive by-products of other adaptive traits. Economic concepts can also be viewed through these lenses. For instance, apparent anomalies in decision-making, such as violations of the maximization principle, may be a result of human brain evolution. Another concept suitable for evolutionary analysis is the utility function, which may essentially be represented as an evolutionary fitness function. Evolutionary Psychology and Economic Behavior There have been efforts to apply the insights of evolutionary psychology to understand economic behavior. An important part of this effort has been to use evolutionary psychology to analyze and add structure to the human utility function. Paul H. Rubin has made significant contributions to this area of research. His influential book, "Darwinian Politics," delves into the intersection of evolutionary theory and political and economic behavior, exploring how evolutionary principles shape human political preferences. This book shows that many errors in political decision making, such as a dislike of free trade, are based in our evolved mental architecture. This analysis is extended in "Folk Economics" which shows that our evolved brains are subject to zero-sum thinking. Rubin, P. H. (2002). "Darwinian Politics." Rutgers University Press. Rubin, P. H. (2008). "Folk Economics." Southern Economic Journal. Evolutionary game theory Evolutionary game theory is the application of game theory to evolving populations in biology. It defines a framework of contests, strategies, and analytics into which Darwinian competition can be modelled. It originated in 1973 with John Maynard Smith and George R. Price's formalisation of contests, analysed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies. Evolutionary game theory differs from classical game theory in focusing more on the dynamics of strategy change.
This is influenced by the frequency of the competing strategies in the population. Evolutionary game theory has helped to explain the basis of altruistic behaviours in Darwinian evolution. It has in turn become of interest to sociologists, anthropologists, philosophers, and economists. See also Adaptive market hypothesis Behavioural economics Complexity economics Cultural economics Heterodox economics Institutional economics Mainstream economics Neoclassical economics Non-equilibrium economics Ecological model of competition Population dynamics Creative destruction Innovation system Evolutionary psychology Evolutionary socialism Universal Darwinism Association for Evolutionary Economics European Association for Evolutionary Political Economy Geoffrey Hodgson Oded Galor Richard R. Nelson Sidney G. Winter Thorstein Veblen References Further reading Veblen, T. B. (1898). Why Is Economics Not an Evolutionary Science? The Quarterly Journal of Economics, 12(3), pp. 373-97. Veblen, T. B. (1899). The Theory of the Leisure Class: An Economic Study in the Evolution of Institutions. New York: Huebsch. Archived from on November 22, 2021. Nelson, R. R., Winter, S. G. (1982). An Evolutionary Theory of Economic Change. Cambridge, MA: Harvard University Press. Archived from on March 23, 2023. Hodgson, G. M. (2004) The Evolution of Institutional Economics: Agency, Structure and Darwinism in American Institutionalism. London and New York: Routledge. Hodgson, G. M. (2012). Evolutionary Economics, in Fundamental Economics, edited by Mukul Majumdar, Ian Wills, Pasquale Michael Sgro, John M. Gowdy, in Encyclopedia of Life Support Systems (EOLSS), Developed under the Auspices of the UNESCO, EOLSS Publishers, Paris, France, . Archived from on April 24, 2023. Oded Galor (2022). The Journey of Humanity: The Origins of Wealth and Inequality. Penguin Random House, 2022. Journals Journal of Economic Issues, sponsored by the Association for Evolutionary Economics. Journal of Evolutionary Economics, sponsored by the International Josef Schumpeter Society. Journal of Institutional Economics, sponsored by the European Association for Evolutionary Political Economy. Criticisms of economics Innovation economics Schools of economic thought Thorstein Veblen
Evolutionary economics
[ "Biology" ]
3,545
[ "Evolutionary biology" ]
162,797
https://en.wikipedia.org/wiki/Individuation
The principle of individuation, or principium individuationis, describes the manner in which a thing is identified as distinct from other things. The concept appears in numerous fields and is encountered in works of Leibniz, Carl Jung, Gunther Anders, Gilbert Simondon, Bernard Stiegler, Friedrich Nietzsche, Arthur Schopenhauer, David Bohm, Henri Bergson, Gilles Deleuze, and Manuel DeLanda. Usage The word individuation occurs with different meanings and connotations in different fields. In philosophy Philosophically, "individuation" expresses the general idea of how a thing is identified as an individual thing that "is not something else". This includes how an individual person is held to be different from other elements in the world and how a person is distinct from other persons. By the seventeenth century, philosophers began to associate the question of individuation or what brings about individuality at any one time with the question of identity or what constitutes sameness at different points in time. In Jungian psychology In analytical psychology, individuation is the process by which the individual self develops out of an undifferentiated unconscious – seen as a developmental psychic process during which innate elements of personality, the components of the immature psyche, and the experiences of the person's life become, if the process is more or less successful, integrated over time into a well-functioning whole. Other psychoanalytic theorists describe it as the stage where an individual transcends group attachment and narcissistic self-absorption. In the news industry The news industry has begun using the term individuation to denote new printing and on-line technologies that permit mass customization of the contents of a newspaper, a magazine, a broadcast program, or a website so that its contents match each user's unique interests. This differs from the traditional mass-media practice of producing the same contents for all readers, viewers, listeners, or on-line users. Communications theorist Marshall McLuhan alluded to this trend when discussing the future of printed books in an electronically interconnected world in the 1970s and 1980s. In privacy and data protection law From around 2016, coinciding with increased government regulation of the collection and handling of personal data, most notably the GDPR in EU Law, individuation has been used to describe the ‘singling out’ of a person from a crowd – a threat to privacy, autonomy and dignity. Most data protection and privacy laws turn on the identifiability of an individual as the threshold criterion for when data subjects will need legal protection. However, privacy advocates argue privacy harms can also arise from the ability to disambiguate or ‘single out’ a person. Doing so enables the person, at an individual level, to be tracked, profiled, targeted, contacted, or subject to a decision or action which impacts them - even if their civil or legal ‘identity’ is not known (or knowable). In some jurisdictions the wording of the statute already includes the concept of individuation. In other jurisdictions regulatory guidance has suggested that the concept of 'identification' includes individuation - i.e., the process by which an individual can be 'singled out' or distinguished from all other members of a group.
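A minimal sketch of what that 'singling out' means in practice (the records and quasi-identifiers below are invented for illustration): even with no name stored, any combination of attributes that matches exactly one member of a group individuates that person.

```python
# Illustration of individuation as "singling out": a record can be
# distinguished from every other member of a group by a combination of
# attributes, even though no name or legal identity is held.
from collections import Counter

records = [   # hypothetical, de-identified records
    {"postcode": "2000", "birth_year": 1985, "sex": "F"},
    {"postcode": "2000", "birth_year": 1985, "sex": "M"},
    {"postcode": "2010", "birth_year": 1990, "sex": "F"},
    {"postcode": "2010", "birth_year": 1990, "sex": "F"},
]

def singled_out(rows, keys):
    """Return attribute combinations that match exactly one record."""
    counts = Counter(tuple(row[k] for k in keys) for row in rows)
    return [combo for combo, n in counts.items() if n == 1]

print(singled_out(records, ("postcode", "birth_year", "sex")))
# Two combinations match a single record each; anyone fitting one of them is
# individuated (trackable, profilable, targetable) without ever being named.
```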
However, where privacy and data protection statutes use only the word ‘identification’ or ‘identifiability’, different court decisions mean that there is not necessarily a consensus about whether the legal concept of identification already encompasses individuation or not. Rapid advances in technologies including artificial intelligence and video surveillance coupled with facial recognition systems have now altered the digital environment to such an extent that ‘not identifiable by name’ is no longer an effective proxy for ‘will suffer no privacy harm’. Many data protection laws may require redrafting to give adequate protection to privacy interests, by explicitly regulating individuation as well as identification of individual people. In physics Two quantum entangled particles cannot be understood independently. A quantum superposition of two or more states, e.g., Schrödinger's cat being simultaneously dead and alive, is mathematically not the same as assuming the cat is in an individual alive state with 50% probability. Heisenberg's uncertainty principle says that complementary variables, such as position and momentum, cannot both be precisely known – in some sense, they are not individual variables. A natural criterion of individuality has been suggested. Arthur Schopenhauer For Schopenhauer, the principium individuationis is constituted of time and space, being the ground of multiplicity. In his view, the mere difference in location suffices to make two systems different, with each of the two states having its own real physical state, independent of the state of the other. This view influenced Albert Einstein. Schrödinger put the Schopenhauerian label “Collection of Thoughts on the physical Principium individuationis” on a folder of papers in his files. Carl Jung According to Jungian psychology, individuation is a process of psychological integration. "In general, it is the process by which individual beings are formed and differentiated [from other human beings]; in particular, it is the development of the psychological individual as a being distinct from the general, collective psychology." Individuation is a process of transformation whereby the personal and collective unconscious are brought into consciousness (e.g., by means of dreams, active imagination, or free association) to be assimilated into the whole personality. It is a completely natural process that is necessary for the integration of the psyche. Individuation has a holistic healing effect on the person, both mentally and physically. In addition to Jung's theory of complexes, his theory of the individuation process forms conceptions of an unconscious filled with mythic images, a non-sexual libido, the general types of extraversion and introversion, the compensatory and prospective functions of dreams, and the synthetic and constructive approaches to fantasy formation and utilization. "The symbols of the individuation process . . . mark its stages like milestones, prominent among them for Jungians being the shadow, the wise old man . . . and lastly the anima in man and the animus in woman." Thus, "There is often a movement from dealing with the persona at the start . . . to the ego at the second stage, to the shadow as the third stage, to the anima or animus, to the Self as the final stage. Some would interpose the Wise Old Man and the Wise Old Woman as spiritual archetypes coming before the final step of the Self." 
"The most vital urge in every being, the urge to self-realize, is the motivating force behind the individuation process. With the internal compass of our very nature set toward self-realization, the thrust to become who and what we are derives its power from the instincts. On taking up the study of alchemy, Jung realized his long-held desire to find a body of work expressive of the psychological processes involved in the overarching process of individuation." Gilbert Simondon In L'individuation psychique et collective, Gilbert Simondon developed a theory of individual and collective individuation in which the individual subject is considered as an effect of individuation rather than a cause. Thus, the individual atom is replaced by a never-ending ontological process of individuation. Simondon also conceived of "pre-individual fields" which make individuation possible. Individuation is an ever-incomplete process, always leaving a "pre-individual" left over, which makes possible future individuations. Furthermore, individuation always creates both an individual subject and a collective subject, which individuate themselves concurrently. Like Maurice Merleau-Ponty, Simondon believed that the individuation of being cannot be grasped except by a correlated parallel and reciprocal individuation of knowledge. Bernard Stiegler The philosophy of Bernard Stiegler draws upon and modifies the work of Gilbert Simondon on individuation and also upon similar ideas in Friedrich Nietzsche and Sigmund Freud. During a talk given at the Tate Modern art gallery in 2004, Stiegler summarized his understanding of individuation. The essential points are the following: The I, as a psychic individual, can only be thought in relationship to we, which is a collective individual. The I is constituted in adopting a collective tradition, which it inherits and in which a plurality of I ’s acknowledge each other’s existence. This inheritance is an adoption, in that I can very well, as the French grandson of a German immigrant, recognize myself in a past which was not the past of my ancestors but which I can make my own. This process of adoption is thus structurally factual. The I is essentially a process, not a state, and this process is an in-dividuation — it is a process of psychic individuation. It is the tendency to become one, that is, to become indivisible. This tendency never accomplishes itself because it runs into a counter-tendency with which it forms a metastable equilibrium. (It must be pointed out how closely this conception of the dynamic of individuation is to the Freudian theory of drives and to the thinking of Nietzsche and Empedocles.) The we is also such a process (the process of collective individuation). The individuation of the I is always inscribed in that of the we, whereas the individuation of the we takes place only through the individuations, polemical in nature, of the I ’s which constitute it. That which links the individuations of the I and the we is a pre-individual system possessing positive conditions of effectiveness that belong to what Stiegler calls retentional apparatuses. These retentional apparatuses arise from a technical system which is the condition of the encounter of the I and the we — the individuation of the I and the we is, in this respect, also the individuation of the technical system. 
The technical system is an apparatus which has a specific role wherein all objects are inserted — a technical object exists only insofar as it is disposed within such an apparatus with other technical objects (this is what Gilbert Simondon calls the technical group). The technical system is also that which founds the possibility of the constitution of retentional apparatuses, springing from the processes of grammatization growing out of the process of individuation of the technical system. And these retentional apparatuses are the basis for the dispositions between the individuation of the I and the individuation of the we in a single process of psychic, collective, and technical individuation composed of three branches, each branching out into process groups. This process of triple individuation is itself inscribed within a vital individuation which must be apprehended as: the vital individuation of natural organs the technological individuation of artificial organs and the psycho-social individuation of organizations linking them together In the process of individuation, wherein knowledge as such emerges, there are individuations of mnemo-technological subsystems which overdetermine, qua specific organizations of what Stiegler calls tertiary retentions, the organization, transmission, and elaboration of knowledge stemming from the experience of the sensible. See also Akrasia Deindividuation Identical particles Identity formation Indiscernibles Nekyia Positive disintegration Principle of individuation Rationalization (sociology) Self-actualization References Bibliography Gilbert Simondon, Du mode d'existence des objets techniques (Méot, 1958; Paris: Aubier, 1989, second edition). Gilbert Simondon, On the Mode of Existence of Technical Objects, Part 1, link to PDF of 1980 translation. Gilbert Simondon, L'individu et sa genèse physico-biologique (l'individuation à la lumière des notions de forme et d'information) (Paris: PUF, 1964; J.Millon, coll. Krisis, 1995, second edition). Gilbert Simondon, The Individual and Its Physico-Biological Genesis, Part 1, Part 2, links to HTML files of unpublished 2007 translation. Gilbert Simondon, L'Individuation psychique et collective (1964; Paris: Aubier, 1989). Bernard Stiegler, Acting Out. Bernard Stiegler, Temps et individuation technique, psychique, et collective dans l’oeuvre de Simondon. Gilles Deleuze Child development Analytical psychology Media studies Biology terminology Personhood Concepts in social philosophy Metaphysical principles
Individuation
[ "Technology", "Biology" ]
2,678
[ "Philosophy of technology", "Science and technology studies", "nan" ]
162,840
https://en.wikipedia.org/wiki/Concentration%20ratio
In economics, concentration ratios are used to quantify market concentration and are based on companies' market shares in a given industry. A concentration ratio (CR) is the sum of the percentage market shares of (a pre-specified number of) the largest firms in an industry. An n-firm concentration ratio is a common measure of market structure and shows the combined market share of the n largest firms in the market. For example, if n = 5, CR5 defines the combined market share of the five largest firms in an industry. Calculation The concentration ratio is calculated as follows: CRn = s1 + s2 + ... + sn, where si defines the market share of the i-th largest firm in an industry as a percentage of total industry market share, and n defines the number of firms included in the concentration ratio calculation. The four-firm and eight-firm concentration ratios (CR4 and CR8) are commonly used. Concentration ratios show the extent of the largest firms' market shares in a given industry. Specifically, a concentration ratio close to 0% denotes a low concentration industry, and a concentration ratio near 100% shows that an industry has high concentration. Concentration levels Concentration ratios range from 0% to 100%; ratios at the lower end of this range indicate lightly concentrated industries, while ratios at the upper end indicate heavily concentrated ones. Benefits and shortfalls Concentration ratios can readily be calculated from industry data, but they are a simplistic, single-parameter statistic. They can be used to quantify market concentration in a given industry in a relevant and succinct manner, but do not capture all available information about the distribution of market shares. In particular, the definition of the concentration ratio does not use the market shares of all the firms in the industry and does not account for the distribution of firm size. Also, it does not provide much detail about the competitiveness of an industry. The following example exposes the aforementioned shortfalls of the concentration ratio. Example The table below shows the market shares of the largest firms in two different industries (Industry A and Industry B). Aside from the market shares of their largest firms, both industries are the same in terms of the number of other firms operating in the industry and those firms' respective market shares. In this example, in both cases, all other firms have a share of less than 10%. It is evident from these figures that Industry B is more concentrated than Industry A, since the market share is distributed more heavily towards the more dominant firms. However, Industry A and Industry B both have CR4 ratios of 80%. This shows that the CR ratio does not fully take into account the distribution of market share amongst the most dominant firms. References See also National Statistics Economic Trends: Concentration Ratios 2004 Market form Herfindahl index Microeconomics Market dominance strategies Concentration indicators Macroeconomic indicators Monopoly (economics) Ratios
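To make the calculation concrete, the sketch below computes CRn from a list of percentage market shares. It is a minimal illustration rather than anything taken from the article: the function name and the sample figures are hypothetical, chosen so that the four largest firms together hold 80% of the market, echoing the CR4 value discussed in the example above.

```python
# Minimal sketch: computing an n-firm concentration ratio (CRn) from a list of
# market shares expressed as percentages of the total industry market.
# The function name and example figures are illustrative, not from the article.

def concentration_ratio(market_shares, n):
    """Return CRn: the combined market share (in %) of the n largest firms."""
    largest_n = sorted(market_shares, reverse=True)[:n]
    return sum(largest_n)

if __name__ == "__main__":
    # Hypothetical industry: four dominant firms plus a fringe of smaller ones.
    shares = [30.0, 25.0, 15.0, 10.0, 8.0, 7.0, 5.0]   # sums to 100%
    print(concentration_ratio(shares, 4))   # CR4 = 80.0
    print(concentration_ratio(shares, 5))   # CR5 = 88.0
```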
Concentration ratio
[ "Mathematics" ]
535
[ "Arithmetic", "Ratios" ]
162,843
https://en.wikipedia.org/wiki/Color%20television
Color television (American English) or colour television (Commonwealth English) is a television transmission technology that includes color information for the picture, so the video image can be displayed in color on the television set. It improves on the monochrome or black-and-white television technology, which displays the image in shades of gray (grayscale). Television broadcasting stations and networks in most parts of the world upgraded from black-and-white to color transmission between the 1960s and the 1980s. The invention of color television standards was an important part of the history and technology of television. Transmission of color images using mechanical scanners had been conceived as early as the 1880s. A demonstration of mechanically scanned color television was given by John Logie Baird in 1928, but its limitations were apparent even then. Development of electronic scanning and display made a practical system possible. Monochrome transmission standards were developed prior to World War II, but civilian electronics development was frozen during much of the war. In August 1944, Baird gave the world's first demonstration of a practical fully electronic color television display. In the United States, competing color standards were developed, finally resulting in the NTSC color standard that was compatible with the prior monochrome system. Although the NTSC color standard was proclaimed in 1953, and limited programming soon became available, it was not until the early 1970s that color television in North America outsold black-and-white units. Color broadcasting in Europe did not standardize on the PAL or SECAM formats until the 1960s. Broadcasters began to upgrade from analog color television technology to higher-resolution digital television; the exact year varies by country. While the changeover is complete in many countries, analog television remains in use in some countries. Development The human eye's detection system in the retina consists primarily of two types of light detectors: rod cells that capture light, dark, and shapes/figures, and the cone cells that detect color. A typical retina contains 120 million rods and 4.5 million to 6 million cones, which are divided into three types, each one with a characteristic profile of excitability by different wavelengths of the spectrum of visible light. This means that the eye has far more resolution in brightness, or "luminance", than in color. However, post-processing of the optic nerve and other portions of the human visual system combine the information from the rods and cones to re-create what appears to be a high-resolution color image. The eye has limited bandwidth to the rest of the visual system, estimated at just under 8 Mbit/s. This manifests itself in a number of ways, but the most important in terms of producing moving images is the way that a series of still images displayed in quick succession will appear to be continuous smooth motion. This illusion starts to work at about 16 frame/s, and common motion pictures use 24 frame/s. Television, using power from the electrical grid, historically tuned its rate in order to avoid interference with the alternating current being supplied – in North America, some Central and South American countries, Taiwan, Korea, part of Japan, the Philippines, and a few other countries, this was 60 video fields per second to match the 60 Hz power, while in most other countries it was 50 fields per second to match the 50 Hz power. 
The NTSC color system changed from the black-and-white 60-fields-per-second standard to 59.94 fields per second to make the color circuitry simpler; the 1950s TV sets had matured enough that the power frequency/field rate mismatch was no longer important. Modern TV sets can display multiple field rates (50, 59.94, or 60, in either interlaced or progressive scan) while accepting power at various frequencies (often the operating range is specified as 48–62 Hz). In its most basic form, a color broadcast can be created by broadcasting three monochrome images, one each in the three colors of red, green, and blue (RGB). When displayed together or in rapid succession, these images will blend together to produce a full-color image as seen by the viewer. To do so without making the images flicker, the refresh time of all three images put together would have to be above the critical limit, and generally the same as a single black and white image. This would require three times the number of images to be sent in the same time, greatly increasing the amount of radio bandwidth required to send the complete signal and thus similarly increasing the required radio spectrum. Early plans for color television in the United States included a move from very high frequency (VHF) to ultra high frequency (UHF) to open up additional spectrum. One of the great technical challenges of introducing color broadcast television was the desire to conserve bandwidth. In the United States, after considerable research, the National Television Systems Committee approved an all-electronic system developed by RCA that encoded the color information separately from the brightness information and greatly reduced the resolution of the color information in order to conserve bandwidth. The brightness image remained compatible with existing black-and-white television sets at slightly reduced resolution, while color-capable televisions could decode the extra information in the signal and produce a limited-resolution color display. The higher resolution black-and-white and lower resolution color images combine in the eye to produce a seemingly high-resolution color image. The NTSC standard represented a major technical achievement. Early television Experiments with facsimile image transmission systems that used radio broadcasts to transmit images date to the 19th century. It was not until the 20th century that advances in electronics and light detectors made television practical. A key problem was the need to convert a 2D image into a "1D" radio signal; some form of image scanning was needed to make this work. Early systems generally used a device known as a "Nipkow disk", which was a spinning disk with a series of holes punched in it that caused a spot to scan across and down the image. A single photodetector behind the disk captured the image brightness at any given spot, which was converted into a radio signal and broadcast. A similar disk was used at the receiver side, with a light source behind the disk instead of a detector. A number of such mechanical television systems were being used experimentally in the 1920s. The best-known was John Logie Baird's, which was actually used for regular public broadcasting in Britain for several years. Indeed, Baird's system was demonstrated to members of the Royal Institution in London in 1926 in what is generally recognized as the first demonstration of a true, working television system. In spite of these early successes, all mechanical television systems shared a number of serious problems. 
Because the discs were mechanically driven, perfect synchronization of the sending and receiving discs was not easy to ensure, and irregularities could result in major image distortion. Another problem was that the image was scanned within a small, roughly rectangular area of the disk's surface, so that larger, higher-resolution displays required increasingly unwieldy disks and smaller holes that produced increasingly dim images. Rotating drums bearing small mirrors set at progressively greater angles proved more practical than Nipkow discs for high-resolution mechanical scanning, allowing images of 240 lines and more to be produced, but such delicate, high-precision optical components were not commercially practical for home receivers. It was clear to a number of developers that a completely electronic scanning system would be superior, and that the scanning could be achieved in a vacuum tube via electrostatic or magnetic means. Converting this concept into a usable system took years of development and several independent advances. The two key advances were Philo Farnsworth's electronic scanning system and Vladimir Zworykin's Iconoscope camera. The Iconoscope, based on Kálmán Tihanyi's early patents, superseded the Farnsworth system. With these systems, the BBC began regularly scheduled black-and-white television broadcasts in 1936, but these were shut down again with the start of World War II in 1939. By this time, thousands of television sets had been sold. The receivers developed for this program, notably those from Pye Ltd., played a key role in the development of radar. By 22 March 1935, 180-line black-and-white television programs were being broadcast from the Paul Nipkow TV station in Berlin. In 1936, under the guidance of the Minister of Public Enlightenment and Propaganda, Joseph Goebbels, direct transmissions from fifteen mobile units at the Olympic Games in Berlin were transmitted to selected small television houses in Berlin and Hamburg. In 1941, the first NTSC meetings produced a single standard for US broadcasts. US television broadcasts began in earnest in the immediate post-war era, and by 1950 there were 6 million televisions in the United States. All-mechanical color The basic idea of using three monochrome images to produce a color image had been experimented with almost as soon as black-and-white televisions had first been built. Among the earliest published proposals for television was one by Maurice Le Blanc in 1880 for a color system, including the first mentions in television literature of line and frame scanning, although he gave no practical details. Polish inventor Jan Szczepanik patented a color television system in 1897, using a selenium photoelectric cell at the transmitter and an electromagnet controlling an oscillating mirror and a moving prism at the receiver. But his system contained no means of analyzing the spectrum of colors at the transmitting end, and could not have worked as he described it. An Armenian inventor, Hovannes Adamian, also experimented with color television as early as 1907. He is credited with the first color television project, patented in Germany on 31 March 1908 (patent number 197183), then in Britain on 1 April 1908 (patent number 7219), in France (patent number 390326), and in Russia in 1910 (patent number 17912). Shortly after his practical demonstration of black and white television, on 3 July 1928, Baird demonstrated the world's first color transmission. 
This used scanning discs at the transmitting and receiving ends with three spirals of apertures, each spiral with filters of a different primary color; and three light sources, controlled by the signal, at the receiving end, with a commutator to alternate their illumination. The demonstration was of a young girl wearing different colored hats. The girl, Noele Gordon, later became a TV actress in the soap opera Crossroads. Baird also made the world's first color over-the-air broadcast on 4 February 1938, sending a mechanically scanned 120-line image from Baird's Crystal Palace studios to a projection screen at London's Dominion Theatre. Mechanically scanned color television was also demonstrated by Bell Laboratories in June 1929 using three complete systems of photoelectric cells, amplifiers, glow-tubes, and color filters, with a series of mirrors to superimpose the red, green, and blue images into one full-color image. Hybrid systems As was the case with black-and-white television, an electronic means of scanning would be superior to the mechanical systems like Baird's. The obvious solution on the broadcast end would be to use three conventional Iconoscopes with colored filters in front of them to produce an RGB signal. Using three separate tubes each looking at the same scene would produce slight differences in parallax between the frames, so in practice a single lens was used with a mirror or prism system to separate the colors for the separate tubes. Each tube captured a complete frame and the signal was converted into radio in a fashion essentially identical to the existing black-and-white systems. The problem with this approach was there was no simple way to recombine them on the receiver end. If each image was sent at the same time on different frequencies, the images would have to be "stacked" somehow on the display, in real time. The simplest way to do this would be to reverse the system used in the camera: arrange three separate black-and-white displays behind colored filters and then optically combine their images using mirrors or prisms onto a suitable screen, like frosted glass. RCA built just such a system in order to present the first electronically scanned color television demonstration on 5 February 1940, privately shown to members of the US Federal Communications Commission at the RCA plant in Camden, New Jersey. This system, however, suffered from the twin problems of costing at least three times as much as a conventional black-and-white set, as well as having very dim pictures, the result of the fairly low illumination given off by tubes of the era. Projection systems of this sort would become common decades later, however, with improvements in technology. Another solution would be to use a single screen, but break it up into a pattern of closely spaced colored phosphors instead of an even coating of white. Three receivers would be used, each sending its output to a separate electron gun, aimed at its colored phosphor. However, this solution was not practical. The electron guns used in monochrome televisions had limited resolution, and if one wanted to retain the resolution of existing monochrome displays, the guns would have to focus on individual dots three times smaller. This was beyond the state of the art of the technology at the time. Instead, a number of hybrid solutions were developed that combined a conventional monochrome display with a colored disk or mirror. 
In these systems the three colored images were sent one after the other, in either complete frames in the "field-sequential color system", or for each line in the "line-sequential" system. In both cases a colored filter was rotated in front of the display in sync with the broadcast. Since three separate images were being sent in sequence, if they used existing monochrome radio signaling standards they would have an effective refresh rate of only 20 fields, or 10 frames, a second, well into the region where flicker would become visible. In order to avoid this, these systems increased the frame rate considerably, making the signal incompatible with existing monochrome standards. The first practical example of this sort of system was again pioneered by John Logie Baird. In 1940 he publicly demonstrated a color television combining a traditional black-and-white display with a rotating colored disk. This device was very "deep", but was later improved with a mirror folding the light path into an entirely practical device resembling a large conventional console. However, Baird was not happy with the design, and as early as 1944 had commented to a British government committee that a fully electronic device would be better. In 1939, Hungarian engineer Peter Carl Goldmark, while at CBS, introduced an electro-mechanical system which contained an Iconoscope sensor. The CBS field-sequential color system was partly mechanical, with a disc made of red, blue, and green filters spinning inside the television camera at 1,200 rpm, and a similar disc spinning in synchronization in front of the cathode ray tube inside the receiver set. The system was first demonstrated to the Federal Communications Commission (FCC) on 29 August 1940, and shown to the press on 4 September. CBS began experimental color field tests using film as early as 28 August 1940, and live cameras by 12 November. NBC (owned by RCA) made its first field test of color television on 20 February 1941. CBS began daily color field tests on 1 June 1941. These color systems were not compatible with existing black-and-white television sets, and as no color television sets were available to the public at this time, viewing of the color field tests was restricted to RCA and CBS engineers and the invited press. The War Production Board halted the manufacture of television and radio equipment for civilian use from 22 April 1942 to 20 August 1945, limiting any opportunity to introduce color television to the general public. Fully electronic As early as 1940, Baird had started work on a fully electronic system he called the "Telechrome". Early Telechrome devices used two electron guns aimed at either side of a phosphor plate. The phosphor was patterned so the electrons from the guns only fell on one side of the patterning or the other. Using cyan and magenta phosphors, a reasonable limited-color image could be obtained. Baird's demonstration on 16 August 1944, was the first example of a practical color television system. Work on the Telechrome continued and plans were made to introduce a three-gun version for full color. However, Baird's untimely death in 1946 ended the development of the Telechrome system. Similar concepts were common through the 1940s and 1950s, differing primarily in the way they re-combined the colors generated by the three guns. The Geer tube was similar to Baird's concept, but used small pyramids with the phosphors deposited on their outside faces, instead of Baird's 3D patterning on a flat surface. 
The Penetron used three layers of phosphor on top of each other and increased the power of the beam to reach the upper layers when drawing those colors. The Chromatron used a set of focusing wires to select the colored phosphors arranged in vertical stripes on the tube. FCC color In the immediate post-war era, the Federal Communications Commission (FCC) was inundated with requests to set up new television stations. Worrying about congestion of the limited number of channels available, the FCC put a moratorium on all new licenses in 1948 while considering the problem. A solution was immediately forthcoming; rapid development of radio receiver electronics during the war had opened a wide band of higher frequencies to practical use, and the FCC set aside a large section of these new UHF bands for television broadcast. At the time, black-and-white television broadcasting was still in its infancy in the U.S., and the FCC started to look at ways of using this newly available bandwidth for color broadcasts. Since no existing television would be able to tune in these stations, they were free to pick an incompatible system and allow the older VHF channels to die off over time. The FCC called for technical demonstrations of color systems in 1948, and the Joint Technical Advisory Committee (JTAC) was formed to study them. CBS displayed improved versions of its original design, now using a single 6 MHz channel (like the existing black-and-white signals) at 144 fields per second and 405 lines of resolution. Color Television Inc. (CTI) demonstrated its line-sequential system, while Philco demonstrated a dot-sequential system based on its beam-index tube-based "Apple" tube technology. Of the entrants, the CBS system was by far the best-developed, and won head-to-head testing every time. While the meetings were taking place it was widely known within the industry that RCA was working on a dot-sequential system that was compatible with existing black-and-white broadcasts, but RCA declined to demonstrate it during the first series of meetings. Just before the JTAC presented its findings, on 25 August 1949, RCA broke its silence and introduced its system as well. The JTAC still recommended the CBS system, and after the resolution of an ensuing RCA lawsuit, color broadcasts using the CBS system started on 25 June 1951. By this point the market had changed dramatically; when color was first being considered in 1948 there were fewer than a million television sets in the U.S., but by 1951 there were well over 10 million. The idea that the VHF band could be allowed to "die" was no longer practical. During its campaign for FCC approval, CBS gave the first demonstrations of color television to the general public, showing an hour of color programs daily Mondays through Saturdays, beginning 12 January 1950, and running for the remainder of the month, over WOIC in Washington, D.C., where the programs could be viewed on eight 16-inch color receivers in a public building. Due to high public demand, the broadcasts were resumed 13–21 February, with several evening programs added. CBS initiated a limited schedule of color broadcasts from its New York station WCBS-TV Mondays to Saturdays beginning 14 November 1950, making ten color receivers available for the viewing public. All were broadcast using the single color camera that CBS owned. The New York broadcasts were extended by coaxial cable to Philadelphia's WCAU-TV beginning 13 December, and to Chicago on 10 January, making them the first network color broadcasts. 
After a series of hearings beginning in September 1949, the FCC found the RCA and CTI systems fraught with technical problems, inaccurate color reproduction, and expensive equipment, and so formally approved the CBS system as the U.S. color broadcasting standard on 11 October 1950. An unsuccessful lawsuit by RCA delayed the first commercial network broadcast in color until 25 June 1951, when a musical variety special titled simply Premiere was shown over a network of five East Coast CBS affiliates. Viewing was again restricted: the program could not be seen on black-and-white sets, and Variety estimated that only thirty prototype color receivers were available in the New York area. Regular color broadcasts began that same week with the daytime series The World Is Yours and Modern Homemakers. While the CBS color broadcasting schedule gradually expanded to twelve hours per week (but never into prime time), and the color network expanded to eleven affiliates as far west as Chicago, its commercial success was doomed by the lack of color receivers necessary to watch the programs, the refusal of television manufacturers to create adapter mechanisms for their existing black-and-white sets, and the unwillingness of advertisers to sponsor broadcasts seen by almost no one. CBS had bought a television manufacturer in April, and in September 1951, production began on the only CBS-Columbia color television model, with the first color sets reaching retail stores on 28 September. However, it was too little, too late. Only 200 sets had been shipped, and only 100 sold, when CBS discontinued its color television system on 20 October 1951, ostensibly by request of the National Production Authority for the duration of the Korean War, and bought back all the CBS color sets it could to prevent lawsuits by disappointed customers. RCA chairman David Sarnoff later charged that the NPA's order had come "out of a situation artificially created by one company to solve its own perplexing problems" because CBS had been unsuccessful in its color venture. Compatible color While the FCC was holding its JTAC meetings, development was taking place on a number of systems allowing true simultaneous color broadcasts, "dot-sequential color systems". Unlike the hybrid systems, dot-sequential televisions used a signal very similar to existing black-and-white broadcasts, with the intensity of every dot on the screen being sent in succession. In 1938 Georges Valensi demonstrated an encoding scheme that would allow color broadcasts to be encoded so they could be picked up on existing black-and-white sets as well. In his system the outputs of the three camera tubes were re-combined to produce a single "luminance" value that was very similar to a monochrome signal and could be broadcast on the existing VHF frequencies. The color information was encoded in a separate "chrominance" signal, consisting of two separate signals: the blue signal minus the luminance (B'–Y') and the red signal minus the luminance (R'–Y'). These signals could then be broadcast separately on a different frequency; a monochrome set would tune in only the luminance signal on the VHF band, while color televisions would tune in both the luminance and chrominance on two different frequencies, and apply the reverse transforms to retrieve the original RGB signal. The downside to this approach is that it required a major boost in bandwidth use, something the FCC was interested in avoiding. 
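A minimal sketch of this luminance/color-difference idea follows, assuming the luma weights later standardized for NTSC (0.299, 0.587, 0.114); the function names and sample values are illustrative only, not part of any particular standard's reference implementation. A monochrome receiver would use only the Y' value, while a color receiver recovers R'G'B' from Y' together with the two color-difference signals.

```python
# Sketch of the luminance/color-difference encoding described above:
# gamma-corrected R'G'B' is combined into one luminance signal (Y') plus two
# color-difference signals (B'-Y' and R'-Y'). The 0.299/0.587/0.114 weights
# are the NTSC luma coefficients; everything else here is illustrative.

def encode(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (luma)
    return y, b - y, r - y                  # (Y', B'-Y', R'-Y')

def decode(y, b_y, r_y):
    r = y + r_y                              # red recovered directly
    b = y + b_y                              # blue recovered directly
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # green recovered from the luma sum
    return r, g, b

if __name__ == "__main__":
    rgb = (0.8, 0.4, 0.1)             # an arbitrary gamma-corrected color
    print(decode(*encode(*rgb)))      # round-trips to (0.8, 0.4, 0.1)
```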
RCA used Valensi's concept as the basis of all of its developments, believing it to be the only proper solution to the broadcast problem. However, RCA's early sets using mirrors and other projection systems all suffered from image and color quality problems, and were easily bested by CBS's hybrid system. But solutions to these problems were in the pipeline, and RCA in particular was investing massive sums (later estimated at $100 million) to develop a usable dot-sequential tube. RCA was beaten to the punch by the Geer tube, which used three B&W tubes aimed at different faces of colored pyramids to produce a color image. All-electronic systems included the Chromatron, Penetron and beam-index tube that were being developed by various companies. While investigating all of these, RCA's teams quickly started focusing on the shadow mask system. In July 1938 the shadow mask color television was patented by Werner Flechsig (1900–1981) in Germany, and was demonstrated at the International radio exhibition Berlin in 1939. Most CRT color televisions used today are based on this technology. His solution to the problem of focusing the electron guns on the tiny colored dots was one of brute-force; a metal sheet with holes punched in it allowed the beams to reach the screen only when they were properly aligned over the dots. Three separate guns were aimed at the holes from slightly different angles, and when their beams passed through the holes the angles caused them to separate again and hit the individual spots a short distance away on the back of the screen. The downside to this approach was that the mask cut off the vast majority of the beam energy, allowing it to hit the screen only 15% of the time, requiring a massive increase in beam power to produce acceptable image brightness. The first publicly announced network demonstration of a program using a "compatible color" system was an episode of NBC's Kukla, Fran and Ollie on 10 October 1949, viewable in color only at the FCC. It did not receive FCC approval. In spite of these problems in both the broadcast and display systems, RCA pressed ahead with development and was ready for a second assault on the standards by 1950. Second NTSC The possibility of a compatible color broadcast system was so compelling that the NTSC decided to re-form, and held a second series of meetings starting in January 1950. Having only recently selected the CBS system, the FCC heavily opposed the NTSC's efforts. One of the FCC Commissioners, R. F. Jones, went so far as to assert that the engineers testifying in favor of a compatible system were "in a conspiracy against the public interest". Unlike the FCC approach where a standard was simply selected from the existing candidates, the NTSC would produce a board that was considerably more pro-active in development. Starting before CBS color even got on the air, the U.S. television industry, represented by the National Television System Committee, worked in 1950–1953 to develop a color system that was compatible with existing black-and-white sets and would pass FCC quality standards, with RCA developing the hardware elements. RCA first made publicly announced field tests of the dot sequential color system over its New York station WNBT in July 1951. 
When CBS testified before Congress in March 1953 that it had no further plans for its own color system, the National Production Authority dropped its ban on the manufacture of color television receivers, and the path was open for the NTSC to submit its petition for FCC approval in July 1953, which was granted on 17 December. The first publicly announced network demonstration of a program using the NTSC "compatible color" system was an episode of NBC's Kukla, Fran and Ollie on 30 August 1953, although it was viewable in color only at the network's headquarters. The first network broadcast to go out over the air in NTSC color was a performance of the opera Carmen on 31 October 1953. Adoption North America Canada Colour broadcasts from the United States were available to Canadian population centres near the border from the mid-1950s. At the time that NTSC colour broadcasting was officially introduced into Canada in 1966, less than one percent of Canadian households had a colour television set. Colour television in Canada was launched on the Canadian Broadcasting Corporation's (CBC) English language TV service on 1 September 1966. Private television broadcaster CTV also started colour broadcasts in early September 1966. The CBC's French-language service, Radio-Canada, was broadcasting colour programming on its television network for 15 hours a week in 1968. Full-time colour transmissions started in 1974 on the CBC, with other private sector broadcasters in the country doing so by the end of the 1970s. The following provinces and areas of Canada introduced colour television by the years as stated: Saskatchewan, Alberta, Manitoba, British Columbia, Ontario, Quebec (1966; Major networks only – private sector around 1968 to 1972) Newfoundland and Labrador (1967) Nova Scotia, New Brunswick (1968) Prince Edward Island (1969) Yukon (1971) Northwest Territories (including Nunavut) (1972; Major networks in large centers, many remote areas in the far north did not get colour until at least 1977 and 1978) Cuba Cuba in 1958 became the second country in the world to introduce color television broadcasting, with Havana's Channel 12 using the American NTSC standard and technology patented by RCA. But the color transmissions ended when broadcasting stations were seized in the Cuban Revolution in 1959, and did not return until 1975, using equipment acquired from Japan's NEC Corporation, and SECAM equipment from the Soviet Union, adapted for the American NTSC standard. Mexico Guillermo González Camarena independently invented and developed a field-sequential tricolor disk system in Mexico in the late 1930s, for which he requested a patent in Mexico on 19 August 1940, and in the United States in 1941. González Camarena produced his color television system in his Gon-Cam laboratory for the Mexican market and exported it to the Columbia College of Chicago, which regarded it as the best system in the world. Goldmark had actually applied for a patent for the same field-sequential tricolor system in the US on 7 September 1940, while González Camarena had made his Mexican filing 19 days before, on 19 August. On 31 August 1946, González Camarena sent his first color transmission from his lab in the offices of the Mexican League of Radio Experiments at Lucerna St. No. 1, in Mexico City. The video signal was transmitted at a frequency of 115 MHz and the audio in the 40-metre band. 
He obtained authorization to make the first publicly announced color broadcast in Mexico, on 8 February 1963, of the program Paraíso Infantil on Mexico City's XHGC-TV, using the NTSC system that had by now been adopted as the standard for color programming. González Camarena also invented the "simplified Mexican color TV system" as a much simpler and cheaper alternative to the NTSC system. Due to its simplicity, NASA used a modified version of the system in its Voyager mission of 1979, to take pictures and video of Jupiter. United States Although all-electronic color was introduced in the US in 1953, high prices and the scarcity of color programming greatly slowed its acceptance in the marketplace. The first national color broadcast (the 1954 Tournament of Roses Parade) occurred on 1 January 1954, but over the next dozen years most network broadcasts, and nearly all local programming, continued to be in black-and-white. In 1956, NBC's The Perry Como Show became the first live network television series to present a majority of episodes in color. The CBS television production of Rodgers & Hammerstein's Cinderella was broadcast live in color on 31 March 1957. It was their only musical written directly for television, and had the highest one-night number of viewers to date at 107 million. CBS's The Big Record, starring pop vocalist Patti Page, in 1957–1958 became the first television show broadcast in color for an entire season. The production costs for these shows were greater than most movies were at the time, not only because of all the stars featured in the musical and on the hour-long variety extravaganza, but also due to the extremely high-intensity lighting and electronics required for the new RCA TK-41 cameras, which were the first practical color television cameras. It was not until the mid-1960s that color sets started selling in large numbers, due in part to the color transition of 1965 in which it was announced that over half of all network prime-time programming would be broadcast in color that autumn. The first all-color prime-time season came just one year later. NBC's pioneering coast-to-coast color broadcast of the 1954 Tournament of Roses Parade was accompanied by public demonstrations given across the United States on prototype color receivers by manufacturers RCA, General Electric, Philco, Raytheon, Hallicrafters, Hoffman, Pacific Mercury, and others. Two days earlier, Admiral had demonstrated to its distributors the prototype of Admiral's first color television set planned for consumer sale using the NTSC standards, priced at $1,175. It is not known when actual commercial sales of this receiver began. Production was extremely limited, and no advertisements for it were published in New York newspapers, nor those in Washington, DC. Admiral's color model C1617A became available in the Chicago area on 4 January 1954 and appeared in various stores throughout the country, including those in Maryland on 6 January 1954, San Francisco on 14 January 1954, Indianapolis on 17 January 1954, Pittsburgh on 25 January 1954, and Oakland on 26 January 1954, among other cities thereafter. Westinghouse's color model H840CK15 ($1,295) became available in the New York area on 28 February 1954; only 30 sets were sold in its first month. A less expensive color model from RCA (CT-100) reached dealers in April 1954. Television's first prime time network color series was The Marriage, a situation comedy broadcast live by NBC in the summer of 1954. 
NBC's anthology series Ford Theatre became the first network color-filmed series that October; however, due to the high cost of the first fifteen color episodes, Ford ordered that two black-and-white episodes be filmed for every color episode. The first series to be filmed entirely in color was NBC's Norby, a sitcom that lasted 13 weeks, from January to April 1955, and was replaced by repeats of Ford Theatre's color episodes. Early color telecasts could be preserved only on the black-and-white kinescope process introduced in 1947. It was not until September 1956 that NBC began using color film to time-delay and preserve some of its live color telecasts. Ampex introduced a color videotape recorder in 1958, which NBC used to tape An Evening with Fred Astaire, the oldest surviving network color videotape. This system was also used to unveil a demonstration of color television for the press. On 22 May 1958, President Dwight D. Eisenhower visited the WRC-TV NBC studios in Washington, D.C., and gave a speech touting the new technology's merits. His speech was recorded in color, and a copy of this videotape was given to the Library of Congress for posterity. The syndicated The Cisco Kid had been filmed in color since 1949 in anticipation of color broadcasting. Several other syndicated shows had episodes filmed in color during the 1950s, including The Lone Ranger, My Friend Flicka, and Adventures of Superman. The first was carried by some stations equipped for color telecasts well before NBC began its regular weekly color dramas in 1959, beginning with the Western series Bonanza. NBC was at the forefront of color programming because its parent company RCA manufactured the most successful line of color sets in the 1950s and, at the end of August 1956, announced that in comparison with 1955–56 (when only three of its regularly scheduled programs were broadcast in color) the 1956–57 season would feature 17 series in color. By 1959 RCA was the only remaining major manufacturer of color sets, competitors having discontinued models that used RCA picture tubes because of poor sales, while working on their own improved tube designs. CBS and ABC, not affiliated with set manufacturers and not eager to promote their competitor's product, were much slower to broadcast in color. CBS broadcast color specials and sometimes aired its big weekly variety shows in color, but it offered no regularly scheduled color programming until the fall of 1965. At least one CBS show, The Lucy Show, was filmed in color beginning in 1963, but continued to be telecast in black and white through the end of the 1964–65 season. ABC delayed its first color programs until 1962, but these were initially only broadcasts of the cartoon shows The Flintstones, The Jetsons and Beany and Cecil. The DuMont network, although it did have a television-manufacturing parent company, was in financial decline by 1954 and was dissolved two years later. The only known original color programming broadcast over the DuMont network was a high school football Thanksgiving game from New Jersey in 1957, a year after the network had ceased regular operations. The relatively small amount of network color programming, combined with the high cost of color television sets, meant that as late as 1964 only 3.1 percent of television households in the US had a color set. However, by the mid-1960s, the subject of color programming turned into a ratings war. 
A 1965 American Research Bureau (ARB) study that proposed an emerging trend in color television set sales convinced NBC that a full shift to color would gain a ratings advantage over its two competitors. As a result, NBC provided the catalyst for rapid color expansion by announcing that its prime time schedule for fall 1965 would be almost entirely in color. ABC and CBS followed suit and over half of their combined prime-time programming also moved to color that season, but they were still reluctant to telecast all their programming in color due to production costs. All three broadcast networks were airing full color prime time schedules by the 1966–67 broadcast season, and ABC aired its last new black-and-white daytime programming in December 1967. Public broadcasting networks like NET, however, did not use color for a majority of their programming until 1968. The number of color television sets sold in the US did not exceed black-and-white sales until 1972, which was also the first year that more than fifty percent of television households in the US had a color set. This was also the year that "in color" notices before color television programs ended, due to the rise in color television set sales, and color programming having become the norm. In a display of foresight, Disney had filmed many of its earlier shows in color so they were able to be repeated on NBC, and since most of Disney's feature-length films were also made in color, they could now also be telecast in that format. To emphasize the new feature, the series was re-dubbed Walt Disney's Wonderful World of Color, which premiered in September 1961, and retained that moniker until 1969. By the mid-1970s, the only stations broadcasting in black-and-white were a few high-numbered UHF stations in small markets, and a handful of low-power repeater stations in even smaller markets such as vacation spots. By 1979, even the last of these had converted to color and by the early 1980s, B&W sets had been pushed into niche markets, notably low-power uses, small portable sets, or use as video monitor screens in lower-cost consumer equipment. These black-and-white displays were still compatible with color signals and remained usable through the 1990s and the first decade of the 21st Century for uses that did not require a full color display. The digital television transition in the United States in 2009 rendered the remaining black-and-white television sets obsolete; all digital television receivers are capable of displaying full color. Color broadcasting in Hawaii started on 5 May 1957. One of the last television stations in North America to convert to color, WQEX (now WINP-TV) in Pittsburgh, started broadcasting in color on 16 October 1986, after its black-and-white transmitter, which dated from the 1950s, broke down in February 1985 and the parts required to fix it were no longer available. The owner of WQEX, PBS member station WQED, used some of its pledge money to buy a color transmitter. Early color sets were either floor-standing console models or tabletop versions nearly as bulky and heavy, so in practice, they remained firmly anchored in one place. The introduction of GE's relatively compact and lightweight Porta-Color set in the spring of 1966 made watching color television a more flexible and convenient proposition. In 1972, the year sales of color sets finally surpassed sales of black-and-white sets, the last holdout among daytime network programs converted to color, resulting in the first completely all-color network season. 
Europe The first color television broadcasts in Europe came from early tests in France (SECAM) between 1963 and 1966, with an official launch in October 1967, and from the UK's BBC2, beginning on 1 July 1967, and West Germany's Das Erste and ZDF in August, both countries using the PAL system. They were followed by the Netherlands in September (PAL). On 1 October 1968, the first scheduled television program in color was broadcast in Switzerland. Denmark, Norway, Sweden, Finland, Austria, East Germany, Czechoslovakia, and Hungary all started regular color broadcasts around 1969–1970. Ireland's national TV station RTÉ began using color in 1968 for recorded programs; the first outside broadcast made in color for RTÉ Television was when Ireland hosted the Eurovision Song Contest in Dublin in 1971. The PAL system spread through most of Western Europe. More European countries introduced color television using the PAL system in the 1970s and early 1980s; examples include Belgium (1971), Bulgaria (1971, but not fully implemented until 1972), SFR Yugoslavia (1971), Spain (1972, but not fully implemented until 1977), Iceland (1973, but not fully implemented until 1976), Portugal (1975, but not fully implemented until 1980), Albania (1981), Turkey (1981) and Romania (1983, but not fully implemented until 1985–1991). In Italy there were debates over adopting a national color television system, the ISA, developed by Indesit, but that idea was scrapped. As a result, and after a test during the 1972 Summer Olympics, Italy was one of the last European countries to officially adopt the PAL system in the 1976–1977 season. France, Luxembourg, and most of the Eastern Bloc along with their overseas territories opted for SECAM. SECAM was a popular choice in countries with much hilly terrain, and countries with a very large installed base of older monochrome equipment, which could cope much better with the greater ruggedness of the SECAM signal. However, for many countries the decision was more down to politics than technical merit. A drawback of SECAM for production is that, unlike PAL or NTSC, certain post-production operations of encoded SECAM signals are not really possible without a significant drop in quality. As an example, a simple fade to black is trivial in NTSC and PAL: one merely reduces the signal level until it is zero. However, in SECAM the color difference signals, which are frequency modulated, need first to be decoded to e.g. RGB, then the fade-to-black is applied, and finally the resulting signal is re-encoded into SECAM. Because of this, much SECAM video editing was actually done using PAL equipment, then the resultant signal was converted to SECAM. Another drawback of SECAM is that comb filtering, allowing better color separation, is of limited use in SECAM receivers. This was not, however, much of a drawback in the early days of SECAM as such filters were not readily available in high-end TV sets before the 1990s. The first regular color broadcasts in SECAM were started on 1 October 1967, on France's Second Channel (ORTF 2e chaîne). In France and the UK color broadcasts were made on 625-line UHF frequencies, the VHF band being used for black and white, 405 lines in the UK or 819 lines in France, until the beginning of the 1980s. Countries elsewhere that were already broadcasting 625-line monochrome on VHF and UHF simply transmitted color programs on the same channels. 
Some British television programs, particularly those made by or for ITC Entertainment, were shot on color film before the introduction of color television to the UK, for the purpose of sales to US networks. The first British show to be made in color was the drama series The Adventures of Sir Lancelot (1956–57), which was initially made in black and white but later shot in color for sale to the NBC network in the United States. Other British color television programs made before the introduction of color television in the UK include Stingray (1964–1965), which was claimed to be the first British TV show to be filmed entirely in color, although when this claim was made in the 1960s it was protested by Francis Coudrill, who said his series The Stoopendus Adventures of Hank had been shot entirely in color some years previously; Thunderbirds (1965–1966), The Baron (1966–1967), The Saint (from 1966 to 1969), The Avengers (from 1967 to 1969), Man in a Suitcase (1967–1968), The Prisoner (1967–1968) and Captain Scarlet and the Mysterons (1967–1968). However, most UK series predominantly made using videotape, such as Doctor Who (1963–89; 2005–present), did not begin color production until later, with the first color Doctor Who episodes not airing until 1970. (The first four, comprising the story Spearhead from Space, were shot on film owing to a technicians' strike, with videotape being used thereafter.) A marginal number of UK viewers still use black-and-white TV sets; the number of black-and-white licenses issued was 212,000 in 2000 and 6,586 in 2019. The last country in Europe to introduce color television was Romania in 1983. Asia and the Pacific In Japan, NHK and NTV introduced color television, using a variation of the NTSC system (called NTSC-J), on 10 September 1960, making Japan the first country in Asia to introduce color television. The Philippines (1966) and Taiwan (1969) also adopted the NTSC system. Other countries in the region instead used the PAL system, starting with Australia (1967, originally scheduled for 1972, but not fully implemented until 1975–1978), and then Thailand (1967–69; this country converted from 525-line NTSC to 625-line PAL), Hong Kong (1967–70), the People's Republic of China (1970, but not fully implemented until 1984), New Zealand (1973), North Korea (1974), Singapore (1974), Indonesia (1974, but not fully implemented until 1979–82), Pakistan (1976, but not fully implemented until 1982), Kazakhstan (1977), Vietnam (1977), Malaysia (1978, but not fully implemented until 1980), India (1979, but not fully implemented until 1982–86), Burma (1980), and Bangladesh (1980). South Korea did not introduce color television (using NTSC) until 1980–1981, although it was already manufacturing color television sets for export. The last country in Asia and the world to introduce color television was Cambodia in 1986. China The People's Republic of China began plans and early testing for color TV as early as 1960, but these were quickly cancelled. China started testing again in 1970 and adopted PAL the next year. Regular full-time color broadcasts began on what is now CCTV-2 in October 1973, and full-time color transmissions on CCTV's then-two channels began in July 1977. The following provinces and areas of China introduced color television by the years as stated: Beijing (1973) Shanghai (1974) Jilin (1977) Tibet and Inner Mongolia (1979) Ningxia (1980) Xinjiang (1982, peripheral in 1984) Henan (1983) Middle East Nearly all of the countries in the Middle East use PAL.
The first country in the Middle East to introduce color television was Lebanon in 1967. Jordan, Iraq, and Oman followed in the early 1970s. Saudi Arabia, the United Arab Emirates, Kuwait, Bahrain, and Qatar followed in the mid-1970s, but Israel and Cyprus continued to broadcast in black and white until the early 1980s. Israeli television even erased the color signals using a device called the mehikon. Africa The first color television service in Africa was introduced on the Tanzanian island of Zanzibar, in 1973, using PAL. Also in 1973, MBC of Mauritius broadcast the OCAMM Conference in color, using SECAM. At the time, South Africa did not have a television service at all, owing to opposition from the apartheid regime, but in 1976, one was finally launched. Nigeria adopted PAL for color transmissions in 1974 in the Benue Plateau state in the north central region of the country, but countries such as Zimbabwe and Ghana continued with black and white until 1982 and 1985 respectively. The Sierra Leone Broadcasting Service (SLBS) started television broadcasting in 1963 as a cooperation between the SLBS and commercial interests; coverage was extended to all districts in 1978, when the service was also upgraded to color. South America Unlike most other countries in the Americas, which had adopted NTSC, Brazil began broadcasting in color using PAL-M, on 19 February 1972. Ecuador was the first South American country to broadcast in color using NTSC, on 5 November 1974. In 1978, Argentina started international broadcasting in color using PAL-B in connection with the country's hosting of the FIFA World Cup. However, domestic broadcasting remained black and white until 1 May 1980, when regular color broadcasting started using PAL-N, a variation of PAL-B specially suited to Argentina, Uruguay and Paraguay. In April 1978, Chile officially adopted color television using the NTSC standard. This led to experimental broadcasts during the Viña del Mar Festival and the widespread use of color TV during the 1978 FIFA World Cup, followed by the charity event Teletón in December of the same year. Some other countries in South America, including Bolivia, Paraguay, Peru, and Uruguay (1981), did not broadcast full-time color television until the early 1980s. Cor Dillen, director and later CEO of the South American branch of Philips, was responsible for bringing color television to South America. Color standards There are three main analog broadcast television systems in use around the world: PAL (Phase Alternating Line), NTSC (National Television System Committee), and SECAM (Séquentiel Couleur à Mémoire, or Sequential Color with Memory). The system used in the Americas and part of the Far East is NTSC. Most of Asia, Western Europe, Australia, Africa, and eastern South America use PAL (though Brazil and Cambodia use a hybrid PAL-M system). Eastern Europe and France use SECAM. Generally, a device (such as a television) can only read or display video encoded to a standard that the device is designed to support; otherwise, the source must be converted (such as when European programs are broadcast in North America or vice versa). This table illustrates the differences: [1] For SECAM the color sub-carrier alternates between 4.25000 MHz for the lines containing the Db color signal and 4.40625 MHz for the Dr signal (both are frequency modulated, unlike both PAL and NTSC, which are phase modulated).
The frequency of the sub-carrier is the only means that the decoder has of determining which color difference signal is actually being transmitted. Digital television broadcasting standards, such as ATSC, DVB-T, DVB-T2, and ISDB, have superseded these analog transmission standards in many countries. See also Ban on CBS Color TVs Beam-index tube Triniscope References Further reading External links "Television in Color". Popular Mechanics. April 1944. One of the earliest magazine articles detailing the new technology of color television. "TV Color Controversy". Life. 27 February 1950. About the FCC debating which color television system to approve for US broadcasts. Television Television technology Telecommunications-related introductions in 1940 Display technology
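As a small illustration of the line-alternating subcarrier scheme described in the note above, the sketch below maps scan lines to the colour-difference component and subcarrier frequency they carry, and shows how a decoder can tell the two apart by frequency alone. The frequencies come from the note; which line parity carries Db and which carries Dr is an arbitrary assumption made only for the example.

```python
# Minimal sketch of SECAM line alternation; the parity assignment is an assumption.
DB_SUBCARRIER_HZ = 4_250_000   # lines carrying the Db colour-difference signal
DR_SUBCARRIER_HZ = 4_406_250   # lines carrying the Dr colour-difference signal

def component_for_line(line_number: int):
    """Return (component, subcarrier frequency in Hz) for a given scan line."""
    if line_number % 2 == 0:
        return "Db", DB_SUBCARRIER_HZ
    return "Dr", DR_SUBCARRIER_HZ

def identify_component(measured_hz: float) -> str:
    # As the article notes, the subcarrier frequency is the decoder's only way of
    # knowing which colour-difference signal a line carries.
    db_error = abs(measured_hz - DB_SUBCARRIER_HZ)
    dr_error = abs(measured_hz - DR_SUBCARRIER_HZ)
    return "Db" if db_error < dr_error else "Dr"

for line in range(1, 5):
    print(line, *component_for_line(line))    # alternates Dr, Db, Dr, Db, ...
```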
Color television
[ "Technology", "Engineering" ]
10,615
[ "Information and communications technology", "Electronic engineering", "Television technology", "Display technology" ]
162,874
https://en.wikipedia.org/wiki/Pentamidine
Pentamidine is an antimicrobial medication used to treat African trypanosomiasis, leishmaniasis, Balamuthia infections, and babesiosis, and to prevent and treat pneumocystis pneumonia (PCP) in people with poor immune function. In African trypanosomiasis it is used for early disease, before central nervous system involvement, as a second-line option to suramin. It is an option for both visceral leishmaniasis and cutaneous leishmaniasis. Pentamidine can be given by injection into a vein or muscle or by inhalation. Common side effects of the injectable form include low blood sugar, pain at the site of injection, nausea, vomiting, low blood pressure, and kidney problems. Common side effects of the inhaled form include wheezing, cough, and nausea. It is unclear if doses should be changed in those with kidney or liver problems. Pentamidine is not recommended in early pregnancy but may be used in later pregnancy. Its safety during breastfeeding is unclear. Pentamidine is in the aromatic diamidine family of medications. While the way the medication works is not entirely clear, it is believed to involve decreasing the production of DNA, RNA, and protein. Pentamidine came into medical use in 1937. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In regions of the world where trypanosomiasis is common, pentamidine is provided for free by the World Health Organization (WHO). Medical uses Treatment of PCP caused by Pneumocystis jirovecii Prevention of PCP in adults with HIV who have one or both of the following: History of PCP CD4+ count ≤ 200 cells/mm³ Treatment of leishmaniasis Treatment of African trypanosomiasis caused by Trypanosoma brucei gambiense Balamuthia infections Pentamidine is classified as an orphan drug by the U.S. Food and Drug Administration. Other uses Use as an antitumor drug has also been proposed. Pentamidine has also been identified as a potential small-molecule antagonist that disrupts the interaction between S100P and the RAGE receptor. Special Populations Pregnancy Pentamidine has not been shown to cause birth defects in animal studies when given intravenously. There are no controlled studies to show whether pentamidine can harm the fetus in pregnant women. It is only recommended if the drug of choice, trimethoprim-sulfamethoxazole, is contraindicated. Breastfeeding There is no information regarding the excretion of pentamidine in breast milk, but since the adverse effects on breastfed infants are currently unknown, the manufacturer recommends either that the infant not be breastfed or that the mother stop the drug. Risks versus benefits for the mother should be considered when making this decision. Children Pentamidine can be used for the prevention of PCP in children with HIV who cannot tolerate trimethoprim/sulfamethoxazole and can use a nebulizer. Intravenous solutions of pentamidine should only be used in children with HIV older than 2 years when other treatments are unavailable. Elderly There is no data for the use of pentamidine in this specific population.
Contraindications Patients with a history of anaphylaxis or hypersensitivity to pentamidine isethionate Side effects Common Burning pain, dryness, or sensation of lump in throat Chest pain Coughing difficulty in breathing difficulty in swallowing skin rash wheezing Rare Nausea and vomiting Pain in upper abdomen, possibly radiating to the back Severe pain in side of chest Shortness of breath Others Blood: Pentamidine frequently causes leukopenia and less often thrombopenia, which may cause symptomatic bleeding. Some cases of anemia, possibly related to folic acid deficiency, have been described. Cardiovascular: Hypotension, which may be severe. Severe or fatal arrhythmias and heart failure are quite frequent. Kidney: 25 percent develop signs of nephrotoxicity ranging from mild, asymptomatic azotemia (increased serum creatinine and urea) to irreversible renal failure. Ample fluids or intravenous hydration may prevent some nephrotoxicity. Liver: Elevated liver enzymes are associated with intravenous use of pentamidine. Hepatomegaly and hepatitis have been encountered with long term prophylactic use of pentamidine inhalation. Neurological: Dizziness, drowsiness, neuralgia, confusion, hallucinations, seizures and other central side effects are reported. Pancreas: Hypoglycemia that requires symptomatic treatment is frequently seen. On the other hand, pentamidine may cause or worsen diabetes mellitus. Respiratory: Cough and bronchospasm, most frequently seen with inhalation. Skin: Severe local reactions after extravasculation of intravenous solutions or following intramuscular injection treatment have been seen. Pentamidine itself may cause rash, or rarely Stevens–Johnson syndrome or Lyell syndrome. Eye discomfort, conjunctivitis, throat irritation, splenomegaly, Herxheimer reaction, electrolyte imbalances (e.g. hypocalcemia). Drug interactions The additional or sequential use of other nephrotoxic drugs like aminoglycosides, amphotericin B, capreomycin, colistin, polymyxin B, vancomycin, foscarnet, or cisplatin should be closely monitored, or whenever possible completely avoided. Mechanism of action The mechanism seems to vary with different organisms and is not well understood. However, pentamidine is suspected to work through various methods of interference of critical functions in DNA, RNA, phospholipid and protein synthesis. Pentamidine binds to adenine-thymine-rich regions of the Trypanosoma parasite DNA, forming a cross-link between two adenines four to five base pairs apart. The drug also inhibits topoisomerase enzymes in the mitochondria of Pneumocystis jirovecii. Similarly, pentamidine inhibits type II topoisomerase in the mitochondria of the Trypanosoma parasite, resulting in a broken and unreadable mitochondrial genome. Resistance Strains of the Trypanosoma brucei parasite that are resistant to pentamidine have been discovered. Pentamidine is brought into the mitochondria through carrier proteins, and the absence of these carriers prevents the drug from reaching its site of action. Pharmacokinetics Absorption: Pentamidine is completely absorbed when given intravenously or intramuscularly. When inhaled through a nebulizer, pentamidine accumulates in the bronchoalveolar fluid of the lungs at a higher concentration compared to injections. The inhaled form is minimally absorbed in the blood. Absorption is unreliable when given orally. Distribution: When injected, pentamidine binds to tissues and proteins in the plasma. 
It accumulates in the kidney, liver, lungs, pancreas, spleen, and adrenal glands. Additionally, pentamidine does not reach curative levels in the cerebrospinal fluid. It has a volume of distribution of 286-1356 liters when given intravenously and 1658-3790 liters when given intramuscularly. Inhaled pentamidine is mainly deposited into the bronchoalveolar lavage fluid of the lungs. Metabolism: Pentamidine is primarily metabolized by Cytochrome P450 enzymes in the liver. Up to 12% of pentamidine is eliminated in the urine unchanged. Elimination: Pentamidine has an average half-life of 5–8 hours when given intravenously and 7–11 hours when given intramuscularly. However, these may increase with severe kidney problems. Pentamidine can remain in the system for as long as 8 months after the first injection. Chemistry Pentamidine isethionate for injection is commercially available as a lyophilized, white crystalline powder for reconstitution with sterile water or 5% Dextrose. After reconstitution, the mixture should be free from discoloration and precipitation. Reconstitution with sodium chloride should be avoided due to formation of precipitates. Intravenous solutions of pentamidine can be mixed with intravenous HIV medications like zidovidine and intravenous heart medications like diltiazem. However, intravenous solutions of antiviral foscarnet and antifungal fluconazole are incompatible with pentamidine. To avoid side-effects associated with intravenous administration, the solution should be slowly infused to minimize the release of histamine. History Pentamidine was first used to treat African trypanosomiasis in 1937 and leishmaniasis in 1940 before it was registered as pentamidine mesylate in 1950. The sudden increase in requests for use of Pentamidine isethionate in then unlicensed form from the CDC in the early 1980s for treating Pneumocystis jirovecii in young male patients was key in identifying the emergence of the HIV/AIDS epidemic at that time. Its efficacy against Pneumocystis jirovecii was demonstrated in 1987, following its re-emergence on the drug market in 1984 in the current isethionate form. Trade names and dose form For oral inhalation and for nebulizer use: NebuPent Nebulizer (APP Pharmaceuticals LLC - US) For intravenous and intramuscular use: US and Canada: Pentacarinat 300 injection powder 300 mg vial (Avantis Pharma Inc - Canada) Pentam 300 (APP Pharmaceuticals LLC - US) Pentamidine isethionate 300 mg for injection (David Bull Laboratories LTD - Canada, Hospira Healthcare Corporation - Canada) International Brands: Pentamidine isethionate (Abbott) Pentacarinat (Sanofi-Aventis) Pentacrinat (Abbott) Pentam (Abbott) Pneumopent See also Netropsin Lexitropsin References External links Antifungals Antiprotozoal agents Amidines DNA-binding substances NMDA receptor antagonists Phenol ethers World Health Organization essential medicines Wikipedia medicine articles ready to translate
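As a rough numerical illustration of the elimination half-lives quoted in the pharmacokinetics section above, the sketch below applies a simple one-compartment, first-order decay model. This is a deliberate simplification assumed only for illustration: as the text notes, pentamidine distributes widely into tissue and can remain detectable for months, so its real kinetics are multi-compartment.

```python
import math

# Illustrative only: single-compartment, first-order elimination using the half-life
# ranges quoted above (about 5-8 hours IV, 7-11 hours IM). Not a dosing tool.

def remaining_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of an initial plasma amount remaining after hours_elapsed."""
    k = math.log(2) / half_life_hours      # first-order elimination rate constant
    return math.exp(-k * hours_elapsed)

for t_half in (5.0, 8.0):                  # IV half-life range from the text
    frac = remaining_fraction(24.0, t_half)
    print(f"half-life {t_half} h: {frac:.1%} of the plasma level remains after 24 h")
```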
Pentamidine
[ "Chemistry", "Biology" ]
2,215
[ "Genetics techniques", "Antiprotozoal agents", "Amidines", "Functional groups", "DNA-binding substances", "Biocides", "Bases (chemistry)" ]
162,971
https://en.wikipedia.org/wiki/STMicroelectronics
STMicroelectronics NV (commonly referred to as ST or STMicro) is a European multinational semiconductor contract manufacturing and design company. It is the largest of such companies in Europe. It was founded in 1987 from the merger of two state-owned semiconductor corporations: Thomson Semiconducteurs of France and SGS Microelettronica of Italy. The company is incorporated in the Netherlands and headquartered in Plan-les-Ouates, Switzerland. Its shares are traded on Euronext Paris, the Borsa Italiana and the New York Stock Exchange. History ST was formed in 1987 by the merger of two government-owned semiconductor companies: Italian SGS Microelettronica (where SGS stands for Società Generale Semiconduttori, "General Semiconductor Company"), and French Thomson Semiconducteurs, the semiconductor arm of Thomson. SGS Microelettronica originated in 1972 from a previous merger of two companies: ATES (Aquila Tubi e Semiconduttori), a vacuum tube and semiconductor maker headquartered in L'Aquila, the regional capital of the region of Abruzzo in Southern Italy, which in 1961 changed its name to Azienda Tecnica ed Elettronica del Sud and relocated its manufacturing plant in the Industrial Zone of Catania, in Sicily; Società Generale Semiconduttori (founded in 1957 by Jewish-Italian engineer, politician, and industrialist Adriano Olivetti). Thomson Semiconducteurs was created in 1982 by the French government's widespread nationalization of industries following the election of François Mitterrand to the presidency. It included: the semiconductor activities of the French electronics company Thomson; in 1985 it bought Mostek, a US company founded in 1969 as a spin-off of Texas Instruments, from United Technologies; Silec, founded in 1977; Eurotechnique, founded in 1979 in Rousset, Bouches-du-Rhône as a joint-venture between Saint-Gobain of France and US-based National Semiconductor; EFCIS (Étude et la Fabrication de Circuits Intégrés Spéciaux), founded in 1972 at CEA-Leti; SESCOSEM, founded in 1969. At the time of the merger of these two companies in 1987, the new corporation was named SGS-THOMSON and was led by chief executive officer Pasquale Pistorio. The company took its current name of STMicroelectronics in May 1998 following Thomson's sale of its shares. After its creation ST was ranked 14th among the top 20 semiconductor suppliers with sales of around US$850 million. The company has participated in the consolidation of the semiconductor industry since its formation, with acquisitions including: In 1989, British company Inmos known for its transputer microprocessors from parent Thorn EMI; In 1994, Canada-based Nortel's semiconductor activities; In 1999, UK-based VLSI-Vision CMOS Image Sensor research & development company, a spin-out of Edinburgh University. Incorporated on 1 January 2000, the company became STMicroelectronics Imaging Division, currently part of the Analog MEMS and Sensors business group; In 2000, WaferScale Integration Inc. (WSI, Fremont, California), a vendor of EPROM and flash memory-based programmable system-chips; In 2002, Alcatel's Microelectronics division, which along with the incorporation of smaller ventures such as UK company, Synad Ltd, helped the company expand into the Wireless-LAN market; In 2007, US-based Genesis Microchip. Genesis Microchip is known for their strength in video processing technology (Faroudja) and has design centres located in Santa Clara, California, Toronto, Taipei City and Bangalore. 
On 8 December 1994, the company completed its initial public offering on the Paris and New York stock exchanges. Owner Thomson SA sold its stake in the company in 1998 when the company also listed on the Italian Bourse in Milan. In 2002, Motorola and TSMC joined ST and Philips in a new technology partnership. The Crolles 2 Alliance was created with a new 12" wafer manufacturing facility located in Crolles, France. In 2005, chief executive officer Pasquale Pistorio was succeeded by Carlo Bozotti, who then headed the memory products division and had been with the company’s predecessor since 1977. By 2005, ST was ranked fifth, behind Intel, Samsung, Texas Instruments and Toshiba, but ahead of Infineon, Renesas, NEC, NXP Semiconductors and Freescale. The company was the largest European semiconductors supplier, ahead of Infineon and NXP. Early in 2007, NXP Semiconductors (formerly Philips Semiconductors) and Freescale (formerly Motorola Semiconductors) decided to stop their participation in Crolles 2 Alliance. Under the terms of the agreement the Alliance came to an end on December 31, 2007. On May 22, 2007, ST and Intel created a joint venture in the memory application called Numonyx: this new company merged ST and Intel Flash Memory activities. Semiconductor market consolidation continued with ST and NXP announcing on April 10, 2008, the creation of a new joint venture of their mobile activities, with ST owning 80% of the new company and NXP 20%. This joint venture began on August 20, 2008. On February 10, 2009, ST Ericsson, a joint venture bringing together ST-NXP Wireless and Ericsson Mobile Platforms, was established. ST Ericsson was a multinational manufacturer of wireless products and semiconductors, supplying to mobile device manufacturers. ST-Ericsson was a 50/50 joint venture of STMicroelectronics and Ericsson established on February 3, 2009, and dissolved on August 2, 2013. Headquartered in Geneva, Switzerland, it was a fabless company, outsourcing semiconductor manufacturing to foundry companies. In 2011, ST announced the creation of a joint lab with Sant'Anna School of Advanced Studies. The lab focuses on research and innovation in biorobotics, smart systems and microelectronics. Past collaborations with Sant'Anna School of Advanced Studies included DustBot, a platform that integrated self-navigating "service robots" for waste collection. In 2015, the MEMS division of ST was ranked as the biggest European competitor of Silex Microsystems. In 2018, chief executive Carlo Bozotti was succeeded by Jean-Marc Chery. In 2023, STMicroelectronics partnered with Synopsys to design a working chip on Microsoft Corp’s cloud, marking the first time AI software had been utilized for chip design. In 2024, ST became the sixth shareholder of Quintauris, a joint company with the goal of standardizing RISC-V ecosystem. Shareholders As of December 31, 2014, the shareholders were: 68.4% public (New York Stock Exchange, Euronext Paris, Borsa Italiana Milano); 4.1% treasury shares; 27.6% STMicroelectronics Holding B.V.: 50% FT1CI (Bpifrance 79.2% and French Alternative Energies and Atomic Energy Commission (CEA) 20.8%; previously ); 50% Ministry of Economy and Finance of Italy . Manufacturing facilities Unlike fabless semiconductor companies, STMicroelectronics owns and operates its own semiconductor wafer fabs. The company owned five 8-inch (200 mm) wafer fabs and 1 12-inch (300 mm) wafer fab in 2006. 
Most of the production is scaled at 0.18 μm, 0.13 μm, 90 nm and 65 nm (measurements of transistor gate length). STMicroelectronics also owns back-end plants, where silicon dies are assembled and bonded into plastic or ceramic packages. Major sites include: Grenoble, France Grenoble is one of the company's most important R&D centres, employing around 4,000 staff. The Polygone site employs 2,200 staff and is one of the historical bases of the company (ex SGS). All the historical wafer fab lines are now closed, but the site hosts the headquarters of many divisions (marketing, design, industrialization) and an R&D centre focused on silicon and software design and fab process development. The Crolles site hosts an 8-inch (200 mm) and a 12-inch (300 mm) fab and was originally built as a common R&D centre for submicrometre technologies as part of the 1990 Grenoble 92 partnership between SGS-Thomson and CNET, the R&D center of French telecom company France Telecom. The fab was inaugurated by French president Jacques Chirac on 27 February 2003. It includes an R&D centre which focuses on developing new nanometric technology processes at the 90 nm to 32 nm scale on 300 mm wafers, and it was developed for the Crolles 2 Alliance. This alliance of STMicroelectronics, TSMC, NXP Semiconductors (formerly Philips Semiconductors) and Freescale (formerly Motorola Semiconductors) partnered in 2002 to develop the facility and to work together on process development. The technologies developed at the facility were also used by global semiconductor foundry TSMC of Taiwan, allowing TSMC to build the products developed in Crolles on behalf of the Alliance partners who required such foundry capacity. Rousset, France Employing around 3,000 staff, Rousset hosts several division headquarters, including smartcards, microcontrollers, and EEPROM, as well as several R&D centers. Rousset also hosts an 8-inch (200 mm) fab, which was opened on May 15, 2000, by French prime minister Lionel Jospin. The site opened in 1979 as a fab operated by Eurotechnique, a joint venture between Saint-Gobain of France and National Semiconductor of the US. Rousset was sold to Thomson-CSF in 1982 as part of the French government's 1981–82 nationalization of several industries. As part of the nationalisation, a former Thomson plant in the center of Aix-en-Provence, operating since the 1960s, was closed and staff were transferred to the new Rousset site. The original fab was upgraded twice, most recently in 1996, and is now being shut down. The site also has a "Wafer Level Chip Scale Packaging" accreditation for eSIM ICs. In 1988, a small group of employees from the Thomson Rousset plant (including the director, Marc Lassus) founded a start-up company, Gemalto (formerly known as Gemplus), which became a leader in the smartcard industry. Tours, France Employing 1,500 staff, this site hosts a fab and R&D centres. Milan, Italy Employing 6,000 staff, the Milan facilities match Grenoble in importance. Agrate Brianza employs around 4,000 staff and is a historical base of the company (ex SGS). The site has several fab lines and an R&D center. Castelletto employs 300 to 400 staff and hosts some divisions and R&D centres. Catania, Italy The Catania plant in Sicily employs 5,000 staff and hosts several R&D centers and divisions, focusing on flash memory technologies, as well as two fabs. The plant was launched in 1961 by ATES, operating under a licensing arrangement with RCA of the US and initially using germanium.
The site's two major wafer fabs are one opened in April 1997 by then-Italian Prime Minister Romano Prodi, and another that was never completed and was transferred in its existing state to Numonyx in 2008. A new manufacturing facility for 150 mm silicon carbide (SiC) substrates was scheduled to open here in 2023. In October 2022, the EU supported STMicroelectronics with €293 million through the Recovery and Resilience Facility, in line with the European Chips Act, for the construction of a silicon carbide wafer plant in Catania to be completed in 2026. Caserta, Italy STMicroelectronics SIM and eSIM production facility for embedded-form-factor eSIMs. Kirkop, Malta As of 2010, ST employed around 1,800 people in Kirkop, making it the largest private sector employer, and the country's leading exporter. Singapore In 1970, SGS created its first assembly back-end plant in Singapore, in the area of Toa Payoh. Then in 1981, SGS decided to build a wafer fab in Singapore. Converted over time to an 8-inch (200 mm) fab, this is now an important wafer fab of the group. Ang Mo Kio also hosts some design centres. As of 2004, the site employed 6,000 staff. Tunis, Tunisia Application, design and support; about 110 employees. Bouskoura, Morocco Founded in 1979 as a radiofrequency products facility, the Bouskoura site now hosts back-end manufacturing activity, which includes chip testing and packaging. Since 2022 it also features a production line for silicon carbide products that will primarily be used in electric vehicles. Norrköping, Sweden The Norrköping plant is a wafer fab that, at the start of production in 2021, was the first to produce 200 mm (8 in) silicon carbide wafers. The wafers are mostly used for SiC power devices. Other sites Administrative headquarters Geneva, Switzerland: Corporate headquarters, which hosts most of ST's top management; it totals a few hundred employees. Saint-Genis-Pouilly, France, near Geneva: A few hundred employees. Headquarters for logistics. Paris: Marketing and support. Regional headquarters Coppell, Texas: US headquarters. Singapore: Headquarters for the Asia-Pacific region. Tokyo: Headquarters for Japan and Korea operations. Shanghai: Headquarters for China operations. Assembly plants Malta: In 1981, SGS-Thomson (now STMicroelectronics) built its first assembly plant in Malta. STMicroelectronics is, as of 2008, the largest private employer on the island, employing around 1,800 people. Muar, Malaysia: around 4,000 employees. This site was built in 1974 by Thomson and is now an assembly plant. Shenzhen, Guangdong province, China: In 1994, ST and the Shenzhen Electronics Group signed a partnership to construct and jointly operate an assembly plant (ST holds a 60% majority). The plant is located in the Futian Free Trade Zone and became operational in 1996. It has around 3,300 employees. A new assembly plant was built in Longgang starting in 2008 but was closed by 2014. The R&D, design, sales and marketing office is located in the Hi-tech industrial park in Nanshan, Shenzhen. Calamba in the province of Laguna, Philippines: In 2008, ST acquired this plant from NXP Semiconductors. It was initially part of a joint venture with NXP, but ST later acquired the whole share, turning it into a full-fledged STMicroelectronics assembly and testing plant. It currently employs 2,000 people. Design centres Cairo, Egypt: Hardware and software design center, started in 2020, with 50 employees. Rabat, Morocco: A design center that employs 160 people. Naples, Italy: A design center employing 300 people.
Lecce, Italy: HW & SW Design Center which hosts 20 researchers in the Advanced System Technology group. Ang Mo Kio, Singapore: In 1970, SGS created its first assembly back-end plant in Singapore, in the area of Toa Payoh. Then in 1981, SGS decided to build a wafer fab in Singapore. The Singapore technical engineers have been trained in Italy and the fab of Ang Mo Kio started to produce its first wafers in 1984. Converted up to 8 inch (200 mm) fab, this is now an important 8 inch (200 mm) wafer fab of the ST group. Greater Noida, India: The Noida site was launched in 1992 to conduct software engineering activities. A silicon design centre was inaugurated in 1995. With 120 employees, it was the largest design center of the company outside Europe at the time. In 2006, the site was shifted to Greater Noida for further expansion. The site hosts mainly design teams. Santa Clara, California, (Silicon Valley), United States: 120 staff in marketing, design and applications. La Jolla, California, (San Diego, United States): 80 staff in design and applications. Lancaster, Pennsylvania, United States: Application, support, and marketing. Prague, Czech Republic: 100 to 200 employees. Application, design and support. Tunis, Tunisia: 110 employees. Application, design and support. Sophia Antipolis, near Nice, France: Design center with a few hundred employees. Edinburgh, Scotland: 200 staff focused in the field of imaging and photon detection. Ottawa, Ontario, Canada: In 1993, SGS-Thomson purchased the semiconductor activities of Nortel which owned in Ottawa an R&D center and a fab. The fab was closed in 2000, however, a design, R&D centre and sales office is operating in the city. Toronto, Ontario, Canada: HW & SW Design Center primarily involved with the design of video processor ICs as part of ST's TVM Division. Bangalore, India: HW and SW design center employing more than 250 people (Including the employees of ST Ericsson and Genesis Microchip). Zaventem, Belgium: 100 employees. Design & Application Center. Helsinki, Finland: Design Center. Turku, Finland: Design Center. Oulu, Finland: Design Center. Tampere, Finland: Design Center. Longmont, Colorado United States: Design Center. Graz, Austria: NFC Competence Center. Pisa, Italy: A design center employing more than 50 people. R&D, analog and digital design. Closing sites The Phoenix, Arizona 8 inch (200 mm) fab, the Carrollton, Texas 6 inch (150 mm) fab, and the Ain Sebaa, Morocco fab were beginning rampdown plans, and were destined to close by 2010. The Casablanca, Morocco site consists of two assembly parts (Bouskoura and Aïn Sebaâ) and totals around 4000 employees. It was opened in the 1960s by Thomson. The Bristol, United Kingdom site employing well over 300 at its peak (in 2001/2) but was ramped down to approx. 150 employees at close by early 2014. The Ottawa, Ontario, Canada plant (approx. 450 employees) was to be close down by 2013 end. Closed sites Rennes, France hosted a 6-inch (150 mm) fab and was closed in 2004 Rancho Bernardo, California, US a 4-inch (100 mm) fab created by Nortel and purchased by SGS-Thomson in 1994, after which it was converted into a 6-inch (150 mm) fab in 1996. SGS's first presence in the US was a sales office based in Phoenix in the early 1980s. Later, under SGS-Thomson, an 8-inch (200 mm) fab was completed in Phoenix in 1995. The company's second 8" fab after Crolles 1, the site was first dedicated to producing microprocessors for Cyrix. 
On 10 July 2007, ST said that it would close this site, and in July 2010 the shell of the Phoenix PF1 FAB was bought by Western Digital Corporation. The Carrollton, Texas, US site was built in 1969 by Mostek, an American company founded by former employees of Texas Instruments. In 1979, Mostek was acquired by United Technologies, which sold it to Thomson Semiconducteurs in 1985. Initially equipped with a 4-inch (100 mm) fab, it was converted into a 6-inch (150 mm) fab in 1988. The activities of INMOS in the US were transferred to Carrollton in 1989 following its acquisition by SGS Thomson. It was closed in 2010. Bristol, UK This R&D site housed Inmos, which in 1978 began development of the Transputer microprocessor. The site was acquired with Inmos in 1989, and was primarily involved with the design of home video and entertainment products (e.g. Set-Top Box), GPS chips, and accompanying software. At its peak the site employed more than 250 employees. The site closed in 2014. Future locations On 8 August 2007, ST bought Nokia's microchip development team and plans to invest heavily in development of cellular ASIC applications. The purchase included Nokia's ASIC team in Southwood (UK) and the company plans several sites in Finland. In June 2023, ST announced its partnership with GlobalFoundries to build a new factory in Crolles, France. See also Altitude SEE Test European Platform (ASTEP) Interuniversity Microelectronics Centre (IMEC) Numonyx ST-Ericsson List of semiconductor fabrication plants STM8 STM32 STMicroelectronics Small Shareholders' Group (STM.S.S.G.) Collectif Autonome et Démocratique de STMicroelectronics (CAD-ST) References External links Electronics companies established in 1987 Companies listed on Euronext Paris Semiconductor companies of Switzerland Electronics companies of France Electronics companies of Italy Government-owned companies of Italy Manufacturing companies based in Geneva Partly privatized companies of Italy Photovoltaics manufacturers Government-owned companies of France Multinational companies headquartered in Switzerland CAC 40 Companies in the FTSE MIB MEMS factories
STMicroelectronics
[ "Materials_science", "Engineering" ]
4,478
[ "Photovoltaics manufacturers", "Microelectronic and microelectromechanical systems", "MEMS factories", "Engineering companies" ]
162,999
https://en.wikipedia.org/wiki/Plesiochronous%20system
In telecommunications, a plesiochronous system is one where different parts of the system are almost, but not quite, perfectly synchronised. According to ITU-T standards, a pair of signals are plesiochronous if their significant instants occur at nominally the same rate, with any variation in rate being constrained within specified limits. A sender and receiver operate plesiosynchronously if they operate at the same nominal clock frequency but may have a slight clock frequency mismatch, which leads to a drifting phase. The mismatch between the two systems' clocks is known as the plesiochronous difference. In general, plesiochronous systems behave similarly to synchronous systems, except they must employ some means in order to cope with "sync slips", which will happen at intervals due to the plesiochronous nature of the system. The most common example of a plesiochronous system design is the plesiochronous digital hierarchy networking standard. The asynchronous serial communication protocol is asynchronous on the byte level, but plesiochronous on the bit level. The receiver detects the start of a byte by detecting a transition that may occur at a random time after the preceding byte. The indefinite wait and lack of external synchronization signals makes byte detection asynchronous. Then the receiver samples at predefined intervals to determine the values of the bits in the byte; this is plesiochronous since it depends on the transmitter to transmit at roughly the same rate the receiver expects, without coordination of the rate while the bits are being transmitted. The modern tendency in systems engineering is towards using systems that are either fundamentally asynchronous (such as Ethernet), or fundamentally synchronous (such as synchronous optical networking), and layering these where necessary, rather than using a mixture between the two in a single technology. The term plesiochronous comes from the Greek πλησίος plesios ("near") and χρόνος chrónos ("time"). See also Buffer (telecommunication) Clock drift Isochronous timing Jitter Mesochronous network Synchronization in telecommunications References Network architecture Synchronization
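The asynchronous-serial example above lends itself to a small sketch: the receiver resynchronises on each start-bit transition and then samples at its own nominal bit period, so any plesiochronous difference between the two clocks accumulates as phase drift across the frame. The 1% mismatch and 8-bit frame used below are illustrative assumptions, not figures from the text.

```python
# Sketch of plesiochronous bit sampling in an asynchronous serial link.

def sample_offsets(bit_period_rx: float, n_bits: int):
    # The receiver samples each data bit at its estimated centre,
    # measured from the detected start-bit edge (bit 0 is the start bit).
    return [(i + 0.5) * bit_period_rx for i in range(1, n_bits + 1)]

def drift_at_last_bit(tx_period: float, rx_period: float, n_bits: int) -> float:
    # Accumulated timing error at the final data bit, as a fraction of one bit period.
    return abs(n_bits * (rx_period - tx_period)) / tx_period

tx_period = 1.00        # transmitter bit period (normalised)
rx_period = 1.01        # receiver clock runs 1% slow: the plesiochronous difference

print(sample_offsets(rx_period, 3))                    # [1.515, 2.525, 3.535]
print(drift_at_last_bit(tx_period, rx_period, 8))      # ~0.08 bit periods of drift by bit 8
```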
Plesiochronous system
[ "Engineering" ]
492
[ "Network architecture", "Telecommunications engineering", "Computer networks engineering", "Synchronization" ]
163,005
https://en.wikipedia.org/wiki/Genlock
Genlock (generator locking) is a common technique where the video output of one source (or a specific reference signal from a signal generator) is used to synchronize other picture sources together. The aim in video applications is to ensure the coincidence of signals in time at a combining or switching point. When video instruments are synchronized in this way, they are said to be generator-locked, or genlocked. Possible problems Video signals generated and output by generator-locked instruments are said to be syntonized. Syntonized video signals will be precisely frequency-locked, but because of delays caused by the unequal transmission path lengths, the synchronized signals will exhibit differing phases at various points in the television system. Modern video equipment such as production switchers that have multiple video inputs often include a variable delay on each input to compensate for the phase differences and time all the input signals to precise phase coincidence. Where two or more video signals are combined or being switched between, the horizontal and vertical timing of the picture sources should be coincident with each other. If they are not, the picture will appear to jump when switching between the sources whilst the display device re-adjusts the horizontal and/or vertical scan to correctly reframe the image. Where composite video is in use, the phase of the chrominance subcarrier of each source being combined or switched should also be coincident. This is to avoid changes in colour hue and/or saturation during a transition between sources. Scope Generator locking can be used to synchronize as few as two isolated sources (e.g., a television camera and a videotape machine feeding a vision mixer (production switcher)), or in a wider facility where all the video sources are locked to a single synchronizing pulse generator (e.g., a fast-paced sporting event featuring multiple cameras and recording devices). Generator locking can also be used to ensure that multiple CRT monitors that appear in a movie are flicker-free. Generator locking is also used to synchronize two cameras for Stereoscopic 3D video recording. In broadcast systems, an analog generator-lock signal usually consists of vertical and horizontal synchronizing pulses together with chrominance phase reference in the form of colorburst. No picture information is usually carried to avoid disturbing the timing signals, and the name reference, black and burst, color black, or black burst is usually given to such a signal. A composite colour video signal inherently carries the same reference signals and can be used as a generator-locking signal, albeit at the risk of being disturbed by out-of-specification picture signals. Although some high-definition broadcast systems may use a standard-definition reference signal as a generator-locking reference signal, the use of tri-level synchronising pulses directly related to the frame and line rate is increasing within HD systems. A tri-level sync pulse is a signal that initially goes from 0 volts DC to a negative voltage, then a positive voltage, before returning to zero volts DC again. The voltage excursions are typically 300 mV either side of zero volts, and the duration each of the two excursions is the same as a particular number of digital picture samples. Connections Most television studio and professional video cameras have dedicated generator-locking ports on the camera. 
If the camera is tethered with a triaxial cable or optical fibre cable, the analog generator-locking signal is used to lock the camera control unit, which in turn locks the camera head by means of information carried within a data channel transmitted along the cable. If the camera is an ENG-type camera, one without a triax/fibre connection or without a dockable head, the generator-locking signal is carried through a separate cable from the video. Variants Natlock is a picture-source synchronizing system using audio tone signals to describe the timing discrepancies between composite video signals, while Icelock uses digital information conveyed in the vertical blanking interval of a composite video signal. See also Frame synchronizer (video) Time base corrector References Synchronization Film and video technology Broadcast engineering Television terminology
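As a toy illustration of the tri-level sync pulse described above (a signal that rests at 0 V, swings roughly 300 mV negative, then roughly 300 mV positive, and returns to 0 V), the sketch below builds such a waveform as a list of voltage samples. In real HD standards each excursion lasts a defined number of digital picture samples; the sample count used here is a placeholder, not a value taken from any particular standard.

```python
# Toy tri-level sync generator; timing values are placeholders.

def tri_level_sync(samples_per_excursion: int = 44, excursion_volts: float = 0.300):
    """Return voltage samples: blanking, negative excursion, positive excursion, blanking."""
    blank = [0.0] * samples_per_excursion
    negative = [-excursion_volts] * samples_per_excursion
    positive = [+excursion_volts] * samples_per_excursion
    return blank + negative + positive + blank

waveform = tri_level_sync()
print(min(waveform), max(waveform), len(waveform))   # -0.3 0.3 176
```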
Genlock
[ "Engineering" ]
848
[ "Broadcast engineering", "Electronic engineering", "Telecommunications engineering", "Synchronization" ]
163,066
https://en.wikipedia.org/wiki/Price%E2%80%93earnings%20ratio
The price–earnings ratio, also known as P/E ratio, P/E, or PER, is the ratio of a company's share (stock) price to the company's earnings per share. The ratio is used for valuing companies and to find out whether they are overvalued or undervalued. As an example, if a share is trading at eight times its earnings per share for the most recent 12-month period, it has a P/E ratio of 8. Put another way, the purchaser of the share is expecting 8 years to recoup the share price. Companies with losses (negative earnings) or no profit have an undefined P/E ratio (usually shown as "not applicable" or "N/A"); sometimes, however, a negative P/E ratio may be shown. There is a general consensus among most investors that a P/E ratio of around 20 is 'fairly valued'. Versions There are multiple versions of the P/E ratio, depending on whether earnings are projected or realized, and the type of earnings. "Trailing P/E" divides the current share price by earnings per share for the most recent 12-month period, where earnings per share is net income divided by the weighted average number of common shares in issue over that period. This is the most common meaning of "P/E" if no other qualifier is specified. Monthly earnings data for individual companies are not available, and in any case usually fluctuate seasonally, so the previous four quarterly earnings reports are used and earnings per share are updated quarterly. Note that each company chooses its own financial year, so the timing of updates varies from one to another. "Trailing P/E from continued operations" uses operating earnings, which exclude earnings from discontinued operations, extraordinary items (e.g. one-off windfalls and write-downs), and accounting changes. "Forward P/E": Instead of net income, this uses estimated net earnings over the next 12 months. Estimates are typically derived as the mean of those published by a select group of analysts (selection criteria are rarely cited). Some people also use the formula market capitalization divided by net income to calculate the P/E ratio. This formula often gives the same answer as share price divided by earnings per share (if new capital has been issued during the period, it gives the wrong answer), since market capitalization equals the current share price multiplied by the current number of shares in issue, whereas earnings per share divides net income by the weighted average number of shares over the period. Variations on the standard trailing and forward P/E ratios are common. Generally, alternative P/E measures substitute different measures of earnings, such as rolling averages over longer periods of time (to attempt to "smooth" volatile or cyclical earnings, for example), or "corrected" earnings figures that exclude certain extraordinary events or one-off gains or losses. The definitions may not be standardized. For companies that are loss-making, or whose earnings are expected to change dramatically, a "primary" P/E can be used instead, based on earnings projections made for coming years, to which a discount calculation is applied. Interpretation The price/earnings ratio (PER) is the most widely used method for determining whether shares are "correctly" valued in relation to one another. But the PER does not in itself indicate whether the share is a bargain. The PER depends on the market's perception of the risk and future growth in earnings. A company with a low PER indicates that the market perceives it as higher risk or lower growth or both as compared to a company with a higher PER. The PER of a listed company's share is the result of the collective perception of the market as to how risky the company is and what its earnings growth prospects are in relation to those of other companies.
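A minimal sketch of the trailing and forward calculations described above, together with the earnings yield (the inverse of the P/E ratio) mentioned later in the article. The function names and all figures are invented for illustration and do not refer to any real company.

```python
# Illustrative P/E calculations; every number here is made up.

def trailing_pe(share_price: float, net_income_ttm: float, weighted_avg_shares: float) -> float:
    eps_ttm = net_income_ttm / weighted_avg_shares       # trailing-12-month earnings per share
    return share_price / eps_ttm

def forward_pe(share_price: float, estimated_eps_next_12m: float) -> float:
    return share_price / estimated_eps_next_12m

price = 24.0
print(trailing_pe(price, net_income_ttm=300e6, weighted_avg_shares=100e6))   # 24 / 3.00 = 8.0
print(forward_pe(price, estimated_eps_next_12m=4.0))                         # 24 / 4.00 = 6.0
print(1.0 / trailing_pe(price, 300e6, 100e6))                                # earnings yield = 0.125 (12.5%)
```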
Investors use the PER to compare their own perception of the risk and growth of a company against the market's collective perception of the risk and growth as reflected in the current PER. If investors believe that their perception is superior to that of the market, they can make the decision to buy or sell accordingly. Historical P/E ratios for the U.S. stock market Since 1900, the average P/E ratio for the S&P 500 index has ranged from 4.78 in Dec 1920 to 44.20 in Dec 1999. However, except for some brief periods, during 1920–1990 the market P/E ratio was mostly between 10 and 20. The average P/E of the market varies in relation with, among other factors, expected growth of earnings, expected stability of earnings, expected inflation, and yields of competing investments. For example, when U.S. treasury bonds yield high returns, investors pay less for a given earnings per share and P/E's fall. The average U.S. equity P/E ratio from 1900 to 2005 is 14 (or 16, depending on whether the geometric mean or the arithmetic mean, respectively, is used to average). Jeremy Siegel has suggested that the average P/E ratio of about 15 (or earnings yield of about 6.6%) arises due to the long-term returns for stocks of about 6.8%. In Stocks for the Long Run, (2002 edition) he had argued that with favorable developments like the lower capital gains tax rates and transaction costs, P/E ratio in "low twenties" is sustainable, despite being higher than the historic average. Set out below are the recent year end values of the S&P 500 index and the associated P/E as reported. For a list of recent contractions (recessions) and expansions see U.S. Business Cycle Expansions and Contractions. Note that at the height of the Dot-com bubble P/E had risen to 32. The collapse in earnings caused P/E to rise to 46.50 in 2001. It has declined to a more sustainable region of 17. Its decline in recent years has been due to higher earnings growth. Due to the collapse in earnings and rapid stock market recovery following the 2020 Coronavirus Crash, the trailing P/E ratio reached 38.3 on October 12, 2020. This elevated level was only attained twice in history, 2001-2002 and 2008-2009. In business culture The P/E ratio of a company is a major focus for many managers. They are usually paid in company stock or options on their company's stock (a form of payment that is supposed to align the interests of management with the interests of other stock holders). The stock price can increase in one of two ways: either through improved earnings or through an improved multiple that the market assigns to those earnings. In turn, the primary drivers for multiples such as the P/E ratio is through higher and more sustained earnings growth rates. Consequently, managers have strong incentives to boost earnings per share, even in the short term, and/or improve long-term growth rates. This can influence business decisions in several ways: If a company wants to acquire companies with a higher P/E ratio than its own, it usually prefers paying in cash or debt rather than in stock. Though in theory the method of payment makes no difference to value, doing it this way offsets or avoids earnings dilution (see accretion/dilution analysis). Conversely, companies with higher P/E ratios than their targets are more tempted to use their stock to pay for acquisitions. Companies with high P/E ratios but volatile earnings may be tempted to find ways to smooth earnings and diversify risk—this is the theory behind building conglomerates. 
Conversely, companies with low P/E ratios may be tempted to acquire small high-growth businesses in an effort to "rebrand" their portfolio of activities and burnish their image as growth stocks and thus obtain a higher PE rating. Companies try to smooth earnings, for example by "slush fund accounting" (hiding excess earnings in good years to cover for losses in lean years). Such measures are designed to create the image that the company always slowly but steadily increases profits, with the goal to increase the P/E ratio. Companies with low P/E ratios are usually more open to leveraging their balance sheet. As seen above, this mechanically lowers the P/E ratio, which means the company looks cheaper than it did before leverage, and also improves earnings growth rates. Both of these factors help drive up the share price. Strictly speaking, the ratio is measured in years, since the price is measured in dollars and earnings are measured in dollars per year. Therefore, the ratio demonstrates how many years it takes to cover the price, if earnings stay the same. Investor expectations In general, a high price–earning ratio indicates that investors are expecting higher growth of company's earnings in the future compared to companies with a lower price–earning ratio. A low price–earning ratio may indicate either that a company may currently be undervalued or that the company is doing exceptionally well relative to its past trends. The price-to-earnings ratio can also be seen as a means of standardizing the value of one dollar of earnings throughout the stock market. In theory, by taking the median of P/E ratios over a period of several years, one could formulate something of a standardized P/E ratio, which could then be seen as a benchmark and used to indicate whether or not a stock is worth buying. In private equity, the extrapolation of past performance is driven by stale investments. State and local governments that are more fiscally stressed by higher unfunded pension liabilities assume higher portfolio returns through higher inflation assumptions, but this factor does not attenuate the extrapolative effects of past returns. Negative earnings When a company has no earnings or is posting losses, in both cases P/E will be expressed as "N/A." Though it is possible to calculate a negative P/E, this is not the common convention. Related measures Cyclically adjusted price-to-earnings ratio Price-to-earnings-growth ratio Present value of growth opportunities Price-to-dividend ratio Return on investment Social earnings ratio EV/Ebitda Earnings yield – the inverse of price–earnings ratio See also Cyclically adjusted price-to-earnings ratio Fundamental analysis Index of accounting articles Outline of economics Market value Price–sales ratio Stock market bubble Stock market crash Stock valuation using discounted cash flows Value investing Valuation using multiples Tobin's q References Financial ratios
Price–earnings ratio
[ "Mathematics" ]
2,071
[ "Financial ratios", "Quantity", "Metrics" ]
163,103
https://en.wikipedia.org/wiki/Future
The future is the time after the past and present. Its arrival is considered inevitable due to the existence of time and the laws of physics. Due to the apparent nature of reality and the unavoidability of the future, everything that currently exists and will exist can be categorized as either permanent, meaning that it will exist forever, or temporary, meaning that it will end. In the Occidental view, which uses a linear conception of time, the future is the portion of the projected timeline that is anticipated to occur. In special relativity, the future is considered absolute future, or the future light cone. In the philosophy of time, presentism is the belief that only the present exists and the future and the past are unreal. Religions consider the future when they address issues such as karma, life after death, and eschatologies that study what the end of time and the end of the world will be. Religious figures such as prophets and diviners have claimed to see into the future. Future studies, or futurology, is the science, art, and practice of postulating possible futures. Modern practitioners stress the importance of alternative and plural futures, rather than one monolithic future, and the limitations of prediction and probability, versus the creation of possible and preferable futures. Predeterminism is the belief that the past, present, and future have been already decided. The concept of the future has been explored extensively in cultural production, including art movements and genres devoted entirely to its elucidation, such as the 20th-century movement futurism. In physics In physics, time is the fourth dimension. Physicists argue that spacetime can be understood as a sort of stretchy fabric that bends due to forces such as gravity. In classical physics the future is just a half of the timeline, which is the same for all observers. In special relativity the flow of time is relative to the observer's frame of reference. The faster an observer is traveling away from a reference object, the slower that object seems to move through time. Hence, the future is not an objective notion anymore. A more modern notion is absolute future, or the future light cone. While a person can move backward or forwards in the three spatial dimensions, many physicists argue you are only able to move forward in time. One of the outcomes of Special Relativity Theory is that a person can travel into the future (but never come back) by traveling at very high speeds. While this effect is negligible under ordinary conditions, space travel at very high speeds can change the flow of time considerably. As depicted in many science fiction stories and movies (e.g. Déjà Vu), a person traveling for even a short time at near light speed will return to an Earth that is many years in the future. Some physicists claim that by using a wormhole to connect two regions of spacetime a person could theoretically travel in time. Physicist Michio Kaku points out that to power this hypothetical time machine and "punch a hole into the fabric of space-time" would require the energy of a star. Another theory is that a person could travel in time with cosmic strings. In philosophy In the philosophy of time, presentism is the belief that only the present exists, and the future and past are unreal. Past and future "entities" are construed as logical constructions or fictions. The opposite of presentism is 'eternalism', which is the belief that things in the past and things yet to come exist eternally. 
Another view (not held by many philosophers) is sometimes called the 'growing block' theory of time—which postulates that the past and present exist, but the future does not. Presentism is compatible with Galilean relativity, in which time is independent of space, but is probably incompatible with Lorentzian/Albert Einsteinian relativity in conjunction with certain other philosophical theses that many find uncontroversial. Saint Augustine proposed that the present is a knife edge between the past and the future and could not contain any extended period of time. Contrary to Saint Augustine, some philosophers propose that conscious experience is extended in time. For instance, William James said that time is "...the short duration of which we are immediately and incessantly sensible." Augustine proposed that God is outside of time and present for all times, in eternity. Other early philosophers who were presentists include the Buddhists (in the tradition of Indian Buddhism). A leading scholar from the modern era on Buddhist philosophy is Stcherbatsky, who has written extensively on Buddhist presentism. In psychology Human behavior is known to encompass anticipation of the future. Anticipatory behavior can be the result of a psychological outlook toward the future, for example optimism, pessimism, and hope. Optimism is an outlook on life such that one maintains a view of the world as a positive place. People would say that optimism is seeing the glass "half full" of water as opposed to half empty. It is the philosophical opposite of pessimism. Optimists generally believe that people and events are inherently good, so that most situations work out in the end for the best. Hope is a belief in a positive outcome related to events and circumstances in one's life. Hope implies a certain amount of despair, wanting, wishing, suffering or perseverance—i.e., believing that a better or positive outcome is possible even when there is some evidence to the contrary. "Hopefulness" is somewhat different from optimism in that hope is an emotional state, whereas some theories point to optimism as a conclusion reached through a deliberate thought pattern that leads to positive personal attitudes and by extension is linked to more philanthropic behaviours. Pessimism, as stated before, is the opposite of optimism. It is the tendency to see, anticipate, or emphasize only bad or undesirable outcomes, results, or problems. The word originates in Latin from Pessimus meaning worst and Malus meaning bad and has a link to misanthropic belief systems. In religion Religions consider the future when they address issues such as karma, life after death, and eschatologies which consider what the end of time and the end of the world will be like. In religion, major prophets are said to have the power to change the future. Common religious figures have claimed to see into the future, such as minor prophets and diviners. The term "afterlife" refers to the continuation of existence of the soul, spirit or mind of a human (or animal) after physical death, typically in a spiritual or ghostlike afterworld. Deceased persons are usually believed to go to a specific region or plane of existence in this afterworld, often depending on the rightness of their actions during life. Some believe the afterlife includes some form of preparation for the soul to transfer to another body (reincarnation). The major views on the afterlife derive from religion, esotericism and metaphysics. 
There are those who are skeptical of the existence of the afterlife, or believe that it is absolutely impossible, such as the materialist-reductionists, who believe that the topic is supernatural and therefore does not really exist or is unknowable. In metaphysical models, theists generally believe some sort of afterlife awaits people when they die. Atheists generally do not believe in a life after death. Members of some generally non-theistic religions, such as Buddhism, tend to believe in an afterlife like reincarnation but without reference to God. Agnostics generally hold the position that, like the existence of God, the existence of supernatural phenomena, such as souls or life after death, is unverifiable and therefore unknowable. Many religions, whether they believe in the soul's existence in another world like Christianity, Islam and many pagan belief systems, or in reincarnation like many forms of Hinduism and Buddhism, believe that one's status in the afterlife is a reward or punishment for their conduct during life, with the exception of Calvinistic variants of Protestant Christianity, which believe one's status in the afterlife is a gift from God and cannot be earned during life. Eschatology is a part of theology and philosophy concerned with the final events in human history, or the ultimate destiny of humanity, commonly referred to as the end of the world. While in mysticism the phrase refers metaphorically to the end of ordinary reality and reunion with the Divine, in many traditional religions it is taught as an actual future event prophesied in sacred texts or folklore. More broadly, eschatology may encompass related concepts such as the Messiah or Messianic Age, the end time, and the end of days. In grammar In grammar, actions are classified according to one of the following twelve verb tenses: past (past, past continuous, past perfect, or past perfect continuous), present (present, present continuous, present perfect, or present perfect continuous), or future (future, future continuous, future perfect, or future perfect continuous). The future tense refers to actions that have not yet happened, but which are due, expected, or may occur in the future. For example, in the sentence, "She will walk home," the verb "will walk" is in the future tense because it refers to an action that is going to, or may, happen at a point in time beyond the present. Verbs in the future continuous tense indicate actions that will happen beyond the present and will continue for a period of time. In the sentence, "She will be walking home," the verb phrase "will be walking" is in the future continuous tense because the action described is not happening now, but will happen sometime afterwards and is expected to continue happening for some time. Verbs in the future perfect tense indicate actions that will be completed at a particular point in the future. For example, the verb phrase, "will have walked," in the sentence, "She will have walked home," is in the future perfect tense because it refers to an action that is completed as of a specific time in the future. Finally, verbs in the future perfect continuous tense combine the features of the perfect and continuous tenses, describing the future status of actions that have been happening continually from now or the past through to a particular time in the future. 
In the sentence, "She will have been walking home," the verb phrase "will have been walking" is in the future perfect continuous tense because it refers to an action that the speaker anticipates will be finished in the future. Another way to think of the various future tenses is that actions described by the future tense will be completed at an unspecified time in the future, actions described by the future continuous tense will keep happening in the future, actions described by the future perfect tense will be completed at a specific time in the future, and actions described by the future perfect continuous tense are expected to be continuing as of a specific time in the future. Linear and cyclic culture The linear view of time (common in Western thought) draws a stronger distinction between past and future than does the more common cyclic time of cultures such as India, where past and future can coalesce much more readily. Futures studies Futures studies or futurology is the science, art, and practice of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them. Futures studies seek to understand what is likely to continue, what is likely to change, and what is novel. Part of the discipline thus seeks a systematic and pattern-based understanding of past and present, and to determine the likelihood of future events and trends. A key part of this process is understanding the potential future impact of decisions made by individuals, organizations, and governments. Leaders use the results of such work to assist in decision-making. Futures is an interdisciplinary field, studying yesterday's and today's changes, and aggregating and analyzing both lay and professional strategies, and opinions with respect to tomorrow. It includes analyzing the sources, patterns, and causes of change and stability in the attempt to develop foresight and to map possible futures. Modern practitioners stress the importance of alternative and plural futures, rather than one monolithic future, and the limitations of prediction and probability, versus the creation of possible and preferable futures. Three factors usually distinguish futures studies from the research conducted by other disciplines (although all disciplines overlap, to differing degrees). First, futures studies often examines not only possible but also probable, preferable, and "wild card" futures. Second, futures studies typically attempts to gain a holistic or systemic view based on insights from a range of different disciplines. Third, futures studies challenges and unpacks the assumptions behind dominant and contending views of the future. The future thus is not empty but fraught with hidden assumptions. Futures studies do not generally include the work of economists who forecast movements of interest rates over the next business cycle, or of managers or investors with short-term time horizons. Most strategic planning, which develops operational plans for preferred futures with time horizons of one to three years, is also not considered futures. But plans and strategies with longer time horizons that specifically attempt to anticipate and be robust to possible future events, are part of a major subdiscipline of futures studies called strategic foresight. The futures field also excludes those who make future predictions through professed supernatural means. At the same time, it does seek to understand the model's such groups use and the interpretations they give to these models. 
Forecasting Forecasting is the process of estimating outcomes in uncontrolled situations. Forecasting is applied in many areas, such as weather forecasting, earthquake prediction, transport planning, and labour market planning. Due to the element of the unknown, risk and uncertainty are central to forecasting. Statistically based forecasting employs time series with cross-sectional or longitudinal data. Econometric forecasting methods use the assumption that it is possible to identify the underlying factors that might influence the variable that is being forecast. If the causes are understood, projections of the influencing variables can be made and used in the forecast. Judgmental forecasting methods incorporate intuitive judgments, opinions, and probability estimates, as in the case of the Delphi method, scenario building, and simulations. Prediction is similar to forecasting but is used more generally, for instance, to also include baseless claims on the future. Organized efforts to predict the future began with practices like astrology, haruspicy, and augury. These are all considered to be pseudoscience today, evolving from the human desire to know the future in advance. Modern efforts such as futures studies attempt to predict technological and societal trends, while more ancient practices, such as weather forecasting, have benefited from scientific and causal modelling. Despite the development of cognitive instruments for the comprehension of the future, the stochastic and chaotic nature of many natural and social processes has made precise forecasting of the future elusive. In art and culture Futurism Futurism as an art movement originated in Italy at the beginning of the 20th century. It developed largely in Italy and in Russia, although it also had adherents in other countries—in England and Portugal for example. The Futurists explored every medium of art, including painting, sculpture, poetry, theatre, music, architecture, and even gastronomy. The Futurists had a passionate loathing of ideas from the past, especially political and artistic traditions. They also espoused a love of speed, technology, and violence. Futurists dubbed the love of the past passéisme. The car, the plane, and the industrial town were all legendary for the Futurists because they represented the technological triumph of people over nature. The Futurist Manifesto of 1909 declared: "We will glorify war—the world's only hygiene—militarism, patriotism, the destructive gesture of freedom-bringers, beautiful ideas worth dying for, and scorn for woman." Though it owed much of its character and some of its ideas to radical political movements, it had little involvement in politics until the autumn of 1913. Futurism in classical music arose during the same period. Closely identified with the central Italian Futurist movement were brother composers Luigi Russolo (1885–1947) and Antonio Russolo (1877–1942), who used instruments known as intonarumori—essentially sound boxes used to create music out of noise. Luigi Russolo's futurist manifesto, "The Art of Noises", is considered one of the most important and influential texts in 20th-century musical aesthetics. Other examples of futurist music include Arthur Honegger's "Pacific 231" (1923), which imitates the sound of a steam locomotive, Prokofiev's "The Steel Step" (1926), Alexander Mosolov's "Iron Foundry" (1927), and the experiments of Edgard Varèse. Literary futurism made its debut with F.T. Marinetti's Manifesto of Futurism (1909). 
Futurist poetry used unexpected combinations of images and hyper-conciseness (not to be confused with the actual length of the poem). Futurist theater works have scenes a few sentences long, use nonsensical humor, and try to discredit the deep-rooted dramatic traditions with parody. Longer literature forms, such as novels, had no place in the Futurist aesthetic, which had an obsession with speed and compression. Futurism expanded to encompass other artistic domains and ultimately included painting, sculpture, ceramics, graphic design, industrial design, interior design, theatre design, textiles, drama, literature, music and architecture. In architecture, it featured a distinctive thrust towards rationalism and modernism through the use of advanced building materials. The ideals of futurism remain as significant components of modern Western culture; the emphasis on youth, speed, power and technology finding expression in much of modern commercial cinema and commercial culture. Futurism has produced several reactions, including the 1980s-era literary genre of cyberpunk—which often treated technology with a critical eye. Science fiction More generally, one can regard science fiction as a broad genre of fiction that often involves speculations based on current or future science or technology. Science fiction is found in books, art, television, films, games, theater, and other media. Science fiction differs from fantasy in that, within the context of the story, its imaginary elements are largely possible within scientifically established or scientifically postulated laws of nature (though some elements in a story might still be pure imaginative speculation). Settings may include the future, or alternative time-lines, and stories may depict new or speculative scientific principles (such as time travel or psionics), or new technology (such as nanotechnology, faster-than-light travel or robots). Exploring the consequences of such differences is the traditional purpose of science fiction, making it a "literature of ideas". Some science fiction authors construct a postulated history of the future called a "future history" that provides a common background for their fiction. Sometimes authors publish a timeline of events in their history, while other times the reader can reconstruct the order of the stories from information in the books. Some published works constitute "future history" in a more literal sense—i.e., stories or whole books written in the style of a history book but describing events in the future. Examples include H.G. Wells' The Shape of Things to Come (1933)—written in the form of a history book published in the year 2106 and in the manner of a real history book with numerous footnotes and references to the works of (mostly fictitious) prominent historians of the 20th and 21st centuries. See also References Philosophy of time Time
Future
[ "Physics", "Mathematics" ]
4,038
[ "Physical quantities", "Time", "Future", "Quantity", "Philosophy of time", "Spacetime", "Wikipedia categories named after physical quantities" ]
163,106
https://en.wikipedia.org/wiki/Ethane
Ethane is a naturally occurring organic chemical compound with the chemical formula C2H6. At standard temperature and pressure, ethane is a colorless, odorless gas. Like many hydrocarbons, ethane is isolated on an industrial scale from natural gas and as a petrochemical by-product of petroleum refining. Its chief use is as feedstock for ethylene production. The ethyl group is formally, although rarely practically, derived from ethane. History Ethane was first synthesised in 1834 by Michael Faraday, by applying electrolysis to a potassium acetate solution. He mistook the hydrocarbon product of this reaction for methane and did not investigate it further. The process is now called Kolbe electrolysis: CH3COO− → CH3• + CO2 + e− CH3• + •CH3 → C2H6 During the period 1847–1849, in an effort to vindicate the radical theory of organic chemistry, Hermann Kolbe and Edward Frankland produced ethane by the reductions of propionitrile (ethyl cyanide) and ethyl iodide with potassium metal, and, as did Faraday, by the electrolysis of aqueous acetates. They mistook the product of these reactions for the methyl radical (CH3•), of which ethane (C2H6) is a dimer. This error was corrected in 1864 by Carl Schorlemmer, who showed that the product of all these reactions was in fact ethane. Ethane was discovered dissolved in Pennsylvanian light crude oil by Edmund Ronalds in 1864. Properties At standard temperature and pressure, ethane is a colorless, odorless gas. It has a boiling point of −88.5 °C and a melting point of −182.8 °C. Solid ethane exists in several modifications. On cooling under normal pressure, the first modification to appear is a plastic crystal, crystallizing in the cubic system. In this form, the positions of the hydrogen atoms are not fixed; the molecules may rotate freely around the long axis. Further cooling changes it to monoclinic metastable ethane II (space group P 21/n). Ethane is only very sparingly soluble in water. The bond parameters of ethane have been measured to high precision by microwave spectroscopy and electron diffraction: rC−C = 1.528(3) Å, rC−H = 1.088(5) Å, and ∠CCH = 111.6(5)° by microwave and rC−C = 1.524(3) Å, rC−H = 1.089(5) Å, and ∠CCH = 111.9(5)° by electron diffraction (the numbers in parentheses represent the uncertainties in the final digits). Rotating a molecular substructure about a twistable bond usually requires energy. The minimum energy to produce a 360° bond rotation is called the rotational barrier. Ethane gives a classic, simple example of such a rotational barrier, sometimes called the "ethane barrier". Some of the earliest experimental evidence for this barrier was obtained by modelling the entropy of ethane. The three hydrogens at each end are free to pinwheel about the central carbon–carbon bond when provided with sufficient energy to overcome the barrier. The physical origin of the barrier is still not completely settled, although the overlap (exchange) repulsion between the hydrogen atoms on opposing ends of the molecule is perhaps the strongest candidate, with the stabilizing effect of hyperconjugation on the staggered conformation contributing to the phenomenon. Theoretical methods that use an appropriate starting point (orthogonal orbitals) find that hyperconjugation is the most important factor in the origin of the ethane rotation barrier. As far back as 1890–1891, chemists suggested that ethane molecules preferred the staggered conformation with the two ends of the molecule askew from each other. 
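The shape of this rotational barrier can be made concrete with a short calculation. The sketch below uses the standard three-fold cosine form of the torsional potential; the barrier height of roughly 12 kJ/mol is a commonly cited literature value and is an assumption here rather than a figure taken from this article.

import math

# Minimal sketch of the ethane torsional potential, assuming the standard
# three-fold cosine form V(phi) = (V3 / 2) * (1 + cos(3 * phi)), with phi = 0
# at the eclipsed conformation. V3 is an assumed literature value (~12 kJ/mol),
# not a number taken from the text above.
V3 = 12.0  # assumed barrier height, kJ/mol

def torsional_energy(phi_degrees: float) -> float:
    """Torsional energy (kJ/mol) at a given H-C-C-H dihedral angle."""
    phi = math.radians(phi_degrees)
    return (V3 / 2.0) * (1.0 + math.cos(3.0 * phi))

# The eclipsed conformation (0 degrees) sits at the top of the barrier and the
# staggered conformation (60 degrees) at the minimum, matching the preference
# described above.
for angle in (0, 20, 40, 60):
    print(f"{angle:3d} deg -> {torsional_energy(angle):5.2f} kJ/mol")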
Atmospheric and extraterrestrial Ethane occurs as a trace gas in the Earth's atmosphere, currently having a concentration at sea level of 0.5 ppb. Global ethane quantities have varied over time, likely due to flaring at natural gas fields. Global ethane emission rates declined from 1984 to 2010, though increased shale gas production at the Bakken Formation in the U.S. has arrested the decline by half. Although ethane is a greenhouse gas, it is much less abundant than methane, has a lifetime of only a few months compared to over a decade, and is also less efficient at absorbing radiation relative to mass. In fact, ethane's global warming potential largely results from its conversion in the atmosphere to methane. It has been detected as a trace component in the atmospheres of all four giant planets, and in the atmosphere of Saturn's moon Titan. Atmospheric ethane results from the Sun's photochemical action on methane gas, also present in these atmospheres: ultraviolet photons of shorter wavelengths than 160 nm can photo-dissociate the methane molecule into a methyl radical and a hydrogen atom. When two methyl radicals recombine, the result is ethane: CH4  →  CH3• + •H CH3• + •CH3  →  C2H6 In Earth's atmosphere, hydroxyl radicals convert ethane to methanol vapor with a half-life of around three months. It is suspected that ethane produced in this fashion on Titan rains back onto the moon's surface, and over time has accumulated into hydrocarbon seas covering much of the moon's polar regions. In mid-2005, the Cassini orbiter discovered Ontario Lacus in Titan's south polar regions. Further analysis of infrared spectroscopic data presented in July 2008 provided additional evidence for the presence of liquid ethane in Ontario Lacus. Several significantly larger hydrocarbon lakes, Ligeia Mare and Kraken Mare being the two largest, were discovered near Titan's north pole using radar data gathered by Cassini. These lakes are believed to be filled primarily by a mixture of liquid ethane and methane. In 1996, ethane was detected in Comet Hyakutake, and it has since been detected in some other comets. The existence of ethane in these distant solar system bodies may implicate ethane as a primordial component of the solar nebula from which the sun and planets are believed to have formed. In 2006, Dale Cruikshank of NASA/Ames Research Center (a New Horizons co-investigator) and his colleagues announced the spectroscopic discovery of ethane on Pluto's surface. Chemistry The reactions of ethane involve chiefly free radical reactions. Ethane can react with the halogens, especially chlorine and bromine, by free-radical halogenation. This reaction proceeds through the propagation of the ethyl radical: Cl2  →  2 Cl• Cl• + C2H6  →  C2H5• + HCl C2H5• + Cl2  →  C2H5Cl + Cl• The combustion of ethane releases 1559.7 kJ/mol, or 51.9 kJ/g, of heat (a short arithmetic check of this conversion is given below), and produces carbon dioxide and water according to the chemical equation: 2 C2H6 + 7 O2  →  4 CO2 + 6 H2O + 3120 kJ Combustion may also occur without an excess of oxygen, yielding carbon monoxide, acetaldehyde, methane, methanol, and ethanol. At higher temperatures, ethylene is a significant product: 2 C2H6 + O2  →  2 C2H4 + 2 H2O Such oxidative dehydrogenation reactions are relevant to the production of ethylene. Production After methane, ethane is the second-largest component of natural gas. Natural gas from different gas fields varies in ethane content from less than 1% to more than 6% by volume. 
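Returning briefly to the heat of combustion quoted in the chemistry paragraph above, the per-gram figure follows directly from the molar value and the molar mass of ethane. The short check below assumes the standard molar mass of about 30.07 g/mol, which is not stated in this article.

# Arithmetic check of the 51.9 kJ/g figure, assuming a molar mass of ~30.07 g/mol.
heat_per_mol_kj = 1559.7        # kJ/mol, from the text above
molar_mass_g_per_mol = 30.07    # g/mol for C2H6 (assumed standard value)

heat_per_gram = heat_per_mol_kj / molar_mass_g_per_mol
print(f"{heat_per_gram:.1f} kJ/g")  # prints 51.9 kJ/g, matching the quoted value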
Prior to the 1960s, ethane and larger molecules were typically not separated from the methane component of natural gas, but simply burnt along with the methane as a fuel. Today, ethane is an important petrochemical feedstock and is separated from the other components of natural gas in most well-developed gas fields. Ethane can also be separated from petroleum gas, a mixture of gaseous hydrocarbons produced as a byproduct of petroleum refining. Ethane is most efficiently separated from methane by liquefying it at cryogenic temperatures. Various refrigeration strategies exist: the most economical process presently in wide use employs a turboexpander, and can recover more than 90% of the ethane in natural gas. In this process, chilled gas is expanded through a turbine, reducing the temperature to approximately −100 °C. At this low temperature, gaseous methane can be separated from the liquefied ethane and heavier hydrocarbons by distillation. Further distillation then separates ethane from the propane and heavier hydrocarbons. Usage The chief use of ethane is the production of ethylene (ethene) by steam cracking. Steam cracking of ethane is fairly selective for ethylene, while the steam cracking of heavier hydrocarbons yields a product mixture poorer in ethylene and richer in heavier alkenes (olefins), such as propene (propylene) and butadiene, and in aromatic hydrocarbons. Ethane has been investigated as a feedstock for other commodity chemicals. Oxidative chlorination of ethane has long appeared to be a potentially more economical route to vinyl chloride than ethylene chlorination. Many patents exist on this theme, but poor selectivity for vinyl chloride and corrosive reaction conditions have discouraged the commercialization of most of them. Presently, INEOS operates a 1000 t/a (tonnes per annum) ethane-to-vinyl chloride pilot plant at Wilhelmshaven in Germany. SABIC operates a 34,000 t/a plant at Yanbu to produce acetic acid by ethane oxidation. The economic viability of this process may rely on the low cost of ethane near Saudi oil fields, and it may not be competitive with methanol carbonylation elsewhere in the world. Ethane can be used as a refrigerant in cryogenic refrigeration systems. In the laboratory On a much smaller scale, in scientific research, liquid ethane is used to vitrify water-rich samples for cryo-electron microscopy. A thin film of water quickly immersed in liquid ethane at −150 °C or colder freezes too quickly for water to crystallize. Slower freezing methods can generate cubic ice crystals, which can disrupt soft structures by damaging the samples and reduce image quality by scattering the electron beam before it can reach the detector. Health and safety At room temperature, ethane is an extremely flammable gas. When mixed with air at 3.0%–12.5% by volume, it forms an explosive mixture. Ethane is not a carcinogen. See also Biogas: carbon-neutral alternative to natural gas Biorefining Biodegradable plastic Drop-in bioplastic References External links International Chemical Safety Card 0266 Market-Driven Evolution of Gas Processing Technologies for NGLs Staggered and eclipsed ethane Alkanes Industrial gases Greenhouse gases
Ethane
[ "Chemistry", "Environmental_science" ]
2,297
[ "Environmental chemistry", "Organic compounds", "Industrial gases", "Greenhouse gases", "Chemical process engineering", "Alkanes" ]
163,115
https://en.wikipedia.org/wiki/Interest%20rate
An interest rate is the amount of interest due per period, as a proportion of the amount lent, deposited, or borrowed (called the principal sum). The total interest on an amount lent or borrowed depends on the principal sum, the interest rate, the compounding frequency, and the length of time over which it is lent, deposited, or borrowed. The annual interest rate is the rate over a period of one year. Other interest rates apply over different periods, such as a month or a day, but they are usually annualized. The interest rate has been characterized as "an index of the preference . . . for a dollar of present [income] over a dollar of future income". The borrower wants, or needs, to have money sooner, and is willing to pay a fee—the interest rate—for that privilege. Influencing factors Interest rates vary according to: the government's directives to the central bank to accomplish the government's goals; the currency of the principal sum lent or borrowed; the term to maturity of the investment; the perceived default probability of the borrower; supply and demand in the market; the amount of collateral; special features like call provisions; reserve requirements; compensating balance; as well as other factors. Example A company borrows capital from a bank to buy assets for its business. In return, the bank charges the company interest. (The lender might also require rights over the new assets as collateral.) A bank will use the capital deposited by individuals to make loans to their clients. In return, the bank should pay interest to individuals who have deposited their capital. The amount of interest payment depends on the interest rate and the amount of capital they deposited. Related terms Base rate usually refers to the annualized effective interest rate offered on overnight deposits by the central bank or other monetary authority. The annual percentage rate (APR) may refer either to a nominal APR or an effective APR (EAPR). The difference between the two is that the EAPR accounts for fees and compounding, while the nominal APR does not. The annual equivalent rate (AER), also called the effective annual rate, is used to help consumers compare products with different compounding frequencies on a common basis, but does not account for fees. A discount rate is applied to calculate present value. For an interest-bearing security, coupon rate is the ratio of the annual coupon amount (the coupon paid per year) per unit of par value, whereas current yield is the ratio of the annual coupon divided by its current market price. Yield to maturity is a bond's expected internal rate of return, assuming it will be held to maturity, that is, the discount rate which equates all remaining cash flows to the investor (all remaining coupons and repayment of the par value at maturity) with the current market price. Based on the banking business, there are deposit interest rates and loan interest rates. Based on the relationship between the supply and demand of market interest rates, there are fixed interest rates and floating interest rates. Monetary policy Interest rate targets are a vital tool of monetary policy and are taken into account when dealing with variables like investment, inflation, and unemployment. The central banks of countries generally tend to reduce interest rates when they wish to increase investment and consumption in the country's economy. 
However, a low interest rate as a macro-economic policy can be risky and may lead to the creation of an economic bubble, in which large amounts of investments are poured into the real-estate market and stock market. In developed economies, interest-rate adjustments are thus made to keep inflation within a target range for the health of economic activities or cap the interest rate concurrently with economic growth to safeguard economic momentum. History In the past two centuries, interest rates have been variously set either by national governments or central banks. For example, the Federal Reserve federal funds rate in the United States has varied between about 0.25% and 19% from 1954 to 2008, while the Bank of England base rate varied between 0.5% and 15% from 1989 to 2009, and Germany experienced rates close to 90% in the 1920s down to about 2% in the 2000s. During an attempt to tackle spiraling hyperinflation in 2007, the Central Bank of Zimbabwe increased interest rates for borrowing to 800%. The interest rates on prime credits in the late 1970s and early 1980s were far higher than had been recorded – higher than previous US peaks since 1800, than British peaks since 1700, or than Dutch peaks since 1600; "since modern capital markets came into existence, there have never been such high long-term rates" as in this period. Before modern capital markets, there have been accounts that savings deposits could achieve an annual return of at least 25% and up to as high as 50%. Reasons for changes Political short-term gain: Lowering interest rates can give the economy a short-run boost. Under normal conditions, most economists think a cut in interest rates will only give a short-term gain in economic activity that will soon be offset by inflation. The quick boost can influence elections. Most economists advocate independent central banks to limit the influence of politics on interest rates. Deferred consumption: When money is loaned the lender delays spending the money on consumption goods. Since according to time preference theory people prefer goods now to goods later, in a free market there will be a positive interest rate. Inflationary expectations: Most economies generally exhibit inflation, meaning a given amount of money buys fewer goods in the future than it will now. The borrower needs to compensate the lender for this. Alternative investments: The lender has a choice between using his money in different investments. If he chooses one, he forgoes the returns from all the others. Different investments effectively compete for funds. Risks of investment: There is always a risk that the borrower will go bankrupt, abscond, die, or otherwise default on the loan. This means that a lender generally charges a risk premium to ensure that, across his investments, he is compensated for those that fail. Liquidity preference: People prefer to have their resources available in a form that can immediately be exchanged, rather than a form that takes time to realize. Taxes: Because some of the gains from interest may be subject to taxes, the lender may insist on a higher rate to make up for this loss. Banks: Banks tend to change the interest rate to either slow down or speed up economic growth. This involves either raising interest rates to slow the economy down, or lowering interest rates to promote economic growth. Economy: Interest rates can fluctuate according to the status of the economy. 
It will generally be found that if the economy is strong then the interest rates will be high; if the economy is weak, the interest rates will be low. Real versus nominal The nominal interest rate is the rate of interest with no adjustment for inflation. For example, suppose someone deposits $100 with a bank for one year, and they receive interest of $10 (before tax), so at the end of the year, their balance is $110 (before tax). In this case, regardless of the rate of inflation, the nominal interest rate is 10% per annum (before tax). The real interest rate measures the growth in real value of the loan plus interest, taking inflation into account. The repayment of principal plus interest is measured in real terms compared against the buying power of the amount at the time it was borrowed, lent, deposited or invested. If inflation is 10%, then the $110 in the account at the end of the year has the same purchasing power (that is, buys the same amount) as the $100 had a year ago. The real interest rate is zero in this case. The real interest rate is given by the Fisher equation: 1 + i = (1 + r)(1 + p), where i is the nominal interest rate, r is the real interest rate, and p is the inflation rate. For low rates and short periods, the linear approximation r ≈ i − p applies. The Fisher equation applies both ex ante and ex post. Ex ante, the rates are projected rates, whereas ex post, the rates are historical. Market rates There is a market for investments, including the money market, bond market, stock market, and currency market as well as retail banking. Interest rates reflect: the risk-free cost of capital, expected inflation, the risk premium, and transaction costs. Inflationary expectations According to the theory of rational expectations, borrowers and lenders form an expectation of inflation in the future. The acceptable nominal interest rate at which they are willing and able to borrow or lend includes the real interest rate they require to receive, or are willing and able to pay, plus the rate of inflation they expect. Risk The level of risk in investments is taken into consideration. Riskier investments such as shares and junk bonds are normally expected to deliver higher returns than safer ones like government bonds. The additional return above the risk-free nominal interest rate which is expected from a risky investment is the risk premium. The risk premium an investor requires on an investment depends on the risk preferences of the investor. Evidence suggests that most lenders are risk-averse. A maturity risk premium applied to a longer-term investment reflects a higher perceived risk of default. There are four kinds of risk: repricing risk, basis risk, yield curve risk, and optionality. Liquidity preference Most investors prefer their money to be in cash rather than in less fungible investments. Cash is on hand to be spent immediately if the need arises, but some investments require time or effort to transfer into spendable form. The preference for cash is known as liquidity preference. A 1-year loan, for instance, is very liquid compared to a 10-year loan. A 10-year US Treasury bond, however, is still relatively liquid because it can easily be sold on the market. A market model A basic interest rate pricing model for an asset is i_n = i_r + p_e + r_p + l_p, where i_n is the nominal interest rate on a given investment, i_r is the risk-free return to capital, i*_n is the nominal interest rate on a short-term risk-free liquid bond (such as U.S. treasury bills), 
r_p is a risk premium reflecting the length of the investment and the likelihood the borrower will default, l_p is a liquidity premium (reflecting the perceived difficulty of converting the asset into money and thus into goods), and p_e is the expected inflation rate. Assuming perfect information, p_e is the same for all participants in the market, and the interest rate model simplifies to i_n = i*_n + r_p + l_p (a short numerical sketch of these relationships is given below). Spread The spread of interest rates is the lending rate minus the deposit rate. This spread covers operating costs for banks providing loans and deposits. A negative spread is where a deposit rate is higher than the lending rate. In macroeconomics Output, unemployment and inflation Interest rates affect economic activity broadly, which is the reason why they are normally the main instrument of the monetary policies conducted by central banks. Changes in interest rates will affect firms' investment behaviour, either raising or lowering the opportunity cost of investing. Interest rate changes also affect asset prices like stock prices and house prices, which again influence households' consumption decisions through a wealth effect. Additionally, international interest rate differentials affect exchange rates and consequently exports and imports. These various channels are collectively known as the monetary transmission mechanism. Consumption, investment and net exports are all important components of aggregate demand. Consequently, by influencing the general interest rate level, monetary policy can affect overall demand for goods and services in the economy and hence output and employment. Changes in employment will over time affect wage setting, which again affects pricing and consequently ultimately inflation. The relation between employment (or unemployment) and inflation is known as the Phillips curve. For economies maintaining a fixed exchange rate system, determining the interest rate is also an important instrument of monetary policy as international capital flows are in part determined by interest rate differentials between countries. Interest rate setting in the United States The Federal Reserve (often referred to as 'the Fed') implements monetary policy largely by targeting the federal funds rate (FFR). This is the rate that banks charge each other for overnight loans of federal funds, which are the reserves held by banks at the Fed. Until the 2007–2008 financial crisis, the Fed relied on open market operations, i.e. selling and buying securities in the open market to adjust the supply of reserve balances so as to keep the FFR close to the Fed's target. However, since 2008 the actual conduct of monetary policy implementation has changed considerably, the Fed using instead various administered interest rates (i.e., interest rates that are set directly by the Fed rather than being determined by the market forces of supply and demand) as the primary tools to steer short-term market interest rates towards the Fed's policy target. Impact on savings and pensions Financial economists such as World Pensions Council (WPC) researchers have argued that durably low interest rates in most G20 countries will have an adverse impact on the funding positions of pension funds as "without returns that outstrip inflation, pension investors face the real value of their savings declining rather than ratcheting up over the next few years". Current interest rates in savings accounts often fail to keep up with the pace of inflation. 
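To make the Fisher relation and the simple pricing model above concrete, the short sketch below plugs in illustrative numbers. The premiums and the high-inflation example are assumptions chosen for demonstration, not data from this article.

# Real rate via the Fisher equation, exact and linearized, plus the additive
# pricing model i_n = i*_n + r_p + l_p discussed above. All inputs are
# illustrative assumptions.
def real_rate_exact(nominal: float, inflation: float) -> float:
    return (1.0 + nominal) / (1.0 + inflation) - 1.0

def real_rate_approx(nominal: float, inflation: float) -> float:
    return nominal - inflation

# The $100 -> $110 example with 10% inflation: purchasing power is unchanged.
print(real_rate_exact(0.10, 0.10))   # 0.0
print(real_rate_approx(0.10, 0.10))  # 0.0

# At higher rates the linearization drifts from the exact value (compare the
# mathematical note later in the article).
print(real_rate_exact(0.50, 0.40))   # ~0.071
print(real_rate_approx(0.50, 0.40))  # 0.10

# Simple additive pricing model with assumed example premiums.
i_star_n = 0.03   # short-term risk-free nominal rate (assumed)
r_p = 0.02        # risk premium (assumed)
l_p = 0.005       # liquidity premium (assumed)
print(i_star_n + r_p + l_p)          # implied nominal rate on the riskier, less liquid asset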
From 1982 until 2012, most Western economies experienced a period of low inflation combined with relatively high returns on investments across all asset classes including government bonds. This brought a certain sense of complacency amongst some pension actuarial consultants and regulators, making it seem reasonable to use optimistic economic assumptions to calculate the present value of future pension liabilities. Mathematical note Because interest and inflation are generally given as percentage increases, the formulae above are (linear) approximations. For instance, i ≈ r + p is only approximate. In reality, the relationship is (1 + i) = (1 + r)(1 + p), so i = r + p + r·p. The two approximations, eliminating the higher-order term, are r ≈ i − p and i ≈ r + p. The formulae in this article are exact if logarithmic units are used for relative changes, or equivalently if logarithms of indices are used in place of rates, and hold even for large relative changes. Zero rate policy A so-called "zero interest-rate policy" (ZIRP) is a very low—near-zero—central bank target interest rate. At this zero lower bound the central bank faces difficulties with conventional monetary policy, because it is generally believed that market interest rates cannot realistically be pushed down into negative territory. After the crisis of 2008, the Federal Reserve kept its target interest rate near zero from December 2008 until December 2015. Negative nominal or real rates Nominal interest rates are normally positive, but not always. In contrast, real interest rates can be negative, when nominal interest rates are below inflation. When this is done via government policy (for example, via reserve requirements), this is deemed financial repression, and was practiced by countries such as the United States and United Kingdom following World War II (from 1945) until the late 1970s or early 1980s (during and following the Post–World War II economic expansion). In the late 1970s, United States Treasury securities with negative real interest rates were deemed certificates of confiscation. On central bank reserves A so-called "negative interest rate policy" (NIRP) is a negative (below zero) central bank target interest rate. Theory Given the alternative of holding cash, and thus earning 0%, rather than lending it out, profit-seeking lenders will not lend below 0%, as that will guarantee a loss, and a bank offering a negative deposit rate will find few takers, as savers will instead hold cash. Negative interest rates have been proposed in the past, notably in the late 19th century by Silvio Gesell. A negative interest rate can be described (as by Gesell) as a "tax on holding money"; he proposed it as the Freigeld (free money) component of his Freiwirtschaft (free economy) system. To prevent people from holding cash (and thus earning 0%), Gesell suggested issuing money for a limited duration, after which it must be exchanged for new bills; attempts to hold money thus result in it expiring and becoming worthless. Along similar lines, John Maynard Keynes approvingly cited the idea of a carrying tax on money (1936, The General Theory of Employment, Interest and Money) but dismissed it due to administrative difficulties. More recently, a carry tax on currency was proposed by a Federal Reserve employee (Marvin Goodfriend) in 1999, to be implemented via magnetic strips on bills, deducting the carry tax upon deposit, the tax being based on how long the bill had been held. 
It has been proposed that a negative interest rate can in principle be levied on existing paper currency via a serial number lottery, such as randomly choosing a number 0 through 9 and declaring that notes whose serial number end in that digit are worthless, yielding an average 10% loss of paper cash holdings to hoarders; a drawn two-digit number could match the last two digits on the note for a 1% loss. This was proposed by an anonymous student of Greg Mankiw, though more as a thought experiment than a genuine proposal. Practice Both the European Central Bank starting in 2014 and the Bank of Japan starting in early 2016 pursued the policy on top of their earlier and continuing quantitative easing policies. The latter's policy was said at its inception to be trying to "change Japan's 'deflationary mindset.'" In 2016 Sweden, Denmark and Switzerland—not directly participants in the Euro currency zone—also had NIRPs in place. Countries such as Sweden and Denmark have set negative interest on reserves—that is to say, they have charged interest on reserves. In July 2009, Sweden's central bank, the Riksbank, set its policy repo rate, the interest rate on its one-week deposit facility, at 0.25%, at the same time as setting its overnight deposit rate at −0.25%. The existence of the negative overnight deposit rate was a technical consequence of the fact that overnight deposit rates are generally set at 0.5% below or 0.75% below the policy rate. The Riksbank studied the impact of these changes and stated in a commentary report that they led to no disruptions in Swedish financial markets. On government bond yields During the European debt crisis, government bonds of some countries (Switzerland, Denmark, Germany, Finland, the Netherlands and Austria) have been sold at negative yields. Suggested explanations include desire for safety and protection against the eurozone breaking up (in which case some eurozone countries might redenominate their debt into a stronger currency). On corporate bond yields For practical purposes, investors and academics typically view the yields on government or quasi-government bonds guaranteed by a small number of the most creditworthy governments (United Kingdom, United States, Switzerland, EU, Japan) to effectively have negligible default risk. As financial theory would predict, investors and academics typically do not view non-government guaranteed corporate bonds in the same way. Most credit analysts value them at a spread to similar government bonds with similar duration, geographic exposure, and currency exposure. Through 2018 there have only been a few of these corporate bonds that have traded at negative nominal interest rates. The most notable example of this was Nestle, some of whose AAA-rated bonds traded at negative nominal interest rate in 2015. However, some academics and investors believe this may have been influenced by volatility in the currency market during this period. See also Forward rate List of sovereign states by central bank interest rates Macroeconomics Rate of return Short-rate model Spot rate Notes References Mathematical finance Monetary policy
Interest rate
[ "Mathematics" ]
3,937
[ "Applied mathematics", "Mathematical finance" ]
163,131
https://en.wikipedia.org/wiki/Maslow%27s%20hierarchy%20of%20needs
Maslow’s hierarchy of needs is a conceptualisation of the needs (or goals) that motivate human behaviour, which was proposed by the American psychologist Abraham Maslow. According to Maslow’s original formulation, there are five sets of basic needs that are related to each other in a hierarchy of prepotency (or strength). Typically, the hierarchy is depicted in the form of a pyramid although Maslow was not himself responsible for the iconic diagram. The pyramid begins at the bottom with physiological needs (the most prepotent of all) and culminates at the top with self-actualization needs. In his later writings, Maslow added a sixth level of ‘meta-needs’ and metamotivation. The hierarchy of needs developed by Maslow is one of his most enduring contributions to psychology. The hierarchy of needs remains a popular framework and tool in higher education, business and management training, sociology research, healthcare, counselling and social work. However, although widely used and researched, the hierarchy of needs has been criticized for its lack of conclusive supporting evidence and its validity remains contested. Historical development Maslow's hierarchy of needs is an idea in psychology proposed by American psychologist Abraham Maslow in his 1943 paper "A Theory of Human Motivation" in the journal Psychological Review. The theory is a classification system intended to reflect the universal needs of society as its base, then proceeding to more acquired emotions. The hierarchy is split between deficiency needs and growth needs, with two key themes involved within the theory being individualism and the prioritization of needs. According to Maslow’s original formulation, there are five sets of basic needs: physiological, safety, love, esteem and self-actualization. These needs are related to each other in a hierarchy of prepotency (or strength) beginning with the physiological needs that are the most prepotent of all. If the physiological needs are relatively well gratified, a new set of safety needs emerges. If both the physiological and safety needs are fairly well gratified, the prepotent (‘higher’) need of love (both its giving and receiving) then emerges. The next need is esteem, and finally self-actualization. Maslow also coined the term "metamotivation" to describe the motivation of people who go beyond the scope of basic needs and strive for constant betterment. The hierarchy suggests a rigid separation of needs, but Maslow stressed that a need does not require being satisfied 100% before the next need emerges. Instead, “a more realistic description of the hierarchy would be in terms of decreasing percentages of satisfaction as we go up the hierarchy of prepotency”. Pyramid Maslow's hierarchy of needs is often portrayed in the shape of a pyramid, with the largest, most fundamental needs at the bottom, and the need for self-actualization and transcendence at the top. However, Maslow himself never created a pyramid to represent the hierarchy of needs. The most fundamental four layers of the pyramid contain what Maslow called "deficiency needs" or "d-needs": esteem, friendship and love, security, and physical needs. If these "deficiency needs" are not met – except for the most fundamental (physiological) need – there may not be a physical indication, but the individual will feel anxious and tense. Deprivation is what causes deficiency, so when one has unmet needs, this motivates them to fulfill what they are being denied. 
The human brain is a complex system and has parallel processes running at the same time, so many different motivations from various levels of Maslow's hierarchy can occur at once. Maslow spoke clearly about these levels and their satisfaction in terms such as "relative", "general", and "primarily". Instead of stating that the individual focuses on a certain need at any given time, Maslow stated that a certain need "dominates" the human organism. Thus Maslow acknowledged the likelihood that the different levels of motivation could occur at any time in the human mind, but he focused on identifying the basic types of motivation and the order in which they would tend to be met. In addition to his anthropological studies, Maslow drew on his animal research, in which he "studied and observed monkeys [...] noticing their unusual pattern of behavior that addressed priorities based on individual needs". Alternative illustrations of hierarchy In contrast to the well-known pyramid, a number of alternative schematic illustrations of the hierarchy of needs have been developed. One of the earliest, in 1962, shows a more dynamic hierarchy in terms of 'waves' of different needs overlapping at the same time. In this depiction, the peak of an earlier main set of needs must be passed before the next 'higher' need can begin to assume a dominant role. Other schematic illustrations of the hierarchy use overlapping triangles to depict the interaction of the different needs. One such updated hierarchy proposes that self-actualization is removed from its privileged place atop the pyramid because it is largely subsumed within status (esteem) and mating-related motives in the new framework. Needs Physiological needs Physiological needs are the base of the hierarchy. These needs are the biological component for human survival. According to Maslow's hierarchy of needs, physiological needs are factored into internal motivation. According to Maslow's theory, humans are compelled to satisfy physiological needs first to pursue higher levels of intrinsic satisfaction. To advance higher-level needs in Maslow's hierarchy, physiological needs must be met first. This means that if a person is struggling to meet their physiological needs, they are unwilling to seek safety, belonging, esteem, and self-actualization on their own. Physiological needs include: air, water, food, heat, clothes, reproduction, shelter, and sleep. Many of these physiological needs must be met for the human body to remain in homeostasis. Air, for example, is a physiological need; a human being requires air more urgently than higher-level needs, such as a sense of social belonging. Physiological needs are critical to "meet the very basic essentials of life". This allows for cravings such as hunger and thirst to be satisfied and not disrupt the regulation of the body. Safety needs Once a person's physiological needs are satisfied, their safety needs take precedence and dominate behavior. In the absence of physical safety (due to war, natural disaster, family violence, childhood abuse, etc.) and/or in the absence of economic safety (due to an economic crisis and lack of work opportunities), these safety needs manifest themselves in ways such as a preference for job security, grievance procedures for protecting the individual from unilateral authority, savings accounts, insurance policies, disability accommodations, etc. This level is more likely to predominate in children as they generally have a greater need to feel safe – especially children who have disabilities. 
Adults are also impacted by this, typically in economic matters; "adults are not immune to the need of safety". It includes shelter, job security, health, and safe environments. If a person does not feel safe in an environment, they will seek safety before attempting to meet any higher level of survival. This is why the "goal of consistently meeting the need for safety is to have stability in one's life"; stability brings back the concept of homeostasis, which the human body needs. Safety needs include: health, personal security, emotional security, and financial security. Love and social needs After physiological and safety needs are fulfilled, the third level of human needs is interpersonal and involves feelings of belongingness. According to Maslow, humans possess an effective need for a sense of belonging and acceptance among social groups, regardless of whether these groups are large or small; being a part of a group is crucial, regardless of whether it is work, sports, friends or family. The sense of belongingness is "being comfortable with and connection to others that results from receiving acceptance, respect, and love." For example, some large social groups may include clubs, co-workers, religious groups, professional organizations, sports teams, gangs or online communities. Some examples of small social connections include family members, intimate partners, mentors, colleagues, and confidants. Humans need to love and be loved – both sexually and non-sexually – by others, according to Maslow. Many people become susceptible to loneliness, social anxiety, and clinical depression in the absence of this love or belonging element. This need is especially strong in childhood and it can override the need for safety as witnessed in children who cling to abusive parents. Deficiencies due to hospitalism, neglect, shunning, ostracism, etc. can adversely affect the individual's ability to form and maintain emotionally significant relationships in general. Mental health can be a huge factor when it comes to an individual's needs and development. When an individual's needs are not met, it can cause depression during adolescence. When an individual grows up in a higher-income family, it is much more likely that they will have a lower rate of depression. This is because all of their basic needs are met. Studies have shown that when a family goes through financial stress for a prolonged time, depression rates are higher, not only because their basic needs are not being met, but because this stress strains the parent-child relationship. The parents are stressed about providing for their children, and they are also likely to spend less time at home because they are working more to make more money and provide for their family. Social belonging needs include: family, friendship, intimacy, trust, acceptance, and receiving and giving love and affection. In certain situations, the need for belonging may overcome the physiological and security needs, depending on the strength of the peer pressure. In contrast, for some individuals, the need for self-esteem is more important than the need for belonging; and for others, the need for creative fulfillment may supersede even the most basic needs. Esteem needs Esteem is the respect and admiration of a person, but also "self-respect and respect from others". Most people need stable esteem, meaning that which is soundly based on real capacity or achievement. Maslow noted two versions of esteem needs. 
The "lower" version of esteem is the need for respect from others and may include a need for status, recognition, fame, prestige, and attention. The "higher" version of esteem is the need for self-respect, and can include a need for strength, competence, mastery, self-confidence, independence, and freedom. This "higher" version comes with a caveat: the "hierarchies are interrelated rather than sharply separated". This means that esteem and the subsequent levels are not strictly separated; instead, the levels are closely related. Esteem comes from day-to-day experiences which provide a learning opportunity that allows us to discover ourselves. This is incredibly important for children, which is why giving them "the opportunity to discover they are competent and capable learners" is crucial. To boost this, adults must provide opportunities for children to have successful and positive experiences that give them a greater "sense of self". Adults, especially parents and educators, must create and ensure an environment for children that is supportive and provides them with opportunities that "helps children see themselves as respectable, capable individuals". It can also be found that "Maslow indicated that the need for respect or reputation is most important for children ... and precedes real self-esteem or dignity", which reflects the two aspects of esteem: for oneself and for others. Cognitive needs It has been suggested that Maslow's hierarchy of needs can be extended after esteem needs into two more categories: cognitive needs and aesthetic needs. Cognitive needs crave meaning, information, comprehension and curiosity – this creates a will to learn and attain knowledge. From an educational viewpoint, Maslow wanted humans to have intrinsic motivation to become educated people. People have cognitive needs such as creativity, foresight, curiosity, and meaning. Individuals who enjoy activities that require deliberation and brainstorming have a greater need for cognition. Individuals who are unmotivated to participate in such activities, on the other hand, have a low need for cognition. Aesthetic needs After one's cognitive needs are met, one would progress to aesthetic needs to beautify one's life. This would consist of having the ability to appreciate the beauty within the world around one's self, on a day-to-day basis. According to Maslow's theories, to progress toward self-actualization, humans require beautiful imagery or novel and aesthetically pleasing experiences. Humans must immerse themselves in nature's splendor while paying close attention to their surroundings and observing them in order to extract the world's beauty. One would accomplish this by making their environment pleasant to look at or be around. They might discover personal style choices that they feel represent them and make their environment a place that they fit well into. This higher level of need to connect with nature results in a sense of intimacy with nature and all that is endearing. Aesthetic needs also relate to beautifying oneself: improving one's physical appearance through the ways one chooses to dress, groom, and express oneself. Self-actualization "What a man can be, he must be." This quotation forms the basis of the perceived need for self-actualization. This level of need refers to the realization of one's full potential.
Maslow describes this as the desire to accomplish everything that one can, to become the most that one can be. People may have a strong, particular desire to become an ideal parent, succeed athletically, or create paintings, pictures, or inventions. To understand this level of need, a person must not only succeed in the previous needs but master them. Self-actualization can be described as a value-based system when discussing its role in motivation. Self-actualization is understood as the goal or explicit motive, and the previous stages in Maslow's hierarchy fall in line to become the step-by-step process by which self-actualization is achievable; an explicit motive is the objective of a reward-based system that is used to intrinsically drive the completion of certain values or goals. Individuals who are motivated to pursue this goal seek and understand how their needs, relationships, and sense of self are expressed through their behavior. Self-actualization needs include: Partner acquisition Parenting Utilizing and developing talents and abilities Pursuing goals Transcendence needs Maslow later subdivided the triangle's top to include self-transcendence, also known as spiritual needs. Spiritual needs differ from other types of needs in that they can be met on multiple levels. When this need is met, it produces feelings of integrity and raises things to a higher plane of existence. In his later years, Maslow explored a further dimension of motivation, while criticizing his original vision of self-actualization. Maslow tells us that by transcending you have a set of roots in your current culture but you are able to look over it as well and see other viewpoints and ideas. By these later ideas, one finds the fullest realization in giving oneself to something beyond oneself—for example, in altruism or spirituality. He equated this with the desire to reach the infinite. "Transcendence refers to the very highest and most inclusive or holistic levels of human consciousness, behaving and relating, as ends rather than means, to oneself, to significant others, to human beings in general, to other species, to nature, and to the cosmos." Criticism Blackfoot influence Maslow's early (1938) anthropological research included a fieldtrip to the Blackfoot people (Siksika Nation) in southern Alberta, Canada. Based on his observations of their peaceful and cooperative way of life (in contrast to American society), Maslow concluded that human destructiveness and aggression is largely culturally determined and “most probably a secondary, reactive consequence of thwarting of or threat to the basic human needs”. However, claims have been made that Maslow had failed to acknowledge the influence of the Blackfoot philosophy in developing the hierarchy of needs. According to Kaufman, while acknowledging that Maslow learned much from the Blackfoot people, “there is nothing in these writings to suggest he borrowed or stole ideas for his hierarchy of needs”. Without wishing to discredit Maslow, Blackfoot elders and scholars have argued that Maslow did not really understand the Blackfoot philosophy. "It is not that Maslow got the hierarchy wrong or upside down, it is rather that he did not understand the circular nature in which all beings in Siksika society are interconnected and integrated. They surround each other and needs are met through these connections". 
Self-actualizing people Maslow studied people such as Albert Einstein, Jane Addams, Eleanor Roosevelt, and Baruch Spinoza, rather than mentally ill or neurotic people, writing that "the study of crippled, stunted, immature, and unhealthy specimens can yield only a cripple psychology and a cripple philosophy". Ranking Global ranking In a 1976 review of Maslow's hierarchy of needs, little evidence was found for the specific ranking of needs that Maslow described or for the existence of a definite hierarchy at all. This refutation was claimed to be supported by the majority of longitudinal data and cross-sectional studies at the time, with the limited support for Maslow's hierarchy criticized due to poor measurement criteria and selection of control groups. In 1984, the order in which the hierarchy is arranged was criticized as being ethnocentric by Geert Hofstede. In turn, Hofstede's work was criticized by others. Maslow's hierarchy of needs was argued as failing to illustrate and expand upon the difference between the social and intellectual needs of those raised in individualistic societies and those raised in collectivist societies. The needs and drives of those in individualistic societies tend to be more self-centered than those in collectivist societies, focusing on the improvement of the self, with self-actualization being the apex of self-improvement. In collectivist societies, the needs of acceptance and community will outweigh the needs for freedom and individuality. Criticisms towards the theory have also been expressed on the lack of consideration towards individualism and collectivism in the context of spirituality. Sex ranking The position and value of sex within Maslow's hierarchy have been a source of criticism. Maslow's hierarchy places sex in the physiological needs category, alongside food and breathing. Some critics argue that this placement of sex neglects the emotional, familial, and evolutionary implications of sex within the community, although others point out that this critique could apply to all of the basic needs. However, Maslow himself acknowledged that the satisfaction of sexual desire was likely linked to other social motives as well. Furthermore, it is recognized that physiological needs such as sex and hunger can be related to higher-order motivations. Cultural and individual variations Although recent research appears to validate the existence of universal human needs, as well as shared ordering of the way in which people seek and satisfy needs, the exact hierarchy proposed by Maslow is called into question. The most common criticism is the expectation that different individuals, with similar backgrounds and at similar junctures in their respective lives, when faced with the same situation, would end up taking the same decision. Instead of that, a common observation is that humans are driven by a unique set of motivations, and their behavior cannot be reliably predicted based on the Maslowian principles. The classification of the higher-order (self-esteem and self-actualization) and lower-order (physiological, safety, and love) needs is not universal and may vary across cultures due to individual differences and availability of resources in the region or geopolitical entity/country. 
In a 1997 study, exploratory factor analysis (EFA) of a thirteen-item scale showed there were two particularly important levels of needs in the US during the peacetime of 1993 to 1994: survival (physiological and safety) and psychological (love, self-esteem, and self-actualization). In 1991, a retrospective peacetime measure was established and collected during the Persian Gulf War, and US citizens were asked to recall the importance of needs from the previous year. Once again, only two levels of needs were identified; therefore, people have the ability and competence to recall and estimate the importance of needs. For citizens in the Middle East (Egypt and Saudi Arabia), three levels of needs regarding importance and satisfaction surfaced during the 1990 retrospective peacetime. These three levels were completely different from those of US citizens. Changes regarding the importance and satisfaction of needs from the retrospective peacetime to wartime due to stress varied significantly across cultures (the US vs. the Middle East). For the US citizens, there was only one level of needs, since all needs were considered equally important. With regards to satisfaction of needs during the war, in the US there were three levels: physiological needs, safety needs, and psychological needs (social, self-esteem, and self-actualization). During the war, the satisfaction of physiological needs and safety needs were separated into two independent needs, while during peacetime, they were combined as one. For the people of the Middle East, the satisfaction of needs changed from three levels to two during wartime. A study of the ordering of needs in Asia found differences between the ordering of lower and higher order needs. For instance, community (related to belongingness and considered a lower order need in Maslow's hierarchy) was found to be the highest order need across Asia, followed closely by self-acceptance and growth. A 1981 study looked at how Maslow's hierarchy might vary across age groups. A survey asked participants of varying ages to rate a set number of statements from most important to least important. The researchers found that children had higher physical need scores than the other groups, the love need emerged from childhood to young adulthood, the esteem need was highest among the adolescent group, young adults had the highest self-actualization level, and old age had the highest level of security, it was needed across all levels comparably. The authors argued that this suggested Maslow's hierarchy may be limited as a theory for developmental sequence since the sequence of the love need and the self-esteem need should be reversed according to age. The hierarchy of needs has been criticized from an Islamic point of view. See also ERG theory, further expands and explains Maslow's theory First World problem reflects on trivial concerns in the context of more pressing needs Manfred Max-Neef's Fundamental human needs, Manfred Max-Neef's model Functional prerequisites Human givens, a theory in psychotherapy that offers descriptions of the nature, needs, and innate attributes of humans Need theory, David McClelland's model Positive disintegration Self-determination theory, Edward L. Deci's and Richard Ryan's model Notes References Further reading External links 1943 introductions Developmental psychology Happiness Human development Interpersonal relationships Motivational theories Organizational behavior Personal development Personal life Positive psychology Psychological concepts
Maslow's hierarchy of needs
[ "Biology" ]
4,740
[ "Personal development", "Behavior", "Developmental psychology", "Human development", "Behavioural sciences", "Organizational behavior", "Interpersonal relationships", "Human behavior" ]
163,156
https://en.wikipedia.org/wiki/Information%20commissioner
The role of information commissioner differs from nation to nation. Most commonly it is a title given to a government regulator in the fields of freedom of information and the protection of personal data in the widest sense. The office often functions as a specialist ombudsman service. Australia The Office of the Australian Information Commissioner (OAIC) has functions relating to freedom of information and privacy, as well as information policy. The Office of the Privacy Commissioner, which was the national privacy regulator, was integrated into the OAIC on 1 November 2010. There are three independent commissioners in the OAIC: the Australian Information Commissioner, the Freedom of Information Commissioner, and the Privacy Commissioner. Bangladesh The Information Commission of Bangladesh promotes and protects access to information. It is formed under the Right to Information Act, 2009, whose stated object is to empower the citizens by promoting transparency and accountability in the working of the public and private organizations, with the ultimate aim of decreasing corruption and establishing good governance. The Act creates a regime through which the citizens of the country may have access to information under the control of public and other authorities. Canada The Information Commissioner of Canada is an independent ombudsman appointed by the Parliament of Canada who investigates complaints from people who believe they have been denied rights provided under Canada's Access to Information Act. Similar bodies at provincial level include the Information and Privacy Commissioner (Ontario). Germany The Federal Commissioner for Data Protection and Freedom of Information (FfDF) is the federal commissioner not only for data protection but also (since commencement of the German Freedom of Information Act on January 1, 2006) for freedom of information. Hong Kong The Privacy Commissioner for Personal Data (PCPD) is charged with education and enforcement of the Personal Data (Privacy) Ordinance, which first came into force in 1997. The commissioner has the power to investigate and impose fines for violations. Reforms in 2021 gave it powers to investigate and prosecute suspected doxxing incidents. India The Central Information Commission, and State Information Commissions, receive and inquire into complaints from anyone who has been refused access to any information requested under the Right to Information Act, or whose rights under that Act have otherwise been obstructed, for example by being prevented from submitting a data request or being required to pay an excessive fee. Ireland The Office of the Information Commissioner () in Ireland was set up under the terms of the Freedom of Information Act 1997, which came into effect in April 1998. The Information Commissioner may conduct reviews of the decisions of public bodies in relation to requests for access to information. Since its creation, the office has been held simultaneously with that of the Ombudsman. The Information Commissioner also holds the role of Commissioner for Environmental Information. Switzerland The Federal Data Protection and Information Commissioner is responsible for the supervision of federal authorities and private bodies with respect to data protection and freedom of information legislation. 
United Kingdom In the United Kingdom, the Information Commissioner's Office is responsible for regulating compliance with the Data Protection Act 2018, Freedom of Information Act 2000 and the Environmental Information Regulations 2004. The Freedom of Information (Scotland) Act 2002 is the responsibility of the Scottish Information Commissioner. Other European States All other countries of the European Union and the European Economic Area have equivalent officials created under their versions of Directive 95/46. The Europa website gives links to such bodies around Europe. Cooperation among information commissioners The Global Privacy Enforcement Network is a transnational organization for the coordination of privacy laws among its 59 member states and the European Union. See also Information privacy Freedom of information laws by country Information minister References External links List of national data protection authorities (European Union) International Conference of Data Protection and Privacy Commissioners Europe's Information Society Commissioner Information privacy
Information commissioner
[ "Engineering" ]
744
[ "Cybersecurity engineering", "Information privacy" ]
163,242
https://en.wikipedia.org/wiki/Mirror%20galvanometer
A mirror galvanometer is an ammeter that indicates that it has sensed an electric current by deflecting a light beam with a mirror. The beam of light projected on a scale acts as a long, massless pointer. In 1826, Johann Christian Poggendorff developed the mirror galvanometer for detecting electric currents. The apparatus is also known as a spot galvanometer after the spot of light produced in some models. Mirror galvanometers were used extensively in scientific instruments before reliable, stable electronic amplifiers were available. The most common uses were as recording equipment for seismometers and for submarine cables used for telegraphy. In modern times, the term mirror galvanometer is also used for devices that move laser beams by rotating a mirror through a galvanometer set-up, often with a servo-like control loop. The name is often abbreviated as galvo. Kelvin's galvanometer The mirror galvanometer was improved significantly by William Thomson, later to become Lord Kelvin. He coined the term mirror galvanometer and patented the device in 1858. Thomson intended the instrument to read weak signal currents on very long submarine telegraph cables. This instrument was far more sensitive than any which preceded it, enabling the detection of the slightest defect in the core of a cable during its manufacture and submersion. Thomson decided that he needed an extremely sensitive instrument after he took part in the failed attempt to lay a transatlantic telegraph cable in 1857. He worked on the device while waiting for a new expedition the following year. He first looked at improving a galvanometer used by Hermann von Helmholtz to measure the speed of nerve signals in 1849. Helmholtz's galvanometer had a mirror fixed to the moving needle, which was used to project a beam of light onto the opposite wall, thus greatly amplifying the signal. Thomson intended to make this more sensitive by reducing the mass of the moving parts, but in a flash of inspiration while watching the light reflected from his monocle suspended around his neck, he realised that he could dispense with the needle and its mounting altogether. He instead used a small piece of mirrored glass with a small piece of magnetised steel glued on the back. This was suspended by a thread in the magnetic field of the fixed sensing coil. In a hurry to try the idea, Thomson first used a hair from his dog, but later used a silk thread from the dress of his niece Agnes. Moving coil galvanometer The moving coil galvanometer was developed independently by Marcel Deprez and Jacques-Arsène d'Arsonval about 1880. Deprez's galvanometer was developed for high currents, while d'Arsonval designed his to measure weak currents. Unlike Kelvin's galvanometer, in this type of galvanometer the magnet is stationary and the coil is suspended in the magnet gap. The mirror attached to the coil frame rotates together with it. This form of instrument can be more sensitive and accurate, and it replaced Kelvin's galvanometer in most applications. The moving coil galvanometer is practically immune to ambient magnetic fields. Another important feature is self-damping, generated by the electromagnetic forces due to the currents induced in the coil by its movement in the magnetic field. These forces are proportional to the angular velocity of the coil.
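The sensitivity of such an instrument can be made quantitative with the standard moving-coil relations; the symbols below (coil turns N, field B, coil area A, suspension constant k, scale distance L) are illustrative textbook quantities rather than values given in this article. The magnetic torque on the current-carrying coil is balanced by the torsional restoring torque of the suspension, and the mirror turns the reflected beam through twice its own rotation:

```latex
% Standard moving-coil relations (textbook sketch, not from the article itself):
% magnetic torque = restoring torque  =>  N B A I = k \theta
\theta = \frac{N B A}{k}\, I
% A mirror rotated by \theta deflects the reflected beam by 2\theta, so on a scale
% at distance L the spot moves
x \approx 2 L \theta = \frac{2 L N B A}{k}\, I
```

The deflection of the spot is thus proportional to the current, and lengthening the light path L increases the sensitivity without adding any mass to the moving system, which is what lets the reflected beam behave as a long, massless pointer.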
Modern uses In modern times, high-speed mirror galvanometers are employed in laser light shows to move the laser beams and produce colorful geometric patterns in fog around the audience. Such high-speed mirror galvanometers have proved to be indispensable in industry for laser marking systems, for everything from laser etching hand tools, containers, and parts to batch-coding semiconductor wafers in semiconductor device fabrication. They typically control the X and Y directions on Nd:YAG and CO2 laser markers to control the position of the infrared power laser spot. Laser ablation, laser beam machining and wafer dicing are all industrial areas where high-speed mirror galvanometers can be found. The moving coil galvanometer is mainly used to measure very feeble currents, of the order of 10⁻⁹ A. To linearise the magnetic field across the coil throughout the galvanometer's range of movement, the d'Arsonval design places a soft iron cylinder inside the coil without touching it. This gives a consistent radial field, rather than a parallel linear field. See also String galvanometer References Further reading External links Mirror Galvanometer - Interactive Java Tutorial National High Magnetic Field Laboratory Galvanometers Optical instruments
Mirror galvanometer
[ "Technology", "Engineering" ]
939
[ "Galvanometers", "Measuring instruments" ]
163,251
https://en.wikipedia.org/wiki/Optical%20telegraph
An optical telegraph is a line of stations, typically towers, for the purpose of conveying textual information by means of visual signals (a form of optical communication). There are two main types of such systems; the semaphore telegraph which uses pivoted indicator arms and conveys information according to the direction the indicators point, and the shutter telegraph which uses panels that can be rotated to block or pass the light from the sky behind to convey information. The most widely used system was the Chappe telegraph, which was invented in France in 1792 by Claude Chappe. It was popular in the late eighteenth to early nineteenth centuries. Chappe used the term télégraphe to describe the mechanism he had invented – that is the origin of the English word "telegraph". Lines of relay towers with a semaphore rig at the top were built within line of sight of each other, at separations of . Operators at each tower would watch the neighboring tower through a telescope, and when the semaphore arms began to move spelling out a message, they would pass the message on to the next tower. This system was much faster than post riders for conveying a message over long distances, and also had cheaper long-term operating costs, once constructed. Half a century later, semaphore lines were replaced by the electrical telegraph, which was cheaper, faster, and more private. The line-of-sight distance between relay stations was limited by geography and weather, and prevented the optical telegraph from crossing wide expanses of water, unless a convenient island could be used for a relay station. A modern derivative of the semaphore system is flag semaphore, signalling with hand-held flags. Etymology and terminology The word semaphore was coined in 1801 by the French inventor of the semaphore line itself, Claude Chappe. He composed it from the Greek elements σῆμα (sêma, "sign"); and from φορός (phorós, "carrying"), or φορά (phorá, "a carrying") from φέρειν (phérein, "to bear"). Chappe also coined the word tachygraph, meaning "fast writer". However, the French Army preferred to call Chappe's semaphore system the telegraph, meaning "far writer", which was coined by French statesman André François Miot de Mélito. The word semaphoric was first printed in English in 1808: "The newly constructed Semaphoric telegraphs (...) have been blown up", in a news report in the Naval Chronicle. The first use of the word semaphore in reference to English use was in 1816: "The improved Semaphore has been erected on the top of the Admiralty", referring to the installation of a simpler telegraph invented by Sir Home Popham. Semaphore telegraphs are also called, "Chappe telegraphs" or "Napoleonic semaphore". Early designs Optical telegraphy dates from ancient times, in the form of hydraulic telegraphs, torches (as used by ancient cultures since the discovery of fire) and smoke signals. Modern designs of semaphores developed via several paths, often simultaneously. Possibly the earliest was by the British polymath Robert Hooke, who gave a vivid and comprehensive outline of visual telegraphy to the Royal Society in a 1684 submission in which he outlined many practical details. The system (which was motivated by military concerns, following the Battle of Vienna in 1683) was never put into practice. One of the first experiments of optical signalling was carried out by the Anglo-Irish landowner and inventor, Sir Richard Lovell Edgeworth in 1767. 
He placed a bet with his friend, the horse racing gambler Lord March, that he could transmit knowledge of the outcome of the race in just one hour. Using a network of signalling sections erected on high ground, the signal would be observed from one station to the next by means of a telescope. The signal itself consisted of a large pointer that could be placed into eight possible positions in 45 degree increments. A series of two such signals gave a total 64 code elements and a third signal took it up to 512. He returned to his idea in 1795, after hearing of Chappe's system. While Edgeworth was developing his design, William Playfair, a Scottish political economist traveling in Europe in 1794, surreptitiously obtained the design and alphabet of the French system from a fleeing royalist. Playfair, who had numerous connections to British officials, provided a model of the system to the Duke of York, commander of British forces, then based in Flanders, and, according to the Encyclopedia Britannica, "hence the alphabet and plan of the machine came to England." Prevalence France Credit for the first successful optical telegraph goes to the French engineer Claude Chappe and his brothers in 1792, who succeeded in covering France with a network of 556 stations stretching a total distance of . Le système Chappe was used for military and national communications until the 1850s. Development in France During 1790–1795, at the height of the French Revolution, France needed a swift and reliable military communications system to thwart the war efforts of its enemies. France was surrounded by the forces of Britain, the Netherlands, Prussia, Austria, and Spain, the cities of Marseille and Lyon were in revolt, and the British Fleet held Toulon. The only advantage France held was the lack of cooperation between the allied forces due to their inadequate lines of communication. In mid-1790, the Chappe brothers set about devising a system of communication that would allow the central government to receive intelligence and to transmit orders in the shortest possible time. Chappe considered many possible methods including audio and smoke. He even considered using electricity, but could not find insulation for the conductors that would withstand the high-voltage electrostatic sources available at the time. Chappe settled on an optical system and the first public demonstration occurred on 2 March 1791 between Brûlon and Parcé, a distance of . The system consisted of a modified pendulum clock at each end with dials marked with ten numerals. The hands of the clocks almost certainly moved much faster than a normal clock. The hands of both clocks were set in motion at the same time with a synchronisation signal. Further signals indicated the time at which the dial should be read. The numbers sent were then looked up in a codebook. In their preliminary experiments over a shorter distance, the Chappes had banged a pan for synchronisation. In the demonstration, they used black and white panels observed with a telescope. The message to be sent was chosen by town officials at Brûlon and sent by René Chappe to Claude Chappe at Parcé who had no pre-knowledge of the message. The message read "si vous réussissez, vous serez bientôt couverts de gloire" (If you succeed, you will soon bask in glory). It was only later that Chappe realised that he could dispense with the clocks and the synchronisation system itself could be used to pass messages. 
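A minimal way to see how the synchronised clocks could carry digits is sketched below; the ten-position dial, the tick timing and the function names are assumptions made for illustration, not details recorded of the 1791 apparatus. The sender simply waits until the rotating hand points at the wanted numeral and then gives the agreed signal; because the receiving clock runs in lockstep, its dial shows the same numeral at that instant.

```python
# Minimal model of a synchronised-clock telegraph (illustrative only; the ten-position
# dial, the tick timing and the function names are assumptions, not details of
# Chappe's 1791 apparatus).

def transmit_digits(digits, dial_positions=10):
    """Yield the tick numbers at which the sender signals 'read the dial now'."""
    tick = 0
    for d in digits:
        while tick % dial_positions != d:   # wait for the hand to reach the numeral
            tick += 1
        yield tick                          # give the agreed signal at this tick
        tick += 1

def receive_digits(signal_ticks, dial_positions=10):
    """The receiving clock runs in lockstep, so reading its dial at each signalled
    tick reproduces the sender's digits."""
    return [t % dial_positions for t in signal_ticks]

message = [5, 0, 9, 3]                      # digits of a code-book entry, for example
ticks = list(transmit_digits(message))
assert receive_digits(ticks) == message     # both stations read the same numerals
```

The numbers recovered in this way were then looked up in the codebook, as described above.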
The Chappes carried out experiments during the next two years, and on two occasions their apparatus at Place de l'Étoile, Paris was destroyed by mobs who thought they were communicating with royalist forces. Their cause was assisted by Ignace Chappe being elected to the Legislative Assembly. In the summer of 1792 Claude was appointed Ingénieur-Télégraphiste and charged with establishing a line of stations between Paris and Lille, a distance of 230 kilometres (about 143 miles). It was used to carry dispatches for the war between France and Austria. In 1794, it brought news of a French capture of Condé-sur-l'Escaut from the Austrians less than an hour after it occurred. The first symbol of a message to Lille would pass through 15 stations in only nine minutes. The speed of the line varied with the weather, but the line to Lille typically transferred 36 symbols, a complete message, in about 32 minutes. Another line of 50 stations was completed in 1798, covering 488 km between Paris and Strasbourg. From 1803 on, the French also used the 3-arm Depillon semaphore at coastal locations to provide warning of British incursions. English military engineer William Congreve observed that at the Battle of Vervik of 1793 French commanders directed their forces by using the sails of a prominent local windmill as an improvised signal station. Two of the four sails of the mill had been removed to resemble the arm of the new telegraph. Chappe system technical operation The Chappe brothers determined by experiment that it was easier to see the angle of a rod than to see the presence or absence of a panel. Their semaphore was composed of two black movable wooden arms, connected by a cross bar; the positions of all three of these components together indicated an alphabetic letter. With counterweights (named forks) on the arms, the Chappe system was controlled by only two handles and was mechanically simple and reasonably robust. Each of the two 2-metre-long arms could display seven positions, and the 4.6-metre-long cross bar connecting the two arms could display four different angles, for a total of 196 symbols (7×7×4). Night operation with lamps on the arms was unsuccessful. To speed up transmission and to provide some semblance of security, a code book was developed for use with semaphore lines. The Chappes' corporation used a code that took 92 of the basic symbols two at a time to yield 8,464 coded words and phrases. The revised Chappe system of 1795 provided not only a set of codes but also an operational protocol intended to maximize line throughput. Symbols were transmitted in cycles of "2 steps and 3 movements." Step 1, movement 1 (setup): The operator turned the indicator arms to align with the cross bar, forming a non-symbol, and then turned the cross bar into position for the next symbol. Step 1, movement 2 (transmission): The operator positioned the indicator arms for current symbol and waited for the downline station to copy it. Step 2, movement 3 (completion): The operator turned the cross bar to a vertical or horizontal position, indicating the end of a cycle. In this manner, each symbol could propagate down the line as quickly as operators could successfully copy it, with acknowledgement and flow control built into the protocol. A symbol sent from Paris took 2 minutes to reach Lille through 22 stations and 9 minutes to reach Lyon through 50 stations. A rate of 2–3 symbols per minute was typical, with the higher figure being prone to errors. 
This corresponds to only 0.4–0.6 wpm, but with messages limited to those contained in the code book, this could be dramatically increased. An additional benefit is that, if the code is kept secret, the content of transmitted messages can be concealed from both onlookers and system operators, even if they are aware that a message is being transmitted. This has remained an important feature of encrypted communications even as the technology for transmitting data has evolved. History in France After Chappe's initial line (between Paris and Lille), the Paris to Strasbourg with 50 stations followed soon after (1798). Napoleon Bonaparte made full use of the telegraph by obtaining speedy information on enemy movements. In 1801 he had Abraham Chappe build an extra-large station to transmit across the English Channel in preparation for an invasion of Britain. A pair of such stations were built on a test line over a comparable distance. The line to Calais was extended to Boulogne in anticipation and a new design station was briefly in operation at Boulogne, but the invasion never happened. In 1812, Napoleon took up another design of Abraham Chappe for a mobile telegraph that could be taken with him on campaign. This was still in use in 1853 during the Crimean War. The invention of the telegraph was followed by an enthusiasm concerning its potential to support direct democracy. For instance, based on Rousseau's argument that direct democracy was improbable in large constituencies, the French Intellectual Alexandre-Théophile Vandermonde commented: The operational costs of the telegraph in the year 1799/1800 were 434,000 francs ($1.2 million in 2015 in silver costs). In December 1800, Napoleon cut the budget of the telegraph system by 150,000 francs ($400,000 in 2015) leading to the Paris-Lyons line being temporarily closed. Chappe sought commercial uses of the system to make up the deficit, including use by industry, the financial sector, and newspapers. Only one proposal was immediately approved—the transmission of results from the state-run lottery. No non-government uses were approved. The lottery had been abused for years by fraudsters who knew the results, selling tickets in provincial towns after the announcement in Paris, but before the news had reached those towns. In 1819 Norwich Duff, a young British Naval officer, visiting Clermont-en-Argonne, walked up to the telegraph station there and engaged the signalman in conversation. Here is his note of the man's information: The network was reserved for government use, but an early case of wire fraud occurred in 1834 when two bankers, François and Joseph Blanc, bribed the operators at a station near Tours on the line between Paris and Bordeaux to pass Paris stock exchange information to an accomplice in Bordeaux. It took three days for the information to travel the 300 mile distance, giving the schemers plenty of time to play the market. An accomplice at Paris would know whether the market was going up or down days before the information arrived in Bordeaux via the newspapers, after which Bordeaux was sure to follow. The message could not be inserted in the telegraph directly because it would have been detected. Instead, pre-arranged deliberate errors were introduced into existing messages which were visible to an observer at Bordeaux. Tours was chosen because it was a division station where messages were purged of errors by an inspector who was privy to the secret code used and unknown to the ordinary operators. 
The scheme would not work if the errors were inserted prior to Tours. The operators were told whether the market was going up or down by the colour of packages (either white or grey paper wrapping) sent by mail coach, or, according to another anecdote, if the wife of the Tours operator received a package of socks (down) or gloves (up) thus avoiding any evidence of misdeed being put in writing. The scheme operated for two years until it was discovered in 1836. The French optical system remained in use for many years after other countries had switched to the electrical telegraph. Partly, this was due to inertia; France had the most extensive optical system and hence the most difficult to replace. But there were also arguments put forward for the superiority of the optical system. One of these was that the optical system is not so vulnerable to saboteurs as an electrical system with many miles of unguarded wire. Samuel Morse failed to sell the electrical telegraph to the French government. Eventually the advantages of the electrical telegraph of improved privacy, and all-weather and nighttime operation won out. A decision was made in 1846 to replace the optical telegraph with the Foy–Breguet electrical telegraph after a successful trial on the Rouen line. This system had a display which mimicked the look of the Chappe telegraph indicators to make it familiar to telegraph operators. Jules Guyot issued a dire warning of the consequences of what he considered to be a serious mistake. It took almost a decade before the optical telegraph was completely decommissioned. One of the last messages sent over the French semaphore was the report of the fall of Sebastopol in 1855. Sweden Sweden was the second country in the world, after France, to introduce an optical telegraph network. Its network became the second most extensive after France. The central station of the network was at the Katarina Church in Stockholm. The system was faster than the French system, partly due to the Swedish control panel and partly to the ease of transcribing the octal code (the French system was recorded as pictograms). The system was used primarily for reporting the arrival of ships, but was also useful in wartime for observing enemy movements and attacks. The last stationary semaphore link in regular service was in Sweden, connecting an island with a mainland telegraph line. It went out of service in 1880. Development in Sweden Inspired by news of the Chappe telegraph, the Swedish inventor Abraham Niclas Edelcrantz experimented with the optical telegraph in Sweden. He constructed a three-station experimental line in 1794 running from the royal castle in Stockholm, via Traneberg, to the grounds of Drottningholm Castle, a distance of . The first demonstration was on 1 November, when Edelcrantz sent a poem dedicated to the king, Gustav IV Adolf, on his fourteenth birthday. On 7 November the king brought Edelcrantz into his Council of Advisers with a view to building a telegraph throughout Sweden, Denmark, and Finland. Edelcrantz system technical operation After some initial experiments with Chappe-style indicator arms, Edelcrantz settled on a design with ten iron shutters. Nine of these represented a 3-digit octal number and the tenth, when closed, meant the code number should be preceded by "A". This gave 1,024 codepoints which were decoded to letters, words or phrases via a codebook. 
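The sizes of the two code spaces described above follow from simple counting, reproduced in the short sketch below as a check of the quoted figures; the variable names are illustrative.

```python
# Counting the code spaces of the two systems as described in the text
# (the arithmetic is a check of the quoted figures; variable names are illustrative).

# Chappe: each indicator arm has 7 usable positions and the cross bar 4 angles,
# giving the basic symbol set; the code book then pairs symbols two at a time
# from a set of 92 to index whole words and phrases.
chappe_symbols = 7 * 7 * 4            # 196 basic symbols
chappe_codebook = 92 * 92             # 8,464 coded words and phrases

# Edelcrantz: nine shutters read as a 3-digit octal number, and the tenth "A"
# shutter doubles the space.
edelcrantz_codepoints = 2 * 8 ** 3    # 1,024 codepoints

print(chappe_symbols, chappe_codebook, edelcrantz_codepoints)   # 196 8464 1024
```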
The telegraph had a sophisticated control panel which allowed the next symbol to be prepared while waiting for the previous symbol to be repeated on the next station down the line. The control panel was connected by strings to the shutters. When ready to transmit, all the shutters were set at the same time with the press of a footpedal. The shutters were painted matte black to avoid reflection from sunlight and the frame and arms supporting the shutters were painted white or red for best contrast. Around 1809 Edelcrantz introduced an updated design. The frame around the shutters was dispensed with leaving a simpler, more visible, structure of just the arms with the indicator panels on the end of them. The "A" shutter was reduced to the same size as the other shutters and offset to one side to indicate which side was the most significant digit (whether the codepoint is read left-to-right or right-to-left is different for the two adjacent stations depending on which side they are on). This was previously indicated with a stationary indicator fixed to the side of the frame, but without a frame this was no longer possible. The distance that a station could transmit depended on the size of the shutters and the power of the telescope being used to observe them. The smallest object visible to the human eye is one that subtends an angle of 40 seconds of arc, but Edelcrantz used a figure of 4 minutes of arc to account for atmospheric disturbances and imperfections of the telescope. On that basis, and with a 32× telescope, Edelcrantz specified shutter sizes ranging from 9 inches () for a distance of half a Swedish mile () to 54 inches () for 3 Swedish miles (). These figures were for the original design with square shutters. The open design of 1809 had long oblong shutters which Edelcrantz thought was more visible. Distances much further than these would require impractically high towers to overcome the curvature of the Earth as well as large shutters. Edelcrantz kept the distance between stations under 2 Swedish miles () except where large bodies of water made it unavoidable. The Swedish telegraph was capable of being used at night with lamps. On smaller stations lamps were placed behind the shutters so that they became visible when the shutter was opened. For larger stations, this was impractical. Instead, a separate tin box matrix with glass windows was installed below the daytime shutters. The lamps inside the tin box could be uncovered by pulling strings in the same way the daytime shutters were operated. Windows on both sides of the box allowed the lamps to be seen by both the upstream and downstream adjacent stations. The codepoints used at night were the complements of the codepoints used during the day. This made the pattern of lamps in open shutters at night the same as the pattern of closed shutters in daytime. First network: 1795–1809 The first operational line, Stockholm to Vaxholm, went into service in January 1795. By 1797 there were also lines from Stockholm to Fredriksborg, and Grisslehamn via Signilsskär to Eckerö in Åland. A short line near Göteborg to Marstrand on the west coast was installed in 1799. During the War of the Second Coalition, Britain tried to enforce a blockade against France. Concerned at the effect on their own trade, Sweden joined the Second League of Armed Neutrality in 1800. Britain was expected to respond with an attack on one of the Nordic countries in the league. 
To help guard against such an attack, the king ordered a telegraph link joining the systems of Sweden and Denmark. This was the first international telegraph connection in the world. Edelcrantz made this link between Helsingborg in Sweden and Helsingør in Denmark, across the Öresund, the narrow strait separating the two countries. A new line along the coast from Kullaberg to Malmö, incorporating the Helsingborg link, was planned in support and to provide signalling points to the Swedish fleet. Nelson's attack on the Danish fleet at Copenhagen in 1801 was reported over this link, but after Sweden failed to come to Denmark's aid it was not used again and only one station on the supporting line was ever built. In 1808 the Royal Telegraph Institution was created and Edelcrantz was made director. The Telegraph Institution was put under the jurisdiction of the military, initially as part of the Royal Engineering Corps. A new code was introduced to replace the 1796 codebook with 5,120 possible codepoints with many new messages. The new codes included punishments for delinquent operators. These included an order to the operator to stand on one of the telegraph arms (code 001-721), and a message asking an adjacent station to confirm that they could see him do it (code 001-723). By 1809, the network had 50 stations over of line employing 172 people. In comparison, the French system in 1823 had of line and employed over three thousand people. In 1808, the Finnish War broke out when Russia seized Finland, then part of Sweden. Åland was attacked by Russia and the telegraph stations destroyed. The Russians were expelled in a revolt, but attacked again in 1809. The station at Signilsskär found itself behind enemy lines, but continued to signal the position of Russian troops to the retreating Swedes. After Sweden ceded Finland in the Treaty of Fredrikshamn, the east coast telegraph stations were considered superfluous and put into storage. In 1810, the plans for a south coast line were revived but were scrapped in 1811 due to financial considerations. Also in 1811, a new line from Stockholm via Arholma to Söderarm lighthouse was proposed, but also never materialised. For a while, the telegraph network in Sweden was almost non-existent, with only four telegraphists employed by 1810. Rebuilding the network The post of Telegraph Inspector was created as early as 1811, but the telegraph in Sweden remained dormant until 1827 when new proposals were put forward. In 1834, the Telegraph Institution was moved to the Topographical Corps. The Corps head, Carl Fredrik Akrell, conducted comparisons of the Swedish shutter telegraph with more recent systems from other countries. Of particular interest was the semaphore system of Charles Pasley in England which had been on trial in Karlskrona. Tests were performed between Karlskrona and Drottningskär, and, in 1835, nighttime tests between Stockholm and Fredriksborg. Akrell concluded that the shutter telegraph was faster and easier to use, and was again adopted for fixed stations. However, Pasley's semaphore was cheaper and easier to construct, so was adopted for mobile stations. By 1836 the Swedish telegraph network had been fully restored. The network continued to expand. In 1837, the line to Vaxholm was extended to Furusund. In 1838 the Stockholm-Dalarö-Sandhamn line was extended to Landsort. The last addition came in 1854 when the Furusund line was extended to Arholma and Söderarm. 
The conversion to electrical telegraphy was slower and more difficult than in other countries. The many stretches of open ocean needing to be crossed on the Swedish archipelagos were a major obstacle. Akrell also raised similar concerns to those in France concerning potential sabotage and vandalism of electrical lines. Akrell first proposed an experimental electrical telegraph line in 1852. For many years the network consisted of a mix of optical and electrical lines. The last optical stations were not taken out of service until 1881, the last in operation in Europe. In some places, the heliograph replaced the optical telegraph rather than the electrical telegraph. United Kingdom In Ireland, Richard Lovell Edgeworth returned to his earlier work in 1794, and proposed a telegraph there to warn against an anticipated French invasion; however, the proposal was not implemented. Lord George Murray, stimulated by reports of the Chappe semaphore, proposed a system of visual telegraphy to the British Admiralty in 1795. He employed rectangular framework towers with six five-foot-high octagonal shutters on horizontal axes that flipped between horizontal and vertical positions to signal. The Rev. Mr Gamble also proposed two distinct five-element systems in 1795: one using five shutters, and one using five ten-foot poles. The British Admiralty accepted Murray's system in September 1795, and the first system was the 15-site chain from London to Deal. Messages passed from London to Deal in about sixty seconds, and sixty-five sites were in use by 1808. Chains of Murray's shutter telegraph stations were built along the following routes: London–Deal and Sheerness, London–Great Yarmouth, and London–Portsmouth and Plymouth. The line to Plymouth was not completed until 4 July 1806, and so could not be used to relay the news of Trafalgar. The shutter stations were temporary wooden huts, and at the conclusion of the Napoleonic wars they were no longer necessary, and were closed down by the Admiralty in March 1816. Following the Battle of Trafalgar, the news was carried by frigate to Falmouth, from where the captain took the dispatches to London by coach along what became known as the Trafalgar Way; the journey took 38 hours. This delay prompted the Admiralty to investigate further. A replacement telegraph system was sought, and of the many ideas and devices put forward the Admiralty chose the simpler semaphore system invented by Sir Home Popham. A Popham semaphore was a single fixed vertical 30-foot pole, with two movable 8-foot arms attached to the pole by horizontal pivots at their ends, one arm at the top of the pole, and the other arm at the middle of the pole. The signals of the Popham semaphore were found to be much more visible than those of the Murray shutter telegraph. Popham's 2-arm semaphore was modelled after the 3-arm Depillon French semaphore. An experimental semaphore line between the Admiralty and Chatham was installed in July 1816, and its success helped to confirm the choice. Subsequently, the Admiralty decided to establish a permanent link to Portsmouth and built a chain of semaphore stations. Work started in December 1820 with Popham's equipment replaced with another two-arm system invented by Charles Pasley. Each of the arms of Pasley's system could take on one of eight positions and it thus had more codepoints than Popham's. In good conditions messages were sent from London to Portsmouth in less than eight minutes.
The line was operational from 1822 until 1847, when the railway and electric telegraph provided a better means of communication. The semaphore line did not use the same locations as the shutter chain, but followed almost the same route with 15 stations: Admiralty (London), Chelsea Royal Hospital, Putney Heath, Coombe Warren, Coopers Hill, Chatley Heath, Pewley Hill, Bannicle Hill, Haste Hill (Haslemere), Holder Hill, (Midhurst), Beacon Hill, Compton Down, Camp Down, Lumps Fort (Southsea), and Portsmouth Dockyard. The semaphore tower at Chatley Heath, which replaced the Netley Heath station of the shutter telegraph, is currently being restored by the Landmark Trust as self-catering holiday accommodation. There will be public access on certain days when the restoration is complete. The Board of the Port of Liverpool obtained a local act of Parliament, the Liverpool Improvement Act 1825 (6 Geo. 4. c. clxxxvii), to construct a chain of Popham optical semaphore stations from Liverpool to Holyhead in 1825. The system was designed and part-owned by Barnard L. Watson, a reserve marine officer, and came into service in 1827. The line is possibly the only example of an optical telegraph built entirely for commercial purposes. It was used so that observers at Holyhead could report incoming ships to the Port of Liverpool and trading could begin in the cargo being carried before the ship docked. The line was kept in operation until 1860 when a railway line and associated electrical telegraph made it redundant. Many of the prominences on which the towers were built ('telegraph hills') are known as Telegraph Hill to this day. British empire Ireland In Ireland R.L. Edgeworth was to develop an optical telegraph based on a triangle pointer, measuring up to 16 feet in height. Following several years promoting his system, he was to get admiralty approval and engaged in its construction during 1803–1804. The completed system ran from Dublin to Galway and was to act as a rapid warning system in case of French invasion of the west coast of Ireland. Despite its success in operation, the receding threat of French invasion was to see the system disestablished in 1804. Canada In Canada, Prince Edward, Duke of Kent established the first semaphore line in North America. In operation by 1800, it ran between the city of Halifax and the town of Annapolis in Nova Scotia, and across the Bay of Fundy to Saint John and Fredericton in New Brunswick. In addition to providing information on approaching ships, the Duke used the system to relay military commands, especially as they related to troop discipline. The Duke had envisioned the line reaching as far as the British garrison at Quebec City, but the many hills and coastal fog meant the towers needed to be placed relatively close together to ensure visibility. The labour needed to build and continually man so many stations taxed the already stretched-thin British military and there is doubt the New Brunswick line was ever in operation. With the exception of the towers around Halifax harbour, the system was abandoned shortly after the Duke's departure in August 1800. Malta The British military authorities began to consider installing a semaphore line in Malta in the early 1840s. Initially, it was planned that semaphore stations be established on the bell towers and domes of the island's churches, but the religious authorities rejected the proposal. 
Due to this, in 1848 new semaphore towers were constructed at Għargħur and Għaxaq on the main island, and another was built at Ta' Kenuna on Gozo. Further stations were established at the Governor's Palace, Selmun Palace and the Giordan Lighthouse. Each station was staffed by the Royal Engineers. India In India, semaphore towers were introduced in 1810. A series of towers were built between Fort William, Kolkata and Chunar Fort near Varanasi. The towers in the plains were tall and those in the hills were tall, and were built at an interval of about . Van Diemen's Land In southern Van Diemen's Land (Tasmania) a signalling system to announce the arrival of ships was suggested by Governor-in-Chief Lachlan Macquarie when he made his first visit in 1811. Initially a simple flag system in 1818 between Mt. Nelson and Hobart, it developed into a system with two revolving arms by 1829, but the system was quite crude and the arms were difficult to operate. In 1833 Charles O'Hara Booth took over command of the Port Arthur penal settlement; as an "enthusiast in the art of signalling", he saw the value of better communications with the headquarters in Hobart. During his command the semaphore system was extended to include 19 stations on the various mountains and islands between Port Arthur and Hobart. Until 1837 three single rotating-arm semaphores were used. Subsequently the network was upgraded to use signal posts with six arms: a pair at the top, middle and bottom. This enabled the semaphore to send 999 signal codes. Captain George King of the Port Office and Booth together contributed to the code book for the system. King drew up shipping-related codes and Booth added government, military and penal station matters. In 1877 Port Arthur was closed and the semaphore was operated for shipping signals only; it was finally replaced with a simple flagstaff after the introduction of the telephone in 1880. In the north of the state there was a requirement to report on shipping arrivals as they entered the Tamar Estuary, some 55 kilometers from Launceston, the main port at this time. The Tamar Valley Semaphore System was based on a design by Peter Archer Mulgrave. This design used two arms, one with a cross piece at the end. The arms were rotated by ropes, and later chains. The barred arm positions indicated the numbers 1 to 6, clockwise from the bottom left, and the unbarred arm indicated 7, 8, 9, STOP and REPEAT. A message was sent by signalling numbers sequentially to make up a code. As with other systems, the code was decoded via a code book. On 1 October 1835 it was announced in the Launceston Advertiser: "...that the signal stations are now complete from Launceston to George Town, and communication may be made, as well as received, from the Windmill Hill to George Town, in a very few minutes, on a clear day". The system comprised six stations: Launceston Port Office, Windmill Hill, Mt. Direction, Mt. George, George Town Port Office, and Low Head lighthouse. The Tamar Valley semaphore telegraph operated for twenty-two and a half years, closing on 31 March 1858 after the introduction of the electric telegraph. In the 1990s the Tamar Valley Signal Station Committee Inc. was formed to restore the system. The works were carried out over several years and the semaphore telegraph was declared complete once more on Sunday 30 September 2001.
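The Tamar Valley scheme described above lends itself to a very short schematic encoder; the sketch below is illustrative only, since the source does not say how (or whether) a zero digit was shown, and the signal descriptions and function names are invented for the example.

```python
# Schematic encoder for the Tamar Valley two-arm signals described above
# (illustrative only: the signal wording, the function names and the handling of the
# digit 0, which the source does not describe, are assumptions).

def digit_signal(d):
    if 1 <= d <= 6:
        return f"barred arm at position {d}"
    if 7 <= d <= 9:
        return f"unbarred arm showing {d}"
    raise ValueError("how a zero was signalled is not described in the source")

def signal_code(code_number):
    """Send a code-book number digit by digit, then the STOP signal."""
    return [digit_signal(int(c)) for c in str(code_number)] + ["unbarred arm showing STOP"]

print(signal_code(472))   # three digit signals followed by STOP
```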
Iberia Spain In Spain, the engineer Agustín de Betancourt developed his own system, which was adopted by that state; in 1798 he received a Royal Appointment, and the first stretch of line connecting Madrid and Aranjuez was in operation as of August 1800. Spain was spanned by an extensive semaphore telegraph network in the 1840s and 1850s. The three main semaphore lines radiated from Madrid. The first ran north to Irun on the Atlantic coast at the French border. The second ran east to the Mediterranean, then north along the coast through Barcelona to the French border. The third ran south to Cadiz on the Atlantic coast. These lines served many other Spanish cities, including Aranjuez, Badajoz, Burgos, Castellon, Ciudad Real, Córdoba, Cuenca, Gerona, Pamplona, San Sebastian, Seville, Tarancon, Tarragona, Toledo, Valladolid, Valencia, Vitoria and Zaragoza. The rugged topography of the Iberian Peninsula, which facilitated the design of semaphore lines conveying information from hilltop to hilltop, made it difficult to implement wire telegraph lines when that technology was introduced in the mid-19th century. The Madrid-Cadiz line was the first to be dismantled, in 1855, but other segments of the optical system continued to function until the end of the Carlist Wars in 1876. Portugal In Portugal, the British forces fighting Napoleon soon found that the Portuguese Army already had a very capable terrestrial semaphore system, working since 1808, which gave the Duke of Wellington a decisive advantage in intelligence. The innovative Portuguese telegraphs, designed by , a mathematician, were of 3 types: 3 shutters, 3 balls and 1 pointer/moveable arm. He also wrote the code book "Táboas Telegráphicas", which was the same for all 3 telegraph types. Since early 1810 the network was operated by the "Corpo Telegráfico", the first Portuguese military Signal Corps. Other regions Once it had proved successful in France, the optical telegraph was imitated in many other countries, especially after it was used by Napoleon to coordinate his empire and army. In most of these countries, the postal authorities operated the semaphore lines. Many national services adopted signalling systems different from the Chappe system. For example, the UK and Sweden adopted systems of shuttered panels (in contradiction to the Chappe brothers' contention that angled rods are more visible). In some cases, new systems were adopted because they were thought to be improvements. But many countries pursued their own, often inferior, designs for reasons of national pride or not wanting to copy from rivals and enemies. In 1801, the Danish post office installed a semaphore line across the Great Belt strait, Storebæltstelegrafen, between the islands of Funen and Zealand, with stations at Nyborg on Funen, on the small island of Sprogø in the middle of the strait, and at Korsør on Zealand. It was in use until 1865. In the Kingdom of Prussia, Frederick William III ordered the construction of an experimental line in 1819, but due to opposition from the defence minister Karl von Hake, nothing happened until 1830, when a short three-station line between Berlin and Potsdam was built. The design was based on the Swedish telegraph, with the number of shutters increased to twelve. Postrat Carl Pistor proposed instead a semaphore system based on Watson's design in England. An operational line of this design running Berlin-Magdeburg-Dortmund-Köln-Bonn-Koblenz was completed in 1833. 
The line employed about 200 people, comparable to Sweden, but no network ever developed and no more official lines were built. The line was decommissioned in 1849 in favour of an electrical line. Although there were no more government-sponsored official lines, there was some private enterprise. Johann Ludwig Schmidt opened a commercial line from Hamburg to Cuxhaven in 1837. In 1847, Schmidt opened a second line from Bremen to Bremerhaven. These lines were used for reporting the arrival of commercial ships. The two lines were later linked with three additional stations to create possibly the only private telegraph network in the optical telegraph era. The telegraph inspector for this network was Friedrich Clemens Gerke, who would later move to the Hamburg-Cuxhaven electrical telegraph line and develop what became the International Morse Code. The Hamburg line went out of use in 1850, and the Bremen line in 1852. In Russia, Tsar Nicholas I inaugurated a line between Moscow and Warsaw of length in 1833; it needed 220 stations staffed by 1,320 operators. The stations were noted to be unused and decaying in 1859, so the line was probably abandoned long before this. In the United States, the first optical telegraph was built by Jonathan Grout in 1804 but ceased operation in 1807. This line, between Martha's Vineyard and Boston, transmitted shipping news. An optical telegraph system linking Philadelphia and the mouth of the Delaware Bay was in place by 1809 and had a similar purpose; a second line to New York City was operational by 1834, when its Philadelphia terminus was moved to the tower of the Merchants Exchange. One of the principal hills in San Francisco, California, is also named "Telegraph Hill", after the semaphore telegraph which was established there in 1849 to signal the arrival of ships into San Francisco Bay. As first data networks The optical telegraphs put in place at the turn of the 18th/19th centuries were the first examples of data networks. Chappe and Edelcrantz independently invented many features that are now commonplace in modern networks, but were then revolutionary and essential to the smooth running of the systems. These features included control characters, routing, error control, flow control, message priority and symbol rate control. Edelcrantz documented the meaning and usage of all his control codes from the start in 1794. The details of the early Chappe system are not known precisely; the first operating instructions to survive date to 1809, and the French system is not as fully explained as the Swedish. Some of the features of these systems are considered advanced in modern practice and have been recently reinvented. An example of this is the error control codepoint 707 in the Edelcrantz code. This was used to request the repeat of a specified recent symbol. The 707 was followed by two symbols identifying, by row and column in the current page of the logbook, the entry that was required to be repeated. This is an example of a selective repeat and is more efficient than the simple go-back-n strategy used on many modern networks. This was a later addition; both Edelcrantz (codepoint 272) and Chappe (codepoint 2H6) initially used only a simple "erase last character" for error control, taken directly from Hooke's 1684 proposal. Routing in the French system was almost permanently fixed; only Paris and the station at the remote end of a line were allowed to initiate a message. 
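The practical difference between Edelcrantz's 707 request and a go-back-n scheme can be made concrete with a small sketch. This is an illustrative model only, not a reconstruction of the 1809 operating instructions: the symbols and the list index used here are invented, and a real station would identify the missing symbol by its logbook row and column.

```python
# Illustrative comparison of the two retransmission strategies described above.
# A receiver that failed to read one symbol can either name that symbol alone
# (selective repeat, as with Edelcrantz's codepoint 707 pointing into the logbook)
# or ask the sender to back up and resend everything from that point onwards
# (go-back-n).

def selective_repeat(sent, missing_index):
    """Resend only the symbol the receiver asked for."""
    return [sent[missing_index]]

def go_back_n(sent, missing_index):
    """Resend the missing symbol and every symbol transmitted after it."""
    return sent[missing_index:]

message = ["125", "304", "521", "460", "713"]   # symbols as originally sent
lost = 1                                        # the second symbol was misread

print(len(selective_repeat(message, lost)))     # 1 symbol retransmitted
print(len(go_back_n(message, lost)))            # 4 symbols retransmitted
```

On a line where every extra symbol cost signalling time at each station along the chain, resending one symbol instead of several was a substantial saving, which is the sense in which the selective-repeat request is the more efficient strategy.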
The early Swedish system was more flexible, having the ability to set up message connections between arbitrary stations. Similar to modern networks, the initialisation request contained the identification of the requesting and target station. The request was acknowledged by the target station by sending the complement of the code received. This protocol is unique with no modern equivalent. This facility was removed from the codebook in the revision of 1808. After this, only Stockholm would normally initiate messages with other stations waiting to be polled. The Prussian system required the Coblenz station (at the end of the line) to send a "no news" message (or a real message if there was one pending) back to Berlin on the hour, every hour. Intermediate stations could only pass messages by replacing the "no news" message with their traffic. On arrival in Berlin, the "no news" message was returned to Coblenz with the same procedure. This can be considered an early example of a token passing system. This arrangement required accurate clock synchronisation at all the stations. A synchronisation signal was sent out from Berlin for this purpose every three days. Another feature that would be considered advanced in a modern electronic system is the dynamic changing of transmission rates. Edelcrantz had codepoints for faster (770) and slower (077). Chappe also had this feature. In popular culture By the mid-19th century, the optical telegraph was well known enough to be referenced in popular works without special explanation. The Chappe telegraph appeared in contemporary fiction and comic strips. In "Mister Pencil" (1831), a comic strip by Rodolphe Töpffer, a dog fallen on a Chappe telegraph's arm—and its master attempting to help get it down—provoke an international crisis by inadvertently transmitting disturbing messages. In Lucien Leuwen (1834), Stendhal pictures a power struggle between Lucien Leuwen and the prefect M. de Séranville with the telegraph's director M. Lamorte. In Chapter 60 ("The Telegraph") of Alexandre Dumas' The Count of Monte Cristo (1844), the title character describes with fascination the semaphore line's moving arms: "I had at times seen rise at the end of a road, on a hillock and in the bright light of the sun, these black folding arms looking like the legs of an immense beetle." He later bribes a semaphore operator to relay a false message in order to manipulate the French financial market. Dumas also describes in detail the functioning of a Chappe telegraph line. In Hector Malot's novel Romain Kalbris (1869), one of the characters, a girl named Dielette, describes her home in Paris as "...next to a church near which there was a clock tower. On top of the tower there were two large black arms, moving all day this way and that. [I was told later] that this was Saint-Eustache church and that these large black arms were a telegraph." In the 21st century, the optical telegraph concept is mainly kept alive in popular culture through fiction such as the novel Pavane and Terry Pratchett's "Clacks" in his Discworld novels, most notably the 2004 novel Going Postal. 
See also History of telecommunication Telegraph code, for more information on many of the codes used Optical communication Polybius square Railway signalling San Jose Semaphore Semaphore Flag Signaling System Signal lamp Telegraph Hill, for a list of telegraph hills Wigwag, a flag signaling system that also used telescopes and towers Notes References Citations Bibliography Crowley, David and Heyer, Paul (eds) (2003) "Chapter 17: The optical telegraph", Communication in History: Technology, Culture and Society (Fourth Edition), Allyn and Bacon, Boston, pp. 123–125 Edelcrantz, Abraham Niclas, Afhandling om Telegrapher ("A Treatise on Telegraphs"), 1796, as translated in ch. 4 of Holzmann & Pehrson. Holzmann, Gerard J.; Pehrson, Bjorn, The Early History of Data Networks, John Wiley & Sons, 1995. Huurdeman, Anton A., The Worldwide History of Telecommunications, John Wiley & Sons, 2003. Further reading The Victorian Internet, Tom Standage, Walker & Company, 1998. The Old Telegraphs, Geoffrey Wilson, Phillimore & Co Ltd, 1976. Faster Than The Wind, The Liverpool to Holyhead Telegraph, Frank Large, an avid publication. The Early History of Data Networks, Gerard Holzmann and Bjorn Pehrson, Wiley Publ., 2003. External links Chappe's semaphore (an illustrated history of optical telegraphy) Webpage including a map of England's telegraph chains Diagrams and maps of Murray's U.K. semaphore stations Chart of Murray's shutter-semaphore code Photo and diagrams of Popham's U.K. semaphore stations Map of visual telegraph (semaphore) and electrical telegraph lines in Italy, 1860 Details on the history of the Blanc brothers' fraudulent use of the semaphore line Live recreation of the Spanish optical telegraph code History of telecommunications Latin-script representations Napoleonic beacons in England Optical communications Telegraphy Semaphore French inventions
Optical telegraph
[ "Engineering" ]
9,689
[ "Optical communications", "Telecommunications engineering" ]
163,389
https://en.wikipedia.org/wiki/Diamond%20dust
Diamond dust is a ground-level cloud composed of tiny ice crystals. This meteorological phenomenon is also referred to simply as ice crystals and is reported in the METAR code as IC. Diamond dust generally forms under otherwise clear or nearly clear skies, so it is sometimes referred to as clear-sky precipitation. Diamond dust is most commonly observed in Antarctica and the Arctic, but can occur anywhere with a temperature well below freezing. In the polar regions of Earth, diamond dust may persist for several days without interruption. Characteristics Diamond dust is similar to fog in that it is a cloud based at the surface; however, it differs from fog in two main ways. Generally fog refers to a cloud composed of liquid water (the term ice fog usually refers to a fog that formed as liquid water and then froze, and frequently seems to occur in valleys with airborne pollution such as Fairbanks, Alaska, while diamond dust forms directly as ice). Also, fog is a dense-enough cloud to significantly reduce visibility, while diamond dust is usually very thin and may not have any effect on visibility (there are far fewer crystals in a volume of air than there are droplets in the same volume with fog). Because mist is often classified to be more transparent than fog, diamond dust has often been referred to as ice mist. However, diamond dust still can often reduce the visibility, in some cases to under . The depth of the diamond dust layer can vary substantially from as little as to . Because diamond dust does not always reduce visibility it is often first noticed by the brief flashes caused when the tiny crystals, tumbling through the air, reflect sunlight to the eye. This glittering effect gives the phenomenon its name since it looks like many tiny diamonds are flashing in the air. Formation These ice crystals usually form when a temperature inversion is present at the surface and the warmer air above the ground mixes with the colder air near the surface. Since warmer air frequently contains more water vapor than colder air, this mixing will usually also transport water vapor into the air near the surface, causing the relative humidity of the near-surface air to increase. If the relative humidity increase near the surface is large enough then ice crystals may form. To form diamond dust the temperature must be below the freezing point of water, , or the ice cannot form or would melt. However, diamond dust is not often observed at temperatures near . At temperatures between and about increasing the relative humidity can cause either fog or diamond dust. This is because very small droplets of water can remain liquid well below the freezing point, a state known as supercooled water. In areas with a lot of small particles in the air, from human pollution or natural sources like dust, the water droplets are likely to be able to freeze at a temperature around , but in very clean areas, where there are no particles (ice nuclei) to help the droplets freeze, they can remain liquid to , at which point even very tiny, pure water droplets will freeze. In the interior of Antarctica diamond dust is fairly common at temperatures below about . Artificial diamond dust can form from snow machines which blow ice crystals into the air. These are found at ski resorts. Diamond dust may also be observed immediately downwind from manufacturing facilities or chilled water plants that produce steam. Optical properties Diamond dust is often associated with halos, such as sun dogs, light pillars, etc. 
Like the ice crystals in cirrus or cirrostratus clouds, diamond dust crystals form directly as simple hexagonal ice crystals — as opposed to freezing drops — and generally form slowly. This combination results in crystals with well-defined shapes (usually either hexagonal plates or columns) which, like a prism, can reflect and/or refract light in specific directions. Climatology While diamond dust can be seen in any area of the world that has cold winters, it is most frequent in the interior of Antarctica, where it is common year-round. Schwerdtfeger (1970) shows that diamond dust was observed on average 316 days a year at Plateau Station in Antarctica, and Radok and Lile (1977) estimate that over 70% of the precipitation that fell at Plateau Station in 1967 fell in the form of diamond dust. Once melted, the total precipitation for the year was only . Weather reporting and interference Diamond dust may sometimes cause a problem for automated airport weather stations. The ceilometer and visibility sensor do not always correctly interpret the falling diamond dust and report the visibility and ceiling as zero (overcast skies). However, a human observer would correctly notice clear skies and unrestricted visibility. The METAR identifier for diamond dust within international hourly weather reports is IC. See also Crepuscular rays Light beam False sunrise False sunset References Further reading — An excellent reference for optical phenomena including photos of displays in Antarctica caused by diamond dust. Photo of artificial Diamond Dust External links A remarkable video filmed in Hokkaido, Japan. 1min 22sec HQ Longer version of the above video. 5min 10sec HD Note that images are different from the naked eye in that they capture out-of-focus crystals which are shown as large, blurred objects. The Science Behind Diamond Dust: How It Reflects Solar Radiation Psychrometrics Precipitation Water ice Snow or ice weather phenomena Atmospheric optical phenomena Articles containing video clips
Diamond dust
[ "Physics" ]
1,074
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
163,390
https://en.wikipedia.org/wiki/Feeling
According to the APA Dictionary of Psychology, a feeling is "a self-contained phenomenal experience"; feelings are "subjective, evaluative, and independent of the sensations, thoughts, or images evoking them". The term feeling is closely related to, but not the same as, emotion. Feeling may, for instance, refer to the conscious subjective experience of emotions. The study of subjective experiences is called phenomenology. Psychotherapy generally involves a therapist helping a client understand, articulate, and learn to effectively regulate the client's own feelings, and ultimately to take responsibility for the client's experience of the world. Feelings are sometimes held to be characteristic of embodied consciousness. The English noun feelings may generally refer to any degree of subjectivity in perception or sensation. However, feelings often refer to an individual sense of well-being (perhaps of wholeness, safety, or being loved). Feelings have a semantic field extending from the individual and spiritual to the social and political. The word feeling may refer to any of a number of psychological characteristics of experience, or even to reflect the entire inner life of the individual (see Mood.) As self-contained phenomenal experiences, evoked by sensations and perceptions, feelings can strongly influence the character of a person's subjective reality. Feelings can sometimes harbor bias or otherwise distort veridical perception, in particular through projection, wishful thinking, among many other such effects. Feeling may also describe the senses, such as the physical sensation of touch. History The modern conception of affect developed in the 19th century with Wilhelm Wundt. The word comes from the German Gefühl, meaning "feeling." A number of experiments have been conducted in the study of social and psychological affective preferences (i.e., what people like or dislike). Specific research has been done on preferences, attitudes, impression formation, and decision-making. This research contrasts findings with recognition memory (old-new judgments), allowing researchers to demonstrate reliable distinctions between the two. Affect-based judgments and cognitive processes have been examined with noted differences indicated. Some argue affect and cognition are under the control of separate and partially independent systems that can influence each other in a variety of ways (Zajonc, 1980). Both affect and cognition may constitute independent sources of effects within systems of information processing. Others suggest emotion is a result of an anticipated, experienced, or imagined outcome of an adaptational transaction between organism and environment, therefore cognitive appraisal processes are keys to the development and expression of an emotion (Lazarus, 1982). Emotions (in relation to feelings) Difference between feeling and emotion The neuroscientist Antonio Damasio distinguishes between emotions and feelings: Emotions are mental images (i.e. representing either internal or external states of reality) and the bodily changes accompanying them, whereas feelings are the perception of bodily changes. In other words, emotions contain a subjective element and a 3rd person observable element, whereas feelings are subjective and private. In general usage, the terms emotion and feelings are used as synonyms or interchangeable, but actually, they are not. 
A feeling is a conscious experience created after the physical sensation or emotional experience, whereas an emotion is felt through the emotional experience itself. Emotions are manifested in the unconscious mind and can be associated with thoughts, desires, and actions. Sensations Sensation occurs when sense organs collect various stimuli (such as a sound or smell) for transduction, meaning transformation into a form that can be understood by the nervous system. Interoception Gut A gut feeling, or gut reaction, is a visceral emotional reaction to something. It may be negative, such as a feeling of uneasiness, or positive, such as a feeling of trust. Gut feelings are generally regarded as not modulated by conscious thought, but sometimes as a feature of intuition rather than rationality. The idea that emotions are experienced in the gut has a long historical legacy, and many nineteenth-century doctors considered the origins of mental illness to derive from the intestines. The phrase "gut feeling" may also be used as a shorthand term for an individual's "common sense" perception of what is considered "the right thing to do", such as helping an injured passerby, avoiding dark alleys and generally acting in accordance with instinctive feelings about a given situation. It can also refer to simple common knowledge phrases which are true no matter when said, such as "Fire is hot", or to ideas that an individual intuitively regards as true (see "truthiness" for examples). Heart The heart has a collection of ganglia that is called the "intrinsic cardiac nervous system". Feelings of affiliation, love, attachment, anger and hurt are usually associated with the heart, especially the feeling of love. Needs A need is something required to sustain a healthy life (e.g. air, water, food). A (need) deficiency causes a clear adverse outcome: a dysfunction or death. Abraham H. Maslow pointed out that satisfying (i.e., gratifying) a need is just as important as deprivation (i.e., the motivation to satisfy it), for gratification releases the focus from the satisfied need to other, emergent needs. Motivation Motivation is what explains why people or animals initiate, continue or terminate a certain behavior at a particular time. Motivational states are commonly understood as forces acting within the agent that create a disposition to engage in goal-directed behavior. It is often held that different mental states compete with each other and that only the strongest state determines behavior. Valence Valence tells an organism (e.g., a human) how well or how badly it is doing, in relation to its environment, at meeting its needs. Perception Feelings of certainty The way that we see other people express their emotions or feelings determines how we respond. The way an individual responds to a situation is based on feeling rules. If an individual is uninformed about a situation, they will respond in a completely different manner than if they were informed about it. For example, if a tragic event had occurred and they had knowledge of it, their response would be sympathetic to that situation. If they had no knowledge of the situation, then their response may be one of indifference. A lack of knowledge or information about an event can shape the way an individual sees things and the way they respond. Timothy D. Wilson, a psychology professor, tested this theory of the feeling of uncertainty along with his colleague Yoav Bar-Anan, a social psychologist. 
Wilson and Bar-Anan found that the more uncertain or unclear an individual is about a situation, the more invested they are. Since an individual does not know the background or the ending of a story they are constantly replaying an event in their mind which is causing them to have mixed feelings of happiness, sadness, excitement, and et cetera. If there is any difference between feelings and emotions, the feeling of uncertainty is less sure than the emotion of ambivalence: the former is precarious, the latter is not yet acted upon or decided upon. The neurologist Robert Burton, writes in his book On Being Certain, that feelings of certainty may stem from involuntary mental sensations, much like emotions or perceptual recognition (another example might be the tip of the tongue phenomenon). Individuals in society want to know every detail about something in hopes to maximize the feeling for that moment, but Wilson found that feeling uncertain can lead to something being more enjoyable because it has a sense of mystery. In fact, the feeling of not knowing can lead them to constantly think and feel about what could have been. Sense of agency & sense of ownership Feelings about feelings Individuals in society predict that something will give them a certain desired outcome or feeling. Indulging in what one might have thought would've made them happy or excited might only cause a temporary thrill, or it might result in the opposite of what was expected and wanted. Events and experiences are done and relived to satisfy one's feelings. Details and information about the past are used to make decisions, as past experiences of feelings tend to influence current decision-making, how people will feel in the future, and if they want to feel that way again. Gilbert and Wilson conducted a study to show how pleased a person would feel if they purchased flowers for themselves for no specific reason (birthday, anniversary, or promotion etc.) and how long they thought that feeling would last. People who had no experience of purchasing flowers for themselves and those who had experienced buying flowers for themselves were tested. Results showed that those who had purchased flowers in the past for themselves felt happier and that feeling lasted longer for them than for a person who had never experienced purchasing flowers for themselves. Arlie Russell Hochschild, a sociologist, depicted two accounts of emotion. The organismic emotion is the outburst of emotions and feelings. In organismic emotion, emotions/feelings are instantly expressed. Social and other factors do not influence how the emotion is perceived, so these factors have no control on how or if the emotion is suppressed or expressed. In interactive emotion, emotions and feelings are controlled. The individual is constantly considering how to react or what to suppress. In interactive emotion, unlike in organismic emotion, the individual is aware of their decision on how they feel and how they show it. Erving Goffman, a sociologist and writer, compared how actors withheld their emotions to the everyday individual. Like actors, individuals can control how emotions are expressed, but they cannot control their inner emotions or feelings. Inner feelings can only be suppressed in order to achieve the expression one wants people to see on the outside. Goffman explains that emotions and emotional experience are an ongoing thing that an individual is consciously and actively working through. 
Individuals want to conform to society with their inner and outer feelings. Anger, happiness, joy, stress, and excitement are some of the feelings that can be experienced in life. In response to these emotions, our bodies react as well. For example, nervousness can lead to the sensation of having "knots in the stomach" or "butterflies in the stomach". Self-harm Negative feelings can lead to harm. When an individual is dealing with an overwhelming amount of stress and problems in their lives, there is the possibility that they might consider self-harm. When one is in a good state of feeling, they never want it to end; conversely, when someone is in a bad state of mind, they want that feeling to disappear. Inflicting harm or pain to oneself is sometimes the answer for many individuals because they want something to keep their mind off the real problem. These individuals cut, stab, and starve themselves in an effort to feel something other than what they currently feel, as they believe the pain to be not as bad as their actual problem. Distraction is not the only reason why many individuals choose to inflict self-harm. Some people inflict self-harm to punish themselves for feeling a certain way. Other psychological factors could be low self-esteem, the need to be perfect, social anxiety, and so much more. See also Affect Alexithymia Consciousness Cognitive neuroscience Emotion in animals Hard problem of consciousness Intuition Mind–body problem Mood Myers–Briggs Type Indicator Needs Pain Psychological Types Psychosomatic illness Qualia Sensation (psychology) Vedanā, the Buddhist concept of feeling Footnotes External links Further reading Madge, N., Hewitt, A., Hawton, K., Wilde, E.J.D., Corcoran, P., Fekete, S., Heeringen, K.V., Leo, D.D. and Ystgaard, M., 2008. Deliberate self‐harm within an international community sample of young people: comparative findings from the Child & Adolescent Self‐harm in Europe (CASE) Study. Journal of child Psychology and Psychiatry, 49(6), pp. 667–677. Mruk, C. (2006). Self-Esteem research, theory, and practice: Toward a positive psychology of self-esteem (3rd ed.). New York: Springer. Subjective experience
Feeling
[ "Biology" ]
2,478
[ "Emotion", "Behavior", "Human behavior" ]
163,503
https://en.wikipedia.org/wiki/The%20Wiki%20Way
The Wiki Way: Quick Collaboration on the Web is a 2001 book about wikis by Bo Leuf and Ward Cunningham. It was the first major book published about using wikis. Cunningham invented wikis when he wrote WikiWikiWeb, the first wiki website software. The book is about how to manage wiki systems, followed by a perspective on the nature of wiki-style online communication. Reception Eugene Eric Kim wrote in Web Techniques that "Leuf and Cunningham do a good job of explaining what a Wiki is" and said "The Wiki Way is about the way we work, and that makes it a worthwhile read." David Mattison of Searcher tried the book's QuickiWiki script. Simon Worthington stated, "The Wiki Way book is a manifesto and a software manual in one, with the essentials for Wiki installation attached on CD." References External links Book homepage Copy of the book in Archive.org Wikis Books about the Internet 2001 non-fiction books Addison-Wesley books
The Wiki Way
[ "Technology" ]
216
[ "Computing stubs", "Computer book stubs" ]
163,526
https://en.wikipedia.org/wiki/Hacker%20culture
The hacker culture is a subculture of individuals who enjoy—often in collective effort—the intellectual challenge of creatively overcoming the limitations of software systems or electronic hardware (mostly digital electronics), to achieve novel and clever outcomes. The act of engaging in activities (such as programming or other media) in a spirit of playfulness and exploration is termed hacking. However, the defining characteristic of a hacker is not the activities performed themselves (e.g. programming), but how they are done and whether they are exciting and meaningful. Activities of playful cleverness can be said to have "hack value", and therefore the term "hacks" came about, with early examples including pranks at MIT done by students to demonstrate their technical aptitude and cleverness. The hacker culture originally emerged in academia in the 1960s around the Massachusetts Institute of Technology (MIT)'s Tech Model Railroad Club (TMRC) and MIT Artificial Intelligence Laboratory. Hacking originally involved entering restricted areas in a clever way without causing any major damage. Some famous hacks at the Massachusetts Institute of Technology were the placing of a campus police cruiser on the roof of the Great Dome and converting the Great Dome into R2-D2. Richard Stallman explains about hackers who program: Hackers from this subculture tend to emphatically differentiate themselves from those they pejoratively call "crackers": those who are generally referred to by media and members of the general public using the term "hacker", and whose primary focus, be it to malign or for malevolent purposes, lies in exploiting weaknesses in computer security. Definition The Jargon File, an influential but not universally accepted compendium of hacker slang, defines hacker as "A person who enjoys exploring the details of programmable systems and stretching their capabilities, as opposed to most users, who prefer to learn only the minimum necessary." The Request for Comments (RFC) 1392, the Internet Users' Glossary, amplifies this meaning as "A person who delights in having an intimate understanding of the internal workings of a system, computers and computer networks in particular." As documented in the Jargon File, these hackers are disappointed by the mass media and general public's usage of the word hacker to refer to security breakers, calling them "crackers" instead. This includes both "good" crackers ("white hat hackers"), who use their computer security-related skills and knowledge to learn more about how systems and networks work and to help to discover and fix security holes, as well as those more "evil" crackers ("black hat hackers"), who use the same skills to author harmful software (such as viruses or trojans) and illegally infiltrate secure systems with the intention of doing harm to the system. The programmer subculture of hackers, in contrast to the cracker community, generally sees computer security-related activities as contrary to the ideals of the original and true meaning of the hacker term, which instead related to playful cleverness. History The word "hacker" derives from the Late Middle English words hackere, hakker, or hakkere, meaning one who cuts wood, a woodchopper, or woodcutter. Although the idea of "hacking", in the modern sense, existed long before the modern term "hacker", with the most notable example being Lightning Ellsworth, it was not a word that the first programmers used to describe themselves. In fact, many of the first programmers were from engineering or physics backgrounds. 
There was a growing awareness of a style of programming different from the cut and dried methods employed at first, but it was not until the 1960s that the term "hackers" began to be used to describe proficient computer programmers. Therefore, the fundamental characteristic that links all who identify themselves as hackers is that each is someone who enjoys "…the intellectual challenge of creatively overcoming and circumventing limitations of programming systems and who tries to extend their capabilities" (47). With this definition in mind, it can be clear where the negative implications of the word "hacker" and the subculture of "hackers" came from. Some common nicknames among this culture include "crackers", who are considered to be unskilled thieves who mainly rely on luck, and "phreaks", which refers to skilled crackers and "warez d00dz" (crackers who acquire reproductions of copyrighted software). Hackers who are hired to test security are called "pentesters" or "tiger teams". Before communications between computers and computer users were as networked as they are now, there were multiple independent and parallel hacker subcultures, often unaware or only partially aware of each other's existence. All of these had certain important traits in common: Creating software and sharing it with each other Placing a high value on freedom of inquiry Hostility to secrecy Information-sharing as both an ideal and a practical strategy Upholding the right to fork Emphasis on rationality Distaste for authority Playful cleverness, taking the serious humorously and humor seriously These sorts of subcultures were commonly found at academic settings such as college campuses. The MIT Artificial Intelligence Laboratory, the University of California, Berkeley and Carnegie Mellon University were particularly well-known hotbeds of early hacker culture. They evolved in parallel, and largely unconsciously, until the Internet, where a legendary PDP-10 machine at MIT, called AI, that was running ITS, provided an early meeting point of the hacker community. This and other developments such as the rise of the free software movement and community drew together a critically large population and encouraged the spread of a conscious, common, and systematic ethos. Symptomatic of this evolution were an increasing adoption of common slang and a shared view of history, similar to the way in which other occupational groups have professionalized themselves, but without the formal credentialing process characteristic of most professional groups. Over time, the academic hacker subculture has tended to become more conscious, more cohesive, and better organized. The most important consciousness-raising moments have included the composition of the first Jargon File in 1973, the promulgation of the GNU Manifesto in 1985, and the publication of Eric Raymond's The Cathedral and the Bazaar in 1997. Correlated with this has been the gradual recognition of a set of shared culture heroes, including: Bill Joy, Donald Knuth, Dennis Ritchie, Alan Kay, Ken Thompson, Richard M. Stallman, Linus Torvalds, Larry Wall, and Guido van Rossum. The concentration of academic hacker subculture has paralleled and partly been driven by the commoditization of computer and networking technology, and has, in turn, accelerated that process. 
In 1975, hackerdom was scattered across several different families of operating systems and disparate networks; today it is largely a Unix and TCP/IP phenomenon, and is concentrated around various operating systems based on free software and open-source software development. Ethics and principles Many of the values and tenets of the free and open source software movement stem from the hacker ethics that originated at MIT and at the Homebrew Computer Club. The hacker ethics were chronicled by Steven Levy in Hackers: Heroes of the Computer Revolution and in other texts in which Levy formulates and summarizes general hacker attitudes: Access to computers-and anything that might teach you something about the way the world works-should be unlimited and total. All information should be free. Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position. You can create art and beauty on a computer. Computers can change your life for the better. Hacker ethics are concerned primarily with sharing, openness, collaboration, and engaging in the hands-on imperative. Linus Torvalds, one of the leaders of the open source movement (known primarily for developing the Linux kernel), has noted in the book The Hacker Ethic that these principles have evolved from the known Protestant ethics and incorporates the spirits of capitalism, as introduced in the early 20th century by Max Weber. Hack value is the notion used by hackers to express that something is worth doing or is interesting. This is something that hackers often feel intuitively about a problem or solution. An aspect of hack value is performing feats for the sake of showing that they can be done, even if others think it is difficult. Using things in a unique way outside their intended purpose is often perceived as having hack value. Examples are using a dot matrix impact printer to produce musical notes, using a flatbed scanner to take ultra-high-resolution photographs or using an optical mouse as barcode reader. A solution or feat has "hack value" if it is done in a way that has finesse, cleverness or brilliance, which makes creativity an essential part of the meaning. For example, picking a difficult lock has hack value; smashing it does not. As another example, proving Fermat's Last Theorem by linking together most of modern mathematics has hack value; solving a combinatorial problem by exhaustively trying all possibilities does not. Hacking is not using process of elimination to find a solution; it's the process of finding a clever solution to a problem. Uses While using hacker to refer to someone who enjoys playful cleverness is most often applied to computer programmers, it is sometimes used for people who apply the same attitude to other fields. For example, Richard Stallman describes the silent composition 4′33″ by John Cage and the 14th-century palindromic three-part piece "Ma Fin Est Mon Commencement" by Guillaume de Machaut as hacks. According to the Jargon File, the word hacker was used in a similar sense among radio amateurs in the 1950s, predating the software hacking community. Programming The Boston Globe in 1984 defined "hackers" as "computer nuts". In their programmer subculture, a hacker is a person who follows a spirit of playful cleverness and loves programming. It is found in an originally academic movement unrelated to computer security and most visibly associated with free software, open source and demoscene. 
It also has a hacker ethic, based on the idea that writing software and sharing the result on a voluntary basis is a good idea, and that information should be free, but that it's not up to the hacker to make it free by breaking into private computer systems. This hacker ethic was publicized and perhaps originated in Steven Levy's Hackers: Heroes of the Computer Revolution (1984). It contains a codification of its principles. The programmer subculture of hackers disassociates from the mass media's pejorative use of the word 'hacker' referring to computer security, and usually prefer the term 'cracker' for that meaning. Complaints about supposed mainstream misuse started as early as 1983, when media used "hacker" to refer to the computer criminals involved in The 414s case. In the programmer subculture of hackers, a computer hacker is a person who enjoys designing software and building programs with a sense for aesthetics and playful cleverness. The term hack in this sense can be traced back to "describe the elaborate college pranks that...students would regularly devise" (Levy, 1984 p. 10). To be considered a 'hack' was an honor among like-minded peers as "to qualify as a hack, the feat must be imbued with innovation, style and technical virtuosity" (Levy, 1984 p. 10) The MIT Tech Model Railroad Club Dictionary defined hack in 1959 (not yet in a computer context) as "1) an article or project without constructive end; 2) a project undertaken on bad self-advice; 3) an entropy booster; 4) to produce, or attempt to produce, a hack(3)", and "hacker" was defined as "one who hacks, or makes them". Much of TMRC's jargon was later imported into early computing culture, because the club started using a DEC PDP-1 and applied its local model railroad slang in this computing context. Initially incomprehensible to outsiders, the slang also became popular in MIT's computing environments beyond the club. Other examples of jargon imported from the club are 'losing' ("when a piece of equipment is not working") and 'munged' ("when a piece of equipment is ruined"). Others did not always view hackers with approval. MIT living groups in 1989 avoided advertising their sophisticated Project Athena workstations to prospective members because they wanted residents who were interested in people, not computers, with one fraternity member stating that "We were worried about the hacker subculture". According to Eric S. Raymond, the Open Source and Free Software hacker subculture developed in the 1960s among 'academic hackers' working on early minicomputers in computer science environments in the United States. Hackers were influenced by and absorbed many ideas of key technological developments and the people associated with them. Most notable is the technical culture of the pioneers of the ARPANET, starting in 1969. The PDP-10 AI machine at MIT, running the ITS operating system and connected to the ARPANET, provided an early hacker meeting point. After 1980 the subculture coalesced with the culture of Unix. Since the mid-1990s, it has been largely coincident with what is now called the free software and open source movement. Many programmers have been labeled "great hackers", but the specifics of who that label applies to is a matter of opinion. 
Certainly major contributors to computer science such as Edsger Dijkstra and Donald Knuth, as well as the inventors of popular software such as Linus Torvalds (Linux), and Ken Thompson and Dennis Ritchie (Unix and C programming language) are likely to be included in any such list; see also List of programmers. People primarily known for their contributions to the consciousness of the programmer subculture of hackers include Richard Stallman, the founder of the free software movement and the GNU project, president of the Free Software Foundation and author of the famous Emacs text editor as well as the GNU Compiler Collection (GCC), and Eric S. Raymond, one of the founders of the Open Source Initiative and writer of the famous text The Cathedral and the Bazaar and many other essays, maintainer of the Jargon File (which was previously maintained by Guy L. Steele, Jr.). Within the computer programmer subculture of hackers, the term hacker is also used for a programmer who reaches a goal by employing a series of modifications to extend existing code or resources. In this sense, it can have a negative connotation of using inelegant kludges to accomplish programming tasks that are quick, but ugly, inelegant, difficult to extend, hard to maintain and inefficient. This derogatory form of the noun "hack" derives from the everyday English sense "to cut or shape by or as if by crude or ruthless strokes" [Merriam-Webster] and is even used among users of the positive sense of "hacker" who produces "cool" or "neat" hacks. In other words, to "hack" at an original creation, as if with an axe, is to force-fit it into being usable for a task not intended by the original creator, and a "hacker" would be someone who does this habitually. (The original creator and the hacker may be the same person.) This usage is common in both programming, engineering and building. In programming, hacking in this sense appears to be tolerated and seen as a necessary compromise in many situations. Some argue that it should not be, due to this negative meaning; others argue that some kludges can, for all their ugliness and imperfection, still have "hack value". In non-software engineering, the culture is less tolerant of unmaintainable solutions, even when intended to be temporary, and describing someone as a "hacker" might imply that they lack professionalism. In this sense, the term has no real positive connotations, except for the idea that the hacker is capable of doing modifications that allow a system to work in the short term, and so has some sort of marketable skills. However, there is always the understanding that a more skillful or technical logician could have produced successful modifications that would not be considered a "hack-job". The definition is similar to other, non-computer based uses of the term "hack-job". For instance, a professional modification of a production sports car into a racing machine would not be considered a hack-job, but a cobbled together backyard mechanic's result could be. Even though the outcome of a race of the two machines could not be assumed, a quick inspection would instantly reveal the difference in the level of professionalism of the designers. The adjective associated with hacker is "hackish" (see the Jargon file). In a very universal sense, hacker also means someone who makes things work beyond perceived limits in a clever way in general, without necessarily referring to computers, especially at MIT. 
That is, people who apply the creative attitude of software hackers in fields other than computing. This includes even activities that predate computer hacking, for example reality hackers or urban spelunkers (exploring undocumented or unauthorized areas in buildings). One specific example is clever pranks traditionally perpetrated by MIT students, with the perpetrator being called hacker. For example, when MIT students surreptitiously put a fake police car atop the dome on MIT's Building 10, that was a hack in this sense, and the students involved were therefore hackers. Other types of hacking are reality hackers, wetware hackers ("hack your brain"), and media hackers ("hack your reputation"). In a similar vein, a "hack" may refer to a math hack, that is, a clever solution to a mathematical problem. All of these uses have spread beyond MIT. Ethical Hacking CSO Online defined ethical hacking as going into devices and computer systems belonging to an organization, with its explicit permissions, to assess and test the efficacy of the organization's cybersecurity defenses. Generally, organizations engage the services of ethical hackers either through third-party cybersecurity firms or under contract. Their main job is to identify and fix security gaps before threat-actors find them and exploit them. This proactive approach to cybersecurity testing leads to significant cost savings for organizations. Ethical hacking is the process of software engines running real-world cyber threats to assess the survivability of a company's digital structure. Ethical hackers play the role of cyber attackers by executing assessments, penetration tests, and modeling tactics, techniques, and procedures used by threat-actors. This careful examination provides an organization with the identification of weaknesses in its security systems, enabling the organization to employ necessary measures towards fortifying its defense. Cyber-attacks can have significant financial implications for a company. In such cases, the organizations could have been saved from these gigantic financial losses by identifying and fixing the vulnerabilities discovered by an ethical hacker. Moreover, for smaller organizations, the impact can be even more dramatic as it can potentially save the business's very existence. Furthermore, the act of ethical hacking also molds the larger hacker culture. Hacking skills, traditionally associated with breaking the law, have changed dramatically with the emergence of ethical hacking. Ethical hacking helped legitimize hacking skills which can now be talked about publicly. This shift challenges the stereotypical perception of hackers as criminals, allowing for greater emphasis on their positive contributions to cybersecurity. Ethical hacking has drastically changed the public perception of hackers. Rather than viewing persons with hacker skills as perpetrators of cybercrime, they can be viewed as part of the solution in fighting against cybercrime. The ethical hacker with knowledge and expertise stands as guardian to the digital assets, working beforehand alongside organizations to build up a more secure online landscape. Ethical hacking is not only a proactive defense for organizations but also brings about the desired cultural revolution within the realm of the hacking fraternity. 
By focusing on the constructive application of hacking skills, ethical hacking has become an integral part of the collective effort to fortify cybersecurity and to redefine hackers' image in the public eye. Home computing enthusiasts In yet another context, a hacker is a computer hobbyist who pushes the limits of software or hardware. The home computer hacking subculture relates to the hobbyist home computing of the late 1970s, beginning with the availability of the MITS Altair. An influential organization was the Homebrew Computer Club. However, its roots go back further, to amateur radio enthusiasts. Amateur radio slang referred to creative tinkering to improve performance as "hacking" as early as the 1950s. A large overlap between hobbyist hackers and the programmer-subculture hackers existed during the Homebrew Club's days, but the interests and values of the two communities somewhat diverged. Today, the hobbyists focus on commercial computer and video games, software cracking and exceptional computer programming (demo scene). Also of interest to some members of this group is the modification of computer hardware and other electronic devices; see modding. Electronics hobbyists working on machines other than computers also fall into this category. This includes people who make simple modifications to graphing calculators, video game consoles, electronic musical keyboards or other devices (see CueCat for a notorious example) to expose or add functionality that the company which created the device did not intend for use by end users. A number of techno musicians have modified 1980s-era Casio SK-1 sampling keyboards to create unusual sounds by doing circuit bending: connecting wires to different leads of the integrated circuit chips. The results of these DIY experiments range from opening up previously inaccessible features that were part of the chip design to producing the strange, dis-harmonic digital tones that became part of the techno music style. Companies take different attitudes towards such practices, ranging from open acceptance (such as Texas Instruments for its graphing calculators and Lego for its Lego Mindstorms robotics gear) to outright hostility (such as Microsoft's attempts to lock out Xbox hackers or the DRM routines on Blu-ray Disc players designed to sabotage compromised players). In this context, a "hack" refers to a program that (sometimes illegally) modifies another program, often a video game, giving the user access to features otherwise inaccessible to them. As an example of this use, for Palm OS users (until the 4th iteration of this operating system), a "hack" refers to an extension of the operating system which provides additional functionality. The term also refers to people who cheat at video games using special software. This can also refer to the jailbreaking of iPhones. Hacker artists Hacker artists create art by hacking on technology as an artistic medium. This has extended the definition of the term and what it means to be a hacker. Such artists may work with graphics, computer hardware, sculpture, music and other audio, animation, video, software, simulations, mathematics, reactive sensory systems, text, poetry, literature, or any combination thereof. Dartmouth College musician Larry Polansky states: Another description is offered by Jenny Marketou: A successful software and hardware hacker artist is Mark Lottor (mkl), who has created the 3-D light art projects entitled the Cubatron and the Big Round Cubatron. 
This art is made using custom computer technology, with specially designed circuit boards and programming for microprocessor chips to manipulate the LED lights. Don Hopkins is a software hacker artist well known for his artistic cellular automata. This art, created by a cellular automata computer program, generates objects which randomly bump into each other and in turn create more objects and designs, similar to a lava lamp, except that the parts change color and form through interaction. Hopkins Says: Some hacker artists create art by writing computer code, and others, by developing hardware. Some create with existing software tools such as Adobe Photoshop or GIMP. The creative process of hacker artists can be more abstract than artists using non-technological media. For example, mathematicians have produced visually stunning graphic presentations of fractals, which hackers have further enhanced, often producing detailed and intricate graphics and animations from simple mathematical formulas. Art Burning Man Festival Computer art Computer music Digital art Demoscene Electronic art Electronic art music Electronica Experiments in Art and Technology Generative art Internet art Maker movement Media art Robotic art Software art Hacker art mentions "Vector in Open Space" by Gerfried Stocker 1996. Switch|Journal Jun 14 1998. Eye Weekly "Tag – who's it?" by Ingrid Hein, July 16, 1998. Linux Today "Playing the Open Source Game" by , Jul 5, 1999. Canterbury Christ Church University Library Resources by Subject – Art & Design, 2001. SuperCollider Workshop / Seminar Joel Ryan describes collaboration with hacker artists of Silicon Valley. 21 March 2002 Anthony Barker's Weblog on Linux, Technology and the Economy "Why Geeks Love Linux", Sept 2003. Live Art Research Gesture and Response in Field-Based Performance by Sha Xin Wei & Satinder Gill, 2005. Hackers, Who Are They "The Hackers Identity", October 2014. See also Cowboy coding: software development without the use of strict software development methodologies Demoscene History of free software Maker culture Unix philosophy References Further reading The Jargon File has had a role in acculturating hackers since its origins in 1975. These academic and literary works helped shape the academic hacker subculture: Olson, Parmy. (05-14-2013). We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency. . Coleman, Gabriella. (Nov 4, 2014). Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Verso Books. . Shantz, Jeff; Tomblin, Jordon (2014-11-28). Cyber Disobedience: Re://Presenting Online Anarchy. John Hunt Publishing. . External links A Brief History of Hackerdom Hack, Hackers, and Hacking (see Appendix A) Gabriella Coleman: The Anthropology of Hackers The Atlantic, 2010. Gabriella Coleman: Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous at Open Library Subcultures Computing and society Do it yourself Hobbies Culture
Hacker culture
[ "Technology" ]
5,432
[ "Computing and society" ]
163,806
https://en.wikipedia.org/wiki/Oil%20platform
An oil platform (also called an oil rig, offshore platform, oil production platform, etc.) is a large structure with facilities to extract and process petroleum and natural gas that lie in rock formations beneath the seabed. Many oil platforms will also have facilities to accommodate the workers, although it is also common to have a separate accommodation platform linked by bridge to the production platform. Most commonly, oil platforms engage in activities on the continental shelf, though they can also be used in lakes, inshore waters, and inland seas. Depending on the circumstances, the platform may be fixed to the ocean floor, consist of an artificial island, or float. In some arrangements the main facility may have storage facilities for the processed oil. Remote subsea wells may also be connected to a platform by flow lines and by umbilical connections. These sub-sea facilities may include one or more subsea wells or manifold centres for multiple wells. Offshore drilling presents environmental challenges, both from the produced hydrocarbons and the materials used during the drilling operation. Controversies include the ongoing US offshore drilling debate. There are many different types of facilities from which offshore drilling operations take place. These include bottom-founded drilling rigs (jackup barges and swamp barges), combined drilling and production facilities, either bottom-founded or floating platforms, and deepwater mobile offshore drilling units (MODU), including semi-submersibles and drillships. These are capable of operating in water depths up to . In shallower waters, the mobile units are anchored to the seabed. However, in deeper water (more than ), the semisubmersibles or drillships are maintained at the required drilling location using dynamic positioning. History Jan Józef Ignacy Łukasiewicz (Polish pronunciation: [iɡˈnatsɨ wukaˈɕɛvitʂ]; 8 March 1822 – 7 January 1882) was a Polish pharmacist, engineer, businessman, inventor, and philanthropist. He was one of the most prominent philanthropists in the Kingdom of Galicia and Lodomeria, crown land of Austria-Hungary. He was a pioneer who in 1856 built the world's first modern oil refinery. Around 1891, the first submerged oil wells were drilled from platforms built on piles in the fresh waters of the Grand Lake St. Marys (a.k.a. Mercer County Reservoir) in Ohio. The wide but shallow reservoir was built from 1837 to 1845 to provide water to the Miami and Erie Canal. Around 1896, the first submerged oil wells in salt water were drilled in the portion of the Summerland field extending under the Santa Barbara Channel in California. The wells were drilled from piers extending from land out into the channel. Other notable early submerged drilling activities occurred on the Canadian side of Lake Erie beginning in 1913 and on Caddo Lake in Louisiana in the 1910s. Shortly thereafter, wells were drilled in tidal zones along the Gulf Coast of Texas and Louisiana. The Goose Creek field near Baytown, Texas, is one such example. In the 1920s, drilling was done from concrete platforms in Lake Maracaibo, Venezuela. The oldest offshore well recorded in Infield's offshore database is the Bibi Eibat well, which came on stream in 1923 in Azerbaijan. Landfill was used to raise shallow portions of the Caspian Sea. In the early 1930s, the Texas Company developed the first mobile steel barges for drilling in the brackish coastal areas of the gulf. 
In 1937, Pure Oil Company (now Chevron Corporation) and its partner Superior Oil Company (now part of ExxonMobil Corporation) used a fixed platform to develop a field in of water, one mile (1.6 km) offshore of Calcasieu Parish, Louisiana. In 1938, Humble Oil built a mile-long wooden trestle with railway tracks into the sea at McFadden Beach on the Gulf of Mexico, placing a derrick at its end – this was later destroyed by a hurricane. In 1945, concern for American control of its offshore oil reserves caused President Harry Truman to issue an Executive Order unilaterally extending American territory to the edge of its continental shelf, an act that effectively ended the 3-mile limit "freedom of the seas" regime. In 1946, Magnolia Petroleum (now ExxonMobil) drilled at a site off the coast, erecting a platform in of water off St. Mary Parish, Louisiana. In early 1947, Superior Oil erected a drilling/production platform in of water some 18 miles off Vermilion Parish, Louisiana. But it was Kerr-McGee Oil Industries (now part of Occidental Petroleum), as operator for partners Phillips Petroleum (ConocoPhillips) and Stanolind Oil & Gas (BP), that completed its historic Ship Shoal Block 32 well in October 1947, months before Superior actually drilled a discovery from their Vermilion platform farther offshore. In any case, that made Kerr-McGee's well the first oil discovery drilled out of sight of land. The British Maunsell Forts constructed during World War II are considered the direct predecessors of modern offshore platforms. Having been pre-constructed in a very short time, they were then floated to their location and placed on the shallow bottom of the Thames and the Mersey estuary. In 1954, the first jackup oil rig was ordered by Zapata Oil. It was designed by R. G. LeTourneau and featured three electro-mechanically operated lattice-type legs. Built on the shores of the Mississippi River by the LeTourneau Company, it was launched in December 1955, and christened "Scorpion". The Scorpion was put into operation in May 1956 off Port Aransas, Texas. It was lost in 1969. When offshore drilling moved into deeper waters of up to , fixed platform rigs were built; as drilling equipment came to be needed in the to depths of the Gulf of Mexico, the first jack-up rigs began appearing from specialized offshore drilling contractors such as the forerunners of ENSCO International. The first semi-submersible resulted from an unexpected observation in 1961. Blue Water Drilling Company owned and operated the four-column submersible Blue Water Rig No.1 in the Gulf of Mexico for Shell Oil Company. As the pontoons were not sufficiently buoyant to support the weight of the rig and its consumables, it was towed between locations at a draught midway between the top of the pontoons and the underside of the deck. It was noticed that the motions at this draught were very small, and Blue Water Drilling and Shell jointly decided to try operating the rig in its floating mode. The concept of an anchored, stable floating deep-sea platform had been designed and tested back in the 1920s by Edward Robert Armstrong for the purpose of operating aircraft, with an invention known as the "seadrome". The first purpose-built drilling semi-submersible, Ocean Driller, was launched in 1963. Since then, many semi-submersibles have been purpose-designed for the drilling industry's mobile offshore fleet. The first offshore drillship was the CUSS 1, developed for the Mohole project to drill into the Earth's crust. 
As of June 2010, there were over 620 mobile offshore drilling rigs (jackups, semisubs, drillships, barges) available for service in the competitive rig fleet. One of the world's deepest hubs is currently the Perdido in the Gulf of Mexico, floating in 2,438 meters of water. It is operated by Shell plc and was built at a cost of $3 billion. The deepest operational platform is the Petrobras America Cascade FPSO in the Walker Ridge 249 field, in 2,600 meters of water. Main offshore basins Notable offshore basins include: the North Sea the Gulf of Mexico (offshore Texas, Louisiana, Mississippi, Alabama and Florida) California (in the Los Angeles Basin and Santa Barbara Channel, part of the Ventura Basin) the Caspian Sea (notably some major fields offshore Azerbaijan) the Campos and Santos Basins off the coasts of Brazil Newfoundland and Nova Scotia (Atlantic Canada) several fields off West Africa, south of Nigeria, and central Africa, west of Angola offshore fields in South East Asia and Sakhalin, Russia major offshore oil fields in the Persian Gulf, such as Safaniya, Manifa and Marjan, which belong to Saudi Arabia and are developed by Saudi Aramco fields in India (Mumbai High, K G Basin-East Coast of India, Tapti Field, Gujarat, India) the Baltic Sea oil and gas fields the Taranaki Basin in New Zealand the Kara Sea north of Siberia the Arctic Ocean off the coasts of Alaska and Canada's Northwest Territories the offshore fields in the Adriatic Sea Types Larger lake- and sea-based offshore platforms and drilling rigs for oil: 1) & 2) Conventional fixed platforms (deepest: Shell's Bullwinkle in 1991 at 412 m/1,353 ft GOM) 3) Compliant tower (deepest: ChevronTexaco's Petronius in 1998 at 534 m/1,754 ft GOM) 4) & 5) Vertically moored tension leg and mini-tension leg platform (deepest: ConocoPhillips's Magnolia in 2004, 1,425 m/4,674 ft GOM) 6) Spar (deepest: Shell's Perdido in 2010, 2,450 m/8,000 ft GOM) 7) & 8) Semi-submersibles (deepest: Shell's NaKika in 2003, 1,920 m/6,300 ft GOM) 9) Floating production, storage, and offloading facility (deepest: 2005, 1,345 m/4,429 ft Brazil) 10) Sub-sea completion and tie-back to host facility (deepest: Shell's Coulomb tie to NaKika 2004, 2,307 m/7,570 ft) (Numbered from left to right; all records from 2005 data) Fixed platforms These platforms are built on concrete or steel legs, or both, anchored directly onto the seabed, supporting the deck with space for drilling rigs, production facilities and crew quarters. Such platforms are, by virtue of their immobility, designed for very long term use (for instance the Hibernia platform). Various types of structure are used: steel jacket, concrete caisson, floating steel, and even floating concrete. Steel jackets are structural sections made of tubular steel members, and are usually piled into the seabed. Concrete caisson structures, pioneered by the Condeep concept, often have in-built oil storage in tanks below the sea surface; these tanks were often also used to provide flotation, allowing the structures to be built close to shore (Norwegian fjords and Scottish firths are popular because they are sheltered and deep enough) and then floated to their final position, where they are sunk to the seabed. Fixed platforms are economically feasible for installation in water depths up to about . 
Compliant towers These platforms consist of slender, flexible towers and a pile foundation supporting a conventional deck for drilling and production operations. Compliant towers are designed to sustain significant lateral deflections and forces, and are typically used in water depths ranging from . Semi-submersible platform These platforms have hulls (columns and pontoons) of sufficient buoyancy to cause the structure to float, but of weight sufficient to keep the structure upright. Semi-submersible platforms can be moved from place to place and can be ballasted up or down by altering the amount of flooding in buoyancy tanks. They are generally anchored by combinations of chain, wire rope or polyester rope, or both, during drilling and/or production operations, though they can also be kept in place by the use of dynamic positioning. Semi-submersibles can be used in water depths from . Jack-up drilling rigs Jack-up Mobile Drilling Units (or jack-ups), as the name suggests, are rigs that can be jacked up above the sea using legs that can be lowered, much like jacks. These MODUs (Mobile Offshore Drilling Units) are typically used in water depths up to , although some designs can go to depth. They are designed to move from place to place, and then anchor themselves by deploying their legs to the ocean bottom using a rack and pinion gear system on each leg. Drillships A drillship is a maritime vessel that has been fitted with drilling apparatus. It is most often used for exploratory drilling of new oil or gas wells in deep water but can also be used for scientific drilling. Early versions were built on a modified tanker hull, but purpose-built designs are used today. Most drillships are outfitted with a dynamic positioning system to maintain position over the well. They can drill in water depths up to . Floating production systems The main types of floating production systems are FPSO (floating production, storage, and offloading system). FPSOs consist of large monohull structures, generally (but not always) shipshaped, equipped with processing facilities. These platforms are moored to a location for extended periods, and do not actually drill for oil or gas. Some variants of these applications, called FSO (floating storage and offloading system) or FSU (floating storage unit), are used exclusively for storage purposes, and host very little process equipment. This is one of the best sources for having floating production. The world's first floating liquefied natural gas (FLNG) facility is in production. See the section on particularly large examples below. Tension-leg platform TLPs are floating platforms tethered to the seabed in a manner that eliminates most vertical movement of the structure. TLPs are used in water depths up to about . The "conventional" TLP is a 4-column design that looks similar to a semisubmersible. Proprietary versions include the Seastar and MOSES mini TLPs; they are relatively low cost, used in water depths between . Mini TLPs can also be used as utility, satellite or early production platforms for larger deepwater discoveries. Gravity-based structure A GBS can either be steel or concrete and is usually anchored directly onto the seabed. Steel GBS are predominantly used when there is no or limited availability of crane barges to install a conventional fixed offshore platform, for example in the Caspian Sea. There are several steel GBS's in the world today (e.g. offshore Turkmenistan Waters (Caspian Sea) and offshore New Zealand). 
Steel GBS do not usually provide hydrocarbon storage capability. It is mainly installed by pulling it off the yard, by either wet-tow or/and dry-tow, and self-installing by controlled ballasting of the compartments with sea water. To position the GBS during installation, the GBS may be connected to either a transportation barge or any other barge (provided it is large enough to support the GBS) using strand jacks. The jacks shall be released gradually whilst the GBS is ballasted to ensure that the GBS does not sway too much from target location. Spar platforms Spars are moored to the seabed like TLPs, but whereas a TLP has vertical tension tethers, a spar has more conventional mooring lines. Spars have to-date been designed in three configurations: the "conventional" one-piece cylindrical hull; the "truss spar", in which the midsection is composed of truss elements connecting the upper buoyant hull (called a hard tank) with the bottom soft tank containing permanent ballast; and the "cell spar", which is built from multiple vertical cylinders. The spar has more inherent stability than a TLP since it has a large counterweight at the bottom and does not depend on the mooring to hold it upright. It also has the ability, by adjusting the mooring line tensions (using chain-jacks attached to the mooring lines), to move horizontally and to position itself over wells at some distance from the main platform location. The first production spar was Kerr-McGee's Neptune, anchored in in the Gulf of Mexico; however, spars (such as Brent Spar) were previously used as FSOs. Eni's Devil's Tower located in of water in the Gulf of Mexico, was the world's deepest spar until 2010. The world's deepest platform as of 2011 was the Perdido spar in the Gulf of Mexico, floating in 2,438 metres of water. It is operated by Royal Dutch Shell and was built at a cost of $3 billion. The first truss spars were Kerr-McGee's Boomvang and Nansen. The first (and, as of 2010, only) cell spar is Kerr-McGee's Red Hawk. Normally unmanned installations (NUI) These installations, sometimes called toadstools, are small platforms, consisting of little more than a well bay, helipad and emergency shelter. They are designed to be operated remotely under normal conditions, only to be visited occasionally for routine maintenance or well work. Conductor support systems These installations, also known as satellite platforms, are small unmanned platforms consisting of little more than a well bay and a small process plant. They are designed to operate in conjunction with a static production platform which is connected to the platform by flow lines or by umbilical cable, or both. Particularly large examples The Petronius Platform is a compliant tower in the Gulf of Mexico modeled after the Hess Baldpate platform, which stands above the ocean floor. It is one of the world's tallest structures. The Hibernia platform in Canada is the world's heaviest offshore platform, located on the Jeanne D'Arc Basin, in the Atlantic Ocean off the coast of Newfoundland. This gravity base structure (GBS), which sits on the ocean floor, is high and has storage capacity for of crude oil in its high caisson. The platform acts as a small concrete island with serrated outer edges designed to withstand the impact of an iceberg. The GBS contains production storage tanks and the remainder of the void space is filled with ballast with the entire structure weighing in at 1.2 million tons. 
Royal Dutch Shell has developed the first Floating Liquefied Natural Gas (FLNG) facility, which is situated approximately 200 km off the coast of Western Australia. It is the largest floating offshore facility. It is approximately 488m long and 74m wide with displacement of around 600,000t when fully ballasted. Maintenance and supply A typical oil production platform is self-sufficient in energy and water needs, housing electrical generation, water desalinators and all of the equipment necessary to process oil and gas such that it can be either delivered directly onshore by pipeline or to a floating platform or tanker loading facility, or both. Elements in the oil/gas production process include wellhead, production manifold, production separator, glycol process to dry gas, gas compressors, water injection pumps, oil/gas export metering and main oil line pumps. Larger platforms are assisted by smaller ESVs (emergency support vessels) like the British Iolair that are summoned when something has gone wrong, e.g. when a search and rescue operation is required. During normal operations, PSVs (platform supply vessels) keep the platforms provisioned and supplied, and AHTS vessels can also supply them, as well as tow them to location and serve as standby rescue and firefighting vessels. Crew Essential personnel Not all of the following personnel are present on every platform. On smaller platforms, one worker can perform a number of different jobs. The following also are not names officially recognized in the industry: OIM (offshore installation manager) who is the ultimate authority during his/her shift and makes the essential decisions regarding the operation of the platform; Operations Team Leader (OTL); Offshore Methods Engineer (OME) who defines the installation methodology of the platform; Offshore Operations Engineer (OOE) who is the senior technical authority on the platform; PSTL or operations coordinator for managing crew changes; Dynamic positioning operator, navigation, ship or vessel maneuvering (MODU), station keeping, fire and gas systems operations in the event of incident; Automation systems specialist, to configure, maintain and troubleshoot the process control systems (PCS), process safety systems, emergency support systems and vessel management systems; Second mate to meet manning requirements of flag state, operates fast rescue craft, cargo operations, fire team leader; Third mate to meet manning requirements of flag state, operate fast rescue craft, cargo operations, fire team leader; Ballast control operator to operate fire and gas systems; Crane operators to operate the cranes for lifting cargo around the platform and between boats; Scaffolders to rig up scaffolding for when it is required for workers to work at height; Coxswains to maintain the lifeboats and manning them if necessary; Control room operators, especially FPSO or production platforms; Catering crew, including people tasked with performing essential functions such as cooking, laundry and cleaning the accommodation; Production techs to run the production plant; Helicopter pilot(s) living on some platforms that have a helicopter based offshore and transporting workers to other platforms or to shore on crew changes; Maintenance technicians (instrument, electrical or mechanical). Fully qualified medic. Radio operator to operate all radio communications. 
Store Keeper, keeping the inventory well supplied Technician to record the fluid levels in tanks Incidental personnel Drill crew will be on board if the installation is performing drilling operations. A drill crew will normally comprise: Toolpusher Driller Roughnecks Roustabouts Company man Mud engineer Motorman See: Glossary of oilfield jargon Derrickhand Geologist Welders and Welder Helpers Well services crew will be on board for well work. The crew will normally comprise: Well services supervisor Wireline or coiled tubing operators Pump operator Pump hanger and ranger Drawbacks Risks The nature of their operation—extraction of volatile substances sometimes under extreme pressure in a hostile environment—means risk; accidents and tragedies occur regularly. The U.S. Minerals Management Service reported 69 offshore deaths, 1,349 injuries, and 858 fires and explosions on offshore rigs in the Gulf of Mexico from 2001 to 2010. On July 6, 1988, 167 people died when Occidental Petroleum's Piper Alpha offshore production platform, on the Piper field in the UK sector of the North Sea, exploded after a gas leak. The resulting investigation conducted by Lord Cullen and publicized in the first Cullen Report was highly critical of a number of areas, including, but not limited to, management within the company, the design of the structure, and the Permit to Work System. The report was commissioned in 1988, and was delivered in November 1990. The accident greatly accelerated the practice of providing living accommodations on separate platforms, away from those used for extraction. The offshore can be in itself a hazardous environment. In March 1980, the 'flotel' (floating hotel) platform Alexander L. Kielland capsized in a storm in the North Sea with the loss of 123 lives. In 2001, Petrobras 36 in Brazil exploded and sank five days later, killing 11 people. Given the number of grievances and conspiracy theories that involve the oil business, and the importance of gas/oil platforms to the economy, platforms in the United States are believed to be potential terrorist targets. Agencies and military units responsible for maritime counter-terrorism in the US (Coast Guard, Navy SEALs, Marine Recon) often train for platform raids. On April 21, 2010, the Deepwater Horizon platform, 52 miles off-shore of Venice, Louisiana, (property of Transocean and leased to BP) exploded, killing 11 people, and sank two days later. The resulting undersea gusher, conservatively estimated to exceed as of early June 2010, became the worst oil spill in US history, eclipsing the Exxon Valdez oil spill. Ecological effects In British waters, the cost of removing all platform rig structures entirely was estimated in 2013 at £30 billion. Aquatic organisms invariably attach themselves to the undersea portions of oil platforms, turning them into artificial reefs. In the Gulf of Mexico and offshore California, the waters around oil platforms are popular destinations for sports and commercial fishermen, because of the greater numbers of fish near the platforms. The United States and Brunei have active Rigs-to-Reefs programs, in which former oil platforms are left in the sea, either in place or towed to new locations, as permanent artificial reefs. In the US Gulf of Mexico, as of September 2012, 420 former oil platforms, about 10 percent of decommissioned platforms, have been converted to permanent reefs. 
On the US Pacific coast, marine biologist Milton Love has proposed that oil platforms off California be retained as artificial reefs, instead of being dismantled (at great cost), because over the course of 11 years of research he has found them to be havens for many of the species of fish that are otherwise declining in the region. Love is funded mainly by government agencies, but also in small part by the California Artificial Reef Enhancement Program. Divers have been used to assess the fish populations surrounding the platforms. Effects on the environment Offshore oil production involves environmental risks, most notably oil spills from oil tankers or pipelines transporting oil from the platform to onshore facilities, and from leaks and accidents on the platform. Produced water is also generated, which is water brought to the surface along with the oil and gas; it is usually highly saline and may include dissolved or unseparated hydrocarbons. Offshore rigs are shut down during hurricanes. In the Gulf of Mexico, hurricanes are increasing because of the increasing number of oil platforms that heat the surrounding air with methane; it is estimated that U.S. Gulf of Mexico oil and gas facilities emit approximately 500,000 tons of methane each year, corresponding to a loss of 2.9 percent of the produced gas. The increasing number of oil rigs also increases the movement of oil tankers, which raises these levels further and directly warms the water in the zone; warm waters are a key factor in the formation of hurricanes. To reduce the amount of carbon emissions otherwise released into the atmosphere, methane pyrolysis of the natural gas pumped up by oil platforms is a possible alternative to flaring. Methane pyrolysis produces non-polluting hydrogen in high volume from this natural gas at low cost. This process operates at around 1000 °C and removes carbon in solid form from the methane, producing hydrogen. The carbon can then be pumped underground and is not released into the atmosphere. It is being evaluated in research laboratories such as the Karlsruhe Liquid-metal Laboratory (KALLA) and by the chemical engineering team at the University of California, Santa Barbara. Repurposing If not decommissioned, old platforms can be repurposed to pump carbon dioxide into rocks below the seabed. Others have been converted to launch rockets into space, and more are being redesigned for use with heavy-lift launch vehicles. In Saudi Arabia, there are plans to repurpose decommissioned oil rigs into a theme park. Challenges Offshore oil and gas production is more challenging than land-based production because of the remote and harsher environment. Much of the innovation in the offshore petroleum sector concerns overcoming these challenges, including the need to provide very large production facilities. Production and drilling facilities may be very large and represent a large investment, such as the Troll A platform, standing in a water depth of 300 meters. Another type of offshore platform may float, with a mooring system to maintain it on location. While a floating system may be lower cost in deeper waters than a fixed platform, the dynamic nature of the platforms introduces many challenges for the drilling and production facilities. The ocean can add several thousand meters or more to the fluid column. The addition increases the equivalent circulating density and downhole pressures in drilling wells, as well as the energy needed to lift produced fluids for separation on the platform. 
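The effect of that added fluid column is essentially hydrostatic: pressure rises in proportion to fluid density and column height (p = ρ·g·h). The short Python sketch below only illustrates the order of magnitude involved; the water depth and mud density used are hypothetical example values, not figures taken from this article.

```python
# Back-of-the-envelope sketch (illustrative only): extra hydrostatic pressure
# contributed by the fluid column between a floating rig and the seabed.
# The depth and density values below are hypothetical examples.

G = 9.81  # gravitational acceleration, m/s^2

def hydrostatic_pressure_pa(density_kg_m3: float, column_height_m: float) -> float:
    """Hydrostatic pressure p = rho * g * h, returned in pascals."""
    return density_kg_m3 * G * column_height_m

water_depth_m = 2000.0   # example riser length from rig floor to seabed
mud_density = 1200.0     # example drilling-fluid density, kg/m^3

extra_pa = hydrostatic_pressure_pa(mud_density, water_depth_m)
print(f"Extra pressure from a {water_depth_m:.0f} m fluid column: "
      f"{extra_pa / 1e6:.1f} MPa (about {extra_pa / 6894.76:.0f} psi)")
# Roughly 23.5 MPa (~3,400 psi) of additional pressure that the well, the
# circulating system and the lifting equipment must work against, compared
# with an otherwise identical well drilled on land.
```

This is why, as noted above, deep water raises both the equivalent circulating density seen by the formation and the energy needed to lift produced fluids to the platform.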
The trend today is to conduct more of the production operations subsea, by separating water from oil and re-injecting it rather than pumping it up to a platform, or by flowing to onshore, with no installations visible above the sea. Subsea installations help to exploit resources at progressively deeper waters—locations that had been inaccessible—and overcome challenges posed by sea ice such as in the Barents Sea. One such challenge in shallower environments is seabed gouging by drifting ice features (means of protecting offshore installations against ice action includes burial in the seabed). Offshore manned facilities also present logistics and human resources challenges. An offshore oil platform is a small community in itself with cafeteria, sleeping quarters, management and other support functions. In the North Sea, staff members are transported by helicopter for a two-week shift. They usually receive higher salaries than onshore workers do. Supplies and waste are transported by ship, and the supply deliveries need to be carefully planned because storage space on the platform is limited. Today, much effort goes into relocating as many of the personnel as possible onshore, where management and technical experts are in touch with the platform by video conferencing. An onshore job is also more attractive for the aging workforce in the petroleum industry, at least in the western world. These efforts among others are contained in the established term integrated operations. The increased use of subsea facilities helps achieve the objective of keeping more workers onshore. Subsea facilities are also easier to expand, with new separators or different modules for different oil types, and are not limited by the fixed floor space of an above-water installation. Deepest platforms The world's deepest oil platform is the floating Perdido, which is a spar platform in the Gulf of Mexico in a water depth of . Non-floating compliant towers and fixed platforms, by water depth: Petronius Platform, Baldpate Platform, Troll A Platform, Bullwinkle Platform, Pompano Platform, Benguela-Belize Lobito-Tomboco Platform, Gullfaks C Platform, Tombua Landana Platform, Harmony Platform, See also List of tallest oil platforms Accommodation platform Chukchi Cap Deep sea mining Deepwater drilling Drillship North Sea oil Offshore geotechnical engineering Offshore oil and gas in the United States Oil drilling Protocol for the Suppression of Unlawful Acts against the Safety of Fixed Platforms Located on the Continental Shelf SAR201 Shallow water drilling Submarine pipeline TEMPSC Texas Towers References External links Oil Rig Disasters Listing of oil rig accidents Oil Rig Photos Collection of pictures of drilling rigs and production platforms An independent review of offshore platforms in the North Sea Overview of Conventional Platforms Pictorial treatment on the installation of platforms which extend from the seabed to the ocean surface Offshore engineering Petroleum production Drilling technology Natural gas technology Structural engineering
Oil platform
[ "Chemistry", "Engineering" ]
6,266
[ "Oil platforms", "Structural engineering", "Offshore engineering", "Petroleum technology", "Construction", "Civil engineering", "Natural gas technology" ]
163,873
https://en.wikipedia.org/wiki/English%20numerals
English number words include numerals and various words derived from them, as well as a large number of words borrowed from other languages. Cardinal numbers Cardinal numbers refer to the size of a group. In English, these words are numerals. If a number is in the range 21 to 99, and the second digit is not zero, the number is typically written as two words separated by a hyphen. In English, the hundreds are perfectly regular, except that the word hundred remains in its singular form regardless of the number preceding it. So too are the thousands, with the number of thousands followed by the word "thousand". The number one thousand may be written 1 000 or 1000 or 1,000; larger numbers are written for example 10 000 or 10,000 for ease of reading. European languages that use the comma as a decimal separator may correspondingly use the period as a thousands separator. As a result, some style guides recommend avoidance of the comma (,) as either separator and the use of the period (.) only as a decimal point. Thus one-half would be written 0.5 in decimal, base ten notation, and fifty thousand as 50 000, and not 50.000 nor 50,000 nor 50000. In American usage, four-digit numbers are often named using multiples of "hundred" and combined with tens and ones: "eleven hundred three", "twelve hundred twenty-five", "forty-seven hundred forty-two", or "ninety-nine hundred ninety-nine". In British usage, this style is common for multiples of 100 between 1,000 and 2,000 (e.g. 1,500 as "fifteen hundred") but not for higher numbers. Americans may pronounce four-digit numbers with non-zero tens and ones as pairs of two-digit numbers without saying "hundred" and inserting "oh" for zero tens: "twenty-six fifty-nine" or "forty-one oh five". This usage probably evolved from the distinctive usage for years; "nineteen-eighty-one", or from four-digit numbers used in the American telephone numbering system which were originally two letters followed by a number followed by a four-digit number, later by a three-digit number followed by the four-digit number. It is avoided for numbers less than 2500 if the context may mean confusion with time of day: "ten ten" or "twelve oh four". Intermediate numbers are read differently depending on their use. Their typical naming occurs when the numbers are used for counting. Another way is for when they are used as labels. The second column method is used much more often in American English than British English. The third column is used in British English but rarely in American English (although the use of the second and third columns is not necessarily directly interchangeable between the two regional variants). In other words, British English and American English can seemingly agree, but it depends on a specific situation (in this example, bus numbers). Note: When a cheque (or check) is written, the number 100 is always written "one hundred". It is never "a hundred". In American English, many students are taught not to use the word and anywhere in the whole part of a number, so it is not used before the tens and ones. It is instead used as a verbal delimiter when dealing with compound numbers. Thus, instead of "three hundred and seventy-three", "three hundred seventy-three" would be said. Despite this rule, some Americans use the and in reading numbers containing tens and ones as an alternative. Very large numbers For numbers above a million, there are three main systems used to form numbers in English. 
(For the use of prefixes such as kilo- for a thousand, mega- for a million, milli- for a thousandth, etc. see SI units.) These are: the long scale — designates a system of numeric names formerly used in British English, but now obsolete, in which a billion is used for a million million (and similarly, with trillion, quadrillion etc., the prefix denoting the power of a million); and a thousand million is sometimes called a milliard. This system is still used in several other European languages. There is some favour for this scale in astronomy, due to the issue of the vastness of the Universe. the short scale — always used in American English and almost always in British English since the politically-ordained formal adoption of this scale in the 1970s — designates a system of numeric names in which a thousand million is called a billion, and the word milliard is not used. the Indian numbering system, used widely across Indian subcontinent. Many people have no direct experience of manipulating numbers this large, and many non-American readers may interpret billion as 1012 (even if they are young enough to have been taught otherwise at school); moreover, usage of the "long" billion is standard in some non-English-speaking countries. For these reasons, defining the word may be advisable when writing for the public. The numbers past one trillion in the short scale, in ascending powers of 1000, are as follows: quadrillion, quintillion, sextillion, septillion, octillion, nonillion, decillion, undecillion, duodecillion, tredecillion, quattuordecillion, quindecillion, sexdecillion, septendecillion, octodecillion, novemdecillion and vigintillion (which is 10 to the 63rd power, or a one followed by 63 zeros). The highest number in this series listed in modern dictionaries is centillion, which is 10 to the 303rd power. The interim powers of one thousand between vigintillion and centillion do not have standardized names, nor do any higher powers, but there are many extensions in use. The highest number listed in Robert Munafo's table of such unofficial names is milli-millillion, which was coined as a name for 10 to the 3,000,003rd power. The googolplex was often cited as the largest named number in English. If a googol is ten to the one hundredth power, then a googolplex is one followed by a googol of zeros (that is, ten to the power of a googol). There is the coinage, of very little use, of ten to the googolplex power, of the word googolplexplex. The terms arab, kharab, padm and shankh are more commonly found in old books on Indian mathematics. Here are some approximate composite large numbers in American English: Often, large numbers are written with (preferably non-breaking) half-spaces or thin spaces separating the thousands (and, sometimes, with normal spaces or apostrophes) instead of commas—to ensure that confusion is not caused in countries where a decimal comma is used. Thus, a million is often written 1 000 000. In some areas, a point (. or ·) may also be used as a thousands separator, but then the decimal separator must be a comma (,). In English the point (.) is used as the decimal separator, and the comma (,) as the thousands separator. Special names Some numbers have special names in addition to their regular names, most depending on context. 
0: zero: formal scientific usage nought: mostly British usage, common in science to refer to subscript 0 indicating an initial state naught: archaic term for nothingness, which may or may not be equivalent to the number; mostly American usage, old-fashioned spelling of nought aught: proscribed but still occasionally used when a digit is 0 (as in "thirty-aught-six", the .30-06 Springfield rifle cartridge and by association guns that fire it). Aughts also refers to the decade of 2000–2009 in American English. oh: used when spelling numbers (like telephone, bank account, bus line [British: bus route]) but can cause confusion with the letter o if reading a mix of numbers and letters nil: in general sport scores, British usage ("The score is two–nil.") nothing: in general sport scores, American usage ("The score is two–nothing.") null: to an object or idea related to nothingness. The 0th aleph number () is pronounced "aleph-null". love: in tennis, badminton, squash and similar sports (origin disputed, said by the Oxford English Dictionary to be from the idea that when one does a thing "for love", that is for no monetary gain, the word "love" implies "nothing". The previously held belief that it originated from , due to its shape, is no longer widely accepted) zilch, (from Spanish), zip: used informally when stressing nothingness; this is true especially in combination with one another ("You know nothing—zero, zip, , zilch!"); American usage nix: also used as a verb; mostly American usage cypher / cipher: archaic, from French , in turn from Arabic , meaning zero goose egg (informal) duck (used in cricket when a batsman is dismissed without scoring) blank the half of a domino tile with no pips 1: ace in certain sports and games, as in tennis or golf, indicating success with one stroke, and the face of a die, playing card or domino half with one pip birdie in golf denotes one stroke less than par, and bogey, one stroke more than par solo unit linear the degree of a polynomial is 1; also for explicitly denoting the first power of a unit: linear metre unity in mathematics protagonist first actor in theatre of Ancient Greece, similarly Proto-Isaiah and proton 2: couple brace, from Old French "arms" (the plural of arm), as in "what can be held in two arms". pair deuce the face of a die, playing card or domino half with two pips eagle in golf denotes two strokes less than par duo quadratic the degree of a polynomial is 2 also square or squared for denoting the second power of a unit: square metre or metre squared penultimate, second from the end deuteragonist second actor in theatre of Ancient Greece, similarly Deutero-Isaiah and deuteron 3: trey the face of a die or playing card with three pips, a three-point field goal in basketball, nickname for the third carrier of the same personal name in a family trio trips: three-of-a-kind in a poker hand. a player has three cards with the same numerical value cubic the degree of a polynomial is 3 also cube or cubed for denoting the third power of a unit: cubic metre or metre cubed albatross in golf denotes three strokes less than par. 
Sometimes called double eagle hat-trick or hat trick: achievement of three feats in sport or other contexts antepenultimate third from the end tritagonist third actor in theatre of Ancient Greece, similarly Trito-Isaiah and triton turkey in bowling, three consecutive strikes 4: cater: (rare) the face of a die or playing card with four pips quartet quartic or biquadratic the degree of a polynomial is 4 quad (short for quadruple or the like) several specialized sets of four, such as four of a kind in poker, a carburetor with four inputs, etc., condor in golf denotes four strokes less than par preantepenultimate fourth from the end 5: cinque or cinq (rare) the face of a die or playing card with five pips quintet nickel (informal American, from the value of the five-cent US nickel, but applied in non-monetary references) quintic the degree of a polynomial is 5 quint (short for quintuplet or the like) several specialized sets of five, such as quintuplets, etc. 6: half a dozen sice (rare) the face of a die or playing card with six pips sextet sextic or hectic the degree of a polynomial is 6 7: septet septic or heptic the degree of a polynomial is 7 8: octet 9: nonet 10: dime (informal American, from the value of the ten-cent US dime, but applied in non-monetary references) decet decade, used for years but also other groups of 10 as in rosary prayers or Braille symbols 11: undecet a banker's dozen 12: duodecet a dozen (first power of the duodecimal base), used mostly in commerce 13: a baker's dozen 20: a score (first power of the vigesimal base), nowadays archaic; famously used in the opening of the Gettysburg Address: "Four score and seven years ago..." The Number of the Beast in the King James Bible is rendered "Six hundred threescore and six". Also in The Book of Common Prayer, Psalm 90 as used in the Burial Service—"The days of our age are threescore years and ten; ...." 25: a pony is a bet of £25 in British betting slang. 50: half-century, literally half of a hundred, usually used in cricket scores. 55: double-nickel (informal American) 60: a shock: historical commercial count, described as "three scores". 100: A century, also used in cricket scores and in cycling for 100 miles. A ton, in Commonwealth English, the speed of 100 mph or 100 km/h. A small hundred or short hundred (archaic, see 120 below) 120: A great hundred or long hundred (twelve tens; as opposed to the small hundred, i.e. 100 or ten tens), also called small gross (ten dozens), both archaic Also sometimes referred to as duodecimal hundred, although that could literally also mean 144, which is twelve squared 144: a gross (a dozen dozens, second power of the duodecimal base), used mostly in commerce 500: a ream, usually of paper. a monkey is a bet of £500 in British betting slang. 1000: a grand, colloquially used especially when referring to money, also in fractions and multiples, e.g. half a grand, two grand, etc. Grand can also be shortened to "G" in many cases. K, originally from the abbreviation of kilo-, e.g. "He only makes $20K a year." Millennium (plural: millennia), a period of one thousand years. kilo- (Greek for "one thousand"), a decimal unit prefix in the Metric system denoting multiplication by "one thousand". For example: 1 kilometre = 1000 metres. 
1728: a great gross (a dozen gross, third power of the duodecimal base), used historically in commerce 10,000: a myriad (a hundred hundred), commonly used in the sense of an indefinite very high number 100,000: a lakh (a hundred thousand), in Indian English 10,000,000: a crore (a hundred lakh), in Indian English and written as 100,00,000. 10100: googol (1 followed by 100 zeros), used in mathematics 10googol: googolplex (1 followed by a googol of zeros) 10googolplex: googolplexplex (1 followed by a googolplex of zeros) Combinations of numbers in most sports scores are read as in the following examples: 1–0    British English: one-nil; American English: one-nothing, one-zip, or one-zero 0–0    British English: nil-nil or nil all; American English: zero-zero or nothing-nothing, (occasionally scoreless or no score) 2–2    two-two or two all; American English also twos, two to two, even at two, or two up. Naming conventions of Tennis scores (and related sports) are different from other sports. The centuries of Italian culture have names in English borrowed from Italian: duecento "(one thousand and) two hundred" for the years 1200 to 1299, or approximately 13th century trecento 14th century quattrocento 15th century cinquecento 16th century seicento 17th century settecento 18th century ottocento 19th century novecento 20th century ventesimo 21st century When reading numbers in a sequence, such as a telephone or serial number, British people will usually use the terms double followed by the repeated number. Hence 007 is double oh seven. Exceptions are the emergency telephone number 999, which is always nine nine nine and the apocalyptic "Number of the Beast", which is always six six six. In the US, 911 (the US emergency telephone number) is usually read nine one one, while 9/11 (in reference to the September 11, 2001, attacks) is usually read nine eleven. Multiplicative adverbs and adjectives A few numbers have specialised multiplicative numbers (adverbs), also called adverbial numbers, which express how many times some event happens: Compare these specialist multiplicative numbers to express how many times some thing exists (adjectives): English also has some multipliers and distributive numbers, such as singly. Negative numbers The name of a negative number is the name of the corresponding positive number preceded by "minus" or (American English) "negative". Thus −5.2 is "minus five point two" or "negative five point two". For temperatures, North Americans colloquially say "below"—short for "below zero"—so a temperature of −5° is "five below" (in contrast, for example, to "two above" for 2°). This is occasionally used for emphasis when referring to several temperatures or ranges both positive and negative. This is particularly common in Canada where the use of Celsius in weather forecasting means that temperatures can regularly drift above and below zero at certain times of year. Ordinal numbers Ordinal numbers refer to a position (also called index or rank) in a sequence. Common ordinals include: Zeroth only has a meaning when counting starts with zero, which happens in a mathematical or computer science context. Ordinal numbers predate the invention of zero and positional notation. Ordinal numbers such as 21st, 33rd, etc., are formed by combining a cardinal ten with an ordinal unit. Higher ordinals are not often written in words, unless they are round numbers (thousandth, millionth, billionth). They are written with digits and letters as described below. Some rules should be borne in mind. 
The suffixes -th, -st, -nd and -rd are occasionally written superscript above the number itself. If the tens digit of a number is 1, then "th" is written after the number. For example: 13th, 19th, 112th, 9,311th. If the tens digit is not equal to 1, then the suffix follows the final digit: 1 takes "st", 2 takes "nd", 3 takes "rd", and all other digits take "th". For example: 2nd, 7th, 20th, 23rd, 52nd, 135th, 301st. These ordinal abbreviations are actually hybrid contractions of a numeral and a word. 1st is "1" + "st" from "first". Similarly, "nd" is used for "second" and "rd" for "third". In the legal field and in some older publications, the ordinal abbreviation for "second" and "third" is simply "d". For example: 42d, 33d, 23d. NB: "D" still often denotes "second" and "third" in the numeric designations of units in the US armed forces, for example, 533d Squadron, and in legal citations for the second and third series of case reporters. Dates There are a number of ways to read years. The following table offers a list of valid pronunciations and alternate pronunciations for any given year of the Gregorian calendar and Julian calendar. Twelve thirty-four would be the norm on both sides of the Atlantic for the year 1234. The years 2000 to 2009 are most often read as two thousand, two thousand (and) one and the like by both British and American speakers. For years after 2009, twenty eleven, twenty fourteen, etc. are more common, even in years earlier than 2009 BC/BCE. Likewise, the years after 1009 (until 1099) are also read in the same manner (e.g. 1015 is either ten fifteen or, rarely, one thousand fifteen). Some Britons read years within the 1000s to 9000s BC/BCE in the American manner, that is, 1234 BC is read as twelve (hundred and) thirty-four BC, while 2400 BC can be read as either two thousand four hundred or twenty four hundred BC. Collective numbers Collective numbers are numbers that refer to a group of a specific size. Words like "pair" and "dozen" are common in English, though most are formally derived from Greek and Latin numerals, as follows: Fractions and decimals Numbers used to denote the denominator of a fraction are known linguistically as "partitive numerals". In spoken English, ordinal numerals and partitive numerals are identical with a few exceptions. Thus "fifth" can mean the element between fourth and sixth, or the fraction created by dividing the unit into five pieces. When used as a partitive numeral, these forms can be pluralized: one seventh, two sevenths. The sole exceptions to this rule are division by one, two, and sometimes four: "first" and "second" cannot be used for a fraction with a denominator of one or two. Instead, "whole" and "half" (plural "halves") are used. For a fraction with a denominator of four, either "fourth" or "quarter" may be used. Here are some common English fractions, or partitive numerals: Alternatively, and for greater numbers, one may say "one over two" for 1/2, "five over eight" for 5/8, and so on. This "over" form is also widely used in mathematics. Fractions together with an integer are read as follows: 1 1/2 is "one and a half", 6 1/4 is "six and a quarter", 7 5/8 is "seven and five eighths". A space is placed to mark the boundary between the whole number and the fraction part unless superscripts and subscripts are used; for example: 9 1/2. Numbers with a decimal point may be read as a cardinal number, then "and", then another cardinal number followed by an indication of the significance of the second cardinal number (mainly U.S.); or as a cardinal number, followed by "point", and then by the digits of the fractional part. 
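As a brief aside, the ordinal-suffix rules described above (a tens digit of 1 always takes "th"; otherwise the suffix is determined by the final digit) can be captured in a few lines of code. The sketch below is purely illustrative and not part of the original text; the function name is arbitrary.

```python
def ordinal(n: int) -> str:
    """Attach the English ordinal suffix to a non-negative integer,
    following the rules above: a tens digit of 1 always takes "th";
    otherwise 1 -> "st", 2 -> "nd", 3 -> "rd", anything else -> "th"."""
    if n // 10 % 10 == 1:                      # 11th, 13th, 112th, 9,311th, ...
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

# The examples given in the text:
assert [ordinal(k) for k in (13, 19, 112, 9311)] == ["13th", "19th", "112th", "9311th"]
assert [ordinal(k) for k in (2, 7, 20, 23, 52, 135, 301)] == [
    "2nd", "7th", "20th", "23rd", "52nd", "135th", "301st",
]
```

The same reading of the rules also explains the apparent exceptions 11th, 12th and 13th: their tens digit is 1, so the final digit is ignored.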
The indication of significance takes the form of the denominator of the fraction indicating division by the smallest power of ten larger than the second cardinal. This is modified when the first cardinal is zero, in which case neither the zero nor the "and" is pronounced, but the zero is optional in the "point" form of the fraction. Some American and Canadian schools teach students to pronounce decimaly written fractions (for example, .5) as though they were longhand fractions (five tenths), such as thirteen and seven tenths for 13.7. This formality is often dropped in common speech and is steadily disappearing in instruction in mathematics and science as well as in international American schools. In the U.K., and among most North Americans, 13.7 would be read thirteen point seven. For example: 0.002 is "point zero zero two", "point oh oh two", "nought point zero zero two", etc.; or "two thousandths" (U.S., occasionally) 3.1416 is "three point one four one six" 99.3 is "ninety-nine point three"; or "ninety-nine and three tenths" (U.S., occasionally). In English the decimal point was originally printed in the center of the line (0·002), but with the advent of the typewriter it was placed at the bottom of the line, so that a single key could be used as a full stop/period and as a decimal point. In many non-English languages a full-stop/period at the bottom of the line is used as a thousands separator with a comma being used as the decimal point. Whether or not digits or words are used With few exceptions, most grammatical texts rule that the numbers zero to nine inclusive should be "written out" – instead of "1" and "2", one would write "one" and "two". Example: "I have two apples." (Preferred) Example: "I have 2 apples." After "nine", one can head straight back into the 10, 11, 12, etc., although some write out the numbers until "twelve". Example: "I have 28 grapes." (Preferred) Example: "I have twenty-eight grapes." Another common usage is to write out any number that can be expressed as one or two words, and use figures otherwise. Examples: "There are six million dogs." (Preferred) "There are 6,000,000 dogs." "That is one hundred and twenty-five oranges." (British English) "That is one hundred twenty-five oranges." (US-American English) "That is 125 oranges." (Preferred) Numbers at the beginning of a sentence should also be written out, or the sentence rephrased. The above rules are not always followed. In literature, larger numbers might be spelled out. On the other hand, digits might be more commonly used in technical or financial articles, where many figures are discussed. In particular, the two different forms should not be used for figures that serve the same purpose; for example, it is inelegant to write, "Between day twelve and day 15 of the study, the population doubled." Empty numbers Colloquial English's small vocabulary of empty or indefinite numbers can be employed when there is uncertainty as to the precise number to use, but it is desirable to define a general range: specifically, the terms "umpteen", "umpty", and "zillion". These are derived etymologically from the range affixes: "-teen" (designating the range as being between 13 and 19 inclusive) "-ty" (designating the range as being between 20 and 90 inclusive) "-illion" (designating the range as being above 1,000,000; or, more generally, as being extremely large). 
The prefix "ump-" is added to the first two suffixes to produce the empty numbers "umpteen" and "umpty": derived from the onomatopoeic sound on the telegraph key used by Morse operators. A noticeable absence of an empty number is in the hundreds range. Usage of empty numbers: The word "umpteen" may be used as an adjective, as in "I had to go to umpteen stores to find shoes that fit." It can also be used to modify a larger number, usually "million", as in "Umpteen million people watched the show; but they still cancelled it." "Umpty" is not in common usage. It can appear in the form "umpty-one" (paralleling the usage in such numbers as "twenty-one"), as in "There are umpty-one ways to do it wrong." "Umpty-ump" is also heard, though "ump" is never used by itself. The word "zillion" may be used as an adjective, modifying a noun. The noun phrase normally contains the indefinite article "a", as in "There must be a zillion pages on the World Wide Web." The plural "zillions" designates a number indefinitely larger than "millions" or "billions". In this case, the construction is parallel to the one for "millions" or "billions", with the number used as a plural count noun, followed by a prepositional phrase with "of", as in "There are zillions of grains of sand on the beaches of the world." Empty numbers are sometimes made up, with obvious meaning: "squillions" is obviously an empty, but very large, number; a "squintillionth" would be a very small number. Some empty numbers may be modified by actual numbers, such as "four zillion", and are used for jest, exaggeration, or to relate abstractly to actual numbers. Empty numbers are colloquial, and primarily used in oral speech or informal contexts. They are inappropriate in formal or scholarly usage. See also Placeholder name. See also Indefinite and fictitious numbers List of numbers Long and short scales Names of large numbers Natural number Number prefixes and their derivatives References External links English Numbers - explanations, exercises and number generator (cardinal and ordinal numbers) Numerals Naming conventions American and British English differences
English numerals
[ "Mathematics" ]
6,150
[ "Numeral systems", "Numerals" ]
163,895
https://en.wikipedia.org/wiki/Orion%20%28mythology%29
In Greek mythology, Orion (; Ancient Greek: Ὠρίων or ; Latin: Orion) was a giant huntsman whom Zeus (or perhaps Artemis) placed among the stars as the constellation of Orion. Ancient sources told several different stories about Orion; there are two major versions of his birth and several versions of his death. The most important recorded episodes are his birth in Boeotia, his visit to Chios where he met Merope and raped her, being blinded by Merope's father, the recovery of his sight at Lemnos, his hunting with Artemis on Crete, his death by the bow of Artemis or the sting of the giant scorpion which became Scorpius, and his elevation to the heavens. Most ancient sources omit some of these episodes and several tell only one. These various incidents may originally have been independent, unrelated stories, and it is impossible to tell whether the omissions are simple brevity or represent a real disagreement. In Greek literature he first appears as a great hunter in Homer's epic the Odyssey, where Odysseus sees his shade in the underworld. The bare bones of Orion's story are told by the Hellenistic and Roman collectors of myths, but there is no extant literary version of his adventures comparable, for example, to that of Jason in Apollonius of Rhodes' Argonautica or Euripides' Medea; the entry in Ovid's Fasti for May 11 is a poem on the birth of Orion, but that is one version of a single story. The surviving fragments of legend have provided a fertile field for speculation about Greek prehistory and myth. Orion served several roles in ancient Greek culture. The story of the adventures of Orion, the hunter, is the one for which there is the most evidence (and even for that, not very much); he is also the personification of the constellation of the same name; he was venerated as a hero, in the Greek sense, in the region of Boeotia; and there is one etiological passage which says that Orion was responsible for the present shape of the Strait of Sicily. Legends Homer and Hesiod Orion is mentioned in the oldest surviving works of Greek literature, which would probably date back to the 7th or 8th century BC, but which are the products of an oral tradition with origins several centuries earlier. In Homer's Iliad Orion is described as a constellation, and the star Sirius is mentioned as his dog. In the Odyssey, Orion is essentially the pinnacle of human excellence in hunting: Odysseus sees him hunting in the underworld with a bronze club, a great slayer of animals. In some legends Orion claims to be able to hunt any animal in existence. He is also mentioned as a constellation, as the lover of the Goddess Dawn, as slain by Artemis, and as the most handsome of the earthborn. In the Works and Days of Hesiod, Orion is also a constellation, one whose rising and setting with the sun is used to reckon the year. The legend of Orion was probably told in the Astronomia, a lost work attributed to Hesiod. This version is known through a summary of Eratosthenes's lost work the Catasterismi, on the constellations. According to this summary, Orion was the son of the sea-god Poseidon and Euryale, a daughter of Minos, King of Crete. Orion could walk on the waves because of his father; he walked to the island of Chios where he got drunk and raped Merope, daughter of Oenopion, the ruler there. In vengeance, Oenopion blinded Orion and drove him away. Orion stumbled to Lemnos where Hephaestus—the smith-god—had his forge. 
Hephaestus told his servant, Cedalion, to guide Orion to the uttermost East where Helios, the Sun, healed him; Orion carried Cedalion around on his shoulders. Orion returned to Chios to punish Oenopion, but the king hid away underground and escaped Orion's wrath. Orion's next journey took him to Crete where he hunted with the goddess Artemis and her mother Leto, and in the course of the hunt, threatened to kill every beast on Earth. Gaia (Apollo in some versions, disapproving of his sister's relationship with a male) objected and sent a giant scorpion to kill Orion. The creature succeeded, and after his death, the goddesses asked Zeus to place Orion among the constellations. Zeus consented and, as a memorial to Orion's death, added the Scorpion to the heavens as well. Other sources Although Orion has a few lines in both Homeric poems and in the Works and Days, most of the stories about him are recorded in incidental allusions and in fairly obscure later writings. No great poet standardized the legend. The ancient sources for Orion's legend are mostly notes in the margins of ancient poets (scholia) or compilations by later scholars, the equivalent of modern reference works or encyclopedias; even the legend from the Hesiodic Astronomia survives only in one such compilation. The margin of the Empress Eudocia's copy of the Iliad has a note summarizing a Hellenistic poet who tells a different story of Orion's birth. Here the gods Zeus, Hermes, and Poseidon come to visit Hyrieus of Tanagra, who roasts a whole bull for them. When they offer him a favor, he asks for the birth of sons. The gods take the bull's hide and urinate into it and bury it in the earth, then tell him to dig it up ten months later. When he does, he finds Orion; this explains why Orion is earthborn. A second full telling (even shorter than the summary of the Astronomia) is in a Roman-era collection of myths; the account of Orion is based largely on the mythologist and poet Pherecydes of Athens. Here Orion is described as earthborn and enormous in stature. This version also mentions Poseidon and Euryale as his parents. It adds a first marriage to Side before his marriage to Merope. All that is known about Side is that Hera threw her into Hades for rivalling her in beauty. It also gives a different version of Orion's death than the Iliad: Eos, the Dawn, fell in love with Orion and took him to Delos where Artemis killed him. Another narrative on the constellations, three paragraphs long, is from a Latin writer whose brief notes have come down to us under the name of Hyginus. It begins with the oxhide story of Orion's birth, which this source ascribes to Callimachus and Aristomachus, and sets the location at Thebes or Chios. Hyginus has two versions. In one of them he omits Poseidon; a modern critic suggests this is the original version. The same source tells two stories of the death of Orion. The first says that because of his "living joined in too great a friendship" with Oenopion, he boasted to Artemis and Leto that he could kill anything which came from Earth. Gaia (the personification of Earth in Greek mythology) objected and created the Scorpion. In the second story, Apollo, being jealous of Orion's love for Artemis, arranged for Artemis to kill him. Seeing Orion swimming in the ocean, a long way off, he remarked that Artemis could not possibly hit that black thing in the water. 
Feeling challenged, she sent an arrow right through it and killed Orion; when his body washed up on shore, she wept copiously, and decided to place Orion among the stars. He connects Orion with several constellations, not just Scorpius. Orion chased Pleione, the mother of the Pleiades, for seven years, until Zeus intervened and raised all of them to the stars. In Works and Days, Orion chases the Pleiades themselves. Canis Minor and Canis Major are his dogs, the one in front is called Procyon. They chase Lepus, the hare, although Hyginus says some critics thought this too base a prey for the noble Orion and have him pursuing Taurus, the bull, instead. A Renaissance mythographer adds other names for Orion's dogs: Leucomelaena, Maera, Dromis, Cisseta, Lampuris, Lycoctonus, Ptoophagus, Arctophonus. Variants There are numerous variants in other authors. Most of these are incidental references in poems and scholiasts. The Roman poet Virgil shows Orion as a giant wading through the Aegean Sea with the waves breaking against his shoulders; rather than, as the mythographers have it, walking on the water. There are several references to Hyrieus as the father of Orion that connect him to various places in Boeotia, including Hyria; this may well be the original story (although not the first attested), since Hyrieus is presumably the eponym of Hyria. He is also called Oeneus, although he is not the Calydonian Oeneus. Other ancient scholia say, as Hesiod does, that Orion was the son of Poseidon and his mother was a daughter of Minos; but they call the daughter Brylle or Hyeles. There are two versions where Artemis killed Orion, either with her arrows or by producing the Scorpion. In the second variant, Orion died of the Scorpion's sting as he does in Hesiod. Although Orion does not defeat the Scorpion in any version, several variants have it die from its wounds. Artemis is given various motives. One is that Orion boasted of his beast-killing and challenged her to a contest with the discus. Another is that he assaulted either Artemis herself or Opis, a Hyperborean maiden in her band of huntresses. Aratus's brief description, in his Astronomy, conflates the elements of the myth: according to Aratus, Orion attacks Artemis while hunting on Chios, and the Scorpion kills him there. Nicander, in his Theriaca, has the scorpion of ordinary size and hiding under a small (oligos) stone. Most versions of the story that continue after Orion's death tell of the gods raising Orion and the Scorpion to the stars, but even here a variant exists: Ancient poets differed greatly on whom Aesculapius brought back from the dead; the Argive epic poet Telesarchus is quoted as saying in a scholion that Aesculapius resurrected Orion. Other ancient authorities are quoted anonymously that Aesculapius healed Orion after he was blinded by Oenopion. The story of Orion and Oenopion also varies. One source refers to Merope as Oenopion's wife, not his daughter. Another refers to Merope as the daughter of Minos and not of Oenopion. The longest version (a page in the Loeb) is from a collection of melodramatic plots drawn up by an Alexandrian poet for the Roman Cornelius Gallus to make into Latin verse. It describes Orion as slaying the wild beasts of Chios and looting the other inhabitants to make a bride-price for Oenopion's daughter, who is called Aëro or Leiro. Oenopion does not want to marry her to someone like Orion, and eventually Orion, in frustration, breaks into her bedchamber and rapes her. 
The text implies that Oenopion blinds him on the spot. Lucian includes a picture with Orion in a rhetorical description of an ideal building, in which Orion is walking into the rising sun with Lemnos nearby, Cedalion on his shoulder. He recovers his sight there with Hephaestus still watching in the background. Latin sources add that Oenopion was the son of Dionysus. Dionysus sent satyrs to put Orion into a deep sleep so he could be blinded. One source tells the same story but converts Oenopion into Minos of Crete. It adds that an oracle told Orion that his sight could be restored by walking eastward and that he found his way by hearing the Cyclops' hammer, placing a Cyclops as a guide on his shoulder; it does not mention Cabeiri or Lemnos—this is presumably the story of Cedalion recast. Both Hephaestus and the Cyclopes were said to make thunderbolts; they are combined in other sources. One scholion, on a Latin poem, explains that Hephaestus gave Orion a horse. Giovanni Boccaccio cites a lost Latin writer for the story that Orion and Candiope were son and daughter of Oenopion, king of Sicily. While the virgin huntsman Orion was sleeping in a cave, Venus seduced him; as he left the cave, he saw his sister shining as she crossed in front of it. He ravished her; when his father heard of this, he banished Orion. Orion consulted an oracle, which told him that if he went east, he would regain the glory of kingship. Orion, Candiope, and their son Hippologus sailed to Thrace, "a province eastward from Sicily". There he conquered the inhabitants, and became known as the son of Neptune. His son begat the Dryas mentioned in Statius. Cult and popular appreciation In Ancient Greece, Orion had a hero cult in the region of Boeotia. The number of places associated with his birth suggest that it was widespread. Hyria, the most frequently mentioned, was in the territory of Tanagra. A feast of Orion was held at Tanagra as late as the Roman Empire. They had a tomb of Orion most likely at the foot of Mount Cerycius (now Mount Tanagra). Maurice Bowra argues that Orion was a national hero of the Boeotians, much as Castor and Pollux were for the Dorians. He bases this claim on the Athenian epigram on the Battle of Coronea in which a hero gave the Boeotian army an oracle, then fought on their side and defeated the Athenians. The Boeotian school of epic poetry was chiefly concerned with the genealogies of the gods and heroes; later writers elaborated this web. Several other myths are attached to Orion in this way: A papyrus fragment of the Boeotian poet Corinna gives Orion fifty sons (a traditional number). This included the oracular hero Acraephen, who, she sings, gave a response to Asopus regarding Asopus' daughters who were abducted by the gods. Corinna sang of Orion conquering and naming all the land of the dawn. Bowra argues that Orion was believed to have delivered oracles as well, probably at a different shrine. Hyginus says that Hylas's mother was Menodice, daughter of Orion. Another mythographer, Liberalis, tells of Menippe and Metioche, daughters of Orion, who sacrificed themselves for their country's good and were transformed into comets. Orion also has etiological connection to the city of Messina in Sicily. Diodorus of Sicily wrote a history of the world up to his own time (the beginning of the reign of Augustus). He starts with the gods and the heroes. At the end of this part of the work, he tells the story of Orion and two wonder-stories of his mighty earth-works in Sicily. 
One tells how he aided Zanclus, the founder of Zancle (the former name for Messina), by building the promontory which forms the harbor. The other, which Diodorus ascribes to Hesiod, relates that there was once a broad sea between Sicily and the mainland. Orion built the whole Peloris, the Punta del Faro, and the temple to Poseidon at the tip, after which he settled in Euboea. He was then "numbered among the stars of heaven and thus won for himself immortal remembrance". The Renaissance historian and mathematician Francesco Maurolico, who came from Messina, identified the remains of a temple of Orion near the present Messina Cathedral. Maurolico also designed an ornate fountain, built by the sculptor Giovanni Angelo Montorsoli in 1547, in which Orion is a central figure, symbolizing the Emperor Charles V, also a master of the sea and restorer of Messina; Orion is still a popular symbol of the city. Images of Orion in classical art are difficult to recognize, and clear examples are rare. There are several ancient Greek images of club-carrying hunters that could represent Orion, but such generic examples could equally represent an archetypal "hunter", or indeed Heracles. Some claims have been made that other Greek art represents specific aspects of the Orion myth. A tradition of this type has been discerned in 5th century BC Greek pottery—John Beazley identified a scene of Apollo, Delian palm in hand, taking revenge on Orion for the attempted rape of Artemis, while another scholar has identified a scene of Orion attacking Artemis as she is avenged by a snake (a counterpart to the scorpion) in a funerary group—supposedly symbolizing the hope that even the criminal Orion could be made immortal, as well as an astronomical scene in which Cephalus is thought to stand in for Orion and his constellation, also reflecting this system of iconography. Also, a tomb frieze in Taranto may show Orion attacking Opis. But the earliest surviving clear depiction of Orion in classical art is Roman, from the depictions of the Underworld scenes of the Odyssey discovered at the Esquiline Hill (50–40 BC). Orion is also seen on a 4th-century bas-relief, currently affixed to a wall in the Porto neighborhood of Naples. The constellation Orion rises in November, the end of the sailing season, and was associated with stormy weather, and this characterization extended to the mythical Orion—the bas-relief may be associated with the sailors of the city.
Interpretations
Renaissance
Mythographers have discussed Orion at least since the Renaissance of classical learning; the Renaissance interpretations were allegorical. In the 14th century, Boccaccio interpreted the oxhide story as representing human conception; the hide is the womb, Neptune the moisture of semen, Jupiter its heat, and Mercury the female coldness; he also explained Orion's death at the hands of the moon-goddess as the Moon producing winter storms. The 16th-century Italian mythographer Natalis Comes interpreted the whole story of Orion as an allegory of the evolution of a storm cloud: Begotten by air (Zeus), water (Poseidon), and the sun (Apollo), a storm cloud is diffused (Chios, which Comes derives from χέω, "pour out"), rises through the upper air (Aërope, as Comes spells Merope), chills (is blinded), and is turned into rain by the moon (Artemis). He also explains how Orion walked on the sea: "Since the subtler part of the water which is rarefied rests on the surface, it is said that Orion learned from his father how to walk on water."
Similarly, Orion's conception made him a symbol of the philosophical child, an allegory of philosophy springing from multiple sources, in the Renaissance as in alchemical works, with some variations. The 16th-century German alchemist Michael Maier lists the fathers as Apollo, Vulcan and Mercury, and the 18th-century French alchemist Antoine-Joseph Pernety gave them as Jupiter, Neptune and Mercury. Modern Modern mythographers have seen the story of Orion as a way to access local folk tales and cultic practices directly without the interference of ancient high culture; several of them have explained Orion, each through his own interpretation of Greek prehistory and of how Greek mythology represents it. There are some points of general agreement between them: for example, that the attack on Opis is an attack on Artemis, for Opis is one of the names of Artemis. There was a movement in the late nineteenth century to interpret all the Boeotian heroes as merely personifications of the constellations; there has since come to be wide agreement that the myth of Orion existed before there was a constellation named for him. Homer, for example, mentions Orion, the Hunter, and Orion, the constellation, but never confuses the two. Once Orion was recognized as a constellation, astronomy in turn affected the myth. The story of Side may well be a piece of astronomical mythology. The Greek word side means pomegranate, which bears fruit while Orion, the constellation, can be seen in the night sky. Rose suggests she is connected with Sidae in Boeotia, and that the pomegranate, as a sign of the Underworld, is connected with her descent there. The 19th-century German classical scholar Erwin Rohde viewed Orion as an example of the Greeks erasing the line between the gods and mankind. That is, if Orion was in the heavens, other mortals could hope to be also. The Hungarian mythographer Karl Kerényi, one of the founders of the modern study of Greek mythology, wrote about Orion in Gods of the Greeks (1951). Kerényi portrays Orion as a giant of Titanic vigor and criminality, born outside his mother as were Tityos or Dionysus. Kerényi places great stress on the variant in which Merope is the wife of Oenopion. He sees this as the remnant of a lost form of the myth in which Merope was Orion's mother (converted by later generations to his stepmother and then to the present forms). Orion's blinding is therefore parallel to that of Aegypius and Oedipus. In Dionysus (1976), Kerényi portrays Orion as a shamanic hunting hero, surviving from Minoan times (hence his association with Crete). Kerényi derives Hyrieus (and Hyria) from the Cretan dialect word , meaning "beehive", which survives only in ancient dictionaries. From this association he turns Orion into a representative of the old mead-drinking cultures, overcome by the wine masters Oenopion and Oeneus. (The Greek for "wine" is oinos.) Fontenrose cites a source stating that Oenopion taught the Chians how to make wine before anybody else knew how. Joseph Fontenrose wrote Orion: the Myth of the Hunter and the Huntress (1981) to show Orion as the type specimen of a variety of grotesque hero. Fontenrose views him as similar to Cúchulainn, that is, stronger, larger, and more potent than ordinary men and the violent lover of the Divine Huntress; other heroes of the same type are Actaeon, Leucippus (son of Oenomaus), Cephalus, Teiresias, and Zeus as the lover of Callisto. 
Fontenrose also sees Eastern parallels in the figures of Aqhat, Attis, Dumuzi, Gilgamesh, Dushyanta, and Prajapati (as pursuer of Ushas). In The Greek Myths (1955), Robert Graves views Oenopion as his perennial Year-King, at the stage where the king pretends to die at the end of his term and appoints a substitute, in this case Orion, who actually dies in his place. His blindness is iconotropy from a picture of Odysseus blinding the Cyclops, mixed with a purely Hellenic solar legend: the Sun-hero is captured and blinded by his enemies at dusk, but escapes and regains his sight at dawn, when all beasts flee him. Graves sees the rest of the myth as a syncretism of diverse stories. These include Gilgamesh and the Scorpion-Men, Set becoming a scorpion to kill Horus and the story of Aqhat and Yatpan from Ras Shamra, as well as a conjectural story of how the priestesses of Artemis Opis killed a visitor to their island of Ortygia. He compares Orion's birth from the bull's hide to a West African rainmaking charm and claims that the son of Poseidon should be a rainmaker. Cultural references The ancient Greek and Roman sources which tell more about Orion than his being a gigantic huntsman are mostly both dry and obscure, but poets do write of him: The brief passages in Aratus and Virgil are mentioned above. Pindar celebrates the pancratist Melissus of Thebes "who was not granted the build of an Orion", but whose strength was still great. Cicero translated Aratus in his youth; he made the Orion episode half again longer than it was in the Greek, adding the traditional Latin topos of madness to Aratus's text. Cicero's Aratea is one of the oldest Latin poems to come down to us as more than isolated lines; this episode may have established the technique of including epyllia in non-epic poems. Orion is used by Horace, who tells of his death at the hands of Diana/Artemis, and by Ovid, in his Fasti for May 11, the middle day of the Lemuria, when (in Ovid's time) the constellation Orion set with the sun. Ovid's episode tells the story of Hyrieus and two gods, Jupiter and Neptune, although Ovid is bashful about the climax; Ovid makes Hyrieus a poor man, which means the sacrifice of an entire ox is more generous. There is also a single mention of Orion in his Art of Love, as a sufferer from unrequited love: "Pale Orion wandered in the forest for Side." Statius mentions Orion four times in his Thebaïd; twice as the constellation, a personification of storm, but twice as the ancestor of Dryas of Tanagra, one of the defenders of Thebes. The very late Greek epic poet Nonnus mentions the oxhide story in brief, while listing the Hyrians in his Catalogue of the Boeotian army of Dionysius. References since antiquity are fairly rare. At the beginning of the 17th century, French sculptor Barthélemy Prieur cast a bronze statue Orion et Cédalion, some time between 1600 and 1611. This featured Orion with Cedalion on his shoulder, in a depiction of the ancient legend of Orion recovering his sight; the sculpture is now displayed at the Louvre. Nicolas Poussin painted Paysage avec Orion aveugle cherchant le soleil (1658) ("Landscape with blind Orion seeking the sun"), after learning of the description by the 2nd-century Greek author Lucian, of a picture of Orion recovering his sight; Poussin included a storm-cloud, which both suggests the transient nature of Orion's blindness, soon to be removed like a cloud exposing the sun, and includes Natalis Comes' esoteric interpretation of Orion as a storm-cloud. 
Poussin need not have consulted Lucian directly; the passage is in the notes of the illustrated French translation of Philostratus' Imagines which Poussin is known to have consulted. The Austrian Daniel Seiter (active in Turin, Italy), painted Diane auprès du cadavre d'Orion (c. 1685) ("Diana next to Orion's corpse"), pictured above. In Endymion (1818), John Keats includes the line "Or blind Orion hungry for the morn", thought to be inspired by Poussin. William Hazlitt may have introduced Keats to the painting—he later wrote the essay "On Landscape of Nicholas Poussin", published in Table Talk, Essays on Men and Manners (1821–2). Richard Henry Horne, writing in the generation after Keats and Hazlitt, penned the three volume epic poem Orion in 1843. It went into at least ten editions and was reprinted by the Scholartis Press in 1928. Science fiction author Ben Bova re-invented Orion as a time-traveling servant of various gods in a series of five novels. In The Blood of Olympus, the final volume of a series, Rick Riordan depicts Orion as one of the giant sons of the earth goddess Gaea. Italian composer Francesco Cavalli wrote the opera, L'Orione in 1653. The story is set on the Greek island of Delos and focuses on Diana's love for Orion as well as on her rival, Aurora. Diana shoots Orion only after being tricked by Apollo into thinking him a sea monster—she then laments his death and searches for Orion in the underworld until he is elevated to the heavens. French composer Louis de La Coste composed in 1728 the tragédie lyrique Orion. This time, it is Diana who is in love with Orion and is rejected by him. Johann Christian Bach ('the English Bach') wrote an opera, Orion, or Diana Reveng'd, first presented at London's Haymarket Theatre in 1763. Orion, sung by a castrato, is in love with Candiope, the daughter of Oenopion, King of Arcadia but his arrogance has offended Diana. Diana's oracle forbids him to marry Candiope and foretells his glory and death. He bids a touching farewell to Candiope and marches off to his destiny. Diana allows him his victory and then kills him, offstage, with her arrow. In another aria, his mother Retrea (Queen of Thebes), laments his death but ultimately sees his elevation to the heavens. The 2002 opera Galileo Galilei by American composer Philip Glass includes an opera within an opera piece between Orion and Merope. The sunlight, which heals Orion's blindness, is an allegory of modern science. Philip Glass has also written a shorter work on Orion, as have Tōru Takemitsu, Kaija Saariaho, and John Casken. David Bedford's late-twentieth-century works are about the constellation rather than the mythical figure; he is an amateur astronomer. The twentieth-century French poet René Char found the blind, lustful huntsman, both pursuer and pursued, a central symbol, as James Lawler has explained at some length in his 1978 work René Char: the Myth and the Poem. French novelist Claude Simon likewise found Orion an apt symbol, in this case of the writer, as he explained in his Orion aveugle of 1970. Marion Perret argues that Orion is a silent link in T. S. Eliot's The Waste Land (1922), connecting the lustful Actaeon/Sweeney to the blind Teiresias and, through Sirius, to the Dog "that's friend to men". See also Orion (sculpture) Rudra Standing on the shoulders of giants Notes References Giovanni Boccaccio; Genealogie Deorum Gentilium Libri. ed. Vincenzo Romano. Vol. X and XI of Opere, Bari 1951. The section about Orion is Vol XI, p. 
557–560: Book IX §19 is a long chapter about Orion himself; §20–21 are single paragraphs about his son and grandson (and the genealogy continues through §25 about Phyllis daughter of Lycurgus).
Natalis Comes: Mythologiae siue explicationis fabularum libri decem; translated as Natale Conti's Mythologiae, translated and annotated by John Mulryan and Steven Brown; Arizona Center for Medieval and Renaissance Studies, 2006. This is cited by the page number in the 1616 printing, followed by the page in Mulryan and Brown. The chapter on Orion is VIII, 13, which is pp. 457–9 Tritonius; II 751–5 Mulryan and Brown.
Joseph Fontenrose, Orion: The Myth of the Hunter and the Huntress. Berkeley: University of California Press (1981).
Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996. Two volumes.
E. H. Gombrich: "The Subject of Poussin's Orion", The Burlington Magazine, Vol. 84, No. 491 (Feb., 1944), pp. 37–41.
Robert Graves, The Greek Myths. Penguin, 1955; the 1988 reprint is by a different publisher.
Hard, Robin (2015), Eratosthenes and Hyginus: Constellation Myths, With Aratus's Phaenomena, Oxford University Press, 2015.
Karl Kerényi, Gods of the Greeks, tr. Norman Cameron. Thames and Hudson, 1951; later reprinted by the same publisher.
Karl Kerényi, Dionysus: Archetypal Image of Indestructible Life. Princeton University Press, 1976.
David Kubiak: "The Orion Episode of Cicero's Aratea", The Classical Journal, Vol. 77, No. 1 (October–November, 1981), pp. 12–22.
Most, G.W. (2018a), Hesiod, Theogony, Works and Days, Testimonia, Edited and translated by Glenn W. Most, Loeb Classical Library No. 57, Cambridge, Massachusetts, Harvard University Press, 2018. Online version at Harvard University Press.
Roger Pack, "A Romantic Narrative in Eunapius"; Transactions and Proceedings of the American Philological Association, Vol. 83 (1952), pp. 198–204. JSTOR link. A practicing classicist retells Orion in passing.
H. J. Rose (1928). A Handbook of Greek Mythology, pp. 115–117. London and New York: Routledge, 1991.
External links
Theoi.com: Orion – Excerpts from translations from Greek and Roman texts.
The Warburg Institute Iconographic Database (images of Orion)
Star Tales – Orion – Constellation mythology.
Orion Facts, Mythology & Its Cultural Significance – Everything You Need To Know About Orion.
Natalis Comes, Mythologiae siue explicationis fabularum libri decem – Scan of 1616 Padua edition, ed. M. Antonius Tritonius, pr. Petropaulus Tozzius.
Boccaccio's Genealogiae; apparently a scan of the edition cited.
Astronomical myths Mythological Boeotians Children of Poseidon Consorts of Eos Deeds of Apollo Deeds of Artemis Deeds of Zeus Mythological hunters Greek giants Helios in mythology Mythological Greek archers Retinue of Artemis Deeds of Gaia Deeds of Hermes Mythological rapists Mythological blind people
Orion (mythology)
[ "Astronomy" ]
7,168
[ "Astronomical myths" ]
163,901
https://en.wikipedia.org/wiki/Information%20society
An information society is a society or subculture where the usage, creation, distribution, manipulation and integration of information is a significant activity. Its main drivers are information and communication technologies, which have resulted in rapid growth of a variety of forms of information. Proponents of this theory posit that these technologies are impacting most important forms of social organization, including education, economy, health, government, warfare, and levels of democracy. The people who are able to partake in this form of society are sometimes called either computer users or even digital citizens, defined by K. Mossberger as “Those who use the Internet regularly and effectively”. This is one of many dozens of internet terms that have been identified to suggest that humans are entering a new and different phase of society. Some of the markers of this steady change may be technological, economic, occupational, spatial, cultural, or a combination of all of these. Information society is seen as a successor to industrial society. Closely related concepts are the post-industrial society (post-fordism), post-modern society, computer society and knowledge society, telematic society, society of the spectacle (postmodernism), Information Revolution and Information Age, network society (Manuel Castells) or even liquid modernity.
Definition
There is currently no universally accepted concept of what exactly can be defined as an information society and what shall not be included in the term. Most theoreticians agree that the transformation began somewhere between the 1970s, the transformations of the Socialist East in the early 1990s, and the 2000s, the period in which most of today's net principles were formed, and that it is still fundamentally changing the way societies work. Information technology goes beyond the internet, as the principles of internet design and usage influence other areas, and there are discussions about how big the influence of specific media or specific modes of production really is. Frank Webster notes five major types of information that can be used to define information society: technological, economic, occupational, spatial and cultural. According to Webster, the character of information has transformed the way that we live today. How we conduct ourselves centers around theoretical knowledge and information. Kasiwulaya and Gomo (Makerere University) suggest that information societies are those that have intensified their use of IT for economic, social, cultural and political transformation. In 2005, governments reaffirmed their dedication to the foundations of the Information Society in the Tunis Commitment and outlined the basis for implementation and follow-up in the Tunis Agenda for the Information Society. In particular, the Tunis Agenda addresses the issues of financing of ICTs for development and Internet governance that could not be resolved in the first phase. Some people, such as Antonio Negri, characterize the information society as one in which people do immaterial labour. By this, they appear to refer to the production of knowledge or cultural artifacts. One problem with this model is that it ignores the material and essentially industrial basis of the society. However, it does point to a problem for workers, namely how many creative people does this society need to function?
For example, it may be that you only need a few star performers, rather than a plethora of non-celebrities, as the work of those performers can be easily distributed, forcing all secondary players to the bottom of the market. It is now common for publishers to promote only their best-selling authors and to try to avoid the rest—even if they still sell steadily. Films are increasingly judged, in terms of distribution, by their first weekend's performance, in many cases cutting out the opportunity for word-of-mouth development.
Michael Buckland characterizes information in society in his book Information and Society. Buckland expresses the idea that information can be interpreted differently from person to person based on that individual's experiences. Considering that metaphors and technologies of information move forward in a reciprocal relationship, we can describe some societies (especially Japanese society) as information societies because we think of them as such. The word information may be interpreted in many different ways. According to Buckland in Information and Society, most of the meanings fall into three categories of human knowledge: information as knowledge, information as a process, and information as a thing. Thus, the Information Society refers to the social importance given to communication and information in today's society, where social, economic and cultural relations are involved. In the Information Society, the process of capturing, processing and communicating information is the main element that characterizes it. Thus, in this type of society, the vast majority of activity will be dedicated to the provision of services, and said services will consist of the processing, distribution or use of information.
The growth of computer information in society
The growth of the amount of technologically mediated information has been quantified in different ways, including society's technological capacity to store information, to communicate information, and to compute information. It is estimated that the world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986, which is the informational equivalent to less than one 730-MB CD-ROM per person in 1986 (539 MB per person), to 295 (optimally compressed) exabytes in 2007. This is the informational equivalent of 60 CD-ROMs per person in 2007 and represents a sustained annual growth rate of some 25%. The world's combined technological capacity to receive information through one-way broadcast networks was the informational equivalent of 174 newspapers per person per day in 2007. The world's combined effective capacity to exchange information through two-way telecommunications networks was 281 petabytes of (optimally compressed) information in 1986, 471 petabytes in 1993, 2.2 (optimally compressed) exabytes in 2000, and 65 (optimally compressed) exabytes in 2007, which is the informational equivalent of 6 newspapers per person per day in 2007. The world's technological capacity to compute information with humanly guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986 to 6.4 × 10^12 MIPS in 2007, experiencing the fastest growth rate of over 60% per year during the last two decades.
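The growth rates quoted above follow from simple compound-growth arithmetic: if capacity grows from a starting value to an ending value over n years at a constant annual rate r, then end = start × (1 + r)^n. As a rough check, here is a minimal Python sketch (not taken from the cited estimates; the function and variable names are illustrative assumptions):

def annual_growth_rate(start: float, end: float, years: int) -> float:
    # Solve end = start * (1 + r) ** years for r.
    return (end / start) ** (1 / years) - 1

# Figures quoted above, over the 21 years from 1986 to 2007.
storage = annual_growth_rate(2.6, 295.0, 2007 - 1986)         # optimally compressed exabytes
computation = annual_growth_rate(3.0e8, 6.4e12, 2007 - 1986)  # MIPS

print(f"storage: {storage:.0%} per year")          # about 25% per year
print(f"computation: {computation:.0%} per year")  # about 61% per year, i.e. "over 60%"

Applied to the storage and computation figures, this reproduces the roughly 25% and over-60% annual rates given above.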
James R. Beniger describes the necessity of information in modern society in the following way: “The need for sharply increased control that resulted from the industrialization of material processes through application of inanimate sources of energy probably accounts for the rapid development of automatic feedback technology in the early industrial period (1740–1830)” (p. 174) “Even with enhanced feedback control, industry could not have developed without the enhanced means to process matter and energy, not only as inputs of the raw materials of production but also as outputs distributed to final consumption.” (p. 175)
Development of the information society model
One of the first people to develop the concept of the information society was the economist Fritz Machlup. In 1933, Fritz Machlup began studying the effect of patents on research. His work culminated in the study The production and distribution of knowledge in the United States in 1962. This book was widely regarded and was eventually translated into Russian and Japanese. The Japanese have also studied the information society (or jōhōka shakai, 情報化社会). The issue of technologies and their role in contemporary society has been discussed in the scientific literature using a range of labels and concepts. This section introduces some of them. Ideas of a knowledge or information economy, post-industrial society, postmodern society, network society, the information revolution, informational capitalism, network capitalism, and the like, have been debated over the last several decades. Fritz Machlup (1962) introduced the concept of the knowledge industry. He began studying the effects of patents on research before distinguishing five sectors of the knowledge sector: education, research and development, mass media, information technologies, information services. Based on this categorization he calculated that in 1959, 29 per cent of the GNP in the USA had been produced in knowledge industries.
Economic transition
Peter Drucker has argued that there is a transition from an economy based on material goods to one based on knowledge. Marc Porat distinguishes a primary (information goods and services that are directly used in the production, distribution or processing of information) and a secondary sector (information services produced for internal consumption by government and non-information firms) of the information economy. Porat uses the total value added by the primary and secondary information sector to the GNP as an indicator for the information economy. The OECD has employed Porat's definition for calculating the share of the information economy in the total economy (e.g. OECD 1981, 1986). Based on such indicators, the information society has been defined as a society where more than half of the GNP is produced and more than half of the employees are active in the information economy. For Daniel Bell the number of employees producing services and information is an indicator for the informational character of a society. "A post-industrial society is based on services. (…) What counts is not raw muscle power, or energy, but information. (…) A post industrial society is one in which the majority of those employed are not involved in the production of tangible goods". Alain Touraine already spoke in 1971 of the post-industrial society.
"The passage to postindustrial society takes place when investment results in the production of symbolic goods that modify values, needs, representations, far more than in the production of material goods or even of 'services'. Industrial society had transformed the means of production: post-industrial society changes the ends of production, that is, culture. (…) The decisive point here is that in postindustrial society all of the economic system is the object of intervention of society upon itself. That is why we can call it the programmed society, because this phrase captures its capacity to create models of management, production, organization, distribution, and consumption, so that such a society appears, at all its functional levels, as the product of an action exercised by the society itself, and not as the outcome of natural laws or cultural specificities" (Touraine 1988: 104). In the programmed society also the area of cultural reproduction including aspects such as information, consumption, health, research, education would be industrialized. That modern society is increasing its capacity to act upon itself means for Touraine that society is reinvesting ever larger parts of production and so produces and transforms itself. This makes Touraine's concept substantially different from that of Daniel Bell who focused on the capacity to process and generate information for efficient society functioning. Jean-François Lyotard has argued that "knowledge has become the force of production over the last few decades". Knowledge would be transformed into a commodity. Lyotard says that postindustrial society makes knowledge accessible to the layman because knowledge and information technologies would diffuse into society and break up Grand Narratives of centralized structures and groups. Lyotard denotes these changing circumstances as postmodern condition or postmodern society. Similarly to Bell, Peter Otto and Philipp Sonntag (1985) say that an information society is a society where the majority of employees work in information jobs, i.e. they have to deal more with information, signals, symbols, and images than with energy and matter. Radovan Richta (1977) argues that society has been transformed into a scientific civilization based on services, education, and creative activities. This transformation would be the result of a scientific-technological transformation based on technological progress and the increasing importance of computer technology. Science and technology would become immediate forces of production (Aristovnik 2014: 55). Nico Stehr (1994, 2002a, b) says that in the knowledge society a majority of jobs involves working with knowledge. "Contemporary society may be described as a knowledge society based on the extensive penetration of all its spheres of life and institutions by scientific and technological knowledge" (Stehr 2002b: 18). For Stehr, knowledge is a capacity for social action. Science would become an immediate productive force, knowledge would no longer be primarily embodied in machines, but already appropriated nature that represents knowledge would be rearranged according to certain designs and programs (Ibid.: 41-46). For Stehr, the economy of a knowledge society is largely driven not by material inputs, but by symbolic or knowledge-based inputs (Ibid.: 67), there would be a large number of professions that involve working with knowledge, and a declining number of jobs that demand low cognitive skills as well as in manufacturing (Stehr 2002a). 
Also Alvin Toffler argues that knowledge is the central resource in the economy of the information society: "In a Third Wave economy, the central resource – a single word broadly encompassing data, information, images, symbols, culture, ideology, and values – is actionable knowledge" (Dyson/Gilder/Keyworth/Toffler 1994). At the end of the twentieth century, the concept of the network society gained importance in information society theory. For Manuel Castells, network logic is besides information, pervasiveness, flexibility, and convergence a central feature of the information technology paradigm (2000a: 69ff). "One of the key features of informational society is the networking logic of its basic structure, which explains the use of the concept of 'network society'" (Castells 2000: 21). "As an historical trend, dominant functions and processes in the Information Age are increasingly organized around networks. Networks constitute the new social morphology of our societies, and the diffusion of networking logic substantially modifies the operation and outcomes in processes of production, experience, power, and culture" (Castells 2000: 500). For Castells the network society is the result of informationalism, a new technological paradigm. Jan Van Dijk (2006) defines the network society as a "social formation with an infrastructure of social and media networks enabling its prime mode of organization at all levels (individual, group/organizational and societal). Increasingly, these networks link all units or parts of this formation (individuals, groups and organizations)" (Van Dijk 2006: 20). For Van Dijk networks have become the nervous system of society, whereas Castells links the concept of the network society to capitalist transformation, Van Dijk sees it as the logical result of the increasing widening and thickening of networks in nature and society. Darin Barney uses the term for characterizing societies that exhibit two fundamental characteristics: "The first is the presence in those societies of sophisticated – almost exclusively digital – technologies of networked communication and information management/distribution, technologies which form the basic infrastructure mediating an increasing array of social, political and economic practices. (…) The second, arguably more intriguing, characteristic of network societies is the reproduction and institutionalization throughout (and between) those societies of networks as the basic form of human organization and relationship across a wide range of social, political and economic configurations and associations". Critiques The major critique of concepts such as information society, postmodern society, knowledge society, network society, postindustrial society, etc. that has mainly been voiced by critical scholars is that they create the impression that we have entered a completely new type of society. "If there is just more information then it is hard to understand why anyone should suggest that we have before us something radically new" (Webster 2002a: 259). Critics such as Frank Webster argue that these approaches stress discontinuity, as if contemporary society had nothing in common with society as it was 100 or 150 years ago. Such assumptions would have ideological character because they would fit with the view that we can do nothing about change and have to adapt to existing political realities (kasiwulaya 2002b: 267). 
These critics argue that contemporary society first of all is still a capitalist society oriented towards accumulating economic, political, and cultural capital. They acknowledge that information society theories stress some important new qualities of society (notably globalization and informatization), but charge that they fail to show that these are attributes of overall capitalist structures. Critics such as Webster insist on the continuities that characterise change. In this way Webster distinguishes between different epochs of capitalism: laissez-faire capitalism of the 19th century, corporate capitalism in the 20th century, and informational capitalism for the 21st century (kasiwulaya 2006). For describing contemporary society based on a new dialectic of continuity and discontinuity, other critical scholars have suggested several terms like: transnational network capitalism, transnational informational capitalism (Christian Fuchs 2008, 2007): "Computer networks are the technological foundation that has allowed the emergence of global network capitalism, that is, regimes of accumulation, regulation, and discipline that are helping to increasingly base the accumulation of economic, political, and cultural capital on transnational network organizations that make use of cyberspace and other new technologies for global coordination and communication. [...] The need to find new strategies for executing corporate and political domination has resulted in a restructuration of capitalism that is characterized by the emergence of transnational, networked spaces in the economic, political, and cultural system and has been mediated by cyberspace as a tool of global coordination and communication. Economic, political, and cultural space have been restructured; they have become more fluid and dynamic, have enlarged their borders to a transnational scale, and handle the inclusion and exclusion of nodes in flexible ways. These networks are complex due to the high number of nodes (individuals, enterprises, teams, political actors, etc.) that can be involved and the high speed at which a high number of resources is produced and transported within them. But global network capitalism is based on structural inequalities; it is made up of segmented spaces in which central hubs (transnational corporations, certain political actors, regions, countries, Western lifestyles, and worldviews) centralize the production, control, and flows of economic, political, and cultural capital (property, power, definition capacities). This segmentation is an expression of the overall competitive character of contemporary society." (Fuchs 2008: 110+119). digital capitalism (Schiller 2000, cf. also Peter Glotz): "networks are directly generalizing the social and cultural range of the capitalist economy as never before" (Schiller 2000: xiv) virtual capitalism: the "combination of marketing and the new information technology will enable certain firms to obtain higher profit margins and larger market shares, and will thereby promote greater concentration and centralization of capital" (Dawson/John Bellamy Foster 1998: 63sq), high-tech capitalism or informatic capitalism (Fitzpatrick 2002) – to focus on the computer as a guiding technology that has transformed the productive forces of capitalism and has enabled a globalized economy. Other scholars prefer to speak of information capitalism (Morris-Suzuki 1997) or informational capitalism (Manuel Castells 2000, Christian Fuchs 2005, Schmiede 2006a, b). 
Manuel Castells sees informationalism as a new technological paradigm (he speaks of a mode of development) characterized by "information generation, processing, and transmission" that have become "the fundamental sources of productivity and power" (Castells 2000: 21). The "most decisive historical factor accelerating, channelling and shaping the information technology paradigm, and inducing its associated social forms, was/is the process of capitalist restructuring undertaken since the 1980s, so that the new techno-economic system can be adequately characterized as informational capitalism" (Castells 2000: 18). Castells has added to theories of the information society the idea that in contemporary society dominant functions and processes are increasingly organized around networks that constitute the new social morphology of society (Castells 2000: 500). Nicholas Garnham is critical of Castells and argues that the latter's account is technologically determinist because Castells points out that his approach is based on a dialectic of technology and society in which technology embodies society and society uses technology (Castells 2000: 5sqq). But Castells also makes clear that the rise of a new "mode of development" is shaped by capitalist production, i.e. by society, which implies that technology isn't the only driving force of society. Antonio Negri and Michael Hardt argue that contemporary society is an Empire that is characterized by a singular global logic of capitalist domination that is based on immaterial labour. With the concept of immaterial labour Negri and Hardt introduce ideas of information society discourse into their Marxist account of contemporary capitalism. Immaterial labour would be labour "that creates immaterial products, such as knowledge, information, communication, a relationship, or an emotional response" (Hardt/Negri 2005: 108; cf. also 2000: 280-303), or services, cultural products, knowledge (Hardt/Negri 2000: 290). There would be two forms: intellectual labour that produces ideas, symbols, codes, texts, linguistic figures, images, etc.; and affective labour that produces and manipulates affects such as a feeling of ease, well-being, satisfaction, excitement, passion, joy, sadness, etc. (Ibid.). Overall, neo-Marxist accounts of the information society have in common that they stress that knowledge, information technologies, and computer networks have played a role in the restructuration and globalization of capitalism and the emergence of a flexible regime of accumulation (David Harvey 1989). They warn that new technologies are embedded into societal antagonisms that cause structural unemployment, rising poverty, social exclusion, the deregulation of the welfare state and of labour rights, the lowering of wages, welfare, etc. Concepts such as knowledge society, information society, network society, informational capitalism, postindustrial society, transnational network capitalism, postmodern society, etc. show that there is a vivid discussion in contemporary sociology on the character of contemporary society and the role that technologies, information, communication, and co-operation play in it. Information society theory discusses the role of information and information technology in society, the question which key concepts shall be used for characterizing contemporary society, and how to define such concepts. It has become a specific branch of contemporary sociology. 
Second and third nature
Information society is the means of sending and receiving information from one place to another. As technology has advanced, so too has the way people have adapted in sharing information with each other. "Second nature" refers to a group of experiences that get made over by culture. They then get remade into something else that can then take on a new meaning. As a society we transform this process so it becomes something natural to us, i.e. second nature. So, by following a particular pattern created by culture we are able to recognise how we use and move information in different ways. From sharing information via different time zones (such as talking online) to information ending up in a different location (sending a letter overseas), this has all become a habitual process that we as a society take for granted. However, through the process of sharing information, vectors have enabled us to spread information even further. Through the use of these vectors, information is able to move and then separate from the initial things that enabled it to move. From here, something called "third nature" has developed. An extension of second nature, third nature is in control of second nature. It expands on what second nature is limited by. It has the ability to mould information in new and different ways. So, third nature is able to ‘speed up, proliferate, divide, mutate, and beam in on us from elsewhere’. It aims to create a balance between the boundaries of space and time (see second nature). This can be seen in the telegraph: it was the first successful technology that could send and receive information faster than a human being could move an object. As a result, different vectors of people have the ability to not only shape culture but create new possibilities that will ultimately shape society. Therefore, through the use of second nature and third nature, society is able to use and explore new vectors of possibility where information can be moulded to create new forms of interaction.
Sociological uses
In sociology, informational society refers to a post-modern type of society. Theoreticians like Ulrich Beck, Anthony Giddens and Manuel Castells argue that since the 1970s a transformation from industrial society to informational society has happened on a global scale. As steam power was the technology standing behind industrial society, so information technology is seen as the catalyst for the changes in work organisation, societal structure and politics occurring in the late 20th century. In the book Future Shock, Alvin Toffler used the phrase super-industrial society to describe this type of society. Other writers and thinkers have used terms like "post-industrial society" and "post-modern industrial society" with a similar meaning.
Related terms
A number of terms in current use emphasize related but different aspects of the emerging global economic order. The Information Society intends to be the most encompassing in that an economy is a subset of a society. The Information Age is somewhat limiting, in that it refers to a 30-year period between the widespread use of computers and the knowledge economy, rather than an emerging economic order. The knowledge era is about the nature of the content, not the socioeconomic processes by which it will be traded. The computer revolution and knowledge revolution refer to specific revolutionary transitions, rather than the end state towards which we are evolving.
The Information Revolution relates to the well-known terms agricultural revolution and Industrial Revolution. The information economy and the knowledge economy emphasize the content or intellectual property that is being traded through an information market or knowledge market, respectively. Electronic commerce and electronic business emphasize the nature of transactions and running a business, respectively, using the Internet and World-Wide Web. The digital economy focuses on trading bits in cyberspace rather than atoms in physical space. The network economy stresses that businesses will work collectively in webs or as part of business ecosystems rather than as stand-alone units. Social networking refers to the process of collaboration on massive, global scales. The internet economy focuses on the nature of markets that are enabled by the Internet. Knowledge services and knowledge value put content into an economic context. Knowledge services integrate knowledge management within a knowledge organization that trades in a knowledge market. Surveillance is also used as a means of acquiring knowledge, for example through the use of drones to gather information about other individuals. Although seemingly synonymous, each term conveys more than nuances or slightly different views of the same thing. Each term represents one attribute of the likely nature of economic activity in the emerging post-industrial society. Alternatively, the new economic order will incorporate all of the above plus other attributes that have not yet fully emerged. In connection with the development of the information society, information pollution appeared, which in turn gave rise to information ecology, associated with information hygiene.

Intellectual property considerations

One of the central paradoxes of the information society is that it makes information easily reproducible, leading to a variety of freedom/control problems relating to intellectual property. Essentially, business and capital, whose place becomes that of producing and selling information and knowledge, seem to require control over this new resource so that it can effectively be managed and sold as the basis of the information economy. However, such control can prove to be both technically and socially problematic: technically, because copy protection is often easily circumvented; and socially, because the users and citizens of the information society can prove to be unwilling to accept such absolute commodification of the facts and information that compose their environment. Responses to this concern range from the Digital Millennium Copyright Act in the United States (and similar legislation elsewhere), which makes circumvention of copy protection (see Digital rights management) illegal, to the free software, open source and copyleft movements, which seek to encourage and disseminate the "freedom" of various information products (traditionally both as in "gratis", or free of cost, and liberty, as in freedom to use, explore and share). Caveat: the term information society is often used by politicians to mean something like "we all use the internet now"; the sociological term information society (or informational society) has some deeper implications about the change of societal structure.
Because we lack political control of intellectual property, we are lacking in a concrete map of issues, an analysis of costs and benefits, and functioning political groups that are unified by common interests representing different opinions of this diverse situation that are prominent in the information society. See also Cyberspace Digitization Digital transformation Digital dark age Digital addict Digital phobic Information culture Information history Information industry Information revolution Internet culture Netizen Network society Simon Buckingham and unorganisation Social Age Surveillance capitalism The Information Society (journal) World Summit on the Information Society (WSIS) Yoneji Masuda References Works cited Further reading Alan Mckenna (2011) A Human Right to Participate in the Information Society. New York: Hampton Press. . Lev Manovich (2009) How to Represent Information Society?, Miltos Manetas, Paintings from Contemporary Life, Johan & Levi Editore, Milan . Online: Manuel Castells (2000) The Rise of the Network Society. The Information Age: Economy, Society and Culture. Volume 1. Malden: Blackwell. Second Edition. Michael Dawson/John Bellamy Foster (1998) Virtual Capitalism. In: Robert W. McChesney/Ellen Meiksins Wood/John Bellamy Foster (Eds.) (1998) Capitalism and the Information Age. New York: Monthly Review Press. pp. 51–67. Aleksander Aristovnik (2014) Development of the information society and its impact on the education sector in the EU : efficiency at the regional (NUTS 2) level. In: Turkish online journal of educational technology. Vol. 13. No. 2. pp. 54–60. Alistair Duff (2000) Information Society Studies. London: Routledge. Esther Dyson/George Gilder/George Keyworth/Alvin Toffler (1994) Cyberspace and the American Dream: A Magna Carta for the Knowledge Age. In: Future Insight 1.2. The Progress & Freedom Foundation. Tony Fitzpatrick (2002) Critical Theory, Information Society and Surveillance Technologies. In: Information, Communication and Society. Vol. 5. No. 3. pp. 357–378. Vilém Flusser (2013) Post-History, Univocal Publishing, Minneapolis Christian Fuchs (2008) Internet and Society: Social Theory in the Information Age. New York: Routledge. . Christian Fuchs (2007) Transnational Space and the ’Network Society’. In: 21st Century Society. Vol. 2. No. 1. pp. 49–78. Christian Fuchs (2005) Emanzipation! Technik und Politik bei Herbert Marcuse. Aachen: Shaker. Christian Fuchs (2004) The Antagonistic Self-Organization of Modern Society. In: Studies in Political Economy, No. 73 (2004), pp. 183– 209. Michael Hardt/Antonio Negri (2005) Multitude. War and Democracy in the Age of the Empire. New York: Hamish Hamilton. Michael Hardt/Antonio Negri Empire. Cambridge, MA: Harvard University Press. David Harvey (1989) The Condition of Postmodernity. London: Blackwell. Fritz Machlup (1962) The Production and Distribution of Knowledge in the United States. Princeton: Princeton University Press. OECD (1986) Trends in The Information Economy. Paris: OECD. OECD (1981) Information Activities, Electronics and Telecommunications Technologies: Impact on Employment, Growth and Trade. Paris: OECD. Pasquinelli, M. (2014) Italian Operaismo and the Information Machine, Theory, Culture & Society, first published on February 2, 2014. Pastore G. (2009) Verso la società della conoscenza, Le Lettere, Firenze. Peter Otto/Philipp Sonntag (1985) Wege in die Informationsgesellschaft. München. dtv. Pinterič, Uroš (2015): Spregledane pasti informacijske družbe. 
Fakulteta za organizacijske študije v Novem mestu Radovan Richta (1977) The Scientific and Technological Revolution and the Prospects of Social Development. In: Ralf Dahrendorf (Ed.) (1977) Scientific-Technological Revolution. Social Aspects. London: Sage. pp. 25–72. Dan Schiller (2000) Digital Capitalism. Cambridge, MA: MIT Press. Rudi Schmiede (2006a) Knowledge, Work and Subject in Informational Capitalism. In: Berleur, Jacques/Nurminen, Markku I./Impagliazzo, John (Eds.) (2006) Social Informatics: An Information Society for All? New York: Springer. pp. 333–354. Rudi Schmiede (2006b) Wissen und Arbeit im “Informational Capitalism”. In: Baukrowitz, Andrea et al. (Eds.) (2006) Informatisierung der Arbeit – Gesellschaft im Umbruch. Berlin: Edition Sigma. pp. 455–488. Nico Stehr (1994) Arbeit, Eigentum und Wissen. Frankfurt/Main: Suhrkamp. Nico Stehr (2002a) A World Made of Knowledge. Lecture at the Conference “New Knowledge and New Consciousness in the Era of the Knowledge Society", Budapest, January 31, 2002. Online: Nico Stehr (2002b) Knowledge & Economic Conduct. Toronto: University of Toronto Press. Alain Touraine (1988) Return of the Actor. Minneapolis. University of Minnesota Press. Jan Van Dijk (2006) The Network Society. London: Sage. Second Edition. Yannis Veneris (1984) The Informational Revolution, Cybernetics and Urban Modelling, PhD Thesis, University of Newcastle upon Tyne, UK. Yannis Veneris (1990) Modeling the transition from the Industrial to the Informational Revolution, Environment and Planning A 22(3):399-416. Frank Webster (2002a) The Information Society Revisited. In: Lievrouw, Leah A./Livingstone, Sonia (Eds.) (2002) Handbook of New Media. London: Sage. pp. 255–266. Frank Webster (2002b) Theories of the Information Society. London: Routledge. Frank Webster (2006) Theories of the Information Society. 3rd edition. London: Routledge Gelbstein, E. (2006) Crossing the Executive Digital Divide. External links Special Report - "Information Society: The Next Steps" Knowledge Assessment Methodology - interactive country-level data for the information society and knowledge economy The origin and development of a concept: the information society. Global Information Society Project at the World Policy Institute UNESCO - Observatory on the Information Society I/S: A Journal of Law and Policy for the Information Society - Ohio State law journal which addresses legal aspects related to the information society. - Participation in the Broadband Society. European network on social and technical research on the emerging information society. Information Information Age Information revolution Hyperreality Digital divide Postindustrial society Social information processing Sociological terminology Stages of history Evolution
Information society
[ "Technology" ]
7,232
[ "Hyperreality", "Information Age", "Science and technology studies", "Computing and society" ]
164,033
https://en.wikipedia.org/wiki/Netizen
The term netizen is a portmanteau of the English words internet and citizen, as in a "citizen of the net" or "net citizen". It describes a person actively involved in online communities or the Internet in general. The term also commonly implies an interest and active engagement in improving the internet, making it an intellectual and a social resource, or its surrounding political structures, especially in regard to open access, net neutrality and free speech. The term was widely adopted in the mid-1990s as a way to describe those who inhabit the new geography of the internet. Internet pioneer and author Michael F. Hauben is credited with coining and popularizing the term.

Determining factor

In general, any individual who has access to the internet has the potential to be classified as a netizen. In the 21st century, this is made possible by the global connectivity of the internet. People can physically be located in one country but connected to most of the world via a global network. There is a clear distinction between netizens and people who simply come online to use the internet. A netizen is described as an individual who actively seeks to contribute to the development of the internet. Netizens are not individuals who go online for personal gain or profit, but instead actively seek to make the internet a better place. A term used to classify internet users who do not actively contribute to the development of the internet is "lurker". Lurkers cannot be classified as netizens: although they do not actively harm the internet, they do not contribute either. In studies of online communities, lurkers seemed to be more critical of the technological elements enabling communities, whereas posters appeared to be more critical of users who hampered community creation by making rude or unpleasant comments. Additionally, although the engagement of those who post and those who lurk differed across the communities studied, discussions indicate that both lurkers and posters had distinct motives and might modify their engagement behaviours based on how they understand a given online community.

In China

In Mandarin Chinese, the terms wǎngmín (网民, literally "netizen" or "net folks") and wǎngyǒu (网友, literally "net friend" or "net mate") are commonly used terms meaning "internet users", and the English word netizen is used by mainland China-based English language media to translate both terms, resulting in the frequent appearance of that English word in media reporting about China, far more frequently than the use of the word in other contexts.

Netizen Prize

The international nonprofit organisation Reporters Without Borders awards an annual Netizen Prize in recognition of an internet user, blogger, cyber-dissident, or group who has helped to promote freedom of expression on the internet. The organisation uses the term when describing the political repression of cyber-dissidents, such as the legal consequences of blogging in politically repressive environments.

Psychological studies

With time, more and more people have started interacting and building communities online. The effect this has on human psychology and life is of major interest and concern to researchers. Several studies are being done on netizens under the name Netizens' Psychology. Internet addiction, mental health, outrage, and effects on children's development are some of the many problems that netizen psychology tries to focus on.
See also Digital citizen – citizens (of the physical space) using the Internet as a tool in order to engage in society, politics, and government participation Digital native – a person who has grown up in the information age Netiquette – social conventions for online communities Cyberspace – the new societal territory that is inhabited by Netizens Information Age Internet age Network society Active citizenship – the concept that citizens have certain roles and responsibilities to society and the environment and should actively participate Social Age List of Internet pioneers – those who helped erect the theoretical and technological foundation of the Internet (instead of improving its content, utility or political aspects) Participatory culture – a culture in which the public does not act merely as consumers and voters, but also as contributors, producers and active participants References Further reading External links The Mysterious Netizen Information society Information Age Internet terminology Social influence Cyberspace Hyperreality Virtual communities Citizenship
Netizen
[ "Technology" ]
855
[ "Information Age", "Computing terminology", "Internet terminology", "Information society", "Cyberspace", "Science and technology studies", "Information technology", "Computing and society", "Hyperreality" ]
164,040
https://en.wikipedia.org/wiki/Formula
In science, a formula is a concise way of expressing information symbolically, as in a mathematical formula or a chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities. The plural of formula can be either formulas (from the most common English plural noun form) or, under the influence of scientific Latin, formulae (from the original Latin).

In mathematics

In mathematics, a formula generally refers to an equation or inequality relating one mathematical expression to another, with the most important ones being mathematical theorems. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion. However, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume of a sphere in terms of its radius: V = (4/3)πr^3. Having obtained this result, the volume of any sphere can be computed as long as its radius is known. Here, notice that the volume V and the radius r are expressed as single letters instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, analytical or in closed form. In a general context, formulas often represent mathematical models of real world phenomena, and as such can be used to provide solutions (or approximate solutions) to real world problems, with some being more general than others. For example, the formula F = ma is an expression of Newton's second law, and is applicable to a wide range of physical situations. Other formulas, such as the use of the equation of a sine curve to model the movement of the tides in a bay, may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations. Expressions are distinct from formulas in the sense that they don't usually contain relations like equality (=) or inequality (<). Expressions denote a mathematical object, whereas formulas denote a statement about mathematical objects. This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example, 8x − 5 is an expression, while 8x − 5 ≥ 3 is a formula. However, in some areas of mathematics, and in particular in computer algebra, formulas are viewed as expressions that can be evaluated to true or false, depending on the values that are given to the variables occurring in the expressions. For example, 8x − 5 ≥ 3 takes the value false if x is given a value less than 1, and the value true otherwise. (See Boolean expression.)

In mathematical logic

In mathematical logic, a formula (often referred to as a well-formed formula) is an entity constructed using the symbols and formation rules of a given logical language. For example, in first-order logic, ∀x (P(f(x)) → Q(f(x), x, y)) is a formula, provided that f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol.

Chemical formulas

In modern chemistry, a chemical formula is a way of expressing information about the proportions of atoms that constitute a particular chemical compound, using a single line of chemical element symbols, numbers, and sometimes other symbols, such as parentheses, brackets, and plus (+) and minus (−) signs.
For example, H2O is the chemical formula for water, specifying that each molecule consists of two hydrogen (H) atoms and one oxygen (O) atom. Similarly, O3− denotes an ozonide ion, consisting of three oxygen atoms and a net negative charge. A chemical formula identifies each constituent element by its chemical symbol, and indicates the proportionate number of atoms of each element. In empirical formulas, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound—as ratios to the key element. For molecular compounds, these ratio numbers can always be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O, because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written as empirical formulas containing only whole numbers. An example is boron carbide, whose formula CBn has a variable non-whole-number ratio, with n ranging from over 4 to more than 6.5. When the chemical compound of the formula consists of simple molecules, chemical formulas often employ ways to suggest the structure of the molecule. There are several types of these formulas, including molecular formulas and condensed formulas. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. Except for the very simple substances, molecular chemical formulas generally lack needed structural information, and might even be ambiguous on occasion. A structural formula is a drawing that shows the location of each atom, and which atoms it binds to.

In computing

In computing, a formula typically describes a calculation, such as addition, to be performed on one or more variables. A formula is often implicitly provided in the form of a computer instruction such as: Degrees Celsius = (5/9)*(Degrees Fahrenheit - 32). In computer spreadsheet software, a formula indicating how to compute the value of a cell, say A3, could be written as =A1+A2, where A1 and A2 refer to other cells (column A, row 1 or 2) within the spreadsheet. This is a shortcut for the "paper" form A3 = A1+A2, where A3 is, by convention, omitted because the result is always stored in the cell itself, making the stating of the name redundant.

Units

Formulas used in science almost always require a choice of units. Formulas are used to express relationships between various quantities, such as temperature, mass, or charge in physics; supply, profit, or demand in economics; or a wide range of other quantities in other disciplines. An example of a formula used in science is Boltzmann's entropy formula. In statistical thermodynamics, it is a probability equation relating the entropy S of an ideal gas to the quantity W, which is the number of microstates corresponding to a given macrostate: S = k ln W, where k is the Boltzmann constant, equal to 1.380649 × 10^−23 joules per kelvin, and W is the number of microstates consistent with the given macrostate.

See also

Formula editor Formula unit Law (mathematics) Mathematical notation Scientific law Symbol (chemical element) Theorem Well-formed formula

References

Mathematical notation Elementary algebra
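To make concrete how a written formula becomes a computation, here is a minimal Python sketch of three of the formulas mentioned above: the Celsius conversion, the volume of a sphere, and Boltzmann's entropy formula. It is only an illustration; the function names and example values are chosen for this sketch and are not part of the article or of any standard library.

import math

def fahrenheit_to_celsius(degrees_fahrenheit):
    # Degrees Celsius = (5/9) * (Degrees Fahrenheit - 32)
    return (5.0 / 9.0) * (degrees_fahrenheit - 32)

def sphere_volume(radius):
    # V = (4/3) * pi * r^3
    return (4.0 / 3.0) * math.pi * radius ** 3

def boltzmann_entropy(microstates):
    # S = k * ln(W), with k the Boltzmann constant in joules per kelvin
    k = 1.380649e-23
    return k * math.log(microstates)

print(fahrenheit_to_celsius(212))   # 100.0
print(sphere_volume(1.0))           # roughly 4.18879
print(boltzmann_entropy(10 ** 6))   # roughly 1.9e-22

As in the spreadsheet example, the formula itself does not change; only the notation for naming the inputs and storing the result differs.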
Formula
[ "Mathematics" ]
1,338
[ "Elementary mathematics", "nan", "Algebra", "Elementary algebra" ]
164,055
https://en.wikipedia.org/wiki/Logarithmic%20scale
A logarithmic scale (or log scale) is a method used to display numerical data that spans a broad range of values, especially when there are significant differences between the magnitudes of the numbers involved. Unlike a linear scale, where each unit of distance corresponds to the same increment, on a logarithmic scale each unit of length is a multiple of some base value raised to a power, and corresponds to the multiplication of the previous value in the scale by the base value. In common use, logarithmic scales are in base 10 (unless otherwise specified). A logarithmic scale is nonlinear, and as such numbers with equal distance between them, such as 1, 2, 3, 4, 5, are not equally spaced. Equally spaced values on a logarithmic scale have exponents that increment uniformly. Examples of equally spaced values are 10, 100, 1000, 10000, and 100000 (i.e., 10^1, 10^2, 10^3, 10^4, 10^5) and 2, 4, 8, 16, and 32 (i.e., 2^1, 2^2, 2^3, 2^4, 2^5). Exponential growth curves are often depicted on a logarithmic scale graph.

Common uses

The markings on slide rules are arranged in a log scale for multiplying or dividing numbers by adding or subtracting lengths on the scales. The following are examples of commonly used logarithmic scales, where a larger quantity results in a higher value: Richter magnitude scale and moment magnitude scale (MMS) for strength of earthquakes and movement in the Earth Sound level, with the unit decibel Neper for amplitude, field and power quantities Frequency level, with units cent, minor second, major second, and octave for the relative pitch of notes in music Logit for odds in statistics Palermo Technical Impact Hazard Scale Logarithmic timeline Counting f-stops for ratios of photographic exposure The rule of nines used for rating low probabilities Entropy in thermodynamics Information in information theory Particle size distribution curves of soil The following are examples of commonly used logarithmic scales, where a larger quantity results in a lower (or negative) value: pH for acidity Stellar magnitude scale for brightness of stars Krumbein scale for particle size in geology Absorbance of light by transparent samples Some of our senses operate in a logarithmic fashion (Weber–Fechner law), which makes logarithmic scales for these input quantities especially appropriate. In particular, our sense of hearing perceives equal ratios of frequencies as equal differences in pitch. In addition, studies of young children in an isolated tribe have shown logarithmic scales to be the most natural display of numbers in some cultures.

Graphic representation

As an illustration, consider the same data presented in four versions of a graph: the top left graph is linear in the X- and Y-axes, and the Y-axis ranges from 0 to 10; a base-10 log scale is used for the Y-axis of the bottom left graph, and the Y-axis ranges from 0.1 to 1000; the top right graph uses a log-10 scale for just the X-axis; and the bottom right graph uses a log-10 scale for both the X-axis and the Y-axis. Presentation of data on a logarithmic scale can be helpful when the data: covers a large range of values, since the use of the logarithms of the values rather than the actual values reduces a wide range to a more manageable size; may contain exponential laws or power laws, since these will show up as straight lines. A slide rule has logarithmic scales, and nomograms often employ logarithmic scales. The geometric mean of two numbers is midway between the numbers on a logarithmic scale. Before the advent of computer graphics, logarithmic graph paper was a commonly used scientific tool.
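As a small illustration of the spacing property described above, the following Python sketch (purely illustrative; the helper name is not from the article) maps values to their positions along a base-10 logarithmic axis, showing that values differing by a constant factor land at evenly spaced positions while the linearly spaced values 1 to 5 do not.

import math

def log10_scale_position(value):
    # A value's position along a base-10 logarithmic axis is its base-10 logarithm.
    return math.log10(value)

# Powers of ten land at evenly spaced positions: [1.0, 2.0, 3.0, 4.0, 5.0]
print([log10_scale_position(v) for v in (10, 100, 1000, 10000, 100000)])

# Linearly spaced values are not evenly spaced on a log axis:
# approximately [0.0, 0.301, 0.477, 0.602, 0.699]
print([round(log10_scale_position(v), 3) for v in (1, 2, 3, 4, 5)])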
Log–log plots

If both the vertical and horizontal axes of a plot are scaled logarithmically, the plot is referred to as a log–log plot.

Semi-logarithmic plots

If only the ordinate or abscissa is scaled logarithmically, the plot is referred to as a semi-logarithmic plot.

Extensions

A modified log transform can be defined for negative input (y < 0) to avoid the singularity for zero input (y = 0), and so produce symmetric log plots: Y = sgn(y) · log10(1 + |y/C|), for a constant C = 1/ln(10).

Logarithmic units

A logarithmic unit is a unit that can be used to express a quantity (physical or mathematical) on a logarithmic scale, that is, as being proportional to the value of a logarithm function applied to the ratio of the quantity and a reference quantity of the same type. The choice of unit generally indicates the type of quantity and the base of the logarithm.

Examples

Examples of logarithmic units include units of information and information entropy (nat, shannon, ban) and of signal level (decibel, bel, neper). Frequency levels or logarithmic frequency quantities have various units used in electronics (decade, octave) and for music pitch intervals (octave, semitone, cent, etc.). Other logarithmic scale units include the Richter magnitude scale point. In addition, several industrial measures are logarithmic, such as standard values for resistors, the American wire gauge, the Birmingham gauge used for wire and needles, and so on. Units of information: bit, byte, hartley, nat, shannon. Units of level or level difference: bel, decibel, neper. Units of frequency level: decade, decidecade, savart, octave, tone, semitone, cent.

Table of examples

The two definitions of a decibel are equivalent, because a ratio of power quantities is equal to the square of the corresponding ratio of root-power quantities.

See also

Alexander Graham Bell Bode plot Geometric mean (arithmetic mean in logscale) John Napier Level (logarithmic quantity) Log–log plot Logarithm Logarithmic mean Log semiring Preferred number Semi-log plot Scale Order of magnitude Applications Entropy Entropy (information theory) pH Richter magnitude scale

References

Further reading

External links

Non-Newtonian calculus website Non-Newtonian calculus
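The symmetric log transform described under "Extensions" can be sketched in a few lines of Python. This assumes the transform has the form sgn(y) · log10(1 + |y/C|) with C = 1/ln(10), as reconstructed above; the function name is chosen for this illustration.

import math

C = 1 / math.log(10)  # constant chosen so the transform has unit slope at y = 0

def symlog(y):
    # Symmetric log transform: approximately log10 for large |y|,
    # roughly linear near zero, and defined for negative and zero inputs.
    return math.copysign(math.log10(1 + abs(y) / C), y)

for y in (-1000, -1, 0, 1, 1000):
    print(y, round(symlog(y), 3))
    # prints roughly: -1000 -3.362, -1 -0.519, 0 0.0, 1 0.519, 1000 3.362

Plotting libraries commonly expose a transform of this kind directly (often under a name such as "symlog"), so in practice the sketch mainly serves to show the arithmetic.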
Logarithmic scale
[ "Physics", "Mathematics" ]
1,280
[ "Physical quantities", "Calculus", "Quantity", "Non-Newtonian calculus", "Logarithmic scales of measurement" ]
164,089
https://en.wikipedia.org/wiki/Politeness
Politeness is the practical application of good manners or etiquette so as not to offend others and to put them at ease. It is a culturally defined phenomenon, and therefore what is considered polite in one culture can sometimes be quite rude or simply eccentric in another cultural context. While the goal of politeness is to refrain from behaving in an offensive way so as not to offend others, and to make all people feel relaxed and comfortable with one another, these culturally defined standards at times may be broken within the context of personal boundaries – this is known as positive politeness. Types Anthropologists Penelope Brown and Stephen Levinson identified four kinds of politeness, deriving from Erving Goffman's concept of face: Negative politeness is the act of making a request less infringing, such as "If you don't mind..." or "If it isn't too much trouble..."; respects a person's right to act freely. This is a variety of deference. There is a greater use of indirect speech acts. It is also considered a part of being assertive. Non-assertive politeness is when a person refrains from making a comment or asserting their beliefs during a discussion so as to remain polite to others present. It is also when a person goes along with a decision made by someone else so as not to appear impolite, essentially following general social norms. Assertive politeness can be when a person offers their opinion in a positive and constructive way to be assistive and helpful during an interaction, or refrains from purporting to agree with something they do not actually agree with in a way that does not offend others. Positive politeness seeks to establish a positive relationship between parties, and it respects a person's need to be liked and understood. This standard of politeness is determined by personal boundaries, and often violates etiquette norms in letter. Direct speech acts, swearing and flouting Grice's maxims can be considered aspects of positive politeness because: They show an awareness that the relationship is strong enough to cope with what would normally be considered impolite (in the popular understanding of the term); They articulate an awareness of the other person's values, which fulfills the person's desire to be accepted. They convey a natural, relaxed, casual setting. Some cultures, groups, and individuals prefer some ideals of politeness over the other. In this way, politeness is culturally bound, and even within broader cultures, people may disagree. History During the Enlightenment era, a self-conscious process of the imposition of polite norms and behaviors became a symbol of being a genteel member of the upper class. Upwardly mobile middle class bourgeoisie increasingly tried to identify themselves with the elite through their adopted artistic preferences and their standards of behavior. They became preoccupied with precise rules of etiquette, such as when to show emotion, the art of elegant dress and graceful conversation and how to act courteously, especially with women. Influential in this new discourse was a series of essays on the nature of politeness in a commercial society, penned by the philosopher Lord Shaftesbury in the early 18th century. Shaftesbury defined politeness as the art of being pleasing in company: "'Politeness' may be defined a dext'rous management of our words and actions, whereby we make other people have better opinion of us and themselves." 
Periodicals such as The Spectator, founded as a daily publication by Joseph Addison and Richard Steele in 1711, gave regular advice to their readers on how to be a polite gentleman. The Spectator's stated goal was "to enliven morality with wit, and to temper wit with morality... to bring philosophy out of the closets and libraries, schools and colleges, to dwell in clubs and assemblies, at tea-tables and coffeehouses." It provided its readers with educated, topical talking points, and advice on how to carry on conversations and social interactions in a polite manner. The art of polite conversation and debate was particularly cultivated in the coffeehouses of the period. Conversation was supposed to conform to a particular manner, with the language of polite and civil conversation considered to be essential to the conduct of coffeehouse debate and conversation. The concept of "civility" referred to a desired social interaction which valued sober and reasoned debate on matters of interest. Established rules and procedures for proper behavior, as well as conventions, were outlined by gentleman's clubs, such as Harrington's Rota Club. Periodicals, including The Tatler and The Spectator, intended to infuse politeness into English coffeehouse conversation, as their explicit purpose lay in the reformation of English manners and morals.

Techniques

There is a variety of techniques one can use to seem polite. Some techniques include expressing uncertainty and ambiguity through hedging and indirectness, polite lying, or the use of euphemisms (which make use of ambiguity as well as connotation). Additionally, one can add tag questions to direct statements, such as "You were at the store, weren't you?" There are three types of tags: modal tags, affective tags, and facilitative tags. Modal tags request information of which the speaker is uncertain: "You haven't been to the store yet, have you?" Affective tags indicate concern for the listener: "You haven't been here long, have you?" Facilitative tags invite the addressee to comment on the request being made: "You can do that, can't you?" Finally, softeners reduce the force of what would be a brusque demand: "Hand me that thing, could you?" Some studies have shown that women are more likely to use politeness formulas than men, though the exact differences are not clear. Most current research has shown that gender differences in politeness use are complex, since there is a clear association between politeness norms and the stereotypical speech of middle class white women, at least in the UK and US. It is therefore unsurprising that women tend to be associated with politeness more and their linguistic behavior judged in relation to these politeness norms.

Linguistic devices

In addition to the above, many languages have specific means to show politeness, deference, respect, or a recognition of the social status of the speaker and the hearer. There are two main ways in which a given language shows politeness: in its lexicon (for example, employing certain words on formal occasions, and colloquial forms in informal contexts), and in its morphology (for example, using special verb forms for polite discourse). The T–V distinction is a common example in Western languages, while some Asian languages extend this to avoiding pronouns entirely. Some languages have complex politeness systems, such as Korean speech levels and honorific speech in Japanese. Japanese is perhaps the most widely known example of a language that encodes politeness at its core.
Japanese has two main levels of politeness, one for intimate acquaintances, family, and friends, and one for other groups, and verb morphology reflects these levels. Besides that, some verbs have special hyper-polite suppletive forms. This also happens with some nouns and interrogative pronouns. Japanese also employs different personal pronouns for each person according to gender, age, rank, degree of acquaintance, and other cultural factors.

Criticism of Brown and Levinson's typology

Brown and Levinson's theory of politeness has been criticised as not being universally valid by linguists working with East Asian languages, including Japanese. Matsumoto and Ide claim that Brown and Levinson assume the speaker's volitional use of language, which allows the speaker's creative use of face-maintaining strategies toward the addressee. In East Asian cultures like Japan, politeness is achieved not so much on the basis of volition as on discernment (wakimae, finding one's place), or prescribed social norms. Wakimae is oriented towards the need for acknowledgment of the positions or roles of all the participants as well as adherence to formality norms appropriate to the particular situation.

See also

Confirmation bias Courtesy Formality Intercultural competence Polite fiction Politeness maxims Politeness theory Register (sociolinguistics) Respect Valediction, expression used to say farewell

References

Further reading

External links

Politeness, BBC Radio 4 discussion with Amanda Vickery, David Wootton & John Mullan (In Our Time, Sep. 30, 2004)

Etiquette Pragmatics Sociolinguistics Virtue
Politeness
[ "Biology" ]
1,766
[ "Etiquette", "Behavior", "Human behavior" ]
164,095
https://en.wikipedia.org/wiki/Moral%20panic
A moral panic is a widespread feeling of fear that some evil person or thing threatens the values, interests, or well-being of a community or society. It is "the process of arousing social concern over an issue", usually perpetuated by moral entrepreneurs and mass media coverage, and exacerbated by politicians and lawmakers. Moral panic can give rise to new laws aimed at controlling the community. Stanley Cohen, who developed the term, states that moral panic happens when "a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests". While the issues identified may be real, the claims "exaggerate the seriousness, extent, typicality and/or inevitability of harm". Moral panics are now studied in sociology and criminology, media studies, and cultural studies. It is often academically considered irrational (see Cohen's model of moral panic, below). Examples of moral panic include the belief in widespread abduction of children by predatory pedophiles and belief in ritual abuse of women and children by Satanic cults. Some moral panics can become embedded in standard political discourse, which include concepts such as the Red Scare, racism, and terrorism. It differs from mass hysteria, which is closer to a psychological illness rather than a sociological phenomenon. History and development Though the term moral panic was used in 1830 by a religious magazine regarding a sermon, it was used in a way that completely differs from its modern social science application. The phrase was used again in 1831, with an intent that is possibly closer to its modern use. Though not using the term moral panic, Marshall McLuhan, in his 1964 book Understanding Media, articulated the concept academically in describing the effects of media. As a social theory or sociological concept, the concept was first developed in the United Kingdom by Stanley Cohen, who introduced the phrase moral panic in a 1967–1969 PhD thesis that became the basis for his 1972 book Folk Devils and Moral Panics. In the book, Cohen describes the reaction among the British public to the rivalry between the "mod" and "rocker" youth subcultures of the 1960s and 1970s. Cohen's initial development of the concept was for the purpose of analyzing the definition of and social reaction to these subcultures as a social problem. According to Cohen, a moral panic occurs when a "condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests." To Cohen, those who start the panic after fearing a threat to prevailing social or cultural values are 'moral entrepreneurs', while those who supposedly threaten social order have been described as 'folk devils'. In the early 1990s, Erich Goode and Nachman Ben-Yehuda produced an "attributional" model that placed more emphasis on strict definition than cultural processes. Differences in British and American definitions Many sociologists have pointed out the differences between definitions of a moral panic as described by American versus British sociologists. Kenneth Thompson claimed that American sociologists tended to emphasize psychological factors, while the British portrayed "moral panics" as crises of capitalism. British criminologist Jock Young used the term in his participant observation study of drug consumption in Porthmadog, Wales, between 1967 and 1969. 
In Policing the Crisis: Mugging, the State and Law and Order (1978), Marxist Stuart Hall and his colleagues studied the public reaction to the phenomenon of mugging and the perception that it had recently been imported from American culture into the UK. Employing Cohen's definition of moral panic, Hall and colleagues theorized that the "rising crime rate equation" performs an ideological function relating to social control. Crime statistics, in Hall's view, are often manipulated for political and economic purposes; moral panics could thereby be ignited to create public support for the need to "police the crisis".

Cohen's model of moral panic

First to name the phenomenon, Stanley Cohen investigated a series of "moral panics" in his 1972 book Folk Devils and Moral Panics. In the book, Cohen describes the reaction among the British public to the seaside rivalry between the "mod" and "rocker" youth subcultures of the 1960s and 1970s. In a moral panic, Cohen says, "the untypical is made typical". Cohen's initial development of the concept was for the purpose of analyzing the definition of and social reaction to these subcultures as a social problem. He was interested in demonstrating how agents of social control amplified deviance, in that they potentially damaged the identities of those labeled as "deviant" and invited them to embrace deviant identities and behavior. According to Cohen, these groups were labelled as being outside the central core values of consensual society and as posing a threat to both the values of society and society itself, hence the term folk devils. Setting out to test his hypotheses on mods and rockers, Cohen ended up in a rather different place: he discovered a pattern of construction and reaction with greater foothold than mods and rockers – the moral panic. He thereby identified five sequential stages of moral panic. Characterizing the reactions to the mod and rocker conflict, he identified four key agents in moral panics: mass media, moral entrepreneurs, the culture of social control, and the public. In a more recent edition of Folk Devils and Moral Panics, Cohen suggested that the term panic in itself connotes irrationality and a lack of control. Cohen maintained that panic is a suitable term when used as an extended metaphor.

Cohen's stages of moral panic

Setting out to test his hypotheses on mods and rockers, Cohen discovered a pattern of construction and reaction with greater foothold than mods and rockers – the moral panic. According to Cohen, there are five sequential stages in the construction of a moral panic: An event, condition, episode, person, or group of persons is perceived and defined as a threat to societal values, safety, and interests. The nature of these apparent threats is amplified by the mass media, who present the supposed threat through simplistic, symbolic rhetoric. Such portrayals appeal to public prejudices, creating an evil in need of social control (folk devils) and victims (the moral majority). A sense of social anxiety and concern among the public is aroused through these symbolic representations of the threat. The gatekeepers of morality – editors, religious leaders, politicians, and other "moral"-thinking people – respond to the threat, with socially-accredited experts pronouncing their diagnoses and solutions to the "threat". This includes new laws or policies. The condition then disappears, submerges or deteriorates and becomes more visible.
Cohen observed further: Sometimes the object of the panic is quite novel and at other times it is something which has been in existence long enough, but suddenly appears in the limelight. Sometimes the panic passes over and is forgotten, except in folk-lore and collective memory; at other times it has more serious and long-lasting repercussions and might produce such changes as those in legal and social policy or even in the way the society conceives itself.

Agents of moral panic

Characterizing the reactions to the mod and rocker conflict, Cohen identified four key agents in moral panics: mass media, moral entrepreneurs, the culture of social control, and the public. Media – especially key in the early stage of social reaction, producing "processed or coded images" of deviance and the deviants. This involves three processes: exaggeration and distortion of who did or said what; prediction, the dire consequences of failure to act; symbolization, signifying a person, word, or thing as a threat. Moral entrepreneurs – individuals and groups who target deviant behavior. Societal control culture – comprises those with institutional power: the police, the courts, and local and national politicians. They are made aware of the nature and extent of the 'threat'; concern is passed up the chain of command to the national level, where control measures are instituted. The public – these include individuals and groups. They have to decide who and what to believe: in the mod and rocker case, the public initially distrusted media messages, but ultimately believed them.

Mass media

The concept of "moral panic" has also been linked to certain assumptions about the mass media. In recent times, the mass media have become important players in the dissemination of moral indignation, even when they do not appear to be consciously engaged in sensationalism or in muckraking. Simply reporting a subset of factual statements without contextual nuance can be enough to generate concern, anxiety, or panic. Cohen stated that the mass media are the primary source of the public's knowledge about deviance and social problems. He further argued that moral panic gives rise to the folk devil by labelling actions and people. Christian Joppke further underlines the importance of the media, noting that shifts in public attention "can trigger the decline of movements and fuel the rise of others." According to Cohen, the media appear in any or all of three roles in moral panic dramas: Setting the agenda – selecting deviant or socially problematic events deemed newsworthy, then using finer filters to select which events are candidates for moral panic. Transmitting the images – transmitting the claims by using the rhetoric of moral panics. Breaking the silence and making the claim.

Goode and Ben-Yehuda's attributional model

In their 1994 book Moral Panics: The Social Construction of Deviance, Erich Goode and Nachman Ben-Yehuda take a social constructionist approach to moral panics, challenging the assumption that sociology is able to define, measure, explain, and ameliorate social problems. Reviewing empirical studies in the social constructionist perspective, Goode and Ben-Yehuda produced an "attributional" model that identified essential characteristics and placed more emphasis on strict definition than on cultural processes.
They arrived at five defining "elements", or "criteria", of a moral panic: Concern – there is "heightened level of concern over the behaviour of a certain group or category" and its consequences; in other words, there is the belief that the behavior of the group or activity deemed deviant is likely to have a negative effect on society. Concern can be indicated via opinion polls, media coverage, and lobbying activity. Hostility – there is "an increased level of hostility" toward the deviants, who are "collectively designated as the enemy, or an enemy, of respectable society". These deviants are constructed as "folk devils", and a clear division forms between "them" and "us". Consensus – "there must be at least a certain minimal measure of consensus" across society as a whole, or at least "designated segments" of it, that "the threat is real, serious and caused by the wrongdoing group members and their behaviour". This is to say, though concern does not have to be nationwide, there must be widespread acceptance that the group in question poses a very real threat to society. It is important at this stage that the "moral entrepreneurs" are vocal and the "folk devils" appear weak and disorganized. Disproportionality – "public concern is in excess of what is appropriate if concern were directly proportional to objective harm". More simply, the action taken is disproportionate to the actual threat posed by the accused group. According to Goode and Ben-Yehuda, "the concept of moral panic rests on disproportion". As such, statistics are exaggerated or fabricated, and the existence of other equally or more harmful activity is denied. Volatility – moral panics are highly volatile and tend to disappear as quickly as they appeared because public interest wanes or news reports change to another narrative. Goode and Ben-Yehuda also examined three competing explanations of moral panics: the grass-roots model – the source of panic is identified as widespread anxieties about real or imagined threats. the elite-engineered model – an elite group induces, or engineers, a panic over an issue that they know to be exaggerated in order to move attention away from their own lack of solving social problems. the interest group theory – "the middle rungs of power and status" are where moral issues are most significantly felt. Similarly, writing about the Blue Whale Challenge and the Momo Challenge as examples of moral panics, Benjamin Radford listed themes that he commonly observed in modern versions of these phenomena: Hidden dangers of modern technology. Evil stranger manipulating the innocent. A "hidden world" of anonymous evil people. Topic clusters In over 40 years of extensive study, researchers have identified several general clusters of topics that help describe the way in which moral panics operate and the impact they have. Some of the more common clusters identified are: child abuse, drugs and alcohol, immigration, media technologies, and street crime. Child abuse Exceptional cases of physical or sexual abuse against children have driven policies based on child protection, regardless of their frequency or contradicting evidence from experts. While discoveries about pedophilia in the priesthood and among celebrities has somewhat altered the original notion of pedophiles being complete strangers, their presence in and around the family is hardly acknowledged. 
Alcohol and other drugs Substances used for pleasure such as alcohol and other drugs are popularly subject to legal action and criminalization due to their alleged harms to the health of those who partake in them or general order on the streets. Recent examples include methamphetamine, mephedrone, and designer drugs. Immigration A series of moral panic is likely to recur whenever humans migrate to a foreign location to live alongside the native or indigenous population, particularly if the newcomers are of a different skin color or religion. These immigrants may be accused of: bringing alien cultures and refusing to integrate with the mainstream culture; putting strain on welfare, education, and housing systems; and excessive involvement in crime. Media technologies The advent of any new medium of communication produces anxieties among those who deem themselves as protectors of childhood and culture. Their fears are often based on a lack of knowledge as to the actual capacities or usage of the medium. Moralizing organizations, such as those motivated by religion, commonly advocate censorship, while parents remain concerned. According to media studies professor Kirsten Drotner: [E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms.… In some cases, debate of a new medium brings about – indeed changes into – heated, emotional reactions … what may be defined as a media panic.Recent manifestations of this kind of development include cyberbullying and sexting. Street crime A central concern of modern mass media has been interpersonal crime. When new types or patterns of crime emerge, coverage expands considerably, especially when said crime involves increased violence or the use of weapons. Sustaining the idea that crime is out of control, this keeps prevalent the fear of being randomly attacked on the street by violent young men. Examples Researchers have considered a number of historical and current events to meet the criteria set out by Stanley Cohen. Historic examples Nativist movement and the Know-Nothing Party (1840s–1860s) The brief success of the Know-Nothing Party in the US during the 1850s can be understood as resulting from a moral panic over Irish Catholic immigration dating back to the 1840s, particularly as it related to religion, politics, and jobs. Nativist criticism of immigrants from Catholic nations centered upon the control of the Pope over church members. The concern regarding the social threat led the Know-Nothing Party in the 1856 presidential election to win 21.5% of the vote. The quick decline in political success for the Know-Nothing Party as a result of a decline in concern for the perceived social threat is an indicative feature of the movements situated in moral panic. Red Scare (1919–1920, late 1940s–1950s) During the years 1919 to 1920, followed by the late 1940s to the 1950s, the United States had a moral panic over communism and feared being attacked by the Soviet Union. In the late 1940s and the 1950s, a period now known as the McCarthy Era, Senator Joseph McCarthy used his power as a senator to conduct a witch hunt for communists he claimed had infiltrated all levels of American society, including Hollywood, the State Department, and the armed forces. 
When he began, he held little influence or respect within the Senate, but he exploited Americans' fears of communism (and Congress' desire to not lose re-election) to rise to prominence and keep the hunt going in spite of an increasingly apparent lack of evidence, often accusing those who dared oppose him of being communists themselves. "The Devil's music" (1920s–1980s) Over the years, there has been concern of various types of new music causing spiritual or otherwise moral corruption to younger generations, often called "the devil's music". While the types of music popularly labeled as such has changed with time, along with the intended meaning of the term, this basic factor of the moral panic has remained constant. It could thus be argued that this is really a series of smaller moral panics that fall under a larger umbrella. While most notable in the United States, other countries such as Romania have seen exposure to or promotion of the idea as well. Blues was one of the first music genres to receive this label, mainly due to a perception that it incited violence and other poor behavior. In the early 20th century, the blues was considered disreputable, especially as white audiences began listening to the blues during the 1920s. Jazz was another early receiver of the label. At the time, traditionalists considered jazz to contribute to the breakdown of morality. Despite the veiled attacks on blues and jazz as "negro music" often going hand-in-hand with other attacks on the genres, urban middle-class African Americans perceived jazz as "devil's music", and agreed with the beliefs that jazz's improvised rhythms and sounds were promoting promiscuity. Some have speculated that the rock phase of the panic in the 1970s and 1980s contributed to the popularity of the satanic ritual abuse alleged moral panic in the 1980s. Comic books (1950s) In the United States, substantial limits were placed on comic book content during the 1950s, especially in the horror and crime genres. This moral panic was promoted by the psychologist Fredric Wertham, who claimed that comics were a major source of juvenile delinquency, arguing in his book Seduction of the Innocent that they predisposed children to violence. Comic books appeared in congressional hearings, and organisations promoted book burnings. Wertham's work resulted in the creation of the Comics Code, which drastically limited what kind of content could be published. As a result of these limitations, many comics publishers and illustrators were forced to leave the profession, and the content produced by those that remained became tamer and more focused on superheroes. During the following decades, the Comics Code was loosened in scope before finally being abolished in 2011. Switchblades (1950s) In the United States, a 1950 article titled "The Toy That Kills" in the Women's Home Companion, about automatic knives, or "switchblades", sparked significant controversy. It was further fuelled by highly popular films of the late 1950s, including Rebel Without a Cause (1955), Crime in the Streets (1956), 12 Angry Men (1957), The Delinquents, High School Confidential (1958), and the 1957 Broadway musical West Side Story. Fixation on the switchblade as the symbol of youth violence, sex, and delinquency resulted in demands from the public and Congress to control the sale and possession of such knives. 
State laws restricting or criminalizing switchblade possession and use were adopted by an increasing number of state legislatures, and many of the restrictive laws around them worldwide date back to this period. Mods and rockers (1960s) In early 1960s Britain, the two main youth subcultures were Mods and Rockers. The "Mods and Rockers" conflict was explored as an instance of moral panic by sociologist Stanley Cohen in his seminal study Folk Devils and Moral Panics, which examined media coverage of the Mod and Rocker riots in the 1960s. Although Cohen acknowledged that Mods and Rockers engaged in street fighting in the mid-1960s, he argued that they were no different from the evening brawls that occurred between non-Mod and non-Rocker youths throughout the 1950s and early 1960s, both at seaside resorts and after football games. Dungeons & Dragons (1980s–1990s) At various times, Dungeons & Dragons and other tabletop role-playing games have been accused of promoting such practices as Satanism, witchcraft, suicide, pornography and murder. In the 1980s and later, some groups, especially fundamentalist Christian groups, accused the games of encouraging interest in sorcery and the veneration of demons. Satanic panic (1980s–1990s) The "satanic panic" was a series of moral panics regarding satanic ritual abuse that originated in the United States and spread to other English-speaking countries in the 1980s and 1990s, which led to a string of wrongful convictions. The West Memphis Three were three teenagers falsely accused of murdering children in a satanic ritual. Two were sentenced to life in prison and one was sentenced to death, before all being released after 18 years in prison. HIV/AIDS (1980s–1990s) Acquired immune deficiency syndrome (AIDS) is a viral illness that may lead to or exacerbate other health conditions such as pneumonia, fungal infections, tuberculosis, toxoplasmosis, and cytomegalovirus. A meeting of the British Sociological Association's South West and Wales Study entitled "AIDS: The Latest Moral Panic" was prompted by the growing interest of medical sociologists in AIDS, as well as that of UK health care professionals working in the field of health education. It took place at a time when both groups were beginning to voice an increased concern with the growing media attention and fear-mongering that AIDS was attracting. In the 1980s, a moral panic was created within the media over HIV/AIDS. For example, in Britain, a prominent advertisement by the government suggested that the public was uninformed about HIV/AIDS due to a lack of publicly accessible and accurate information. The media outlets nicknamed HIV/AIDS the "gay plague", which further stigmatized the disease. However, scientists gained a far better understanding of HIV/AIDS as it grew in the 1980s and moved into the 1990s and beyond. The illness was still negatively viewed by many as either being caused by or passed on through the gay community. Once it became clear that this was not the case, the moral panic created by the media changed to blaming the overall negligence of ethical standards by the younger generation (both male and female), resulting in another moral panic. Authors behind AIDS: Rights, Risk, and Reason argued that "British TV and press coverage is locked into an agenda which blocks out any approach to the subject which does not conform in advance to the values and language of a profoundly homophobic culture—a culture that does not regard gay men as fully or properly human. 
No distinction obtains for the agenda between 'quality' and 'tabloid' newspapers, or between 'popular' and 'serious' television." Similarly, reports of a group of AIDS cases amongst gay men in Southern California which suggested that a sexually transmitted infectious agent might be the etiological agent led to several terms relating to homosexuality being coined for the disease, including gay plague. Dangerous dogs (late 1980s – early 1990s) After a series of high-profile dog attacks on children in the United Kingdom, the British press began to engage in a campaign against so-called dangerous dog breeds, especially pit bulls and Rottweilers, which bore all the hallmarks of a moral panic. This media pressure led the government to hastily introduce the Dangerous Dogs Act 1991 which has been criticised as "among the worst pieces of legislation ever seen, a poorly thought-out knee-jerk reaction to tabloid headlines that was rushed through Parliament without proper scrutiny." The act specifically focused on pit bulls, which were associated with the lower social strata of British society, rather than the Rottweilers and Dobermann Pinschers generally owned by richer social groups. Critics have identified the presence of social class as a factor in the dangerous dogs moral panic, with establishment anxieties about the "sub-proletarian" sector of British society displaced onto the folk devil of the "Dangerous dog". Ongoing historic examples Increase in crime (1970s–present) Fear of increasing crime rates is often the cause of moral panics. In fact, the rates of many types of crime have declined by 50% or more beginning in the mid to late 1980s and early 1990s. In Europe, crime statistics show this is part of a broader pattern of crime decline since the late Middle Ages, with a reversal from the 1960s to the 1980s and 1990s, before the decline continued. This phenomenon, which often taps into a population's herd mentality, continues to occur in various cultures. In some cases, the perception of increased crime can be caused by increased reporting of crimes or by better record-keeping. Japanese jurist Koichi Hamai explains how the changes in crime recording in Japan since the 1990s caused people to believe that the crime rate was rising and that crimes were getting increasingly severe. Violence and video games (1970s–present) There have been calls to regulate violence in video games for nearly as long as the video game industry has existed, with Death Race being a notable early example. In the 1990s, improvements in video game technology allowed for more lifelike depictions of violence in games such as Mortal Kombat and Doom. The industry attracted controversy over violent content and concerns about effects they might have on players, generating frequent media stories that attempted to associate video games with violent behavior, in addition to a number of academic studies that reported conflicting findings about the strength of correlations. According to Christopher Ferguson, sensationalist media reports and the scientific community unintentionally worked together in "promoting an unreasonable fear of violent video games". Concerns from parts of the public about violent games led to cautionary, often exaggerated news stories, warnings from politicians and other public figures, and calls for research to prove the connection, which in turn led to studies "speaking beyond the available data and allowing the promulgation of extreme claims without the usual scientific caution and skepticism". 
Since the 1990s, there have been attempts to regulate violent video games in the United States through congressional bills as well as within the industry. Public concern and media coverage of violent video games reached a high point following the Columbine High School massacre in 1999, after which videos were found of the perpetrators, Eric Harris and Dylan Klebold, talking about violent games like Doom and making comparisons between the acts they intended to carry out and aspects of games. Ferguson and others have explained the video game moral panic as part of a cycle that all new media go through. In 2011, the U.S. Supreme Court ruled in Brown v. Entertainment Merchants Association that legally restricting sales of video games to minors would be unconstitutional and deemed the research presented in favour of regulation as "unpersuasive". War on drugs (1970s–present) Some critics have pointed to moral panic as an explanation for the War on Drugs. For example, a Royal Society of Arts commission concluded that "the Misuse of Drugs Act 1971 ... is driven more by 'moral panic' than by a practical desire to reduce harm". Some have written that one of the many rungs supporting the moral panic behind the War on Drugs was a separate but related moral panic, which peaked in the late 1990s, involving media's gross exaggeration of the frequency of the surreptitious use of date rape drugs. News media have been criticized for advocating "grossly excessive protective measures for women, particularly in coverage between 1996 and 1998", for overstating the threat and for excessively dwelling on the topic. For example, a 2009 Australian study found that drug panel tests were unable to detect any drug in any of the 97 instances of patients admitted to the hospital believing their drinks might have been spiked. Sex offenders, child sexual abuse, and pedophilia (1970s–present) The media narrative of a sex offender, highlighting egregious offenses as typical behaviour of any sex offender, and media distorting the facts of some cases, has led legislators to attack judicial discretion, making sex offender registration mandatory based on certain listed offenses rather than individual risk or the actual severity of the crime, thus practically catching less serious offenders under the domain of harsh sex offender laws. In the 1990s and 2000s, there have been instances of moral panics in the United Kingdom and the United States, related to colloquial uses of the term pedophilia to refer to such unusual crimes as high-profile cases of child abduction. The moral panic over pedophilia began in the 1970s after the sexual revolution. While homosexuality was becoming more socially accepted after the sexual revolution, pro-contact pedophiles believed that the sexual revolution never helped them. In the 1970s, pro-contact pedophile activist organizations such as Paedophile Information Exchange (PIE) and North American Man/Boy Love Association (NAMBLA) were formed in October 1974 and December 1978, respectively. Despite receiving some support, PIE received much backlash when they advocated for abolishing or lowering age of consent laws. As a result, people protested against PIE. Until the first half of the 1970s, sex was not yet part of the concept of domestic child abuse, which used to be limited to physical abuse and neglect. 
The sexual part of child abuse became prominent in the United States due to the encounter of two political agendas: the fight against battered child syndrome by pediatricians during the 1960s and the feminist anti-rape movement, in particular the denunciation of domestic sexual violence. These two movements overlapped in 1975, creating a new political agenda about child sexual abuse. Laura Lowenkron wrote: "The strong political and emotional appeal of the theme of 'child sexual abuse' strengthened the feminist criticism of the patriarchal family structure, according to which domestic violence is linked to the unequal power between men and women and between adults and children." Although the concern over child sexual abuse was caused by feminists, the concern over child sexual abuse also attracted traditional groups and conservative groups. Lowenkron added: "Concerned about the increasing expansion and acceptance of so-called 'sexual deviations' during what was called the libertarian age from the 1960s to the early 1970s", conservative groups and traditional groups "saw in the fight against 'child sexual abuse' the chance" to "revive fears about crime and sexual dangers". In the 1980s, the media began to report more frequently on cases of children being raped, kidnapped, or murdered, leading to the moral panic over sex offenders and pedophiles becoming very intense in the early 1980s. In 1981, for instance, a six-year-old boy named Adam Walsh was abducted, murdered, and beheaded. Investigators believe the murderer was serial killer Ottis Toole. The murder of Adam Walsh took over nationwide news and led to a moral panic over child abduction, followed by the creation of new laws for missing children. According to criminologist Richard Moran, the Walsh case "created a nation of petrified kids and paranoid parents ... Kids used to be able to go out and organize a stickball game, and now all playdates and the social lives of children are arranged and controlled by the parents." Also during the 1980s, inaccurate and heavily flawed data about sex offenders and their recidivism rates was published. This data led to the public believing sex offenders to have a particularly high recidivism rate; this in turn led to the creation of sex offender registries. Later information revealed that sex offenders, including child sex offenders, have a low recidivism rate. Other highly publicized cases, similar to the murder of Adam Walsh, that contributed to the creation of sex offender registries and sex offender laws include the abduction and murder of 11-year-old boy Jacob Wetterling in 1989; the rape and murder of 7-year-old girl Megan Kanka in 1994; and the rape and murder of 9-year-old girl Jessica Lunsford in 2005. Another contributing factor in the moral panic over pedophiles and sex offenders was the day-care sex-abuse hysteria in the 1980s and early 1990s, including the McMartin preschool trial. This led to a panic where parents became hypervigilant with concerns of predatory child sex offenders seeking to abduct children in public spaces, such as playgrounds. Contemporary examples Human trafficking (2000–present) Many critics of contemporary anti-prostitution activism argue that much of the current concern about human trafficking and its more general conflation with prostitution and other forms of sex work have hallmarks of moral panic. 
They further argue that this moral panic shares much in common with the 'white slavery' panic of a century earlier, which in the US prompted passage of the 1910 Mann Act. Nick Davies argues that the following major factors contributed towards this effect. Since the collapse of Communism, Western Europe was flooded with sex workers from Eastern Europe, and the term sex trafficking came to mean any organized movement of sex workers, losing the connotation of force and coercion. This change of the definition entered, e.g., into the UK's Sexual Offences Act 2003. Second, academic researchers on sex trade provided a range of estimates of the trafficked persons, including estimates based on various assumptions, up to the very pessimistic ones. The media picked the most alarmist numbers, which were uncritically used by politicians, who in their turn were quoted for further misleading information. Terrorism and Islamic extremism (2001–present) After the September 11 attacks in 2001, some scholars identified a rising fear of Muslims in the western world, which they described as a moral panic. This exaggeration of the threat posed by Islam served a political purpose, contributing to the concept of a global war on terror, including the war in Afghanistan and a war in Iraq. Following the September 11 attacks, there was a dramatic increase in hate crimes against Muslims and Arabs in the United States, with rates peaking in 2001 and later surpassed in 2016. QAnon conspiracies (2020s) QAnon, a late-2010s to early 2020s far-right conspiracy theory that began on 4chan and which alleged that a secret cabal of Jewish, Satan-worshipping, cannibalistic pedophiles is running a global child sex-trafficking ring, has been described as a moral panic and compared to the 1980s panic over satanic ritual abuse. Criticism of moral panic as an explanation Paul Joosse has argued that while classic moral panic theory styled itself as being part of the "sceptical revolution" that sought to critique structural functionalism, it is actually very similar to Émile Durkheim's depiction of how the collective conscience is strengthened through its reactions to deviance (in Cohen's case, for example, "right-thinkers" use folk devils to strengthen societal orthodoxies). In his analysis of Donald Trump's victory in the 2016 United States presidential election, Joosse reimagined moral panic in Weberian terms, showing how charismatic moral entrepreneurs can at once deride folk devils in the traditional sense while avoiding the conservative moral recapitulation that classic moral panic theory predicts. Another criticism is that of disproportionality: there is no way to measure what a proportionate reaction should be to a specific action. Writing in 1995 about the moral panic that arose in the UK after a series of murders by juveniles, chiefly that of two-year-old James Bulger by two 10-year-old boys but also including that of 70-year-old Edna Phillips by two 17-year-old girls, the sociologist Colin Hay pointed out that the folk devil was ambiguous in such cases; the child perpetrators would normally be thought of as innocent. In 1995, Angela McRobbie and Sarah Thornton argued "that it is now time that every stage in the process of constructing a moral panic, as well as the social relations which support it, should be revised". 
Their argument is that mass media has changed since the concept of moral panic emerged so "that 'folk devils' are less marginalized than they once were", and that "folk devils" are not only castigated by mass media but supported and defended by it as well. They also suggest that the "points of social control" that moral panics used to rest on "have undergone some degree of shift, if not transformation". British criminologist Yvonne Jewkes (2004) has also raised issue with the term morality, how it is accepted unproblematically in the concept of "moral panic" and how most research into moral panics fails to approach the term critically but instead accepts it at face value. Jewkes goes on to argue that the thesis and the way it has been used fails to distinguish between crimes that quite rightly offend human morality, and thus elicit a justifiable reaction, and those that demonise minorities. The public are not sufficiently gullible to keep accepting the latter and consequently allow themselves to be manipulated by the media and the government. Another British criminologist, Steve Hall (2012), goes a step further to suggest that the term moral panic is a fundamental category error. Hall argues that although some crimes are sensationalized by the media, in the general structure of the crime/control narrative the ability of the existing state and criminal justice system to protect the public is also overstated. Public concern is whipped up only for the purpose of being soothed, which produces not panic but the opposite, comfort and complacency. Echoing another point Hall makes, sociologists Thompson and Williams (2013) argue that the concept of "moral panic" is not a rational response to the phenomenon of social reaction, but itself a product of the irrational middle-class fear of the imagined working-class "mob". Using as an example a peaceful and lawful protest staged by local mothers against the re-housing of sex-offenders on their estate, Thompson and Williams argue that the sensationalist demonization of the protesters by moral panic theorists and the liberal press was just as irrational as the demonization of the sex offenders by the protesters and the tabloid press. Many sociologists and criminologists (Ungar, Hier, Rohloff) have revised Cohen's original framework. The revisions are compatible with the way in which Cohen theorizes panics in the third Introduction to Folk Devils and Moral Panics. See also Major Boobage, a fictional depiction of one The Population Bomb (1968) Citations General and cited references Further reading Author affiliation: Planned Parenthood Federation of America (PPFA). External links Barriers to critical thinking Crowd psychology Deviance (sociology) Mass psychogenic illness Moral psychology Panic Social phenomena
Moral panic
[ "Biology" ]
8,005
[ "Deviance (sociology)", "Behavior", "Human behavior" ]
7,193,551
https://en.wikipedia.org/wiki/Ectoplasm%20%28paranormal%29
In spiritualism, ectoplasm, also known as simply ecto, is a substance or spiritual energy "exteriorized" by physical mediums. It was coined in 1894 by psychical researcher Charles Richet. Although the term is widespread in popular culture, there is no scientific evidence that ectoplasm exists and many purported examples were exposed as hoaxes fashioned from cheesecloth, gauze or other natural substances. The term comes from the Ancient Greek words ἐκτός ektos, "outside" and πλάσμα plasma, "anything formed". Phenomenon In Spiritualism, ectoplasm is said to be formed by physical mediums when in a trance state. This material is excreted as a gauze-like substance from orifices on the medium's body and spiritual entities are said to drape this substance over their nonphysical body, enabling them to interact in the physical and real universe. Some accounts claim that ectoplasm begins clear and almost invisible, but darkens and becomes visible as the psychic energy becomes stronger. Still other accounts state that in extreme cases ectoplasm will develop a strong odor. According to some mediums, the ectoplasm cannot occur in light conditions as the ectoplasmic substance would disintegrate. The psychical researcher Gustav Geley defined ectoplasm as being "very variable in appearance, being sometimes vaporous, sometimes a plastic paste, sometimes a bundle of fine threads, or a membrane with swellings or fringes, or a fine fabric-like tissue". Arthur Conan Doyle described ectoplasm as "a viscous, gelatinous substance which appeared to differ from every known form of matter in that it could solidify and be used for material purposes". The physical existence of ectoplasm has not been scientifically demonstrated, and tested samples purported to be ectoplasm have been found to be various non-paranormal substances. Other researchers have duplicated, with non-supernatural materials, the photographic effects sometimes said to prove the existence of ectoplasm. Ectenic force The idea of ectoplasm was merged into the notion of an "ectenic force" by some early psychical researchers who were seeking a physical explanation for reports of psychokinesis in sessions. Its existence was initially hypothesized by Count Agenor de Gasparin, to explain the phenomena of table turning and tapping during séances. Ectenic force was named by de Gasparin's colleague M. Thury, a professor of natural history at the Academy of Geneva. Between them, de Gasparin and Thury conducted a number of experiments in ectenic force, and claimed some success. Their work was not independently verified. Other psychical researchers who studied mediumship speculated that within the human body an unidentified fluid termed the "psychode", "psychic force" or "ecteneic force" existed and was capable of being released to influence matter. This view was held by Camille Flammarion and William Crookes, however a later psychical researcher Hereward Carrington pointed out that the fluid was hypothetical and has never been discovered. The psychical investigator W. J. Crawford (1881–1920) had claimed that a fluid substance was responsible for levitation of objects after witnessing the medium Kathleen Goligher. Crawford, after witnessing a number of her séances, claimed to have obtained flashlight photographs of the substance; he later described the substance as "plasma". He claimed the substance is not visible to the naked eye but can be felt by the body. 
The physicist and psychical researcher Edmund Edward Fournier d'Albe later investigated the medium Kathleen Goligher at many sittings and arrived at the opposite conclusions to Crawford; according to D'Albe, no paranormal phenomena such as levitation had occurred with Goligher and stated he had found evidence of fraud. D'Albe claimed the substance in the photographs of Crawford was ordinary muslin. During a séance D'Albe had observed white muslin between Goligher's feet. Fraud Ectoplasm on many occasions has been proven to be fraudulent. Many mediums had used methods of swallowing and regurgitating cheesecloth, textile products smoothed with potato starch and in other cases the ectoplasm was made from paper, cloth and egg white or butter muslin. The Society for Psychical Research investigations into mediumship exposed many fraudulent mediums which contributed to the decline of interest in physical mediumship. In 1907, Hereward Carrington exposed the tricks of fraudulent mediums such as those used in slate-writing, table-turning, trumpet mediumship, materializations, sealed-letter reading and spirit photography. In the early 20th century the psychical researcher Albert von Schrenck-Notzing investigated medium Eva Carrière and claimed her ectoplasm "materializations" were not from spirits but the result of "ideoplasty" in which the medium could form images onto ectoplasm from her mind. Schrenck-Notzing published the book Phenomena of Materialisation (1923) which included photographs of the ectoplasm. Critics pointed out the photographs of the ectoplasm revealed marks of magazine cut-outs, pins and a piece of string. Schrenck-Notzing admitted that on several occasions Carrière deceptively smuggled pins into the séance room. Magician Carlos María de Heredia replicated Carrière's ectoplasm using a comb, gauze and a handkerchief. Donald West wrote that the ectoplasm of Carrière was fake and was made of cut-out paper faces from newspapers and magazines on which fold marks could sometimes be seen from the photographs. A photograph of Carrière taken from the back of the ectoplasm face revealed it to be made from a magazine cut out with the letters "Le Miro". The two-dimensional face had been clipped from the French magazine Le Miroir. Back issues of the magazine also matched some of Carrière's ectoplasm faces. Cut out faces that she used included Woodrow Wilson, King Ferdinand of Bulgaria, French president Raymond Poincaré and the actress Mona Delza. After Schrenck-Notzing discovered Carrière had taken her ectoplasm faces from the magazine he defended her by claiming she had read the magazine but her memory had recalled the images and they had materialized into the ectoplasm. Because of this Schrenck-Notzing was described as credulous. Joseph McCabe wrote "In Germany and Austria, Baron von Schrenck-Notzing is the laughing-stock of his medical colleagues." Danish medium Einer Nielsen was investigated by a committee from the Kristiania University in Norway in 1922 and it was discovered in a séance that his ectoplasm was fake. He was also caught hiding ectoplasm in his rectum. Mina Crandon was a famous medium known for producing ectoplasm during her séance sittings. She produced a small ectoplasmic hand from her stomach which waved about in the darkness. Her career ended, however, when biologists examined the hand and found it to be made of a piece of carved animal liver. 
Walter Franklin Prince described the Crandon case as "the most ingenious, persistent, and fantastic complex of fraud in the history of psychic research". Psychical researchers Eric Dingwall and Harry Price republished an anonymous work by a former medium, entitled Revelations of a Spirit Medium (1922), which exposed the tricks of mediumship and the fraudulent methods of producing "spirit hands". Originally all the copies of the book were bought up by spiritualists and destroyed. On the subject of ectoplasm and fraud, John Ryan Haule wrote: Price exposed medium Helen Duncan's fraudulent techniques by proving, through analysis of a sample of ectoplasm produced by Duncan, that it was cheesecloth that she had swallowed and regurgitated. Duncan had also used dolls' heads and masks as ectoplasm. Mediums would also cut pictures from magazines and stick them to the cheesecloth to pretend they were spirits of the dead. Another researcher, C. D. Broad, wrote that ectoplasm in many cases had proven to be composed of household material such as butter-muslin, and that there was no solid evidence that it had anything to do with spirits. Photographs of ectoplasm taken by Thomas Glendenning Hamilton reveal it to be made of tissue paper and magazine cut-outs of people. The famous photograph taken by Hamilton of medium Mary Ann Marshall (1880–1963) depicts tissue paper with a cut-out of Arthur Conan Doyle's head from a newspaper. Skeptics have suspected that Hamilton may have been behind the hoax. Mediums Rita Goold and Alec Harris dressed up in their séances as ectoplasm spirits and were exposed as frauds. The exposures of fraudulent ectoplasm in séances caused a rapid decline in physical mediumship. In popular culture In the 1937 Cary Grant movie Topper, ectoplasm is the means whereby the ghosts George and Marian Kirby make themselves visible. In the 1941 play Blithe Spirit, and subsequent movies, ectoplasm is referenced by Madame Arcati in Act 1, scene 2. Since its release in 1984, the film Ghostbusters has popularized in contemporary fiction the idea of associating ghosts with slimy, often green, ectoplasm. In Eva Ibbotson's 1996 children's novel Dial-a-Ghost, ghosts are made up of ectoplasm, which is depicted as a physical material. See also Aura (paranormal) Bhoot (ghost) Ghost Ichor Incorporeal Kirlian photography List of basic parapsychology topics Spirit photography References External links Energy (esotericism) Fictional materials Ghosts Paranormal hoaxes Paranormal terminology Spiritualism Vitalism 1890s neologisms Séances
Ectoplasm (paranormal)
[ "Physics", "Biology" ]
2,052
[ "Vitalism", "Materials", "Fictional materials", "Non-Darwinian evolution", "Biology theories", "Matter" ]
7,194,467
https://en.wikipedia.org/wiki/Microdistillery
A microdistillery is a small, often boutique-style distillery established to produce beverage grade spirit alcohol in relatively small quantities, usually done in single batches (as opposed to larger distillers' continuous distilling process). While the term is most commonly used in the United States, micro-distilleries have been established in Europe for many years, either as small cognac distilleries supplying the larger cognac houses, or as distilleries of single malt whisky originally produced for the blended Scotch whisky market, but whose products are now sold as niche single malt brands. The more recent development of micro-distilleries can now also be seen in locations as diverse as London, Switzerland, and South Africa. Throughout much of the world, small distilleries operate in communities of various sizes, mostly without being given a special description. Due to the extended period of Prohibition in the United States, however, most small distillers were forced out of business, leaving only the corporate-dominated megadistilleries to resume operation when Prohibition was repealed. Most microdistilleries in South Africa ceased to exist when legislation was introduced in 1964 that made it almost impossible for small, private distilleries to operate viably. The legislation was relaxed again in 2003 and although most distilling expertise was lost, it was recovered by a new generation of microdistillers and the sector has grown since. A recent trend in this segment of the distilling industry is for megadistillers to create their own micro-distillery within their current operations to produce small batch brands. The Maker's Mark distillery (owned by Suntory) and the Buffalo Trace distillery (owned by The Sazerac Company, which also owns the A. Smith Bowman microdistillery) are now producing specialty bourbon brands with small stills. Movement The modern microdistilling movement grew out of the beer microbrewing trend, which originated in the United Kingdom in the 1970s and quickly spread throughout the United States in the following decades. While still in its infancy, the popularity of microdistilling and microdistilled spirits is expanding consistently, with many microbreweries and small wineries establishing distilleries within the scope of their brewing or winemaking operations. Other microdistilleries are farm-based. Anchor Brewing Company, Ballast Point Brewing Company, and Dogfish Head are examples of American craft breweries that have begun expanding into microdistillation. Leopold Bros. is an example of a microdistiller that began as a microbrewery, and now operates as a distillery alone. Some of the newer microdistilleries produce only spirits. Plain and seasonally-flavored vodkas are popular products. As with the emergence of microbrewing, California and Oregon have experienced the highest number of microdistillery openings. Significant recent growth has also occurred in the Midwest. Microdistilleries for gin and vodka have also now started to re-emerge in London, England, after being restricted and effectively banned for over a hundred years due to UK government restrictions on still sizes, which have now been partially relaxed. There are now five licensed distilleries in London: Beefeater and Thames Distillers, and four microdistilleries: the City of London Distillery, The London Distillery Company, Sacred Microdistillery and Sipsmith. 
At the same time, European micro-distilleries have been a key element in the absinthe renaissance in several countries, including Switzerland. South Africa has experienced relatively big growth in microdistilleries and produces mainly pot-distilled brandies, fruit brandies, fruit-based eau de vie (locally called mampoer), husk-based spirits (like Italian Grappa), and a wide range of liqueurs and flavoured vodkas. Distillique is one of the few training academies worldwide which provides craft and microdistiller training courses on a regular monthly basis for microdistillers. South African microdistillers include the Jorgensen's Distillery, Dalla Cia Distillery, Nyati JJJ Distillery, Schoemanati Distillery, Tanagra Distillery and Wilderer Distillery. In the 1990s the liquor industry established the notion of super premium spirits offering a higher-quality (and usually more elaborately packaged) product at a higher price. The higher prices created an opportunity for small distilleries to profitably produce niche brands of exotic spirits. The early 21st Century saw the creation of hundreds of such distilleries producing products that were designed and marketed in a way that resembled celebrated restaurants more than alcoholic spirits marketing. The growth of craft distilleries and breweries was partly driven by consumer interest in greater variety, perceived quality, and support for locally owned businesses. According to the American Distilling Institute, there were 50 microdistilleries operating in the United States in 2005, but by 2012 this number had increased to 250. Numerous competitions and publications were formed to support the burgeoning sub-culture of spirits. By 2019, there were over 2,000 microdistilleries in the United States, and the market share of craft spirits was steadily growing. It is no longer the case that microdistilleries are producing at the premium end of the market only; the established brands are under threat from local microdistilleries at all price points (with the possible exception of the ultra discount supermarket brands such as Sainsbury's and Tesco's "value" brands, which are close to loss leaders). The COVID-19 pandemic negatively impacted the microdistillery industry, as bars and pubs closed, and the economy shrank. This marked a sharp downturn in the previously steady growth of microdistilleries in the United States, in a phenomenon compared to the impact of Prohibition. Innovation Microdistillers often experiment with new techniques to produce new flavors. Tony Conigliaro uses a rotavap (i.e. glassware not copper pot) on a small scale to produce distilled spirits which change from day to day in his bar, and Ian Hart uses vacuum equipment to conduct distillation at much-reduced temperatures, resulting in less cooked aromatics. U.S. regulation The U.S. Government regulates distilleries to a high degree and currently does not distinguish its treatment of distilleries in terms of size. This stringent regulation has prevented microdistilling from developing as rapidly as microbrewing which enjoys relatively more relaxed government control. A number of states, such as California, Illinois, Indiana, Iowa, Kansas, Michigan, Utah, Washington, West Virginia, and Wisconsin have passed legislation reducing the stringent regulations for small distilleries that were a holdover from prohibition. 
The Bureau of Alcohol, Tobacco, and Firearms (BATF) and the Alcohol and Tobacco Tax and Trade Bureau (TTB) are responsible for enforcing Federal statutes as they apply to all manufacturers of beverage alcohol. South African regulations In South Africa, microdistilleries are legally defined as distilleries with an annual capacity of fewer than 2 million litres of spirits. These microdistilleries are regulated through provincial laws rather than the national liquor laws (as prescribed in the Liquor Act of South Africa, Act 59 of 2003). Craft distillery The American Craft Spirits Association defines a "craft distillery" as a distillery that produces fewer than 750,000 gallons per year; is independently owned and operated (with a greater than 75% equity stake, plus operational control), and is transparent regarding its ingredients, its distilling and bottling location; its distilling and bottling process, and its aging process. See also Distillation Microbrewery Portland Oregon Distilleries Third wave of coffee Footnotes References External links Burning Still | Distilling Community Distillique craft and micro distilling Distilleries Restaurants by type Alcohol industry
Microdistillery
[ "Chemistry" ]
1,627
[ "Distilleries", "Distillation" ]
7,194,760
https://en.wikipedia.org/wiki/Space%20Tourism%20Society
The Space Tourism Society is a California 501(c)(3) non-profit organization founded in 1996 by John Spencer, a former member of the board of directors of the National Space Society, with the goal of promoting space tourism. The STS is based in the US and has chapters in Japan, Norway, Canada, Malaysia, India, Russia, and the United Kingdom. It is an organization member of the Alliance for Space Development. Members The president of the society, John Spencer, is designing a space yacht aimed at cruising in Earth orbit. See also Commercial astronaut Private spaceflight Quasi Universal Intergalactic Denomination References Further reading The Popular Science Monthly Feasibility Study and Future Projections of Suborbital Space Tourism at the Example of Virgin Galactic by Matthias Otto Space tourism: do you want to go? by John Spencer and Karen L. Rugg Space enterprise: living and working offworld in the 21st century by Philip Robert Harris Worldwide Destinations and Companion Book of Cases Set by Brian G. Boniface and Chris Cooper External links Official website Space tourism Space organizations Organizations established in 1996
Space Tourism Society
[ "Astronomy" ]
215
[ "Outer space", "Astronomy stubs", "Astronomy organizations", "Space organizations", "Outer space stubs" ]
7,195,059
https://en.wikipedia.org/wiki/Variable%20Valve%20Control
VVC (Variable Valve Control) is an automobile variable valve timing technology developed by Rover and applied to some high performance variants of the company's K Series 1800cc engine. About In order to improve the optimisation of the valve timing for differing engine speeds and loads, the system is able to vary the timing and duration of the inlet valve opening. It achieves this by using a complex and finely machined mechanism to drive the inlet camshafts. This mechanism can accelerate and decelerate the rotational speed of the camshaft during different parts of its cycle. e.g. to produce longer opening duration, it slows the rotation during the valve open part of the cycle and speeds it up during the valve closed period. The system has the advantage that it is continuously variable rather than switching in at a set speed. Its disadvantage is the complexity of the system and corresponding price. Other systems will achieve similar results with less cost and simpler design (electronic control). For a more detailed description, see the sandsmuseum link below. Applications MG Rover cars MG F / MG TF MG ZR Rover 200 / 25 Non MG/Rover cars Lotus Elise Caterham 7 Caterham 21 GTM Libra See also Variable Valve Timing Rover K-Series engine External links How the MGF VVC really works MG Rover Group Powertrain Ltd, manufacturer of the VVC engines Valve Manufacturer Sandsmuseum Engine technology Variable valve timing Rover Company
Variable Valve Control
[ "Technology" ]
290
[ "Engine technology", "Engines" ]
7,195,427
https://en.wikipedia.org/wiki/Carbon%20detonation
Carbon detonation or carbon deflagration is the violent reignition of thermonuclear fusion in a white dwarf star that was previously slowly cooling. It involves a runaway thermonuclear process which spreads through the white dwarf in a matter of seconds, producing a type Ia supernova which releases an immense amount of energy as the star is blown apart. The carbon detonation/deflagration process leads to a supernova by a different route than the better known type II (core-collapse) supernova (the type II is caused by the cataclysmic explosion of the outer layers of a massive star as its core implodes). White dwarf density and mass increase A white dwarf is the remnant of a small to medium size star (the Sun is an example of these). At the end of its life, the star has burned its hydrogen and helium fuel, and thermonuclear fusion processes cease. The star does not have enough mass either to burn much heavier elements or to implode under the force of its own gravity into a neutron star or type II supernova as a larger star can, so it gradually shrinks and becomes very dense as it cools, glowing white and then red, for a period many times longer than the present age of the Universe. Occasionally, a white dwarf gains mass from another source – for example, from a binary star companion that is close enough for the dwarf star to siphon sufficient amounts of matter onto itself, matter that has been expelled during the companion's own late-stage stellar evolution; or from a collision with other stars. If the white dwarf gains enough matter, its internal pressure and temperature will rise enough for carbon to begin fusing in its core. Carbon detonation generally occurs at the point when the accreted matter pushes the white dwarf's mass close to the Chandrasekhar limit of roughly 1.4 solar masses, the mass at which gravity can overcome the electron degeneracy pressure that prevents it from collapsing during its lifetime. This also happens when two white dwarfs merge if the combined mass is over the Chandrasekhar limit, resulting in a type Ia supernova. A main sequence star supported by thermal pressure would expand and cool, which automatically counterbalances an increase in thermal energy. However, degeneracy pressure is independent of temperature; the white dwarf is unable to regulate the fusion process in the manner of normal stars, so it is vulnerable to a runaway fusion reaction. Fusion and pressure In the case of a white dwarf, the restarted fusion reactions release heat, but the outward pressure that exists in the star and supports it against further collapse is initially due almost entirely to degeneracy pressure, not fusion processes or heat. Therefore, even when fusion recommences, the outward pressure that is key to the star's thermal balance does not increase much. One result is that the star does not expand much to balance its fusion and heat processes with gravity and electron pressure, as it did when burning hydrogen (until too late). This increase of heat production without a means of cooling by expansion raises the internal temperature dramatically, and therefore the rate of fusion also increases extremely quickly, a form of positive feedback known as thermal runaway. A 2004 analysis examined this runaway process in detail. Supercritical event The flame accelerates dramatically, in part due to the Rayleigh–Taylor instability and interactions with turbulence. 
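The runaway argument above can be summarized with a simple comparison of the two pressure laws. The following is an illustrative sketch using standard stellar-physics scalings, not figures taken from this article; the temperature exponent for carbon burning is only schematic.

```latex
% Thermally supported (ideal-gas) star: pressure tracks temperature,
% so extra heating drives expansion and cooling -- a self-regulating loop.
P_{\mathrm{ideal}} = \frac{\rho k_B T}{\mu m_u}
\;\Rightarrow\;
T\uparrow \;\Rightarrow\; P\uparrow \;\Rightarrow\; \text{expansion} \;\Rightarrow\; T\downarrow

% Degenerate electron gas (non-relativistic limit): pressure depends on
% density only, so heating produces no compensating expansion.
P_{\mathrm{deg}} \simeq K \rho^{5/3},
\qquad
\left(\frac{\partial P_{\mathrm{deg}}}{\partial T}\right)_{\rho} \approx 0

% Nuclear energy generation is a very steep function of temperature near
% carbon ignition (schematically T^{n} with n \gg 1), so with no expansion
% to shed heat, temperature and burning rate drive each other upward:
\epsilon_{\mathrm{nuc}} \propto \rho^{a}\, T^{\,n}, \quad n \gg 1
\;\Rightarrow\;
T\uparrow \;\Rightarrow\; \epsilon_{\mathrm{nuc}}\uparrow \;\Rightarrow\; T\uparrow \;\cdots
```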
The resumption of fusion spreads outward in a series of uneven, expanding "bubbles" in accordance with Rayleigh–Taylor instability. Within the fusion area, the increase in heat with unchanged volume results in an exponentially rapid increase in the rate of fusion – a sort of supercritical event as thermal pressure increases boundlessly. As hydrostatic equilibrium is not possible in this situation, a "thermonuclear flame" is triggered, producing an explosive eruption through the dwarf star's surface that completely disrupts it, seen as a type Ia supernova. Regardless of the exact details of this nuclear fusion, it is generally accepted that a substantial fraction of the carbon and oxygen in the white dwarf is converted into heavier elements within a period of only a few seconds, raising the internal temperature to billions of degrees. This energy release from thermonuclear fusion (on the order of 10^44 joules) is more than enough to unbind the star; that is, the individual particles making up the white dwarf gain enough kinetic energy to fly apart from each other. The star explodes violently and releases a shock wave in which matter is typically ejected at speeds on the order of 5,000 km/s and above, up to roughly 6% of the speed of light. The energy released in the explosion also causes an extreme increase in luminosity. The typical visual absolute magnitude of type Ia supernovae is Mv = −19.3 (about 5 billion times brighter than the Sun), with little variation. This process, of a volume supported by electron degeneracy pressure instead of thermal pressure gradually reaching conditions capable of igniting runaway fusion, is also found in a less dramatic form in a helium flash in the core of a sufficiently massive red giant star. See also Helium flash, a similar (although less cataclysmic) sudden initiation of fusion Nuclear fusion References External links JINA: Type Ia Supernova Flame Models A Computer Simulation of Carbon Detonation/Deflagration Stellar evolution Stellar phenomena White dwarfs
Carbon detonation
[ "Physics" ]
1,104
[ "Physical phenomena", "Stellar phenomena", "Astrophysics", "Stellar evolution" ]
7,195,477
https://en.wikipedia.org/wiki/Nylon-eating%20bacteria
Paenarthrobacter ureafaciens KI72, popularly known as nylon-eating bacteria, is a strain of Paenarthrobacter ureafaciens that can digest certain by-products of nylon 6 manufacture. It uses a set of enzymes to digest nylon, popularly known as nylonase. Discovery and nomenclature In 1975, a team of Japanese scientists discovered a strain of bacterium, living in ponds containing waste water from a nylon factory, that could digest certain byproducts of nylon 6 manufacture, such as the linear dimer of 6-aminohexanoate. These substances are not known to have existed before the invention of nylon in 1935. It was initially named Achromobacter guttatus. Studies in 1977 revealed that the three enzymes that the bacteria were using to digest the byproducts were significantly different from any other enzymes produced by any other bacteria, and not effective on any material other than the manmade nylon byproducts. The bacterium was reassigned to Flavobacterium in 1980. Its genome was resolved in 2017, leading to another reassignment, to Arthrobacter. The Genome Taxonomy Database considers it a strain of Paenarthrobacter ureafaciens following a 2016 reclassification. As of January 2021, the NCBI taxonomy browser has been updated to match GTDB. Descendant strains A few newer strains have been created by growing the original KI72 in different conditions, forcing it to adapt. These include KI722, KI723, KI723T1, KI725, KI725R, and many more. The enzymes The bacterium contains the following three enzymes: 6-aminohexanoate-cyclic-dimer hydrolase (EI, NylA) 6-aminohexanoate-dimer hydrolase (EII, NylB) 6-aminohexanoate-oligomer endohydrolase (EIII, NylC) All three enzymes are encoded on a plasmid called pOAD2. The plasmid can be transferred to E. coli, as shown in a 1983 publication. EI The enzyme EI is related to amidases. Its structure was resolved in 2010. EII EII evolved by gene duplication, followed by base substitutions, from another protein, EII'. The two enzymes share 345 identical amino acids out of 392 (88% homology). The enzymes are similar to beta-lactamase. The EII' protein is about 100 times less efficient than EII. A 2007 study by Seiji Negoro's team showed that just two amino-acid alterations to EII', G181D and H266N, raise its activity to 85% of that of EII. EIII The structure of EIII was resolved in 2018. Instead of being a completely novel enzyme, it appears to be a member of the N-terminal nucleophile (N-tn) hydrolase family. Specifically, computational approaches classify it as a MEROPS S58 (now renamed P1) hydrolase. The protein is expressed as a precursor, which then cleaves itself into two chains. Outside of this plasmid, proteins more than 95% similar are found in Agromyces and Kocuria. EIII was originally thought to be completely novel. Susumu Ohno proposed that it had come about from the combination of a gene-duplication event with a frameshift mutation. An insertion of thymidine would turn an arginine-rich 427-residue protein into this 392-residue enzyme. Role in evolution teaching There is scientific consensus that the capacity to synthesize nylonase most probably developed as a single-step mutation that survived because it improved the fitness of the bacteria possessing the mutation. More importantly, one of the enzymes involved was produced by a frame-shift mutation that completely scrambled the pre-existing coding sequence. Despite this, the new gene still had a novel, albeit weak, catalytic capacity. 
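Ohno's frameshift hypothesis is easy to illustrate in code. The sketch below is purely illustrative: it uses an invented, arginine-rich toy sequence (not the real EIII gene) and the standard codon table to show how inserting a single thymidine after the start codon changes every downstream codon, yielding a completely different amino-acid sequence.

```python
# Illustrative sketch of a frameshift mutation: a single-base insertion
# re-partitions every downstream codon, producing an entirely different
# protein. The sequence is invented for demonstration only.

CODON_TABLE = {
    "TTT": "F", "TTC": "F", "TTA": "L", "TTG": "L",
    "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L",
    "ATT": "I", "ATC": "I", "ATA": "I", "ATG": "M",
    "GTT": "V", "GTC": "V", "GTA": "V", "GTG": "V",
    "TCT": "S", "TCC": "S", "TCA": "S", "TCG": "S",
    "AGT": "S", "AGC": "S",
    "CCT": "P", "CCC": "P", "CCA": "P", "CCG": "P",
    "ACT": "T", "ACC": "T", "ACA": "T", "ACG": "T",
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
    "TAT": "Y", "TAC": "Y", "TAA": "*", "TAG": "*",
    "CAT": "H", "CAC": "H", "CAA": "Q", "CAG": "Q",
    "AAT": "N", "AAC": "N", "AAA": "K", "AAG": "K",
    "GAT": "D", "GAC": "D", "GAA": "E", "GAG": "E",
    "TGT": "C", "TGC": "C", "TGA": "*", "TGG": "W",
    "CGT": "R", "CGC": "R", "CGA": "R", "CGG": "R",
    "AGA": "R", "AGG": "R",
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",
}

def translate(dna: str) -> str:
    """Translate a DNA string codon by codon; stop at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

# A made-up, arginine-rich open reading frame (hypothetical example).
original = "ATG" + "CGA" * 8 + "TAA"

# Insert a single thymidine (T) after the start codon: every downstream
# codon is read in a new frame, echoing Ohno's proposed origin of EIII.
frameshifted = original[:3] + "T" + original[3:]

print("original    :", translate(original))      # MRRRRRRRR (Met + 8 Arg, then stop)
print("frameshifted:", translate(frameshifted))   # MSTTTTTTTI -- unrelated downstream sequence
```

Running it prints an arginine-rich peptide for the original frame and an unrelated peptide for the shifted frame, which is the essence of how a single frameshift can create an entirely new coding sequence in one step.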
The nylonase case is seen as a good example of how mutations can readily provide the raw material for evolution by natural selection (Miller, Kenneth R., Only a Theory: Evolution and the Battle for America's Soul, 2008, pp. 80–82). A 1995 paper showed that scientists have also been able to induce another species of bacterium, Pseudomonas aeruginosa, to evolve the capability to break down the same nylon byproducts in a laboratory by forcing them to live in an environment with no other source of nutrients. See also Plastivore Biodegradable plastic E. coli long-term evolution experiment Radiotrophic fungus London Underground mosquito Lonicera fly Mealworms are capable of digesting polystyrene References External links NBRC 14590, information on the KI72 culture maintained at National Institute of Technology and Evaluation NBRC 114184, a derived culture used in the 2017 sequencing Biological evolution Plastivores Actinomycetota
Nylon-eating bacteria
[ "Biology" ]
1,049
[ "Organisms by adaptation", "Plastivores" ]
7,195,699
https://en.wikipedia.org/wiki/Driven%20right%20leg%20circuit
A Driven Right Leg circuit or DRL circuit, also known as Right Leg Driving technique, is an electric circuit that is often added to biological signal amplifiers to reduce common-mode interference. Biological signal amplifiers such as ECG (electrocardiogram), EEG (electroencephalogram) or EMG (electromyogram) circuits measure very small electrical signals emitted by the body, often as small as several microvolts (millionths of a volt). However, the patient's body can also act as an antenna which picks up electromagnetic interference, especially 50/60 Hz noise from electrical power lines. This interference can obscure the biological signals, making them very hard to measure. Right leg driver circuitry is used to eliminate interference noise by actively cancelling the interference. Other methods of noise control include: Faraday cage Twisted wires High-gain instrumentation amplifier Filtering Further reading J.G. Webster, "Medical Instrumentation", 3rd ed, New York: John Wiley & Sons, 1998. B. B. Winter and J. G. Webster, "Driven-right-leg circuit design," IEEE Trans. Biomed. Eng., vol. BME-30, no. 1, pp. 62–66, Jan. 1983. "Improving Common-Mode Rejection Using the Right-Leg Drive Amplifier" by Texas Instruments. Electronic design Analog circuits
Driven right leg circuit
[ "Engineering" ]
273
[ "Electronic design", "Analog circuits", "Electronic engineering", "Design" ]
7,195,896
https://en.wikipedia.org/wiki/Orciprenaline
Orciprenaline, also known as metaproterenol, is a bronchodilator used in the treatment of asthma. Orciprenaline is a moderately selective β2 adrenergic receptor agonist that stimulates receptors of the smooth muscle in the lungs, uterus, and vasculature supplying skeletal muscle, with minimal or no effect on α adrenergic receptors. The pharmacologic effects of β adrenergic agonist drugs, such as orciprenaline, are at least in part attributable to stimulation through β adrenergic receptors of intracellular adenylyl cyclase, the enzyme which catalyzes the conversion of ATP to cAMP. Increased cAMP levels are associated with relaxation of bronchial smooth muscle and inhibition of release of mediators of immediate hypersensitivity from many cells, especially from mast cells. Adverse effects tremor nervousness dizziness weakness headache nausea tachycardia Rare side effects that could be life-threatening difficulty breathing rapid or increased heart rate irregular heartbeat chest pain or discomfort References Chemical substances for emergency medicine Phenylethanolamines Bronchodilators Tocolytics
Orciprenaline
[ "Chemistry" ]
244
[ "Chemicals in medicine", "Chemical substances for emergency medicine" ]
7,196,358
https://en.wikipedia.org/wiki/Caneworking
In glassblowing, cane refers to rods of glass with color; these rods can be simple, containing a single color, or they can be complex and contain strands of one or several colors in pattern. Caneworking refers to the process of making cane, and also to the use of pieces of cane, lengthwise, in the blowing process to add intricate, often spiral, patterns and stripes to vessels or other blown glass objects. Cane is also used to make murrine (singular murrina, sometimes called mosaic glass), thin discs cut from the cane in cross-section that are also added to blown or hot-worked objects. A particular form of murrine glasswork is millefiori ("thousand flowers"), in which many murrine with a flower-like or star-shaped cross-section are included in a blown glass piece. Caneworking is an ancient technique, first invented in southern Italy in the second half of the third century BC, and elaborately developed centuries later on the Italian island of Murano. Making cane There are several different methods of making cane. In each, the fundamental technique is the same: a lump of glass, often containing some pattern of colored and clear glass, is heated in a furnace (glory hole) and then pulled, by means of a long metal rod (punty) attached at each end. As the glass is stretched out, it retains whatever cross-sectional pattern was in the original lump, but narrows quite uniformly along its length (due to the skill of the glassblowers doing the pulling, aided by the fact that if the glass becomes narrower at some point along the length, it cools more there and thus becomes stiffer). Cane is usually pulled until it reaches roughly the diameter of a pencil, when, depending on the size of the original lump, it may be anywhere from one to fifty feet in length. After cooling, it is broken into sections usually from four to six inches long, which can then be used in making more complex canes or in other glassblowing techniques. The simplest cane, called vetro a fili (glass with threads) is clear glass with one or more threads of colored (often white) glass running its length. It is commonly made by heating and shaping a chunk of clear, white, or colored glass on the end of a punty, and then gathering molten clear glass over the color by dipping the punty in a furnace containing the clear glass. After the desired amount of clear glass is surrounding the color, this cylinder of hot glass is then shaped, cooled and heated until uniform in shape and temperature. Simultaneously an assistant prepares a 'post' which is another punty with a small platform of clear glass on the end. The post is pressed against the end of the hot cylinder of glass to connect them, and the glassblower (or 'gaffer') and assistant walk away from each other with the punties, until the cane is stretched to the desired length and diameter. The cane cools within minutes and is cut into small sections. Variations in cane making A simple single-thread cane can then be used to make more complex canes. A small bundle of single-thread canes can be heated until they fuse, or heated canes, laid parallel, can be picked up on the circumference of a hot cylinder of clear or colored glass. This bundle, treated just as the chunk of color in the description above, is cased in clear glass and pulled out, forming a vetro a fili cane with multiple threads and perhaps a clear or solid color core. If the cane is twisted as it is pulled, the threads take a spiral shape called vetro a retorti (twisted glass) or zanfirico. 
Ballotini is a cane technique in which several vetro a fili canes are picked up while laid side-by-side rather than a bundle, with a clear glass gather over them. This gather is shaped into a cylinder with the canes directed along the axis, so that the canes form a sort of "fence" across the diameter of the cylinder. When this is simultaneously twisted and pulled, the resulting cane has a helix of threads across its thickness. Another technique for forming cane is to use optic molds to make more complex cross sections. An optic mold is an open-ended cone-shaped mold with some sort of lobed or star shape around its inside circumference. When a gather or partially blown bubble is forced into the mold, its outside takes the shape of the mold. Canes with complicated, multi-colored patterns are formed by placing layers of different or alternating colors over a solid-color core, using various optic molds on the layers as they are built. Because the outer layers are hotter than those inside when the molds are used, the mold shape is impressed into the outer color without deforming the inner shapes. Canes made in this way are used in making millefiori. Discs from eight different canes have been used to make the pendant in the photo. Finally, flameworkers sometimes make cane by building up the cross-section using ordinary flameworking or bead making techniques. This permits very subtle gradations of color and shading, and is the way murrine portraits are usually made. Cane use The generic term for blown glass made using canes in the lengthwise direction is filigrano (filigree glass), as contrasted with murrine when the canes are sliced and used in cross-section. (An older term is latticino, which has fallen into disuse). One way glassblowers incorporate cane into their work is to line up canes on a steel or ceramic plate and heat them slowly to avoid cracking. When the surfaces of the canes just begin to melt, the canes adhere to each other. The tip of a glassblowing pipe (blowpipe) is covered with a 'collar' of clear molten glass, and touched to one corner of the aligned canes. The tip of the blowpipe is then rolled along the bottom of the canes, which stick to the collar, aligned cylindrically around the edge of the blowpipe. They are heated further until soft enough to shape. The cylinder of canes is sealed at the bottom with jacks and tweezers, to form the beginning of a bubble. The bubble is then blown using traditional glassblowing techniques. Cane can also be incorporated in larger blown glass work by picking it up on a bubble of molten clear glass. This technique involves the gaffer creating a bubble from molten clear glass while an assistant heats the pattern of cane. When the cane design is fused and at the correct temperature and the bubble is exactly the correct size and temperature, the bubble is rolled over the cane pattern, which sticks to the hot glass. The bubble must be the right size and temperature for the pattern to cover it fully without any gaps or trapping air. Once the canes have been picked up, the bubble can be further heated, blown, and smoothed and shaped on the marver to give whatever final shape the glassblower wishes, with an embedded lacy pattern from the canes. Twisting the object as it is being shaped imparts a spiral shape to the overall pattern. The classical reticello pattern is a small uniform mesh of white threads in clear glass, with a tiny air bubble in every mesh rectangle. 
To make an object in this pattern, the glassblower first uses white single-thread vetro a fili canes to blow a cylindrical cup shape, twisting as he forms it so the canes are in a spiral, and using care not to totally smooth the inside ribbing that remains from the canes. Setting this cup aside (usually keeping it warm in a furnace, below its softening point), he then makes another closed cylinder in the same pattern, but twisted in the opposite direction, and retaining some of the ribbing on the cylinder's outside. When this cylinder is the right size, the glassblower plunges it into the warm cup, without touching any of the sides until it is inserted all the way. Air is trapped in the spaces between the ribs of the two pieces, forming the uniformly spaced air bubbles. The piece may then be blown out and shaped as desired. The term reticello is often loosely applied to any criss-cross pattern, whether vetro a fili or vetro a retorti , white or colored, and with or without air bubbles. See Murrine and Millefiori for information about these techniques. Additional canework images See also Glass art Studio glass Notes References Glass art Glass production Crafts Glass fabrication techniques
Caneworking
[ "Materials_science", "Engineering" ]
1,779
[ "Glass engineering and science", "Glass production" ]
7,196,499
https://en.wikipedia.org/wiki/Alpha%20Centaurids
The Alpha Centaurids are a meteor shower in the constellation Centaurus, peaking in early February each year. The average magnitude is around 2.5, with a peak of about three meteors an hour. They have been observed since 1969, with a single possible recorded observation in 1938. References Centaurus February Meteor showers
Alpha Centaurids
[ "Astronomy" ]
66
[ "Centaurus", "Constellations" ]
7,196,522
https://en.wikipedia.org/wiki/Lowest%20common%20ancestor
In graph theory and computer science, the lowest common ancestor (LCA) (also called least common ancestor) of two nodes v and w in a tree or directed acyclic graph (DAG) T is the lowest (i.e. deepest) node that has both v and w as descendants, where we define each node to be a descendant of itself (so if v has a direct connection from w, w is the lowest common ancestor). The LCA of v and w in T is the shared ancestor of v and w that is located farthest from the root. Computation of lowest common ancestors may be useful, for instance, as part of a procedure for determining the distance between pairs of nodes in a tree: the distance from v to w can be computed as the distance from the root to v, plus the distance from the root to w, minus twice the distance from the root to their lowest common ancestor. In a tree data structure where each node points to its parent, the lowest common ancestor can be easily determined by finding the first intersection of the paths from v and w to the root. In general, the computational time required for this algorithm is O(h), where h is the height of the tree (length of longest path from a leaf to the root). However, there exist several algorithms for processing trees so that lowest common ancestors may be found more quickly. Tarjan's off-line lowest common ancestors algorithm, for example, preprocesses a tree in linear time to provide constant-time LCA queries. In general DAGs, similar algorithms exist, but with super-linear complexity. History The lowest common ancestor problem was defined by Aho, Hopcroft and Ullman (1973), but Harel and Tarjan (1984) were the first to develop an optimally efficient lowest common ancestor data structure. Their algorithm processes any tree in linear time, using a heavy path decomposition, so that subsequent lowest common ancestor queries may be answered in constant time per query. However, their data structure is complex and difficult to implement. Tarjan also found a simpler but less efficient algorithm, based on the union-find data structure, for computing lowest common ancestors of an offline batch of pairs of nodes. Schieber and Vishkin (1988) simplified the data structure of Harel and Tarjan, leading to an implementable structure with the same asymptotic preprocessing and query time bounds. Their simplification is based on the principle that, in two special kinds of trees, lowest common ancestors are easy to determine: if the tree is a path, then the lowest common ancestor can be computed simply from the minimum of the levels of the two queried nodes, while if the tree is a complete binary tree, the nodes may be indexed in such a way that lowest common ancestors reduce to simple binary operations on the indices. The structure of Schieber and Vishkin decomposes any tree into a collection of paths, such that the connections between the paths have the structure of a binary tree, and combines both of these two simpler indexing techniques. Berkman and Vishkin (1993) discovered a completely new way to answer lowest common ancestor queries, again achieving linear preprocessing time with constant query time. Their method involves forming an Euler tour of a graph formed from the input tree by doubling every edge, and using this tour to write a sequence of level numbers of the nodes in the order the tour visits them; a lowest common ancestor query can then be transformed into a query that seeks the minimum value occurring within some subinterval of this sequence of numbers.
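The Euler-tour reduction just described is straightforward to put into code. Below is a minimal sketch in Python, not taken from any cited implementation: it assumes the tree is given as a 0-indexed adjacency list rooted at node 0, and the class and variable names are illustrative only. Depths are recorded along an Euler tour, and queries are answered with a sparse table of interval minima, giving linearithmic-space preprocessing and constant-time queries (the linear-space refinement is discussed in the following sections).

# Minimal sketch of the Euler-tour + range-minimum approach described above.
# Assumptions (not from the source): the tree is an adjacency list rooted at node 0,
# with nodes numbered 0..n-1; names are illustrative.
from math import log2

class EulerTourLCA:
    def __init__(self, adj, root=0):
        n = len(adj)
        self.depth = []          # depth of each tour position
        self.euler = []          # nodes in the order the tour visits them
        self.first = [-1] * n    # first position of each node in the tour
        # Iterative DFS that re-visits a node after each child (Euler tour).
        stack = [(root, -1, 0, iter(adj[root]))]
        while stack:
            node, parent, d, it = stack[-1]
            if self.first[node] == -1:
                self.first[node] = len(self.euler)
            self.euler.append(node)
            self.depth.append(d)
            advanced = False
            for child in it:
                if child != parent:
                    stack.append((child, node, d + 1, iter(adj[child])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
        # Sparse table over depths: precomputed answers for intervals whose sizes are powers of two.
        m = len(self.euler)
        K = int(log2(m)) + 1
        self.table = [list(range(m))]          # table[0][i] = i (argmin of the interval [i, i])
        for k in range(1, K):
            prev, span = self.table[k - 1], 1 << (k - 1)
            row = []
            for i in range(m - (1 << k) + 1):
                a, b = prev[i], prev[i + span]
                row.append(a if self.depth[a] <= self.depth[b] else b)
            self.table.append(row)

    def query(self, u, v):
        """Return the lowest common ancestor of u and v in constant time."""
        lo, hi = sorted((self.first[u], self.first[v]))
        k = int(log2(hi - lo + 1))
        a = self.table[k][lo]
        b = self.table[k][hi - (1 << k) + 1]
        return self.euler[a if self.depth[a] <= self.depth[b] else b]

# Example: a small rooted tree with edges 0-1, 0-2, 1-3, 1-4.
adj = [[1, 2], [0, 3, 4], [0], [1], [1]]
lca = EulerTourLCA(adj)
print(lca.query(3, 4))  # -> 1
print(lca.query(3, 2))  # -> 0

Because two overlapping power-of-two intervals cover any query range, the query needs only two table lookups, which is what yields the constant query time mentioned above.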
They then handle this range minimum query problem (RMQ) by combining two techniques, one technique based on precomputing the answers to large intervals that have sizes that are powers of two, and the other based on table lookup for small-interval queries. This method was later presented in a simplified form by Bender and Farach-Colton (2000). As had been previously observed by Gabow, Bentley and Tarjan (1984), the range minimum problem can in turn be transformed back into a lowest common ancestor problem using the technique of Cartesian trees. Further simplifications were made in later work. A dynamic LCA variant of the problem has also been proposed, in which the data structure should be prepared to handle LCA queries intermixed with operations that change the tree (that is, rearrange the tree by adding and removing edges). This variant can be solved in logarithmic time in the total size of the tree for all modifications and queries. This is done by maintaining the forest using the dynamic trees data structure with partitioning by size; this then maintains a heavy-light decomposition of each tree, and allows LCA queries to be carried out in logarithmic time in the size of the tree. Linear space and constant search time solution to LCA in trees As mentioned above, LCA can be reduced to RMQ. An efficient solution to the resulting RMQ problem starts by partitioning the number sequence into blocks. Two different techniques are used for queries across blocks and within blocks. Reduction from LCA to RMQ Reduction of LCA to RMQ starts by walking the tree. For each node visited, record in sequence its label and depth. Suppose nodes u and v occur in positions i and j in this sequence, respectively. Then the LCA of u and v will be found in position RMQ(i, j), where the RMQ is taken over the depth values. Linear space and constant search time algorithm for RMQ reduced from LCA Although there exists a constant-time and linear-space solution for general RMQ, a simplified solution can be applied that makes use of the LCA's properties. This simplified solution can only be used for RMQ reduced from LCA. Similar to the solution mentioned above, we divide the sequence of n depth values into blocks A_1, A_2, ..., where each block has size b = (1/2) log n. By splitting the sequence into blocks, the query RMQ(i, j) can be solved by solving two different cases: Case 1: if i and j are in different blocks To answer the query in case one, there are 3 groups of values precomputed to help reduce query time. First, the minimum element with the smallest index in each block A_k is precomputed and denoted as y_k. The set of all y_k values takes O(n/b) space. Second, given the set of y_k values, RMQ queries over this set are precomputed using the solution with constant time and linearithmic space. There are n/b blocks, so the lookup table in that solution takes O((n/b) log(n/b)) space. Because b = (1/2) log n, O((n/b) log(n/b)) = O(n). Hence, the precomputed RMQ structure over these block minima takes only O(n) space. Third, in each block A_k, prefix and suffix minima are precomputed: for every position m inside the block, the block is divided into the interval ending at m and the interval starting at m, and the minimum element with the smallest index in each of the two intervals is stored. These minima are called the prefix min for the interval ending at m and the suffix min for the interval starting at m. Each position m contributes one prefix min and one suffix min, so the total number of prefix mins and suffix mins in a block is 2b. Since there are n/b blocks, all prefix min and suffix min arrays together take O((n/b) * 2b), which is O(n) space. In total, it takes O(n) space to store all three groups of precomputed values mentioned above (a compact code sketch of this block decomposition appears below).
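The following is a compact sketch of the block decomposition just described, again in Python and not drawn from any cited source; the class and variable names are illustrative. It assumes a non-empty depth sequence such as the one produced by the Euler tour above. For brevity, the within-block case is handled by a direct scan of at most b elements rather than by the precomputed bitstring tables described next, so this sketch achieves the linear space bound but only the cross-block case is genuinely constant-time.

# Sketch of the block-decomposition RMQ over a depth sequence (hypothetical names).
# Cross-block queries combine a suffix min, a sparse table over block minima, and a prefix min.
# Same-block queries fall back to a short scan; the full algorithm replaces that scan
# with per-bitstring lookup tables as described in the text that follows.
from math import log2

class BlockRMQ:
    def __init__(self, depth):
        n = len(depth)                               # assumes a non-empty sequence
        self.depth = depth
        self.b = max(1, int(log2(n)) // 2)           # block size b, roughly (1/2) log n
        b = self.b
        nb = (n + b - 1) // b                        # number of blocks
        self.block_min = []                          # index of the minimum in each block
        self.prefix = [None] * nb                    # prefix-min indices within each block
        self.suffix = [None] * nb                    # suffix-min indices within each block
        for k in range(nb):
            lo, hi = k * b, min((k + 1) * b, n)
            pref, best = [], lo
            for i in range(lo, hi):
                if depth[i] < depth[best]:
                    best = i
                pref.append(best)
            suff, best = [0] * (hi - lo), hi - 1
            for i in range(hi - 1, lo - 1, -1):
                if depth[i] < depth[best]:
                    best = i
                suff[i - lo] = best
            self.prefix[k], self.suffix[k] = pref, suff
            self.block_min.append(pref[-1])
        # Sparse table over the block minima (linearithmic in the number of blocks).
        self.table = [self.block_min[:]]
        k = 1
        while (1 << k) <= nb:
            prev, row, span = self.table[k - 1], [], 1 << (k - 1)
            for i in range(nb - (1 << k) + 1):
                a, c = prev[i], prev[i + span]
                row.append(a if depth[a] <= depth[c] else c)
            self.table.append(row)
            k += 1

    def query(self, i, j):
        """Index of a minimum depth value in positions i..j (inclusive)."""
        if i > j:
            i, j = j, i
        bi, bj = i // self.b, j // self.b
        if bi == bj:                                  # same block: scan at most b elements
            return min(range(i, j + 1), key=lambda p: self.depth[p])
        best = self.suffix[bi][i - bi * self.b]       # suffix min of i's block
        pre = self.prefix[bj][j - bj * self.b]        # prefix min of j's block
        if self.depth[pre] < self.depth[best]:
            best = pre
        if bi + 1 <= bj - 1:                          # whole blocks strictly in between
            k = int(log2(bj - bi - 1))
            a = self.table[k][bi + 1]
            c = self.table[k][bj - 1 - (1 << k) + 1]
            mid = a if self.depth[a] <= self.depth[c] else c
            if self.depth[mid] < self.depth[best]:
                best = mid
        return best

Combined with the Euler-tour construction sketched earlier (building a BlockRMQ over its depth array and querying it between first-occurrence positions), this gives the linear-space LCA structure whose query procedure is spelled out next.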
With these three groups precomputed, answering the query RMQ(i, j) in case 1 amounts to taking the minimum of the following three values. Let A_x be the block that contains the element at index i, and A_y the block that contains the element at index j. The first value is the suffix min starting at i's position within block A_x; the second is the answer to the RMQ query over the block minima y_k of the blocks strictly between A_x and A_y, using the solution with constant time and linearithmic space; the third is the prefix min ending at j's position within block A_y. All three values can be found in constant time. Hence, case 1 can be answered in linear space and constant time. Case 2: if i and j are in the same block The sequence that is reduced from LCA has one property that a general RMQ input does not have: the next element is always +1 or -1 from the current element. Therefore, each block can be encoded as a bitstring, with 0 representing a step of -1 from the current depth and 1 representing a step of +1. For example, the depth sequence 0, 1, 2, 1, 0 is encoded as the bitstring 1100. This transformation turns a block of size b into a bitstring of size b - 1. A bitstring of size b - 1 has 2^(b-1) possible values. Since b = (1/2) log n, 2^(b-1) is at most 2^((1/2) log n) = sqrt(n). Hence, each block always corresponds to one of at most sqrt(n) possible bitstrings of size b - 1. Then, for each possible bitstring, we apply the naive quadratic-space, constant-time solution. This takes up O(sqrt(n) * b^2) space, which is O(sqrt(n) log^2 n) = O(n). Therefore, answering the query in case 2 is simply finding the corresponding block (encoded as a bitstring) and performing a table lookup for that bitstring. Hence, case 2 can be solved using linear space with constant searching time. Extension to directed acyclic graphs While originally studied in the context of trees, the notion of lowest common ancestors can be defined for directed acyclic graphs (DAGs), using either of two possible definitions. In both, the edges of the DAG are assumed to point from parents to children. Given a DAG G = (V, E) and two nodes x and y, define a poset (V, <=) such that u <= v iff u is reachable from v. The lowest common ancestors of x and y are then the minimum elements under <= of the common ancestor set of all z with x <= z and y <= z. An equivalent definition is that the lowest common ancestors of x and y are the nodes of out-degree zero in the subgraph of G induced by the set of common ancestors of x and y. In a tree, the lowest common ancestor is unique; in a DAG of n nodes, each pair of nodes may have as many as n - 2 LCAs, while the existence of an LCA for a pair of nodes is not even guaranteed in arbitrary connected DAGs. A brute-force algorithm for finding lowest common ancestors works as follows: find all ancestors of x and y, then return the maximum element of the intersection of the two sets. Better algorithms exist that, analogous to the LCA algorithms on trees, preprocess a graph to enable constant-time LCA queries. The problem of LCA existence can be solved optimally for sparse DAGs. A unified framework has also been presented for preprocessing directed acyclic graphs to compute a representative lowest common ancestor in a rooted DAG in constant time; it can achieve near-linear preprocessing times for sparse graphs and is available for public use. Applications The problem of computing lowest common ancestors of classes in an inheritance hierarchy arises in the implementation of object-oriented programming systems. The LCA problem also finds applications in models of complex systems found in distributed computing. See also Level ancestor problem Semilattice References External links Lowest Common Ancestor of a Binary Search Tree, by Kamal Rawat Python implementation of the algorithm of Bender and Farach-Colton for trees, by David Eppstein Python implementation for arbitrary directed acyclic graphs Lecture notes on LCAs from a 2003 MIT Data Structures course.
Course by Erik Demaine, notes written by Loizos Michael and Christos Kapoutsis. Notes from 2007 offering of same course, written by Alison Cichowlas. Lowest Common Ancestor in Binary Trees in C. A simplified version of the Schieber–Vishkin technique that works only for balanced binary trees. Video of Donald Knuth explaining the Schieber–Vishkin technique Range Minimum Query and Lowest Common Ancestor article in Topcoder Documentation for the lca package for Haskell by Edward Kmett, which includes the skew-binary random access list algorithm. Purely functional data structures for on-line LCA slides for the same package. Theoretical computer science Trees (graph theory)
Lowest common ancestor
[ "Mathematics" ]
2,320
[ "Theoretical computer science", "Applied mathematics" ]
7,196,964
https://en.wikipedia.org/wiki/Functionally%20graded%20material
In materials science, Functionally Graded Materials (FGMs) may be characterized by a gradual variation of composition and structure over volume, resulting in corresponding changes in the properties of the material. The materials can be designed for specific functions and applications. Various approaches based on bulk (particulate) processing, preform processing, layer processing and melt processing are used to fabricate functionally graded materials. History The concept of FGM was first considered in Japan in 1984 during a space plane project, where a combination of materials used would serve the purpose of a thermal barrier capable of withstanding a surface temperature of 2000 K and a temperature gradient of 1000 K across a 10 mm section. In recent years this concept has become more popular in Europe, particularly in Germany. A transregional collaborative research center (SFB Transregio) has been funded since 2006 in order to exploit the potential of grading monomaterials, such as steel, aluminium and polypropylene, by using thermomechanically coupled manufacturing processes. General information FGMs can vary in composition, in structure (for example, porosity), or in both, to produce the resulting gradient. The gradient can be categorized as either continuous or discontinuous, the latter exhibiting a stepwise gradient. There are several examples of FGMs in nature, including bamboo and bone, which alter their microstructure to create a material property gradient. In biological materials, the gradients can be produced through changes in the chemical composition, structure, interfaces, and through the presence of gradients spanning multiple length scales. Specifically within the variation of chemical compositions, the manipulation of the mineralization, the presence of inorganic ions and biomolecules, and the level of hydration have all been known to cause gradients in plants and animals. The basic structural units of FGMs are elements or material ingredients represented by maxel. The term maxel was introduced in 2005 by Rajeev Dwivedi and Radovan Kovacevic at Research Center for Advanced Manufacturing (RCAM). The attributes of maxel include the location and volume fraction of individual material components. A maxel is also used in the context of the additive manufacturing processes (such as stereolithography, selective laser sintering, fused deposition modeling, etc.) to describe a physical voxel (a portmanteau of the words 'volume' and 'element'), which defines the build resolution of either a rapid prototyping or rapid manufacturing process, or the resolution of a design produced by such fabrication means. The transition between the two materials can be approximated through either a power-law or an exponential-law relation: Power law: E(z) = E_0 z^k, where E_0 is the Young's modulus at the surface of the material, z is the depth from the surface, and k is a non-dimensional exponent. Exponential law: E(z) = E_0 e^(kz), where k < 0 indicates a hard surface and k > 0 indicates a soft surface. Applications There are many areas of application for FGM. The concept is to make a composite material by varying the microstructure from one material to another material with a specific gradient. This enables the material to have the best of both materials. Whether the need is for thermal or corrosion resistance, or for malleability and toughness, the strengths of both materials may be used to avoid corrosion, fatigue, fracture and stress corrosion cracking. There is a myriad of possible applications and industries interested in FGMs.
They span from defense, looking at protective armor, to biomedical, investigating implants, to optoelectronics and energy. The aircraft and aerospace industry and the computer circuit industry are very interested in the possibility of materials that can withstand very high thermal gradients. This is normally achieved by using a ceramic layer connected with a metallic layer. The Air Vehicles Directorate has conducted quasi-static bending tests of functionally graded titanium/titanium boride specimens. The tests correlated to finite element analysis (FEA) using a quadrilateral mesh, with each element having its own structural and thermal properties. The Advanced Materials and Processes Strategic Research Programme (AMPSRA) has done analysis on producing a thermal barrier coating using ZrO2 and NiCoCrAlY. Their results have proved successful, but no results of the analytical model have been published. The rendition of the term that relates to the additive fabrication processes has its origins at the RMRG (Rapid Manufacturing Research Group) at Loughborough University in the United Kingdom. The term forms a part of a descriptive taxonomy of terms relating directly to various particulars of the additive CAD-CAM manufacturing processes, originally established as a part of the research conducted by architect Thomas Modeen into the application of the aforementioned techniques in the context of architecture. A gradient of elastic modulus essentially changes the fracture toughness of adhesive contacts. Additionally, there has been an increased focus on how to apply FGMs to biomedical applications, specifically dental and orthopedic implants. For example, bone is an FGM that exhibits a change in elasticity and other mechanical properties between the cortical and cancellous bone. It logically follows that FGMs for orthopedic implants would be ideal for mimicking the performance of bone. FGMs for biomedical applications have the potential benefit of preventing stress concentrations that could lead to biomechanical failure and of improving biocompatibility and biomechanical stability. FGMs in relation to orthopedic implants are particularly important as the common materials used (titanium, stainless steel, etc.) are stiffer than bone and thus pose a risk of creating abnormal physiological conditions that alter the stress concentration at the interface between the implant and the bone. If the implant is too stiff it risks causing bone resorption, while an overly flexible implant can compromise stability at the bone-implant interface. Numerous FEM simulations have been carried out to understand the possible FGM and mechanical gradients that could be implemented into different orthopedic implants, as the gradients and mechanical properties are highly geometry specific. An example of an FGM for use in orthopedic implants is a carbon fiber reinforced polymer matrix (CFRP) with yttria-stabilized zirconia (YSZ). Varying the amount of YSZ present as a filler in the material resulted in a flexural strength gradation ratio of 1.95. This high gradation ratio and overall high flexibility show promise for use as a supportive material in bone implants. There are quite a few FGMs being explored using hydroxyapatite (HA) due to its osteoconductivity, which assists with osseointegration of implants. However, HA exhibits lower fracture strength and toughness compared to bone, which requires it to be used in conjunction with other materials in implants.
One study combined HA with alumina and zirconia via a spark plasma process to create an FGM that shows a mechanical gradient as well as good cellular adhesion and proliferation. Modeling and simulation Numerical methods have been developed for modelling the mechanical response of FGMs, with the finite element method being the most popular one. Initially, the variation of material properties was introduced by means of rows (or columns) of homogeneous elements, leading to a discontinuous step-type variation in the mechanical properties. Later, Santare and Lambros developed functionally graded finite elements, where the mechanical property variation takes place at the element level. Martínez-Pañeda and Gallego extended this approach to commercial finite element software. Contact properties of FGMs can be simulated using the Boundary Element Method (which can be applied both to non-adhesive and adhesive contacts). Molecular dynamics simulation has also been used to study functionally graded materials. M. Islam studied the mechanical and vibrational properties of functionally graded Cu-Ni nanowires using molecular dynamics simulation. The mechanics of functionally graded material structures has been considered by many authors. Recently, a new micromechanical model has been developed to calculate the effective elastic Young's modulus of graphene-reinforced composite plates. The model considers the average dimensions of the graphene nanoplates, the weight fraction, and the graphene/matrix ratio in the representative volume element. The dynamic behavior of this functionally graded polymer-based composite reinforced with graphene fillers is crucial for engineering applications. Materials science Loughborough University Composite materials
Functionally graded material
[ "Physics", "Materials_science", "Engineering" ]
1,697
[ "Applied and interdisciplinary physics", "Composite materials", "Materials science", "Materials", "nan", "Matter" ]
7,197,227
https://en.wikipedia.org/wiki/Intel%20Play
The Intel Play product line, developed and jointly marketed by Intel and Mattel, was a range of consumer "toy" electronic devices. Its products included the QX3 Computer Microscope; the other toys were the Digital Movie Creator, the Computer Sound Morpher, and the Me2Cam. The Intel Play product line was discontinued on March 29, 2002, when it was purchased by Tim Hall's holding company Prime Entertainment. Hall founded Digital Blue, which continued the Intel Play product line under the Digital Blue brand. The "Play" logo of Intel Play became a staple of 2K Play in 2007. QX3 Computer Microscope The QX3 Computer Microscope was a product in the Intel Play product line and was continued in the Digital Blue product line. An upgraded QX5 model was later available. The QX3 is a small electronic microscope that can connect to a computer via a USB connection. It has magnification levels of 10x, 60x, and 200x. The microscope comes with software which allows a computer to access the microscope and use it to either take pictures or record movies. The specimen can be lit either from underneath or from above by one of two incandescent bulbs (3.5V, 300mA). The specimen platform is adjustable to focus the image. The Vision CPiA (VV0670P001) is interfaced to a CIF CCD sensor, sampled at a resolution of 320x240 pixels. QX5 Computer Microscope The QX5 Computer Microscope is a Digital Blue product and upgraded the QX3 with multiple improvements, including a 640x480 image capture device and a brighter light source. Digital Movie Creator The Digital Movie Creator was a product in the Intel Play product line and was continued in the Digital Blue product line. Upgraded 2.0 and 3.0 versions were later available. The Intel Play Digital Movie Creator was marketed as an easy-to-use digital video camera and movie-making software package that allowed children to use the PC to script and star in their own feature movies. At the time of development and release in 2001, the goal of the Intel Play products was to extend the value and utility of powerful PCs, like ones based on the Intel® Pentium® 4 processor. References External links QX3 Support Page QX3 Download Finder: Drivers and Software Page QX3 Manual by Brian Ford QX3 Tutorials at Marly Cain's Amazing Micronautic Adventures QX3 Microscope Tutorials at Molecular Expressions QX3 Review by Microscopy UK DigiBlue Downloads Page Linux drivers and technical information Play Mattel 2000s toys
Intel Play
[ "Technology" ]
521
[ "Computing stubs" ]
7,197,430
https://en.wikipedia.org/wiki/International%20Congress%20of%20Actuaries
The International Congress of Actuaries (ICA) is a conference held under the auspices of the International Actuarial Association every four years. The most recent conference was the 31st Congress, held in Berlin, Germany from 4 to 8 June 2018. The 33rd Congress will be held in Sydney, Australia in 2023 and the 34th in Tokyo, Japan in 2026. Past congresses 1895 Brussels, Belgium 1898 London, United Kingdom 1900 Paris, France 1903 New York, United States 1906 Berlin, Germany 1909 Vienna, Austria 1912 Amsterdam, Netherlands 1915 St. Petersburg, Russia (organised but not held) 1927 London, United Kingdom 1930 Stockholm, Sweden 1934 Rome, Italy 1937 Paris, France 1940 Lucerne, Switzerland (organised but not held; papers published) 1951 Scheveningen, Netherlands 1954 Madrid, Spain 1957 New York, United States and Toronto, Canada 1960 Brussels, Belgium 1964 London and Edinburgh, United Kingdom 1968 Munich, Germany 1972 Oslo, Norway 1976 Tokyo, Japan 1980 Zurich and Lausanne, Switzerland 1984 Sydney, Australia 1988 Helsinki, Finland 1992 Montreal, Canada 1995 Brussels, Belgium 1998 Birmingham, United Kingdom 2002 Cancún, Mexico 2006 Paris, France 2010 Cape Town, South Africa 2014 Washington, D.C., United States 2018 Berlin, Germany 2023 Sydney, Australia Future congresses 2026 Tokyo, Japan External links ICA 2006 Paris ICA 2010 Cape Town ICA 2014 Washington ICA 2018 Berlin ICA 2023 Sydney International business conferences Actuarial science Academic conferences
International Congress of Actuaries
[ "Mathematics" ]
297
[ "Applied mathematics", "Actuarial science" ]
7,197,701
https://en.wikipedia.org/wiki/Kintu
Kintu is a mythological figure who appears in a creation myth of the people of Buganda, Uganda. According to this legend, Kintu was the first person on earth. And the first Muganda. Kintu, meaning "thing" in Bantu languages, is also commonly attached to the name Muntu, the legendary figure who founded the Gisu and Bukusu tribes. Background and cultural significance The creation myth of the people of Buganda, Uganda, includes a figure called Kintu, who was the first person on earth, and the first man to wander the plains of Uganda alone. He has also sometimes been known as God, or the father of all people who created the first kingdoms. The name Kintu, meaning 'thing' in Bantu, is commonly attached to the name Muntu, who was the legendary figure who founded the Gisu and Bukusu tribes. Kintu is believed to originate from the east, west, and north, who brought with him the first materials to begin life on earth. These materials were millet, cattle or call it Ente in Luganda language, and bananas. Narrative In the version of the creation myth recorded by Harry Johnston, Kintu appears on the plains of Uganda with a cow which was his only possession. He fed on its milk and cow dung before being rewarded bananas and millet from the sky god, Ggulu. Before his encounter with Ggulu, Kintu meets a woman named Nambi (sometimes rendered Nnambi) and her sister who had come from the sky. They first take his beloved cow to Ggulu to prove his humanness and to seek Ggulu's permission to admit him into the sky. Once arriving in the sky, Kintu's humanness is tested by Ggulu through five consecutive trials, each one trickier and more difficult than the last. However, Kintu is able to come out of each trial victorious with the assistance of an unidentified divine power. Ggulu is impressed with Kintu's wit and resilience, rewarding his efforts with his daughter Nnambi and many agricultural gifts as dowry which included: bananas, potatoes, beans, maize corn, groundnuts, and a hen. From this point, Kintu was given the basic materials to be able to create life in Uganda. However, before leaving the sky, Kintu and Nnambi were warned by Ggulu not to come back for any reason as they made their journey back to Earth for fear that Nnambi's brother, Walumbe (meaning 'disease' and 'death' in Bantu), would follow them back to Earth and cause them great trouble. Kintu and Nnambi disregarded Ggulu's warning and Kintu returned to the sky to fetch the millet the hen had to feed on while on earth that Nnambi had left behind. In his short time there, Walumbe had figured Nnambi's whereabouts and convinced Kintu to allow him to live with them on Earth. Upon seeing Walumbe accompanying Kintu on their way down from the sky, Nnambi at first denied her brother but Walumbe eventually persuaded her into allowing him to stay with them. The three of them first settled in Magongo in Buganda where they rested and planted the first crops on earth: banana, maize corn, beans, and groundnuts. During this time, Kintu and Nnambi had three children, and Walumbe insisted on claiming one as his own. Kintu denied his request, promising him one of his future children; however, Kintu and Nnambi proceeded to have many more children and denied Walumbe with each child causing him to lash out and declare that he would kill each and every one of Kintu's children and claim them in that sense. Each day for three days, one of Kintu's children died by the hands of Walumbe until Kintu returned to the sky and told Ggulu of the killings. 
Ggulu expected the actions of Walumbe and sent Kayiikuuzi (meaning 'digger' in Bantu), his son, to Earth to attempt to capture and bring Walumbe back to the sky. Kintu and Kayiikuuzi descended to Earth and were notified by Nnambi that a few more of their children had died during Kintu's trip to the sky. In response to this, Kayiikuuzi called upon Walumbe and the two met and fought. During the fight, Walumbe was able to slip away into a hole in the ground and continued to dig deeper as Kayiikuuzi tried to retrieve him. These gigantic holes are believed to be in the present day Tanda. After relentlessly digging, Kayiikuuzi tired out and took a break from chasing Walumbe. Kayiikuuzi remained on earth for two more days and ordered silence among all things on Earth during that time (before sunrise) in an attempt to lure Walumbe out of the ground. However, just as Walumbe started to get curious and came out from under the ground, some of Kintu's children spotted him and screamed out, scaring Walumbe back into the Earth. Tired and frustrated with his wasted efforts and broken orders, Kayiikuuzi returned to the sky without capturing Walumbe, who stayed on earth and is responsible for the misery and suffering of Kintu's children today. However, Kayiikuuzi is still chasing Walumbe and every time earthquakes and tsunamis strike, it is Kayiikuuzi almost catching Walumbe. Variations Roscoe and Kaggwa In the early 1900s, two similar oral traditions of the Kintu creation myth were recorded and published. One oral tradition recorded by John Roscoe differs from other versions in that Kintu was said to have been seduced by Nnambi into going with her to the sky. In addition, after completing the trials Ggulu tasked him with, he was given permission to marry Nnambi and returned to Uganda with various livestock and one plantation stalk to begin life on Earth. Furthermore, in this version Kintu was the one to try to capture Walumbe, not Kayiikuuzi. The other oral tradition recorded by Sir Apolo Kaggwa differed from other Kintu creation myths in that it focused more on the contributions that Kintu had on the political aspects of Buganda. According to this oral tradition, Kintu formed the political and geographical foundations of the nation by setting the physical boundaries of the nation, founding the capital, and creating the first form of politics in Baganda society through royal hierarchy. Kizza Kintu is also presented in Kizza's 2011 The Oral Tradition of Baganda of Uganda. In this version of the Kintu creation myth, the importance of the story is placed upon Nambi; in the beginning of the myth, it is Nambi who falls in love with Kintu upon their first meeting in Baganda and convinces Kintu to seek approval from her father in order to get her hand in marriage. For this reason, Kintu's worthiness was tested by Nambi's father Ggulu through a series of trials over the course of four days. From this point, this version of the oral tradition differs from others in that Ggulu instructed Nambi to take one female and one male of each living thing in order to begin life on Earth. Ggulu also warned her not to forget anything while packing because she would never be able to return to the sky in fear that her mischievous brother Walumbe would follow them to Earth and bring hardships upon them. References Bantu religion Creation myths Ganda Legendary progenitors Mythological first humans Ugandan mythology
Kintu
[ "Astronomy" ]
1,598
[ "Cosmogony", "Creation myths" ]
7,197,808
https://en.wikipedia.org/wiki/Aniline%20acetate%20test
The aniline acetate test is a chemical test for the presence of certain carbohydrates, in which they are converted to furfural with hydrochloric acid, which reacts with aniline acetate to produce a bright pink color. Pentoses give a strong reaction, and hexoses give a much weaker reaction. Procedure and mechanism A dry sample is dissolved in a small volume of hydrochloric acid and briefly heated. A piece of paper, previously impregnated with aniline acetate, is exposed to the vapor from the sample solution. A bright pink color on the paper is positive for the presence of pentoses. Hydrochloric acid dehydrates pentoses (sugars containing five carbon atoms) to produce furfural. The reaction of furfural and aniline produces a bright pink color. Hexoses, which are sugars which contain six carbons, are not dehydrated to furfural, and so they do not produce a pink color. Interferences 3-Furanaldehyde responds to the usual tests for aldehydes, but unlike 2-furanaldehyde it gives no color test with aniline acetate. References Notes Bibliography A Method for the identification of pure organic compounds by a systematic analytical procedure based on physical properties and chemical reactions. v. 1, 1911. By Samuel Parsons Mulliken. Published by J. Wiley & Sons, Inc., 1904. Google Books link page 33. "Modifications of the aniline acetate-furfural method for the determination of pentose." The Analyst, 1956, 81, 598 - 601, Biochemistry detection reactions Carbohydrate methods
Aniline acetate test
[ "Chemistry", "Biology" ]
356
[ "Biochemistry methods", "Biochemistry detection reactions", "Biochemical reactions", "Microbiology techniques", "Carbohydrate chemistry", "Carbohydrate methods" ]
7,198,409
https://en.wikipedia.org/wiki/Woodward%2C%20Inc.
Woodward, Inc. is an American designer, manufacturer, and service provider of control systems and control system components (e.g. fuel pumps, engine controls, actuators, air valves, fuel nozzles, and electronics) for aircraft engines, industrial engines and turbines, power generation and mobile industrial equipment. The company also provides military devices and other equipment for defense. Woodward, Inc. was founded as The Woodward Governor Company by Amos Woodward in 1870. Initially, the company made controls for waterwheels (first patent No. 103,813), and then moved to hydro turbines. In the 1920s and 1930s, Woodward began designing controls for diesel and other reciprocating engines and for industrial turbines. Also in the 1930s, Woodward developed a governor for variable-pitch aircraft propellers. Woodward parts were notably used in the GE engine on the United States military's first turbine-powered aircraft. Starting in the 1950s, Woodward began designing electronic controls, first analog and then digital units. Historical information The company was founded in Rockford, Illinois, in 1870 with Amos W. Woodward's invention of a non compensating mechanical waterwheel governor (U.S. patent No. 103,813). Thirty years later, his son Elmer patented the first successful mechanical compensating governor for hydraulic turbines (U.S. patent No. 583,527). In 1933, the company expanded its product line to include diesel engine controls (U.S. patent No. 2,039,507) and aircraft propeller governors (British patent No. 470,284). Woodward governors followed the rapid advancement of diesel engine applications for railroads, maritime and electrical generation in many fields. The advent of gas turbine engines for aircraft and industrial uses offered still more opportunities for Woodward designed fuel controls. And, of course, the science of electronics has added impetus to this industry. Elmer E. Woodward conceived, designed, and developed the first successful propeller control in 1933. This model PW-34 propeller governor is on display at the Udvar-Hazy annex of the Smithsonian National Air and Space Museum. Modern day company As of 2007, Woodward Governor Company became a billion-dollar company with establishments worldwide, including Japan, China, and Europe. On January 26, 2011, the company announced that shareholders had approved the name change to Woodward, Inc. A growing number of general aviation and commuter aircraft rely on Woodward AES overspeed governors, synchronizers and synchrophasers for turboshaft, turboprop, and reciprocating engines. , approximately 34% of the company's sales were to the defense market, including parts for the V-22 Osprey ($645,000 revenue per aircraft) and the F/A-18 ($335,000 revenue per aircraft). The engines that are controlled by Woodward Aircraft engines systems include those from Honeywell (TPE331), General Electric (CT7), Pratt & Whitney Canada (PT6A series), Raytheon, Vans, and Rotax Corporations. In April 2018, Woodward Inc. purchased L'Orange GmbH for $859 million. This supplier of fuel-injection components for stationary, marine, offshore, and industrial engines was part of Rolls-Royce's power-systems business in Germany, the US and China. On January 12, 2020, the company announced an intent to merge with Hexcel, according to the Wall Street Journal. On April 20, it was announced the merger was called off, as a result of the health crisis caused by the COVID-19 pandemic. The COVID19 crisis also led to a sharp drop in revenues for Woodward, Inc. 
In February 2024, a protest outside facilities in Niles, Illinois resulted in arrests of 7 men and 26 women. Protesters said that Woodward is complicit in the Israel–Hamas war and called for an end to contracts with Boeing and Israel. A previous protest in support of the Palestinian cause brought about 300 people to the facility in Fort Collins, Colorado, in November 2023. Woodward family patents References 1870 establishments in Illinois Aerospace companies of the United States Aerospace materials Aircraft component manufacturers of the United States Companies based in Fort Collins, Colorado Companies listed on the Nasdaq Electrical engineering companies of the United States Manufacturing companies based in Colorado Manufacturing companies established in 1870 Turbine manufacturers Companies in the S&P 400
Woodward, Inc.
[ "Engineering" ]
897
[ "Aerospace materials", "Aerospace engineering" ]
7,198,571
https://en.wikipedia.org/wiki/Monoclonal%20antibody%20therapy
Monoclonal antibodies (mAbs) have varied therapeutic uses. It is possible to create a mAb that binds specifically to almost any extracellular target, such as cell surface proteins and cytokines. They can be used to render their target ineffective (e.g. by preventing receptor binding), to induce a specific cell signal (by activating receptors), to cause the immune system to attack specific cells, or to bring a drug to a specific cell type (such as with radioimmunotherapy which delivers cytotoxic radiation). Major applications include cancer, autoimmune diseases, asthma, organ transplants, blood clot prevention, and certain infections. Antibody structure and function Immunoglobulin G (IgG) antibodies are large heterodimeric molecules, approximately 150 kDa and are composed of two kinds of polypeptide chain, called the heavy (~50kDa) and the light chain (~25kDa). The two types of light chains are kappa (κ) and lambda (λ). By cleavage with enzyme papain, the Fab (fragment-antigen binding) part can be separated from the Fc (fragment crystallizable region) part of the molecule. The Fab fragments contain the variable domains, which consist of three antibody hypervariable amino acid domains responsible for the antibody specificity embedded into constant regions. The four known IgG subclasses are involved in antibody-dependent cellular cytotoxicity. Antibodies are a key component of the adaptive immune response, playing a central role in both in the recognition of foreign antigens and the stimulation of an immune response to them. The advent of monoclonal antibody technology has made it possible to raise antibodies against specific antigens presented on the surfaces of tumors. Monoclonal antibodies can be acquired in the immune system via passive immunity or active immunity. The advantage of active monoclonal antibody therapy is the fact that the immune system will produce antibodies long-term, with only a short-term drug administration to induce this response. However, the immune response to certain antigens may be inadequate, especially in the elderly. Additionally, adverse reactions from these antibodies may occur because of long-lasting response to antigens. Passive monoclonal antibody therapy can ensure consistent antibody concentration, and can control for adverse reactions by stopping administration. However, the repeated administration and consequent higher cost for this therapy are major disadvantages. Monoclonal antibody therapy may prove to be beneficial for cancer, autoimmune diseases, and neurological disorders that result in the degeneration of body cells, such as Alzheimer's disease. Monoclonal antibody therapy can aid the immune system because the innate immune system responds to the environmental factors it encounters by discriminating against foreign cells from cells of the body. Therefore, tumor cells that are proliferating at high rates, or body cells that are dying which subsequently cause physiological problems are generally not specifically targeted by the immune system, since tumor cells are the patient's own cells. Tumor cells, however are highly abnormal, and many display unusual antigens. Some such tumor antigens are inappropriate for the cell type or its environment. Monoclonal antibodies can target tumor cells or abnormal cells in the body that are recognized as body cells, but are debilitating to one's health. 
History Immunotherapy developed in the 1970s following the discovery of the structure of antibodies and the development of hybridoma technology, which provided the first reliable source of monoclonal antibodies. These advances allowed for the specific targeting of tumors both in vitro and in vivo. Initial research on malignant neoplasms found mAb therapy of limited and generally short-lived success with blood malignancies. Treatment also had to be tailored to each individual patient, which was impracticable in routine clinical settings. Four major antibody types that have been developed are murine, chimeric, humanised and human. Antibodies of each type are distinguished by suffixes on their name. Murine Initial therapeutic antibodies were murine analogues (suffix -omab). These antibodies have: a short half-life in vivo (due to immune complex formation), limited penetration into tumour sites and inadequately recruit host effector functions. Chimeric and humanized antibodies have generally replaced them in therapeutic antibody applications. Understanding of proteomics has proven essential in identifying novel tumour targets. Initially, murine antibodies were obtained by hybridoma technology, for which Jerne, Köhler and Milstein received a Nobel prize. However the dissimilarity between murine and human immune systems led to the clinical failure of these antibodies, except in some specific circumstances. Major problems associated with murine antibodies included reduced stimulation of cytotoxicity and the formation of complexes after repeated administration, which resulted in mild allergic reactions and sometimes anaphylactic shock. Hybridoma technology has been replaced by recombinant DNA technology, transgenic mice and phage display. Chimeric and humanized To reduce murine antibody immunogenicity (attacks by the immune system against the antibody), murine molecules were engineered to remove immunogenic content and to increase immunologic efficiency. This was initially achieved by the production of chimeric (suffix -ximab) and humanized antibodies (suffix -zumab). Chimeric antibodies are composed of murine variable regions fused onto human constant regions. Taking human gene sequences from the kappa light chain and the IgG1 heavy chain results in antibodies that are approximately 65% human. This reduces immunogenicity, and thus increases serum half-life. Humanised antibodies are produced by grafting murine hypervariable regions on amino acid domains into human antibodies. This results in a molecule of approximately 95% human origin. Humanised antibodies bind antigen much more weakly than the parent murine monoclonal antibody, with reported decreases in affinity of up to several hundredfold. Increases in antibody-antigen binding strength have been achieved by introducing mutations into the complementarity determining regions (CDR), using techniques such as chain-shuffling, randomization of complementarity-determining regions and antibodies with mutations within the variable regions induced by error-prone PCR, E. coli mutator strains and site-specific mutagenesis. Human monoclonal antibodies Human monoclonal antibodies (suffix -umab) are produced using transgenic mice or phage display libraries by transferring human immunoglobulin genes into the murine genome and vaccinating the transgenic mouse against the desired antigen, leading to the production of appropriate monoclonal antibodies. Murine antibodies in vitro are thereby transformed into fully human antibodies. 
The heavy and light chains of human IgG proteins are expressed in structural polymorphic (allotypic) forms. Human IgG allotype is one of the many factors that can contribute to immunogenicity. Targeted conditions Cancer Anti-cancer monoclonal antibodies can be targeted against malignant cells by several mechanisms. Ramucirumab is a recombinant human monoclonal antibody and is used in the treatment of advanced malignancies. In childhood lymphoma, phase I and II studies have found a positive effect of using antibody therapy. Monoclonal antibodies used to boost an anticancer immune response is another strategy to fight cancer where cancer cells are not targeted directly. Strategies include antibodies engineered to block mechanisms which downregulate anticancer immune responses, checkpoints such as PD-1 and CTLA-4 (checkpoint therapy), and antibodies modified to stimulate activation of immune cells. Autoimmune diseases Monoclonal antibodies used for autoimmune diseases include infliximab and adalimumab, which are effective in rheumatoid arthritis, Crohn's disease and ulcerative colitis by their ability to bind to and inhibit TNF-α. Basiliximab and daclizumab inhibit IL-2 on activated T cells and thereby help preventing acute rejection of kidney transplants. Omalizumab inhibits human immunoglobulin E (IgE) and is useful in moderate-to-severe allergic asthma. Alzheimer's disease Alzheimer's disease (AD) is a multi-faceted, age-dependent, progressive neurodegenerative disorder, and is a major cause of dementia. According to the Amyloid hypothesis, the accumulation of extracellular amyloid beta peptides (Aβ) into plaques via oligomerization leads to hallmark symptomatic conditions of AD through synaptic dysfunction and neurodegeneration. Immunotherapy via exogenous monoclonal antibody (mAb) administration has been known to treat various central nervous disorders. In the case of AD, immunotherapy is believed to inhibit Aβ-oligomerization or clearing of Aβ from the brain and thereby prevent neurotoxicity. However, mAbs are large molecules and due to the blood–brain barrier, uptake of mAb into the brain is extremely limited, only approximately 1 of 1000 mAb molecules is estimated to pass. However, the Peripheral Sink hypothesis proposes a mechanism where mAbs may not need to cross the blood–brain barrier. Therefore, many research studies are being conducted from failed attempts to treat AD in the past. However, anti-Aβ vaccines can promote antibody-mediated clearance of Aβ plaques in transgenic mice models with amyloid precursor proteins (APP), and can reduce cognitive impairments. Vaccines can stimulate the immune system to produce its own antibodies, in the case of Alzheimer's disease by administration of the antigen Aβ. This is also known as active immunotherapy. Another strategy is so called passive immunotherapy. In this case the antibodies is produced externally in cultured cells and are delivered to the patient in the form of a drug. In mice expressing APP, both active and passive immunization of anti-Aβ antibodies has been shown to be effective in clearing plaques, and can improve cognitive function. Currently, there are two FDA approved antibody therapies for Alzheimer's disease, Aducanemab and Lecanemab. Aducanemab has received accelerated approval while Lecanemab has received full approval. Several clinical trials using passive and active immunization have been performed and some are on the way with expected results in a couple of years. 
The implementation of these drugs is often during the early onset of AD. Other research and drug development for early intervention and AD prevention is ongoing. Examples of important mAb drugs that have been or are under evaluation for treatment of AD include Bapineuzumab, Solanezumab, Gautenerumab, Crenezumab, Aducanemab, Lecanemab and Donanemab. Bapineuzumab Bapineuzumab, a humanized anti-Aβ mAb, is directed against the N-terminus of Aβ. Phase II clinical trials of Bapineuzumab in mild to moderate AD patients resulted in reduced Aβ concentration in the brain. However, in patients with increased apolipoprotein (APOE) e4 carriers, Bapineuzumab treatment is also accompanied by vasogenic edema, a cytotoxic condition where the blood brain barrier has been disrupted thereby affecting white matter from excess accumulation of fluid from capillaries in intracellular and extracellular spaces of the brain. In Phase III clinical trials, Bapineuzumab showed promising positive effect on biomarkers of AD but failed to show effect on cognitive decline. Therefore, Bapineuzumab was discontinued after failing in the Phase III clinical trial. Solanezumab Solanezumab, an anti-Aβ mAb, targets the N-terminus of Aβ. In Phase I and Phase II of clinical trials, Solanezumab treatment resulted in cerebrospinal fluid elevation of Aβ, thereby showing a reduced concentration of Aβ plaques. Additionally, there are no associated adverse side effects. Phase III clinical trials of Solanezumab brought about significant reduction in cognitive impairment in patients with mild AD, but not in patients with severe AD. However, Aβ concentration did not significantly change, along with other AD biomarkers, including phospho-tau expression, and hippocampal volume. Phase III clinical trials of Solanezumab failed as it did not show effect on cognitive decline in comparison to placebo. Lecanemab Lecanemab (BAN2401), is a humanized mAb that selectively targets toxic soluble Aβ protofibrils, In phase 3 clinical trials, Lecanemab showed a 27% slower cognitive decline after 18 months of treatment in comparison to placebo. The phase 3 clinical trials also reported infusion related reactions, amyloid-related imaging abnormalities and headaches as the most common side effects of Lecanemab. In July 2023 the FDA gave Lecanemab full approval for the treatment of Alzheimer's Disease and it was given the commercial name Leqembi. Preventive trials Failure of several drugs in Phase III clinical trials has led to AD prevention and early intervention for onset AD treatment endeavours. Passive anti-Aβ mAb treatment can be used for preventive attempts to modify AD progression before it causes extensive brain damage and symptoms. Trials using mAb treatment for patients positive for genetic risk factors, and elderly patients positive for indicators of AD are underway. This includes anti-AB treatment in Asymptomatic Alzheimer's Disease (A4), the Alzheimer's Prevention Initiative (API), and DIAN-TU. The A4 study on older individuals who are positive for indicators of AD but are negative for genetic risk factors will test Solanezumab in Phase III Clinical Trials, as a follow-up of previous Solanezumab studies. DIAN-TU, launched in December 2012, focuses on young patients positive for genetic mutations that are risks for AD. This study uses Solanezumab and Gautenerumab. 
Gautenerumab, the first fully human MAB that preferentially interacts with oligomerized Aβ plaques in the brain, caused significant reduction in Aβ concentration in Phase I clinical trials, preventing plaque formation and concentration without altering plasma concentration of the brain. Phase II and III clinical trials are currently being conducted. Therapy types Radioimmunotherapy Radioimmunotherapy (RIT) involves the use of radioactively-conjugated murine antibodies against cellular antigens. Most research involves their application to lymphomas, as these are highly radio-sensitive malignancies. To limit radiation exposure, murine antibodies were chosen, as their high immunogenicity promotes rapid tumor clearance. Tositumomab is an example used for non-Hodgkin's lymphoma. Antibody-directed enzyme prodrug therapy Antibody-directed enzyme prodrug therapy (ADEPT) involves the application of cancer-associated monoclonal antibodies that are linked to a drug-activating enzyme. Systemic administration of a non-toxic agent results in the antibody's conversion to a toxic drug, resulting in a cytotoxic effect that can be targeted at malignant cells. The clinical success of ADEPT treatments is limited. Antibody-drug conjugates Antibody-drug conjugates (ADCs) are antibodies linked to one or more drug molecules. Typically when the ADC meets the target cell (e.g. a cancerous cell) the drug is released to kill it. Many ADCs are in clinical development. a few have been approved. Immunoliposome therapy Immunoliposomes are antibody-conjugated liposomes. Liposomes can carry drugs or therapeutic nucleotides and when conjugated with monoclonal antibodies, may be directed against malignant cells. Immunoliposomes have been successfully used in vivo to convey tumour-suppressing genes into tumours, using an antibody fragment against the human transferrin receptor. Tissue-specific gene delivery using immunoliposomes has been achieved in brain and breast cancer tissue. Checkpoint therapy Checkpoint therapy uses antibodies and other techniques to circumvent the defenses that tumors use to suppress the immune system. Each defense is known as a checkpoint. Compound therapies combine antibodies to suppress multiple defensive layers. Known checkpoints include CTLA-4 targeted by ipilimumab, PD-1 targeted by nivolumab and pembrolizumab and the tumor microenvironment. The tumor microenvironment (TME) features prevents the recruitment of T cells to the tumor. Ways include chemokine CCL2 nitration, which traps T cells in the stroma. Tumor vasculature helps tumors preferentially recruit other immune cells over T cells, in part through endothelial cell (EC)–specific expression of FasL, ETBR, and B7H3. Myelomonocytic and tumor cells can up-regulate expression of PD-L1, partly driven by hypoxic conditions and cytokine production, such as IFNβ. Aberrant metabolite production in the TME, such as the pathway regulation by IDO, can affect T cell functions directly and indirectly via cells such as Treg cells. CD8 cells can be suppressed by B cells regulation of TAM phenotypes. Cancer-associated fibroblasts (CAFs) have multiple TME functions, in part through extracellular matrix (ECM)–mediated T cell trapping and CXCL12-regulated T cell exclusion. FDA-approved therapeutic antibodies The first FDA-approved therapeutic monoclonal antibody was a murine IgG2a CD3 specific transplant rejection drug, OKT3 (also called muromonab), in 1986. This drug found use in solid organ transplant recipients who became steroid resistant. 
Hundreds of therapies are undergoing clinical trials. Most are concerned with immunological and oncological targets.
Tositumomab – Bexxar – 2003 – CD20
Mogamulizumab – Poteligeo – August 2018 – CCR4
Moxetumomab pasudotox – Lumoxiti – September 2018 – CD22
Cemiplimab – Libtayo – September 2018 – PD-1
Polatuzumab vedotin – Polivy – June 2019 – CD79B
Bispecific antibodies have also arrived in the clinic. In 2009, the bispecific antibody catumaxomab was approved in the European Union; it was later withdrawn for commercial reasons. Others include amivantamab, blinatumomab, teclistamab, and emicizumab.

Economics
Since 2000, the therapeutic market for monoclonal antibodies has grown exponentially. In 2006, the "big 5" therapeutic antibodies on the market were bevacizumab, trastuzumab (both oncology), adalimumab, infliximab (both autoimmune and inflammatory disorders, 'AIID') and rituximab (oncology and AIID); together they accounted for 80% of revenues that year. In 2007, eight of the 20 best-selling biotechnology drugs in the U.S. were therapeutic monoclonal antibodies. This rapid growth in demand for monoclonal antibody production has been well accommodated by the industrialization of mAb manufacturing.

References
External links
Cancer Management Handbook: Principles of Oncologic Pharmacotherapy
Immunology Antiviral drugs
Monoclonal antibody therapy
[ "Biology" ]
4,051
[ "Immunology", "Antiviral drugs", "Biocides" ]
7,198,865
https://en.wikipedia.org/wiki/Electron%20magnetic%20resonance
In physics, biology and chemistry, electron magnetic resonance (EMR) is an interdisciplinary field that covers both electron paramagnetic resonance (EPR, also known as electron spin resonance – ESR) and electron cyclotron resonance (ECR). EMR looks at electrons rather than nuclei or ions as in nuclear magnetic resonance (NMR) and ion cyclotron resonance (ICR) respectively. References Electromagnetism
Electron magnetic resonance
[ "Physics", "Materials_science" ]
92
[ "Electromagnetism", "Materials science stubs", "Physical phenomena", "Fundamental interactions", "Electromagnetism stubs" ]
7,199,356
https://en.wikipedia.org/wiki/Circuit%20ID
A circuit ID is a company-specific identifier assigned to a data or voice network connection between two locations. This connection, often called a circuit, may then be leased to a customer, who refers to it by that ID. In this way, the circuit ID is similar to a serial number on any product sold from a retailer to a customer. Each circuit ID is unique, so a specific customer having many circuit connections sold to them would have many circuit IDs to refer to those connections. As an example of a use of the circuit ID, when a subscriber/customer has an issue (or trouble) with a circuit, they may contact the Controlling Local Exchange Carrier (Controlling LEC) telecommunications provider, identifying the circuit that has the issue by giving the LEC that circuit ID reference. The LEC would refer to their internal records for this circuit ID to take corrective action on the designated circuit.

Telecom circuit ID formats
Although telecommunication providers are not required to follow any specific standard for circuit IDs, many do. In the United States, LECs typically generate circuit IDs based on Telcordia Technologies' Common Language Information Services. Using the Telcordia standards for circuit naming allows a LEC to build a certain amount of intelligence into the name of a circuit. As Telcordia has developed circuit IDs, different types of circuit connections require different formats for the circuit ID, and in each format different segments of the ID have very specific meanings. At one time, abbreviations used for circuit types were meaningful (for example, HC for high capacity), but the complexity of the business no longer allows for it. Now, with many different technologies and uses for circuit connections, different types of circuits may use different types of circuit ID formats that provide more meaning for that type of circuit. Below are examples of the circuit ID formats that one telecommunications provider, CenturyLink, has published for three different types of circuit connections.

Carrier-facility format
For "carrier circuits", CenturyLink uses a format like:
AAAAA/BBBBBB/CCCCCCCCCCC/DDDDDDDDDDD
Where:
A = Prefix: 3–5 alphanumeric characters. This is a unique identifier. Required.
B = Facility Type: 1–6 alphanumeric characters. Describes the "type" of facility circuit. Required.
C = CLLI Code for the A-location: 8 or 11 alphanumeric characters. Required.
D = CLLI Code for the Z-location: 8 or 11 alphanumeric characters. Required.
Example: HN101/T3U/MPLSMNDT000/GLVYMNORIII
The above example circuit ID represents an unframed T3 circuit between two locations in Minnesota with a "serial number" of HN101. Some telecom providers also build a bit of intelligence (or meaning) into this unique prefix information. For instance, a T3U circuit type that carries a specific type of network traffic might use the HN designation at the beginning followed by a number in the 100-block for another specific purpose, 200-block for yet another purpose, and so on. For more on Carrier Facility formatted circuit IDs using Telcordia's standards, see Common Language Facility Identification.

Serial-number format
For "special circuits", CenturyLink uses a format like:
AA/BBCC/DDDDDD/EEE/FFFF/GGG
Where:
A = Prefix: 1–2 alphanumeric characters. Optional.
B = Service Code: 2 characters. Required. The type of service this circuit is providing.
C = Service Code Modifier: 2 characters. Required. Modifies the meaning of the service code, often identifies different billing options.
D = Serial Number: 1–6 digits. Required.
E = Suffix: 3-character suffix to the serial number. Optional, but rarely used.
F = Company Code: 2–4 alphabetic characters (e.g., NW, MS, PN, CTL, GTEW, NRLD, UDMN, FROT...). Required. Identifies the Controlling LEC.
G = Segment: 1–3 alphanumeric characters. Optional for point-to-point circuits, but usually found with multi-point DS0 circuits.
Examples:
32/HFGS/012345/NW = T3 circuit controlled by Qwest
73/HCGS/123456/000/CC = T1 circuit controlled by Consolidated Communications
44/AQDU/987654/000/G3 = HDSL circuit controlled by G3 Telecom
Parts of this circuit ID may also have additional intelligence (or meaning) built in. For instance, the Prefix may or may not be based on the LATA from one end of the circuit or the other.

Telephone-number format
For "telephone based data circuits", CenturyLink uses a format like:
AA/BBCC/DDD/EEE/FFFF/GGGGG/HHH
Where:
A = Prefix: 1–2 alphanumeric characters. Required if it exists.
B = Service Code: 2 alphabetic characters. Required for non-DSL numbered circuits.
C = Service Code Modifier: 2 alphabetic characters. Required. Modifies the meaning of the service code, often identifies different billing options.
D = NPA: 3 digits. This is a required field. Numbering plan area code.
E = NXX: 3 digits. This is a required field. Central office (exchange) code.
F = Line: 4 digits. This is a required field.
G = Extension: 1–5 alphanumeric characters. Optional.
H = Segment: 1–3 alphanumeric characters. This is a rarely used optional field.
Example: 54/UDNV/303/111/5555/99/1 = a circuit serving phone number 303-111-5555, ext. 99, on segment 1

Circuit designations in the United Kingdom
Each carrier (Public Telecomms Operator) in the UK has its own form of designation. The Post Office/BT system is described here from the original term 'Engineering Circuit Designation'. Other PTOs have their own schemes and would be suitable for inclusion. The Post Office or BT system historically used PW and R, optionally followed by a Region and/or Area code, followed by a number of digits between 4 and 6, for rented, analogue private lines. A Region code might have been LR for London Region or ER for Eastern Region, and for an Area L/NW for North West London or CB for Cambridge. In all but the rarest instances these must have been migrated to AX by the Analogue Upgrade project to digitise as much as possible for FDM Offload, (no) dc path uniformity and remote access in maintenance. Between 2 locations then:
AX nnnnnn designates an analogue (2w or 4w) presented link.
KX nnnnnn designates a digitally presented link up to 64 kbit/s.
NX nnnnnn designates a digitally presented link from 128 up to 1,024 kbit/s.
MX nnnnnn designates a digitally presented link from 2 Mbit/s upwards.
Each of the prefixes may have an addition, making MX/GB or KX/INT for example. There are many specialist designations covering bearers for other purposes, for example for IP and bearers provided for other PTOs. One common other use has been IMUK and IMGB for the 2 Mbit/s link from the public exchange to a customer location for the delivery of ISDN30 in the earlier DASS and more recent I.421 format. The trend has been that the local UK area suffixes are no longer used, though legacy lines with these may still be in place.

International circuit designations for correspondent International Private Leased Circuits (IPLCs)
These were known as CCITT, now ITU-T, designations. In the interests of international recognition, a protocol with recognisable town names has been used.
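Before turning to the international designations, a rough sketch of how the CenturyLink serial-number format described above might be parsed programmatically. This is an illustration only: the field names and widths follow the description given earlier, but the regular expression, the helper-function name and the assumption that every field is separated by "/" are hypothetical and are not part of CenturyLink's or Telcordia's published specifications. Note, for instance, that the company code is described as alphabetic yet the published examples include "G3", so the sketch accepts alphanumeric company codes.

import re

# Assumed pattern for the serial-number format described above:
# [Prefix/]ServiceCode+Modifier/Serial[/Suffix]/CompanyCode[/Segment]
SERIAL_FORMAT = re.compile(
    r"^(?:(?P<prefix>[A-Z0-9]{1,2})/)?"              # A: optional 1-2 character prefix
    r"(?P<service>[A-Z]{2})(?P<modifier>[A-Z]{2})/"  # B + C: service code and modifier
    r"(?P<serial>\d{1,6})"                           # D: serial number, 1-6 digits
    r"(?:/(?P<suffix>[A-Z0-9]{3}))?"                 # E: optional 3-character suffix
    r"/(?P<company>[A-Z0-9]{2,4})"                   # F: company code (controlling LEC)
    r"(?:/(?P<segment>[A-Z0-9]{1,3}))?$"             # G: optional segment
)

def parse_serial_circuit_id(circuit_id: str) -> dict:
    """Split a serial-number-format circuit ID into its named fields."""
    match = SERIAL_FORMAT.match(circuit_id.upper())
    if not match:
        raise ValueError(f"Unrecognised circuit ID: {circuit_id}")
    # Drop fields that were not present in this particular ID.
    return {name: value for name, value in match.groupdict().items() if value is not None}

# Example from the text: a T1 circuit controlled by Consolidated Communications.
print(parse_serial_circuit_id("73/HCGS/123456/000/CC"))

Applied to that example, the sketch yields {'prefix': '73', 'service': 'HC', 'modifier': 'GS', 'serial': '123456', 'suffix': '000', 'company': 'CC'}; the other two published examples parse the same way.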
The format is: <Earliest alphabetical town or city> – <second alphabetical town or city> <Type> <Serial>
Towns and cities have abbreviations accepted by the 2 corresponding PTOs and the CCITT/ITU-T. Examples are:
AMS = Amsterdam
BS = Bristol
DSSD = Düsseldorf
FFTM = Frankfurt-am-Main
KOB = Copenhagen
L = London
MDD = Madrid
PS = Paris
Original CCITT Leased Circuit Types were:
Analogue
P = Audio circuit (however transmitted over distance) presented as Audio for voice
FP = Audio circuit (however transmitted over distance) presented as Audio for Fax modem
DP = Audio circuit (however transmitted over distance) presented as Audio for Data modem
XP = Audio circuit (however transmitted over distance) presented as Audio and switched by customer for alternate use by Voice or Data modem
L – PS P4 was the fourth analogue line between Paris and London normally used for voice transmission at that time. DSSD-L XP2 was the second analogue line between Düsseldorf and London alternately used for voice and data at that time.
Digital
NP became the Type designator for most correspondent International digital links. BS – MDD NP12 and KOB – PS NP34 would have been typical uses of the scheme for links between Bristol & Madrid and Copenhagen & Paris.
Notes
The choice of letters in designating major towns and cities could be seen to reflect a short form of the name in the language of the country and also to disambiguate between similarly named locations. København, being the Danish home version of Copenhagen, attracted KOB as the abbreviation. FFTO was the designation for Frankfurt-an-der-Oder, where there was a clear need to disambiguate references from FFTM. The serial, or the numbered occurrence of a link between two PTOs between 2 cities, was usually the next free number in the system, but the CCITT allowed for the re-use of old serial numbers after a period of 6 months. A customer ordering 3 links could be allocated DP23, DP24 and then DP6 between 2 major cities (DP6 had been ceased over 6 months earlier). It can be considered that the serial or last number of the type of correspondent link between two places made the link unique, but this did lead to problems, for example when a major PTO in one country was setting up links in correspondent relations with more than one PTO in another country. The development was away from correspondent IPLC links to a situation where one facilities provider could provide the link over the international part and sometimes as far as the distant end customer. This was the result of liberalisation and competition in home and overseas markets. In some cases the facilities provider would carry the link to their PoP in the distant country and then rent a national or local tail from the PTO in that country. That would attract a designation particular to that area and not reflect its international connection significance.
See also
Common Language Information Services
Outline of telecommunication
References
Network addressing Local loop Telecommunications engineering
Circuit ID
[ "Engineering" ]
2,263
[ "Electrical engineering", "Telecommunications engineering" ]
7,199,881
https://en.wikipedia.org/wiki/Pozzolan
Pozzolans are a broad class of siliceous and aluminous materials which, in themselves, possess little or no cementitious value but which will, in finely divided form and in the presence of water, react chemically with calcium hydroxide (Ca(OH)2) at ordinary temperature to form compounds possessing cementitious properties. The quantification of the capacity of a pozzolan to react with calcium hydroxide and water is given by measuring its pozzolanic activity. Pozzolana are naturally occurring pozzolans of volcanic origin. History Mixtures of calcined lime and finely ground, active aluminosilicate materials were pioneered and developed as inorganic binders in the Ancient world. Architectural remains of the Minoan civilization on Crete have shown evidence of the combined use of slaked lime and additions of finely ground potsherds for waterproof renderings in baths, cisterns and aqueducts. Evidence of the deliberate use of volcanic materials such as volcanic ashes or tuffs by the ancient Greeks dates back to at least 500–400 BC, as uncovered at the ancient city of Kameiros, Rhodes. In subsequent centuries the practice spread to the mainland and was eventually adopted and further developed by the Romans. The Romans used volcanic pumices and tuffs found in neighbouring territories, the most famous ones found in Pozzuoli (Naples), hence the name pozzolan, and in Segni (Latium). Preference was given to natural pozzolan sources such as German trass, but crushed ceramic waste was frequently used when natural deposits were not locally available. The exceptional lifetime and preservation conditions of some of the most famous Roman buildings such as the Pantheon or the Pont du Gard constructed using pozzolan-lime mortars and concrete testify both to the excellent workmanship achieved by Roman engineers and to the durable properties of the binders they used. Much of the practical skill and knowledge regarding the use of pozzolans was lost at the decline of the Roman empire. The rediscovery of Roman architectural practices, as described by Vitruvius in De architectura, also led to the reintroduction of lime-pozzolan binders. Particularly the strength, durability and hydraulic capability of hardening underwater made them popular construction materials during the 16th–18th century. The invention of other hydraulic lime cements and eventually Portland cement in the 18th and 19th century resulted in a gradual decline of the use of pozzolan-lime binders, which develop strength less rapidly. Over the course of the 20th century the use of pozzolans as additions (the technical term is "supplementary cementitious material", usually abbreviated "SCM") to Portland cement concrete mixtures has become common practice. Combinations of economic and technical aspects and, increasingly, environmental concerns have made so-called blended cements, i.e., cements that contain considerable amounts of supplementary cementitious materials (mostly around 20% by weight, but over 80% by weight in Portland blast-furnace slag cement), the most widely produced and used cement type by the beginning of the 21st century. Pozzolanic materials The general definition of a pozzolan embraces a large number of materials which vary widely in terms of origin, composition and properties. Both natural and artificial (man-made) materials show pozzolanic activity and are used as supplementary cementitious materials. 
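Whatever their origin, the chemistry that makes these materials useful is the pozzolanic reaction defined at the start of this article: finely divided amorphous silica consumes calcium hydroxide and water to form calcium silicate hydrate (C-S-H), the same family of binding phases produced by Portland cement hydration. The stoichiometry below is a schematic illustration only (the coefficients x and n vary with the pozzolan and the curing conditions), and pozzolans containing reactive alumina additionally form calcium aluminate hydrates (C-A-H):

$$x\,\mathrm{Ca(OH)_2} + \mathrm{SiO_2} + (n - x)\,\mathrm{H_2O} \longrightarrow (\mathrm{CaO})_x \cdot \mathrm{SiO_2} \cdot n\,\mathrm{H_2O} \quad \text{(C-S-H)}$$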
Artificial pozzolans can be produced deliberately, for instance by thermal activation of kaolin-clays to obtain metakaolin, or can be obtained as waste or by-products from high-temperature processes, such as fly ashes from coal-fired electricity production. The most commonly used pozzolans today are industrial by-products such as fly ash, silica fume from silicon smelting, highly reactive metakaolin, and burned organic matter residues rich in silica such as rice husk ash. Their use has been firmly established and regulated in many countries. However, the supply of high-quality pozzolanic by-products is limited and many local sources are already fully exploited. Alternatives to the established pozzolanic by-products are to be found on the one hand in an expansion of the range of industrial by-products or societal waste considered and on the other hand in an increased usage of naturally occurring pozzolans. Natural pozzolanas are abundant in certain locations and are extensively used as an addition to Portland cement in countries such as Italy, Germany, Greece and China. Volcanic ashes and pumices largely composed of volcanic glass are commonly used, as are deposits in which the volcanic glass has been altered to zeolites by interaction with alkaline waters. Deposits of sedimentary origin are less common. Diatomaceous earths, formed by the accumulation of siliceous diatom microskeletons, are a prominent source material here.

Use
The benefits of pozzolan use in cement and concrete are threefold. First is the economic gain obtained by replacing a substantial part of the Portland cement by cheaper natural pozzolans or industrial by-products. Second is the lowering of the blended cement environmental cost associated with the greenhouse gases emitted during Portland cement production. A third advantage is the increased durability of the end product. Blending pozzolans with Portland cement interferes little with the conventional production process and offers the opportunity to convert waste (for example, fly ash) into durable construction materials. Replacing up to 40 percent of the Portland cement in a concrete mix with a combination of pozzolanic materials is usually feasible. Pozzolans can be used to control setting, increase durability, reduce cost and reduce pollution without significantly reducing the final compressive strength or other performance characteristics. The properties of hardened blended cements are strongly related to the development of the binder microstructure, i.e., to the distribution, type, shape and dimensions of both reaction products and pores. The beneficial effects of pozzolan addition in terms of higher compressive strength, performance and greater durability are mostly attributed to the pozzolanic reaction in which calcium hydroxide is consumed to produce additional C-S-H and C-A-H reaction products. These pozzolanic reaction products fill in pores and result in a refining of the pore size distribution or pore structure. This results in a lowered permeability of the binder. The contribution of the pozzolanic reaction to cement strength is usually developed at later curing stages, depending on the pozzolanic activity. In the large majority of blended cements, initial lower strengths can be observed compared to the parent Portland cement. However, especially in the case of pozzolans finer than the Portland cement, the decrease in early strength is usually less than what can be expected based on the dilution factor.
This can be explained by the filler effect, in which small SCM grains fill in the space between the cement particles, resulting in a much denser binder. The acceleration of the Portland cement hydration reactions can also partially accommodate the loss of early strength. The increased chemical resistance to the ingress and harmful action of aggressive solutions constitutes one of the main advantages of pozzolan blended cements. The improved durability of the pozzolan-blended binders lengthens the service life of structures and reduces the costly and inconvenient need to replace damaged construction. One of the principal reasons for increased durability is the lowered calcium hydroxide content available to take part in deleterious expansive reactions induced by, for example, sulfate attack. Furthermore, the reduced binder permeability slows down the ingress of harmful ions such as chloride or carbonate. The pozzolanic reaction can also reduce the risk of expansive alkali-silica reactions between the cement and aggregates by changing the binder pore solution. Lowering the solution alkalinity and increasing alumina concentrations strongly decreases or inhibits the dissolution of the aggregate aluminosilicates.

See also
Alkali–aggregate reaction (AAR)
Alkali–silica reaction (ASR)
Cosa; its ancient artificial bay was built with pozzolan
Calcium silicate hydrate (C-S-H)
Cement chemist notation (CCN)
Energetically modified cement (EMC)
Zeolite

References
Citations
General sources
Cook, D. J. (1986). "Natural pozzolanas". In: Swamy R.N., Editor (1986) Cement Replacement Materials, Surrey University Press, p. 200.
McCann, A. M. (1994). "The Roman Port of Cosa" (273 BC), Scientific American, Ancient Cities, pp. 92–99, by Anna Marguerite McCann. Covers hydraulic concrete, "Pozzolana mortar", the 5 piers of the Cosa harbor, the lighthouse on pier 5, diagrams, and photographs. Height of port city: 100 BC.
External links
Articles
We Finally Know Why Ancient Roman Concrete Was Able to Last Thousands of Years, By Michelle Starr, 29 October 2024, sciencealert.com
Concrete Cement
Pozzolan
[ "Engineering" ]
1,851
[ "Structural engineering", "Concrete" ]
7,200,558
https://en.wikipedia.org/wiki/Impact%20of%20nanotechnology
The impact of nanotechnology extends from its medical, ethical, mental, legal and environmental applications, to fields such as engineering, biology, chemistry, computing, materials science, and communications. Major benefits of nanotechnology include improved manufacturing methods, water purification systems, energy systems, physical enhancement, nanomedicine, better food production methods, nutrition and large-scale infrastructure auto-fabrication. Nanotechnology's reduced size may allow for automation of tasks which were previously inaccessible due to physical restrictions, which in turn may reduce labor, land, or maintenance requirements placed on humans. Potential risks include environmental, health, and safety issues; transitional effects such as displacement of traditional industries as the products of nanotechnology become dominant, which are of concern to privacy rights advocates. These may be particularly important if potential negative effects of nanoparticles are overlooked. Whether nanotechnology merits special government regulation is a controversial issue. Regulatory bodies such as the United States Environmental Protection Agency and the Health and Consumer Protection Directorate of the European Commission have started dealing with the potential risks of nanoparticles. The organic food sector has been the first to act with the regulated exclusion of engineered nanoparticles from certified organic produce, firstly in Australia and the UK, and more recently in Canada, as well as for all food certified to Demeter International standards.

Overview
The presence of nanomaterials (materials that contain nanoparticles) is not in itself a threat. It is only certain aspects that can make them risky, in particular their mobility and their increased reactivity. Only if certain properties of certain nanoparticles were harmful to living beings or the environment would we be faced with a genuine hazard. In this case it can be called nanopollution. In addressing the health and environmental impact of nanomaterials we need to differentiate between two types of nanostructures: (1) Nanocomposites, nanostructured surfaces and nanocomponents (electronic, optical, sensors etc.), where nanoscale particles are incorporated into a substance, material or device (“fixed” nano-particles); and (2) “free” nanoparticles, where at some stage in production or use individual nanoparticles of a substance are present. These free nanoparticles could be nanoscale species of elements, or simple compounds, but also complex compounds where for instance a nanoparticle of a particular element is coated with another substance (“coated” nanoparticle or “core-shell” nanoparticle). There seems to be consensus that, although one should be aware of materials containing fixed nanoparticles, the immediate concern is with free nanoparticles. Nanoparticles are very different from their everyday counterparts, so their adverse effects cannot be derived from the known toxicity of the macro-sized material. This poses significant issues for addressing the health and environmental impact of free nanoparticles. To complicate things further, a powder or liquid containing nanoparticles is almost never monodisperse, but instead contains a range of particle sizes. This complicates the experimental analysis as larger nanoparticles might have different properties from smaller ones.
Also, nanoparticles show a tendency to aggregate, and such aggregates often behave differently from individual nanoparticles. Health impact The health impacts of nanotechnology are the possible effects that the use of nanotechnological materials and devices will have on human health. As nanotechnology is an emerging field, there is great debate regarding to what extent nanotechnology will benefit or pose risks for human health. Nanotechnology's health impacts can be split into two aspects: the potential for nanotechnological innovations to have medical applications to cure disease, and the potential health hazards posed by exposure to nanomaterials. In regards to the current global pandemic, researchers, engineers and medical professionals are using an extremely developed collection of nano science and nanotechnology approaches to explore the ways it could potentially help the medical, technical, and scientific communities to help fight the pandemic. Medical applications Nanomedicine is the medical application of nanotechnology. The approaches to nanomedicine range from the medical use of nanomaterials, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology. Nanomedicine seeks to deliver a valuable set of research tools and clinically helpful devices in the near future. The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging. Neuro-electronic interfaces and other nanoelectronics-based sensors are another active goal of research. Further down the line, the speculative field of molecular nanotechnology believes that cell repair machines could revolutionize medicine and the medical field. Nanomedicine research is directly funded, with the US National Institutes of Health in 2005 funding a five-year plan to set up four nanomedicine centers. In April 2006, the journal Nature Materials estimated that 130 nanotech-based drugs and delivery systems were being developed worldwide. Nanomedicine is a large industry, with nanomedicine sales reaching $6.8 billion in 2004. With over 200 companies and 38 products worldwide, a minimum of $3.8 billion in nanotechnology R&D is being invested every year. As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy. Health hazards Nanotoxicology is the field which studies potential health risks of nanomaterials. The extremely small size of nanomaterials means that they are much more readily taken up by the human body than larger sized particles. How these nanoparticles behave inside the organism is one of the significant issues that needs to be resolved. The behavior of nanoparticles is a function of their size, shape and surface reactivity with the surrounding tissue. For example, they could cause overload on phagocytes, cells that ingest and destroy foreign matter, thereby triggering stress reactions that lead to inflammation and weaken the body's defense against other pathogens. Apart from what happens if non-degradable or slowly degradable nanoparticles accumulate in organs, another concern is their potential interaction with biological processes inside the body: because of their large surface, nanoparticles on exposure to tissue and fluids will immediately adsorb onto their surface some of the macromolecules they encounter. This may, for instance, affect the regulatory mechanisms of enzymes and other proteins. 
Health and environmental issues combine in the workplace of companies engaged in producing or using nanomaterials and in the laboratories engaged in nanoscience and nanotechnology research. It is safe to say that current workplace exposure standards for dusts cannot be applied directly to nanoparticle dusts. The National Institute for Occupational Safety and Health has conducted initial research on how nanoparticles interact with the body's systems and how workers might be exposed to nano-sized particles in the manufacturing or industrial use of nanomaterials. NIOSH currently offers interim guidelines for working with nanomaterials consistent with the best scientific knowledge. At the National Personal Protective Technology Laboratory of NIOSH, studies investigating the filter penetration of nanoparticles on NIOSH-certified and EU marked respirators, as well as non-certified dust masks, have been conducted. These studies found that the most penetrating particle size range was between 30 and 100 nanometers, and leak size was the largest factor in the number of nanoparticles found inside the respirators of the test dummies. Other properties of nanomaterials that influence toxicity include: chemical composition, shape, surface structure, surface charge, aggregation and solubility, and the presence or absence of functional groups of other chemicals. The large number of variables influencing toxicity means that it is difficult to generalise about health risks associated with exposure to nanomaterials – each new nanomaterial must be assessed individually and all material properties must be taken into account. Literature reviews have shown that release of engineered nanoparticles, and the resulting personal exposure, can occur during different work activities. This has alerted regulatory bodies to the need for prevention strategies and regulation in nanotechnology workplaces.

Environmental impact
The environmental impact of nanotechnology is the possible effects that the use of nanotechnological materials and devices will have on the environment. As nanotechnology is an emerging field, there is debate regarding to what extent industrial and commercial use of nanomaterials will affect organisms and ecosystems. Nanotechnology's environmental impact can be split into two aspects: the potential for nanotechnological innovations to help improve the environment, and the possibly novel type of pollution that nanotechnological materials might cause if released into the environment.

Environmental applications
Green nanotechnology refers to the use of nanotechnology to enhance the environmental sustainability of processes producing negative externalities. It also refers to the use of the products of nanotechnology to enhance sustainability. It includes making green nano-products and using nano-products in support of sustainability. Green nanotechnology has been described as the development of clean technologies, "to minimize potential environmental and human health risks associated with the manufacture and use of nanotechnology products, and to encourage replacement of existing products with new nano-products that are more environmentally friendly throughout their lifecycle."
Green nanotechnology has two goals: producing nanomaterials and products without harming the environment or human health, and producing nano-products that provide solutions to environmental problems. It uses existing principles of green chemistry and green engineering to make nanomaterials and nano-products without toxic ingredients, at low temperatures using less energy and renewable inputs wherever possible, and using lifecycle thinking in all design and engineering stages.

Pollution
Nanopollution is a generic name for all waste generated by nanodevices or during the nanomaterials manufacturing process. Nanowaste is mainly the group of particles that are released into the environment, or the particles that are thrown away when still on their products.

Social impact
Beyond the toxicity risks to human health and the environment which are associated with first-generation nanomaterials, nanotechnology has broader societal impact and poses broader social challenges. Social scientists have suggested that nanotechnology's social issues should be understood and assessed not simply as "downstream" risks or impacts. Rather, the challenges should be factored into "upstream" research and decision-making in order to ensure technology development that meets social objectives. Many social scientists and organizations in civil society suggest that technology assessment and governance should also involve public participation. The exploration of stakeholders' perceptions is also an essential component in assessing the large amount of risk associated with nanotechnology and nano-related products. Over 800 nano-related patents were granted in 2003, with numbers increasing to nearly 19,000 internationally by 2012. Corporations are already taking out broad-ranging patents on nanoscale discoveries and inventions. For example, two corporations, NEC and IBM, hold the basic patents on carbon nanotubes, one of the current cornerstones of nanotechnology. Carbon nanotubes have a wide range of uses, and look set to become crucial to several industries, from electronics and computers to strengthened materials to drug delivery and diagnostics. Nanotechnologies may provide new solutions for the millions of people in developing countries who lack access to basic services, such as safe water, reliable energy, health care, and education. The 2004 UN Task Force on Science, Technology and Innovation noted that some of the advantages of nanotechnology include production using little labor, land, or maintenance, high productivity, low cost, and modest requirements for materials and energy. However, concerns are frequently raised that the claimed benefits of nanotechnology will not be evenly distributed, and that any benefits (including technical and/or economic) associated with nanotechnology will only reach affluent nations. Longer-term concerns center on the impact that new technologies will have on society at large, and whether these could possibly lead to either a post-scarcity economy, or alternatively exacerbate the wealth gap between developed and developing nations. The effects of nanotechnology on society as a whole, on human health and the environment, on trade, on security, on food systems and even on the definition of "human", have not been characterized or politicized.

Regulation
Significant debate exists relating to the question of whether nanotechnology or nanotechnology-based products merit special government regulation.
This debate is related to the circumstances in which it is necessary and appropriate to assess new substances prior to their release into the market, community and environment. Regulatory bodies such as the United States Environmental Protection Agency and the Food and Drug Administration in the U.S. or the Health & Consumer Protection Directorate of the European Commission have started dealing with the potential risks posed by nanoparticles. So far, neither engineered nanoparticles nor the products and materials that contain them are subject to any special regulation regarding production, handling or labelling. The Material Safety Data Sheet that must be issued for some materials often does not differentiate between bulk and nanoscale size of the material in question, and even when it does, these MSDS are advisory only. The new advances and rapid growth within the field of nanotechnology, in particular the invention of smart and active packaging, nano sensors, nano pesticides, and nano fertilizers, have large implications for the traditional food and agriculture sectors of the world and will in turn lead to regulation. Limited nanotechnology labeling and regulation may exacerbate potential human and environmental health and safety issues associated with nanotechnology. It has been argued that the development of comprehensive regulation of nanotechnology will be vital to ensure that the potential risks associated with the research and commercial application of nanotechnology do not overshadow its potential benefits. Regulation may also be required to meet community expectations about responsible development of nanotechnology, as well as ensuring that public interests are included in shaping the development of nanotechnology. In 2008, E. Marla Felcher, in "The Consumer Product Safety Commission and Nanotechnology," suggested that the Consumer Product Safety Commission, which is charged with protecting the public against unreasonable risks of injury or death associated with consumer products, is ill-equipped to oversee the safety of complex, high-tech products made using nanotechnology.

See also
Fail-safes in nanotechnology
International Center for Technology Assessment

References
Further reading
Fritz Allhoff, Patrick Lin, and Daniel Moore, What Is Nanotechnology and Why Does It Matter?: From Science to Ethics (Oxford: Wiley-Blackwell, 2010).
Fritz Allhoff and Patrick Lin (eds.), Nanotechnology & Society: Current and Emerging Ethical Issues (Dordrecht: Springer, 2008).
Fritz Allhoff, Patrick Lin, James Moor, and John Weckert (eds.), Nanoethics: The Ethical and Societal Implications of Nanotechnology (Hoboken: John Wiley & Sons, 2007).
Kaldis, Byron. "Epistemology of Nanotechnology". Sage Encyclopedia of Nanoscience and Society (Thousand Oaks, CA: Sage, 2010).
Approaches to Safe Nanotechnology: An Information Exchange with NIOSH, United States National Institute for Occupational Safety and Health, June 2007, DHHS (NIOSH) publication no. 2007-123 - provides a global overview of the state of nanotechnology and society in Europe, the US, Japan and Canada, and examines the ethics, the environmental and public health risks, and the governance and regulation of this technology.
Dónal P O'Mathúna, Nanoethics: Big Ethical Issues with Small Technology (London & New York: Continuum, 2009).
External links
U.S.
National Nanotechnology Initiative, Societal Dimensions Nanotechnology Now USC's Nanoscience & Technology Studies NELSI Global ASU's Center on Nanotechnology and Society UCSB's Center on Nanotechnology and Society The Nanoethics Group Nanotechnology Foresight Nanotech Institute Center for Responsible Nanotechnology The Center for Biological and Environmental Nanotechnology The International Council on Nanotechnology The NanoEthicsBank NanoEthics: Ethics for Technologies that Converge at the Nanoscale National Institute for Occupational Safety and Health Nanotechnology topic page UnderstandingNano European Center for the Sustainable Impact of Nanotechnology Center for the Environmental Implications of NanoTechnology Ethics of science and technology Nanotechnology Occupational safety and health Nanotechnology
Impact of nanotechnology
[ "Materials_science", "Technology", "Engineering" ]
3,454
[ "Nanotechnology", "Nanotechnology and the environment", "Materials science", "Ethics of science and technology" ]
7,200,923
https://en.wikipedia.org/wiki/N-Acetylneuraminic%20acid
N-Acetylneuraminic acid (Neu5Ac or NANA) is the predominant sialic acid found in human cells, and many mammalian cells. Other forms, such as N-Glycolylneuraminic acid, may also occur in cells. This residue is negatively charged at physiological pH and is found in complex glycans on mucins and glycoproteins found at the cell membrane. Neu5Ac residues are also found in glycolipids, known as gangliosides, a crucial component of neuronal membranes found in the brain. Along with involvement in preventing infections (mucus associated with mucous membranes—mouth, nose, GI, respiratory tract), Neu5Ac acts as a receptor for influenza viruses, allowing attachment to mucous cells via hemagglutinin (an early step in acquiring influenzavirus infection). In the biology of bacterial pathogens Neu5Ac is also important in the biology of a number of pathogenic and symbiotic bacteria as it can be used either as a nutrient, providing both carbon and nitrogen to the bacteria, or in some pathogens, can be activated and placed on the cell surface. Bacteria have evolved transporters for Neu5Ac to enable them to capture it from their environment and a number of these have been characterized including the NanT protein from Escherichia coli, the SiaPQM TRAP transporter from Haemophilus influenzae and the SatABCD ABC transporter from Haemophilus ducreyi. Medical use In Japan, Neu5Ac is approved under the trade name Acenobel for the treatment of distal myopathy with rimmed vacuoles. See also Neuraminic acid N-Glycolylneuraminic acid Sialic acid References Amino sugars Sugar acids Monosaccharides
N-Acetylneuraminic acid
[ "Chemistry" ]
388
[ "Amino sugars", "Carbohydrates", "Monosaccharides", "Sugar acids" ]
7,200,934
https://en.wikipedia.org/wiki/Silicon%20photonics
Silicon photonics is the study and application of photonic systems which use silicon as an optical medium. The silicon is usually patterned with sub-micrometre precision into microphotonic components. These operate in the infrared, most commonly at the 1.55 micrometre wavelength used by most fiber optic telecommunication systems. The silicon typically lies on top of a layer of silica in what (by analogy with a similar construction in microelectronics) is known as silicon on insulator (SOI). Silicon photonic devices can be made using existing semiconductor fabrication techniques, and because silicon is already used as the substrate for most integrated circuits, it is possible to create hybrid devices in which the optical and electronic components are integrated onto a single microchip. Consequently, silicon photonics is being actively researched by many electronics manufacturers including IBM and Intel, as well as by academic research groups, as a means for keeping on track with Moore's Law, by using optical interconnects to provide faster data transfer both between and within microchips. The propagation of light through silicon devices is governed by a range of nonlinear optical phenomena including the Kerr effect, the Raman effect, two-photon absorption and interactions between photons and free charge carriers. The presence of nonlinearity is of fundamental importance, as it enables light to interact with light, thus permitting applications such as wavelength conversion and all-optical signal routing, in addition to the passive transmission of light. Silicon waveguides are also of great academic interest due to their unique guiding properties: they can be used for communications, interconnects and biosensors, and they offer the possibility of supporting exotic nonlinear optical phenomena such as soliton propagation.

Applications
Optical communications
In a typical optical link, data is first transferred from the electrical to the optical domain using an electro-optic modulator or a directly modulated laser. An electro-optic modulator can vary the intensity and/or the phase of the optical carrier. In silicon photonics, a common technique to achieve modulation is to vary the density of free charge carriers. Variations of electron and hole densities change the real and the imaginary part of the refractive index of silicon as described by the empirical equations of Soref and Bennett. Modulators can be based on forward-biased PIN diodes, which generally generate large phase shifts but suffer from lower speeds, as well as on reverse-biased p–n junctions. A prototype optical interconnect with microring modulators integrated with germanium detectors has been demonstrated. Non-resonant modulators, such as Mach-Zehnder interferometers, have typical dimensions in the millimeter range and are usually used in telecom or datacom applications. Resonant devices, such as ring-resonators, can have dimensions of only a few tens of micrometers, therefore occupying much smaller areas. In 2013, researchers demonstrated a resonant depletion modulator that can be fabricated using standard Silicon-on-Insulator Complementary Metal-Oxide-Semiconductor (SOI CMOS) manufacturing processes. A similar device has also been demonstrated in bulk CMOS rather than in SOI. On the receiver side, the optical signal is typically converted back to the electrical domain using a semiconductor photodetector.
The semiconductor used for carrier generation usually has a band-gap smaller than the photon energy, and the most common choice is pure germanium. Most detectors use a p–n junction for carrier extraction; however, detectors based on metal–semiconductor junctions (with germanium as the semiconductor) have been integrated into silicon waveguides as well. More recently, silicon-germanium avalanche photodiodes capable of operating at 40 Gbit/s have been fabricated. Complete transceivers have been commercialized in the form of active optical cables. Optical communications are conveniently classified by the reach, or length, of their links. The majority of silicon photonic communications have so far been limited to telecom and datacom applications, where the reach is several kilometers or several meters, respectively. Silicon photonics, however, is expected to play a significant role in computercom as well, where optical links have a reach in the centimeter to meter range. In fact, progress in computer technology (and the continuation of Moore's Law) is becoming increasingly dependent on faster data transfer between and within microchips. Optical interconnects may provide a way forward, and silicon photonics may prove particularly useful, once integrated on standard silicon chips. In 2006, Intel Senior Vice President (and future CEO) Pat Gelsinger stated that, "Today, optics is a niche technology. Tomorrow, it's the mainstream of every chip that we build." In 2010 Intel demonstrated a 50 Gbit/s connection made with silicon photonics. The first microprocessor with optical input/output (I/O) was demonstrated in December 2015 using an approach known as "zero-change" CMOS photonics. This is known as fiber-to-the-processor. This first demonstration was based on a 45 nm SOI node, and the bi-directional chip-to-chip link was operated at a rate of 2×2.5 Gbit/s. The total energy consumption of the link was calculated to be 16 pJ/b and was dominated by the contribution of the off-chip laser. Some researchers believe an on-chip laser source is required. Others think that it should remain off-chip because of thermal problems (the quantum efficiency decreases with temperature, and computer chips are generally hot) and because of CMOS-compatibility issues. One such device is the hybrid silicon laser, in which the silicon is bonded to a different semiconductor (such as indium phosphide) as the lasing medium. Other devices include the all-silicon Raman laser and the all-silicon Brillouin laser, wherein silicon serves as the lasing medium. In 2012, IBM announced that it had achieved optical components at the 90 nanometer scale that can be manufactured using standard techniques and incorporated into conventional chips. In September 2013, Intel announced technology to transmit data at speeds of 100 gigabits per second along a cable approximately five millimeters in diameter for connecting servers inside data centers. Conventional PCI-E data cables carry data at up to eight gigabits per second, while networking cables reach 40 Gbit/s. The latest version of the USB standard tops out at ten Gbit/s. The technology does not directly replace existing cables in that it requires a separate circuit board to interconvert electrical and optical signals. Its advanced speed offers the potential of reducing the number of cables that connect blades on a rack and even of separating processor, storage and memory into separate blades to allow more efficient cooling and dynamic configuration.
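For a rough sense of scale (an illustrative back-of-the-envelope calculation only, which assumes the 16 pJ/b figure quoted above would stay constant as the data rate changes, something real links do not guarantee), the electrical power dissipated by a link is simply the energy per bit multiplied by the bit rate:

$$P = E_b \times R:\qquad 16~\mathrm{pJ/bit} \times 5~\mathrm{Gbit/s} = 80~\mathrm{mW},\qquad 16~\mathrm{pJ/bit} \times 100~\mathrm{Gbit/s} = 1.6~\mathrm{W}$$

so the 2×2.5 Gbit/s demonstration link corresponds to roughly 80 mW, while a 100 Gbit/s link at the same per-bit energy would dissipate about 1.6 W.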
Graphene photodetectors have the potential to surpass germanium devices in several important aspects, although they remain about one order of magnitude behind current generation capacity, despite rapid improvement. Graphene devices can work at very high frequencies, and could in principle reach higher bandwidths. Graphene can absorb a broader range of wavelengths than germanium. That property could be exploited to transmit more data streams simultaneously in the same beam of light. Unlike germanium detectors, graphene photodetectors do not require applied voltage, which could reduce energy needs. Finally, graphene detectors in principle permit a simpler and less expensive on-chip integration. However, graphene does not strongly absorb light. Pairing a silicon waveguide with a graphene sheet better routes light and maximizes interaction. The first such device was demonstrated in 2011. Manufacturing such devices using conventional manufacturing techniques has not been demonstrated.

Optical routers and signal processors
Another application of silicon photonics is in signal routers for optical communication. Construction can be greatly simplified by fabricating the optical and electronic parts on the same chip, rather than having them spread across multiple components. A wider aim is all-optical signal processing, whereby tasks which are conventionally performed by manipulating signals in electronic form are done directly in optical form. An important example is all-optical switching, whereby the routing of optical signals is directly controlled by other optical signals. Another example is all-optical wavelength conversion. In 2013, a startup company named "Compass-EOS", based in California and in Israel, was the first to present a commercial silicon photonics router.

Long range telecommunications using silicon photonics
Silicon microphotonics can potentially increase the Internet's bandwidth capacity by providing micro-scale, ultra low power devices. Furthermore, the power consumption of datacenters may be significantly reduced if this is successfully achieved. Researchers at Sandia, Kotura, NTT, Fujitsu and various academic institutes have been attempting to prove this functionality. A 2010 paper reported on a prototype 80 km, 12.5 Gbit/s transmission using microring silicon devices.

Light-field displays
As of 2015, US startup company Magic Leap is working on a light-field chip using silicon photonics for the purpose of an augmented reality display.

Artificial intelligence
Silicon photonics has been used in artificial intelligence inference processors that are more energy efficient than those using conventional transistors. This can be done using Mach-Zehnder interferometers (MZIs), which can be combined with nanoelectromechanical systems to modulate the light passing through them by physically bending the MZI, which changes the phase of the light.

Physical properties
Optical guiding and dispersion tailoring
Silicon is transparent to infrared light with wavelengths above about 1.1 micrometres. Silicon also has a very high refractive index, of about 3.5. The tight optical confinement provided by this high index allows for microscopic optical waveguides, which may have cross-sectional dimensions of only a few hundred nanometers. Single mode propagation can be achieved, thus (like single-mode optical fiber) eliminating the problem of modal dispersion. The strong dielectric boundary effects that result from this tight confinement substantially alter the optical dispersion relation.
By selecting the waveguide geometry, it is possible to tailor the dispersion to have desired properties, which is of crucial importance to applications requiring ultrashort pulses. In particular, the group velocity dispersion (that is, the extent to which group velocity varies with wavelength) can be closely controlled. In bulk silicon at 1.55 micrometres, the group velocity dispersion (GVD) is normal in that pulses with longer wavelengths travel with higher group velocity than those with shorter wavelength. By selecting a suitable waveguide geometry, however, it is possible to reverse this, and achieve anomalous GVD, in which pulses with shorter wavelengths travel faster. Anomalous dispersion is significant, as it is a prerequisite for soliton propagation, and modulational instability. In order for the silicon photonic components to remain optically independent from the bulk silicon of the wafer on which they are fabricated, it is necessary to have a layer of intervening material. This is usually silica, which has a much lower refractive index (of about 1.44 in the wavelength region of interest), and thus light at the silicon-silica interface will (like light at the silicon-air interface) undergo total internal reflection, and remain in the silicon. This construct is known as silicon on insulator. It is named after the technology of silicon on insulator in electronics, whereby components are built upon a layer of insulator in order to reduce parasitic capacitance and so improve performance. Silicon photonic devices have also been built with silicon nitride as the material of the optical waveguides.

Kerr nonlinearity
Silicon has a focusing Kerr nonlinearity, in that the refractive index increases with optical intensity. This effect is not especially strong in bulk silicon, but it can be greatly enhanced by using a silicon waveguide to concentrate light into a very small cross-sectional area. This allows nonlinear optical effects to be seen at low powers. The nonlinearity can be enhanced further by using a slot waveguide, in which the high refractive index of the silicon is used to confine light into a central region filled with a strongly nonlinear polymer. Kerr nonlinearity underlies a wide variety of optical phenomena. One example is four wave mixing, which has been applied in silicon to realise optical parametric amplification, parametric wavelength conversion, and frequency comb generation. Kerr nonlinearity can also cause modulational instability, in which it reinforces deviations from an optical waveform, leading to the generation of spectral sidebands and the eventual breakup of the waveform into a train of pulses. Another example (as described below) is soliton propagation.

Two-photon absorption
Silicon exhibits two-photon absorption (TPA), in which a pair of photons can act to excite an electron-hole pair. This process is related to the Kerr effect, and by analogy with complex refractive index, can be thought of as the imaginary part of a complex Kerr nonlinearity. At the 1.55 micrometre telecommunication wavelength, this imaginary part is approximately 10% of the real part. The influence of TPA is highly disruptive, as it both wastes light, and generates unwanted heat. It can be mitigated, however, either by switching to longer wavelengths (at which the TPA to Kerr ratio drops), or by using slot waveguides (in which the internal nonlinear material has a lower TPA to Kerr ratio).
Alternatively, the energy lost through TPA can be partially recovered (as is described below) by extracting it from the generated charge carriers. Free charge carrier interactions The free charge carriers within silicon can both absorb photons and change its refractive index. This is particularly significant at high intensities and for long durations, due to the carrier concentration being built up by TPA. The influence of free charge carriers is often (but not always) unwanted, and various means have been proposed to remove them. One such scheme is to implant the silicon with helium in order to enhance carrier recombination. A suitable choice of geometry can also be used to reduce the carrier lifetime. Rib waveguides (in which the waveguides consist of thicker regions in a wider layer of silicon) enhance both the carrier recombination at the silica-silicon interface and the diffusion of carriers from the waveguide core. A more advanced scheme for carrier removal is to integrate the waveguide into the intrinsic region of a PIN diode, which is reverse biased so that the carriers are attracted away from the waveguide core. A more sophisticated scheme still, is to use the diode as part of a circuit in which voltage and current are out of phase, thus allowing power to be extracted from the waveguide. The source of this power is the light lost to two photon absorption, and so by recovering some of it, the net loss (and the rate at which heat is generated) can be reduced. As is mentioned above, free charge carrier effects can also be used constructively, in order to modulate the light. Second-order nonlinearity Second-order nonlinearities cannot exist in bulk silicon because of the centrosymmetry of its crystalline structure. By applying strain however, the inversion symmetry of silicon can be broken. This can be obtained for example by depositing a silicon nitride layer on a thin silicon film. Second-order nonlinear phenomena can be exploited for optical modulation, spontaneous parametric down-conversion, parametric amplification, ultra-fast optical signal processing and mid-infrared generation. Efficient nonlinear conversion however requires phase matching between the optical waves involved. Second-order nonlinear waveguides based on strained silicon can achieve phase matching by dispersion-engineering. So far, however, experimental demonstrations are based only on designs which are not phase matched. It has been shown that phase matching can be obtained as well in silicon double slot waveguides coated with a highly nonlinear organic cladding and in periodically strained silicon waveguides. The Raman effect Silicon exhibits the Raman effect, in which a photon is exchanged for a photon with a slightly different energy, corresponding to an excitation or a relaxation of the material. Silicon's Raman transition is dominated by a single, very narrow frequency peak, which is problematic for broadband phenomena such as Raman amplification, but is beneficial for narrowband devices such as Raman lasers. Early studies of Raman amplification and Raman lasers started at UCLA which led to demonstration of net gain Silicon Raman amplifiers and silicon pulsed Raman laser with fiber resonator (Optics express 2004). Consequently, all-silicon Raman lasers have been fabricated in 2005. The Brillouin effect In the Raman effect, photons are red- or blue-shifted by optical phonons with a frequency of about 15 THz. However, silicon waveguides also support acoustic phonon excitations. 
The interaction of these acoustic phonons with light is called Brillouin scattering. The frequencies and mode shapes of these acoustic phonons are dependent on the geometry and size of the silicon waveguides, making it possible to produce strong Brillouin scattering at frequencies ranging from a few MHz to tens of GHz. Stimulated Brillouin scattering has been used to make narrowband optical amplifiers as well as all-silicon Brillouin lasers. The interaction between photons and acoustic phonons is also studied in the field of cavity optomechanics, although 3D optical cavities are not necessary to observe the interaction. For instance, besides silicon waveguides, optomechanical coupling has also been demonstrated in fibers and in chalcogenide waveguides. Solitons The evolution of light through silicon waveguides can be approximated with a cubic nonlinear Schrödinger equation, which is notable for admitting sech-like soliton solutions. These optical solitons (which are also known from optical fibers) result from a balance between self-phase modulation (which causes the leading edge of the pulse to be redshifted and the trailing edge blueshifted) and anomalous group velocity dispersion. Such solitons have been observed in silicon waveguides, by groups at Columbia University and the universities of Rochester and Bath. See also Photonic integrated circuit Optical computing Silicon Photonics Cloud References Nonlinear optics Photonics Silicon
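As a compact restatement of the soliton balance described in the Solitons paragraph above, the cubic nonlinear Schrödinger equation and its fundamental sech-shaped solution can be written in the usual textbook form, with beta_2 the group velocity dispersion and gamma the Kerr nonlinear parameter; the notation below is a standard summary, not a reproduction of this article's own formulas.

```latex
i\,\frac{\partial A}{\partial z} = \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2} - \gamma\,|A|^2 A,
\qquad
A(z,T) = \sqrt{P_0}\,\operatorname{sech}\!\left(\frac{T}{T_0}\right) e^{\,i \gamma P_0 z / 2},
\qquad
P_0 = \frac{|\beta_2|}{\gamma\,T_0^{2}}, \quad \beta_2 < 0 .
```

The requirement beta_2 < 0 is the anomalous-dispersion prerequisite mentioned earlier, and the relation P_0 = |beta_2|/(gamma T_0^2) expresses the balance between self-phase modulation and dispersion that sustains the pulse.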
Silicon photonics
[ "Materials_science" ]
3,757
[ "Nanotechnology", "Silicon photonics" ]
7,201,415
https://en.wikipedia.org/wiki/Biochemistry%20of%20Alzheimer%27s%20disease
The biochemistry of Alzheimer's disease, the most common cause of dementia, is not yet very well understood. Alzheimer's disease (AD) has been identified as a proteopathy: a protein misfolding disease due to the accumulation of abnormally folded amyloid beta (Aβ) protein in the brain. Amyloid beta is a short peptide that is an abnormal proteolytic byproduct of the transmembrane protein amyloid-beta precursor protein (APP), whose function is unclear but thought to be involved in neuronal development. The presenilins are components of a proteolytic complex involved in APP processing and degradation. Amyloid beta monomers are soluble and contain short regions of beta sheet and polyproline II helix secondary structures in solution, though they are largely alpha helical in membranes; however, at sufficiently high concentration, they undergo a dramatic conformational change to form a beta sheet-rich tertiary structure that aggregates to form amyloid fibrils. These fibrils and oligomeric forms of Aβ deposit outside neurons in formations known as senile plaques. There are different types of plaques, including the diffuse, compact, cored or neuritic plaque types, as well as Aβ deposits in the walls of small blood vessels in the brain, called cerebral amyloid angiopathy. AD is also considered a tauopathy due to abnormal aggregation of the tau protein, a microtubule-associated protein expressed in neurons that normally acts to stabilize microtubules in the cell cytoskeleton. Like most microtubule-associated proteins, tau is normally regulated by phosphorylation; however, in Alzheimer's disease, hyperphosphorylated tau accumulates as paired helical filaments that in turn aggregate into masses inside nerve cell bodies known as neurofibrillary tangles and as dystrophic neurites associated with amyloid plaques. Although little is known about the process of filament assembly, depletion of a prolyl isomerase protein in the parvulin family has been shown to accelerate the accumulation of abnormal tau. Neuroinflammation is also involved in the complex cascade leading to AD pathology and symptoms. Considerable pathological and clinical evidence documents immunological changes associated with AD, including increased pro-inflammatory cytokine concentrations in the blood and cerebrospinal fluid. Whether these changes may be a cause or consequence of AD remains to be fully understood, but inflammation within the brain, including increased reactivity of the resident microglia towards amyloid deposits, has been implicated in the pathogenesis and progression of AD. Much of the known biochemistry of Alzheimer's disease has been deciphered through research using experimental models of Alzheimer's disease. Neuropathology At a macroscopic level, AD is characterized by loss of neurons and synapses in the cerebral cortex and certain subcortical regions. This results in gross atrophy of the affected regions, including degeneration in the temporal lobe and parietal lobe, and parts of the frontal cortex and cingulate gyrus. Both amyloid plaques and neurofibrillary tangles are clearly visible by microscopy in AD brains. Plaques are dense, mostly insoluble deposits of protein and cellular material outside and around neurons. Tangles are insoluble twisted fibers that build up inside the nerve cell. Though many older people develop some plaques and tangles, the brains of AD patients have them to a much greater extent and in different brain locations. 
Biochemical characteristics Fundamental to the understanding of Alzheimer's disease are the biochemical events that lead to the accumulation of amyloid-beta plaques and tau-protein tangles. A delicate balance of the secretase enzymes regulates amyloid-beta accumulation. Recently, a link between cholinergic neuronal activity and the activity of alpha-secretase has been highlighted, which can discourage the deposition of amyloid-beta proteins in the brains of patients with Alzheimer's disease. Alzheimer's disease has been identified as a protein misfolding disease, or proteopathy, due to the accumulation of abnormally folded amyloid-beta proteins in the brains of AD patients. Abnormal amyloid-beta accumulation can first be detected using cerebrospinal fluid analysis and later using positron emission tomography (PET). Although AD shares pathophysiological mechanisms with prion diseases, it is not transmissible in the wild, as prion diseases are. Any transmissibility that it may have is limited solely to extremely rare iatrogenic events from donor-derived therapies that are no longer used. Amyloid-beta, also written Aβ, is a short peptide that is a proteolytic byproduct of the transmembrane protein amyloid precursor protein (APP), whose function is unclear but thought to be involved in neuronal development. The presenilins are components of a proteolytic complex involved in APP processing and degradation. Although amyloid beta monomers are harmless, they undergo a dramatic conformational change at sufficiently high concentration to form a beta sheet-rich tertiary structure that aggregates to form amyloid fibrils that deposit outside neurons in dense formations known as senile plaques or neuritic plaques, in less dense aggregates as diffuse plaques, and sometimes in the walls of small blood vessels in the brain in a process called amyloid angiopathy or congophilic angiopathy. AD is also considered a tauopathy due to abnormal aggregation of the tau protein, a microtubule-associated protein expressed in neurons that normally acts to stabilize microtubules in the cell cytoskeleton. Like most microtubule-associated proteins, tau is normally regulated by phosphorylation; however, in AD patients, hyperphosphorylated tau accumulates as paired helical filaments that in turn aggregate into masses inside nerve cell bodies known as neurofibrillary tangles and as dystrophic neurites associated with amyloid plaques. Levels of the neurotransmitter acetylcholine (ACh) are reduced. Levels of the other neurotransmitters serotonin, norepinephrine, and somatostatin are also often reduced. Replenishing ACh with anti-cholinesterases is an FDA-approved mode of treatment. An alternative method of stimulating ACh receptors of the M1-M3 types with synthetic agonists that have a slower rate of dissociation from the receptor has been proposed as a next-generation cholinomimetic approach in Alzheimer's disease. Disease mechanisms While the gross histological features of AD in the brain have been well characterized, several different hypotheses have been advanced regarding the primary cause. Among the oldest hypotheses is the cholinergic hypothesis, which suggests that deficiency in cholinergic signaling initiates the progression of the disease. Current theories establish that both misfolding of tau protein inside the cell and aggregation of amyloid beta outside the cell initiate the cascade leading to AD pathology. 
Newer potential hypotheses propose metabolic factors, vascular disturbance, and chronically elevated inflammation in the brain as contributing factors to AD. The amyloid beta hypothesis of molecular initiation has become dominant among many researchers to date. The amyloid and tau hypotheses are the most widely accepted. Tau hypothesis The hypothesis that tau is the primary causative factor has long been grounded in the observation that deposition of amyloid plaques does not correlate well with neuron loss. A mechanism for neurotoxicity has been proposed based on the loss of microtubule-stabilizing tau protein that leads to the degradation of the cytoskeleton. However, consensus has not been reached on whether tau hyperphosphorylation precedes or is caused by the formation of the abnormal helical filament aggregates. Support for the tau hypothesis also derives from the existence of other diseases known as tauopathies in which the same protein is identifiably misfolded. However, a majority of researchers support the alternative hypothesis that amyloid is the primary causative agent. Amyloid hypothesis The amyloid hypothesis was proposed because the gene for the amyloid beta precursor APP is located on chromosome 21, and patients with trisomy 21 – better known as Down syndrome – who have an extra gene copy exhibit AD-like disorders by 40 years of age. The amyloid hypothesis points to the cytotoxicity of mature aggregated amyloid fibrils, which are believed to be the toxic form of the protein responsible for disrupting the cell's calcium ion homeostasis and thus inducing apoptosis. This hypothesis is supported by the observation that higher levels of a variant of the beta amyloid protein known to form fibrils faster in vitro correlate with earlier onset and greater cognitive impairment in mouse models and with AD diagnosis in humans. However, mechanisms for the induced calcium influx by mature fibrils, or proposals for alternative cytotoxic mechanisms, are not obvious. A more recent variation of the amyloid hypothesis identifies the cytotoxic species as an intermediate misfolded form of amyloid beta, neither a soluble monomer nor a mature aggregated polymer but an oligomeric species, possibly toroidal or star-shaped with a central channel that may induce apoptosis by physically piercing the cell membrane. This ion channel hypothesis postulates that oligomers of soluble, non-fibrillar Aβ form membrane ion channels allowing unregulated calcium influx into neurons. A related alternative suggests that a globular oligomer localized to dendritic processes and axons in neurons is the cytotoxic species. The prefibrillar aggregates were shown to be able to disrupt the membrane. The cytotoxic-fibril hypothesis presents a clear target for drug development: inhibit the fibrillization process. Much early development work on lead compounds has focused on this inhibition; most are also reported to reduce neurotoxicity, but the toxic-oligomer theory would imply that prevention of oligomeric assembly is the more important process or that a better target lies upstream, for example in the inhibition of APP processing to amyloid beta. For example, apomorphine was seen to significantly improve memory function, as measured by increased successful completion of the Morris water maze. 
Soluble intracellular (o)Aβ42 Two papers have shown that oligomeric (o)Aβ42 (a species of Aβ), in soluble intracellular form, acutely inhibits synaptic transmission, a pathophysiology that characterizes AD (in its early stages), by activating casein kinase 2. Inflammatory hypothesis Converging evidence suggests that a sustained inflammatory response in the brain is a core modifying feature of AD pathology and may be a key modifying factor in AD pathogenesis. The brains of AD patients exhibit several markers of increased inflammatory signaling. The inflammatory hypothesis proposes that chronically elevated inflammation in the brain is a crucial component of the amyloid cascade in the early phases of AD and magnifies disease severity in later stages of AD. Aβ is present in healthy brains and serves a vital physiological function in recovery from neuronal injury, protection from infection, and repair of the blood-brain barrier; however, it is unknown how Aβ production starts to exceed the clearance capacity of the brain and initiates AD progression. A possible explanation is that Aβ causes microglia, the resident immune cells of the brain, to become activated and secrete pro-inflammatory signaling molecules, called cytokines, which recruit other local microglia. While acute microglial activation, as in response to injury, is beneficial and allows microglia to clear Aβ and other cellular debris via phagocytosis, chronically activated microglia exhibit decreased efficiency in Aβ clearance. Despite this reduced Aβ clearance capacity, activated microglia continue to secrete pro-inflammatory cytokines like interleukins 1β and 6 (IL-1β, IL-6) and tumor necrosis factor-alpha (TNF-α), as well as reactive oxygen species which disrupt healthy synaptic functioning and eventually cause neuronal death. The loss of synaptic functioning and later neuronal death are responsible for the cognitive impairments and loss of volume in key brain regions which are associated with AD. IL-1β, IL-6, and TNF-α cause further production of Aβ oligomers, as well as tau hyperphosphorylation, leading to continued microglia activation and creating a feed-forward mechanism in which Aβ production is increased and Aβ clearance is decreased, eventually causing the formation of Aβ plaques. Historical cholinergic hypothesis The cholinergic hypothesis of AD development was first proposed in 1976 by Peter Davies and A.J.F. Maloney. It claimed that Alzheimer's begins as a deficiency in the production of acetylcholine, a vital neurotransmitter. Much early therapeutic research was based on this hypothesis, including restoration of the "cholinergic nuclei". The possibility of cell-replacement therapy was investigated on the basis of this hypothesis. All of the first-generation anti-Alzheimer's medications are based on this hypothesis and work to preserve acetylcholine by inhibiting acetylcholinesterases (enzymes that break down acetylcholine). These medications, though sometimes beneficial, have not led to a cure. In all cases, they have served to only treat symptoms of the disease and have neither halted nor reversed it. These results and other research have led to the conclusion that acetylcholine deficiencies may not be directly causal, but are a result of widespread brain tissue damage, damage so widespread that cell-replacement therapies are likely to be impractical. 
More recent findings center on the effects of the misfolded and aggregated proteins, amyloid beta and tau: tau protein abnormalities may initiate the disease cascade, then beta amyloid deposits progress the disease. Glucose consumption The human brain is one of the most metabolically active organs in the body and metabolizes a large amount of glucose to produce cellular energy in the form of adenosine triphosphate (ATP). Despite its high energy demands, the brain is relatively inflexible in its ability to utilize substrates for energy production and relies almost entirely on circulating glucose for its energy needs. This dependence on glucose puts the brain at risk if the supply of glucose is interrupted, or if its ability to metabolize glucose becomes defective. If the brain is not able to produce ATP, synapses cannot be maintained and cells cannot function, ultimately leading to impaired cognition. Imaging studies have shown decreased utilization of glucose in the brains of Alzheimer's disease patients early in the disease, before clinical signs of cognitive impairment occur. This decrease in glucose metabolism worsens as clinical symptoms develop and the disease progresses. Studies have found a 17%-24% decline in cerebral glucose metabolism in patients with Alzheimer's disease, compared with age-matched controls. Numerous imaging studies have since confirmed this observation. Abnormally low rates of cerebral glucose metabolism are found in a characteristic pattern in the Alzheimer's disease brain, particularly in the posterior cingulate, parietal, temporal, and prefrontal cortices. These brain regions are believed to control multiple aspects of memory and cognition. This metabolic pattern is reproducible and has even been proposed as a diagnostic tool for Alzheimer's disease. Moreover, diminished cerebral glucose metabolism (DCGM) correlates with plaque density and cognitive deficits in patients with more advanced disease. Diminished cerebral glucose metabolism (DCGM) may not be solely an artifact of brain cell loss since it occurs in asymptomatic patients at risk for Alzheimer's disease, such as patients homozygous for the epsilon 4 variant of the apolipoprotein E gene (APOE4, a genetic risk factor for Alzheimer's disease), as well as in inherited forms of Alzheimer's disease. Given that DCGM occurs before other clinical and pathological changes occur, it is unlikely to be due to the gross cell loss observed in Alzheimer's disease. In imaging studies involving young adult APOE4 carriers, where there were no signs of cognitive impairment, diminished cerebral glucose metabolism (DCGM) was detected in the same areas of the brain as older subjects with Alzheimer's disease. However, DCGM is not exclusive to APOE4 carriers. By the time Alzheimer's has been diagnosed, DCGM occurs in genotypes APOE3/E4, APOE3/E3, and APOE4/E4. Thus, DCGM is a metabolic biomarker for the disease state. Insulin signaling A connection has been established between Alzheimer's disease and diabetes during the past decade, as insulin resistance, which is a characteristic hallmark of diabetes, has also been observed in brains of subjects with Alzheimer's disease. Neurotoxic oligomeric amyloid-β species decrease the expression of insulin receptors on the neuronal cell surface and abolish neuronal insulin signaling. It has been suggested that neuronal gangliosides, which take part in the formation of membrane lipid microdomains, facilitate amyloid-β-induced removal of the insulin receptors from the neuronal surface. 
In Alzheimer's disease, oligomeric amyloid-β species trigger TNF-α signaling. c-Jun N-terminal kinase activation by TNF-α in turn activates stress-related kinases and results in IRS-1 serine phosphorylation, which subsequently blocks downstream insulin signaling. The resulting insulin resistance contributes to cognitive impairment. Consequently, increasing neuronal insulin sensitivity and signaling may constitute a novel therapeutic approach to treat Alzheimer's disease. Oxidative stress Oxidative stress is emerging as a key factor in the pathogenesis of AD. Reactive oxygen species (ROS) over-production is thought to play a critical role in the accumulation and deposition of amyloid beta in AD. Brains of AD patients have elevated levels of oxidative DNA damage in both nuclear and mitochondrial DNA, but the mitochondrial DNA has approximately 10-fold higher levels than nuclear DNA. Aged mitochondria may be the critical factor in the origin of neurodegeneration in AD. Even individuals with mild cognitive impairment, the phase between normal aging and early dementia, have increased oxidative damage in their nuclear and mitochondrial brain DNA (see Aging brain). Naturally occurring DNA double-strand breaks (DSBs) arise in human cells largely from single-strand breaks induced by various processes including the activity of reactive oxygen species, topoisomerases, and hydrolysis due to thermal fluctuations. In neurons DSBs are induced by a type II topoisomerase as part of the physiologic process of memory formation. DSBs are present in both neurons and astrocytes in the postmortem human hippocampus of AD patients at a higher level than in non-AD individuals. AD is associated with an accumulation of DSBs in neurons and astrocytes in the hippocampus and frontal cortex from early stages onward. DSBs are increased in the vicinity of amyloid plaques in the hippocampus, indicating a potential role for Aβ in DSB accumulation or vice versa. The predominant mechanism for repairing DNA double-strand breaks is non-homologous end joining (NHEJ), a mechanism that utilizes the DNA-dependent protein kinase (DNA-PK) complex. The end joining activity and protein levels of DNA-PK catalytic subunit are significantly lower in AD brains than in normal brains. Cholesterol hypothesis The cholesterol hypothesis is a combination of the amyloid hypothesis, tau hypothesis, and potentially the inflammatory hypothesis. Cholesterol was shown to be upstream of both amyloid and tau production. The cholesterol is produced in the astrocytes and shipped to neurons where it activates amyloid production through a process called substrate presentation. The process required apoE. Cholesterol's regulation of Tau production is less well understood, but knocking out the cholesterol synthesis enzyme SREBP2 decreased Tau phosphorylation. Innate immunity triggers cholesterol synthesis and cells take up the cholesterol. Presumably a cell in the brain dies with old age and this triggers innate immunity. More studies are needed to directly tie the inflammatory hypothesis to cholesterol synthesis in the brain. Reelin hypothesis A 1994 study showed that the isoprenoid changes in Alzheimer's disease differ from those occurring during normal aging and that this disease cannot, therefore, be regarded as a result of premature aging. During aging the human brain shows a progressive increase in levels of dolichol, a reduction in levels of ubiquinone, but relatively unchanged concentrations of cholesterol and dolichyl phosphate. 
In Alzheimer's disease, the situation is reversed with decreased levels of dolichol and increased levels of ubiquinone. The concentrations of dolichyl phosphate are also increased, while cholesterol remains unchanged. The increase in the sugar carrier dolichyl phosphate may reflect an increased rate of glycosylation in the diseased brain and the increase in the endogenous anti-oxidant ubiquinone an attempt to protect the brain from oxidative stress, for instance induced by lipid peroxidation. Ropren, identified previously in Russia, is neuroprotective in a rat model of Alzheimer's disease. A relatively recent hypothesis based mainly on rodent experiments links the onset of Alzheimer's disease to the hypofunction of the large extracellular protein reelin. A decrease of reelin in the human entorhinal cortex where the disease typically initiates is evident while compensatory increase of reelin levels in other brain structures of the patients is also reported. Of key importance, overexpression of reelin rescues the cognitive capacities of Alzheimer's disease model mice and τ-protein overexpressing mice. A recent circuit level model proposed a mechanism of how reelin depletion leads to the early deterioration of episodic memory thereby laying the theoretical foundation of the reelin hypothesis. Large gene instability hypothesis A bioinformatics analysis in 2017 revealed that extremely large human genes are significantly over-expressed in brain and take part in the postsynaptic architecture. These genes are also highly enriched in cell adhesion Gene Ontology (GO) terms and often map to chromosomal fragile sites. The majority of known Alzheimer's disease risk gene products including the amyloid precursor protein (APP) and gamma-secretase, as well as the APOE receptors and GWAS risk loci take part in similar cell adhesion mechanisms. It was concluded that dysfunction of cell and synaptic adhesion is central to Alzheimer's disease pathogenesis, and mutational instability of large synaptic adhesion genes may be the etiological trigger of neurotransmission disruption and synaptic loss in brain aging. As a typical example, this hypothesis explains the APOE risk locus of AD in context of signaling of its giant lipoprotein receptor, LRP1b which is a large tumor-suppressor gene with brain-specific expression and also maps to an unstable chromosomal fragile site. The large gene instability hypothesis puts the DNA damage mechanism at the center of Alzheimer's disease pathophysiology. References Alzheimer's disease Neurology Unsolved problems in neuroscience Pathology
Biochemistry of Alzheimer's disease
[ "Biology" ]
4,850
[ "Pathology" ]
7,202,208
https://en.wikipedia.org/wiki/Canadian%20Electrical%20Code
The Canadian Electrical Code, CE Code, or CSA C22.1 is a standard published by the Canadian Standards Association pertaining to the installation and maintenance of electrical equipment in Canada. The first edition of the Canadian Electrical Code was published in 1927. The current (26th) edition was published in March 2024. Code revisions are currently scheduled on a three-year cycle. The Code is produced by a large body of volunteers from industry and various levels of government. The Code uses a prescriptive model, outlining in detail the wiring methods that are acceptable. In the current edition, the Code recognizes that other methods can be used to assure safe installations, but these methods must be acceptable to the authority enforcing the Code in a particular jurisdiction. The Canadian Electrical Code serves as the basis for wiring regulations across Canada. Generally, legislation adopts the Code by reference, usually with a schedule of changes that amend the Code for local conditions. These amendments may be administrative in nature or may consist of technical content particular to the region. Since the Code is a copyrighted document produced by a private body, it may not be distributed without copyright permission from the Canadian Standards Association. The Code is divided into sections; each section is labeled with an even number and a title. Sections 0, 2, 4, 6, 8, 10, 12, 14, 16, and 26 include rules that apply to installations in general; the remaining sections are supplementary and deal with installation methods in specific locations or situations. Some examples of general sections include: grounding and bonding, protection and control, conductors, and definitions. Some examples of supplementary sections include: wet locations, hazardous locations, patient care areas, emergency systems, and temporary installations. When interpreting the requirements for a particular installation, rules found in supplementary sections of the Code amend or supersede the rules in general sections of the Code. The Canadian Electrical Code does not apply to vehicles, systems operated by an electrical or communications utility, railway systems, aircraft, or ships, since these installations are already regulated by separate documents. The Canadian Electrical Code is published in several parts: Part I is the safety standard for electrical installations. Part II is a collection of individual standards for the evaluation of electrical equipment or installations. (Part I requires that electrical products be approved to a Part II standard.) Part III is the safety standard for power distribution and transmission circuits. Part IV is a set of objective-based standards that may be used in certain industrial or institutional installations. Part VI establishes standards for the inspection of electrical installations in residential buildings. Technical requirements of the Canadian Electrical Code are very similar to those of the U.S. National Electrical Code. Specific differences still exist, and installations acceptable under one Code may not entirely comply with the other. Correlation of technical requirements between the two Codes is ongoing. Several CE Code Part II electrical equipment standards have been harmonized with standards in the USA and Mexico through the Council for the Harmonization of Electromechanical Standards of the Nations of the Americas (CANENA), which is working to harmonize electrical codes in the western hemisphere. 
Objective based code In response to industry demand, CSA has developed Part IV of the Canadian Electrical Code, consisting of two standards CSA C22.4 No. 1 "Objective-based industrial electrical code" and CSA C22.4 No. 2 "Objective-based industrial electrical code - Safety management system requirements". These standards are intended for use only by authorized industrial users and would not apply, for example, to residential construction. These standards do not prescribe specific solutions for every case but instead give guidance to the user on achievement of the safety objectives of IEC 60364. Since it is less prescriptive, the OBIEC allows industrial users to use new technology not yet represented in the CE Code Part II. Use of this OBIEC is restricted to industrial and institutional users who have a safety management program in place and the engineering resources to implement the regulations. It is intended that users of the OBIEC will maintain safety while using methods that will reduce the installation cost of large industrial plants, for example, in the petrochemical business. References See also Electrical code Electrical safety Electrical wiring
Canadian Electrical Code
[ "Physics", "Engineering" ]
836
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
7,202,313
https://en.wikipedia.org/wiki/Electrical%20code
An electrical code is a term for a set of regulations for the design and installation of electrical wiring in a building. The intention of such regulations is to provide standards to ensure electrical wiring systems are safe for people and property, protecting them from electrical shock and fire hazards. They are usually based on a model code (with or without local amendments) produced by a national or international standards organisation. Such wiring is subject to rigorous safety standards for design and installation. Wires and electrical cables are specified according to the circuit operating voltage and electric current capability, with further restrictions on the environmental conditions, such as ambient temperature range, moisture levels, and exposure to sunlight and chemicals. Associated circuit protection, control and distribution devices within a building's wiring system are subject to voltage, current and functional specification. To ensure both wiring and associated devices are designed, selected and installed so that they are safe for use, they are subject to wiring safety codes or regulations, which vary by locality, country or region. The International Electrotechnical Commission (IEC) is attempting to harmonise wiring standards amongst member countries, but large variations in design and installation requirements still exist. Regional codes Wiring installation codes and regulations are intended to protect people and property from electrical shock and fire hazards. They are usually based on a model code (with or without local amendments) produced by a national or international standards organisation, such as the IEC. Australia and New Zealand In Australia and New Zealand, the AS/NZS 3000 standard, commonly known as the "wiring rules", specifies requirements for the selection and installation of electrical equipment, and the design and testing of such installations. The standard is mandatory in both New Zealand and Australia; therefore, all electrical work covered by the standard must comply. Europe In European countries, an attempt has been made to harmonise national wiring standards in an IEC standard, IEC 60364 Electrical Installations for Buildings. Hence national standards follow an identical system of sections and chapters. However, this standard is not written in such language that it can readily be adopted as a national wiring code. Neither is it designed for field use by electrical tradespeople and inspectors for testing compliance with national wiring standards. By contrast, national codes, such as the NEC or CSA C22.1, generally exemplify the common objectives of IEC 60364, but provide specific rules in a form that allows for guidance of those installing and inspecting electrical systems. Belgium RGIE (fr) (Réglement Général sur les Installations Électriques) is used for installations in Belgium. AREI (nl) (Algemeen Reglement Elektrische Installaties) is used for installations in Flanders, Belgium. France NF C 15-100 (fr) is used for low voltage installations in France Germany The VDE is the organisation responsible for the promulgation of electrical standards and safety specifications. DIN VDE 0100 is the German wiring regulations document harmonised with IEC 60364. In Germany, blue can also mean phase or switched phase. Sweden In Sweden, IEC 60364 is implemented through the national standard SS-436 40 000. 
United Kingdom In the United Kingdom, wiring installations are regulated by the British Standard known as BS 7671 Requirements for Electrical Installations: IET Wiring Regulations, which are harmonised with IEC 60364. The first edition was published in 1882. BS 7671 is an industry standard and as such is not itself statutory, however legislation in the form of UK Building Regulations requires that domestic installations conform to a safe standard, and official guidance accompanying this statutory regulation points to following BS 7671 as one way to comply. BS 7671 is also used as a national standard by Mauritius, St Lucia, Saint Vincent and the Grenadines, Sierra Leone, Singapore, Sri Lanka, Trinidad and Tobago, Uganda and Cyprus. North America The first electrical codes in the United States originated in New York in 1881 to regulate installations of electric lighting. Since 1897 the US National Fire Protection Association, a private non-profit association formed by insurance companies, has published the National Electrical Code (NEC). States, counties or cities often include the NEC in their local building codes by reference along with local differences. The NEC is modified every three years. It is a consensus code considering suggestions from interested parties. The proposals are studied by committees of engineers, tradesmen, manufacturer representatives, fire fighters, and other invitees. Since 1927, the Canadian Standards Association (CSA) has produced the Canadian Safety Standard for Electrical Installations, which is the basis for provincial electrical codes. The CSA also produces the Canadian Electrical Code, the 2006 edition of which references IEC 60364 (Electrical Installations for Buildings) and states that the code addresses the fundamental principles of electrical protection in Section 131. The Canadian code reprints Chapter 13 of IEC 60364, but there are no numerical criteria listed in that chapter to assess the adequacy of any electrical installation. Although the US and Canadian national standards deal with the same physical phenomena and broadly similar objectives, they differ occasionally in technical detail. As part of the North American Free Trade Agreement (NAFTA) program, US and Canadian standards are slowly converging toward each other, in a process known as harmonisation. Mexico and Costa Rica follow the US National Electrical Code. South America Venezuela and Colombia follow the US National Electrical Code. India India is regulated by the so called Central Electricity Authority Regulations (CEAR). Colour coding of wiring by region In a typical electrical code some colour-coding of wires is mandatory. Many local rules and exceptions exist per country, state, or region. Older installations vary in colour codes, and colours may fade with insulation exposure to heat, light, and aging. Europe From 1970 European countries started a process of harmonising their wiring colours, as several countries had chosen the same colour to denote different wires. The new harmonised colours were chosen mainly because no country had used them. Colours like pink, orange and turquoise were not available as they were deemed to be too close to other colours. Even so, there were unavoidable clashes. Blue was a phase conductor in the United Kingdom and Ireland, which delayed the adoption of the new colours for several decades. But flexible cable was changed pretty much instantly following pressure from manufacturers of appliances. 
Pre-harmonised European colours Post-harmonised European colours As of March 2011, the European Committee for Electrotechnical Standardization (CENELEC) requires the use of green/yellow striped cables as protective conductors, blue as neutral conductors and brown as single-phase conductors. The use of striped green/yellow for earth conductors was adopted for its distinctive appearance to reduce the likelihood of dangerous confusion of safety earthing (grounding) wires with other electrical functions, especially by persons affected by red–green colour blindness. Sweden In Sweden there is a notable exception for blue: while the colour is normally used for neutral, it may be used as a connecting wire between switches and between switch and fixture, as well as a phase wire in a two-phase circuit, all under the condition that no neutral wire is used in the particular circuit. United Kingdom In the UK it is fairly common practice to use three-core cable with three-phase coloured insulation for part of the wiring of two-way lighting switches. To avoid confusion, the accepted practice is to add coloured sleeves to the ends in brown or blue as appropriate to communicate how the wires are being used. United States The United States National Electrical Code requires a bare copper, or green or green/yellow insulated protective conductor, a white or grey neutral, with any other colour used for single phase. The NEC also requires the high-leg conductor of a high-leg delta system to have orange insulation, or to be identified by other suitable means such as tagging. Prior to the adoption of orange as the suggested colour for the high-leg in the 1971 NEC, it was common practice in some areas to use red for this purpose. The introduction of the NEC clearly states that it is not intended to be a design manual, and therefore creating a colour code for ungrounded or "hot" conductors falls outside the scope and purpose of the NEC. However, it is a common misconception that "hot" conductor colour-coding is required by the Code. In the United States, colour-coding of three-phase system conductors follows a de facto standard, wherein black, red, and blue are used for three-phase 120/208-volt systems, and brown, orange or violet, and yellow are used in 277/480-volt systems. (Violet avoids conflict with the NEC's high-leg delta rule.) In buildings with multiple voltage systems, the grounded conductors (neutrals) of both systems are required to be separately identified and made distinguishable to avoid cross-system connections. Most often, 120/208-volt systems use white insulation, while 277/480-volt systems use grey insulation, although this particular colour code is not currently an explicit requirement of the NEC. Some local jurisdictions do specify required colour coding in their local building codes, however. Color codes See also Electrical wiring References Electrical safety Fire protection Fire prevention
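To make the harmonised European colours and the de facto US three-phase conventions described above easier to scan, here is a minimal machine-readable sketch. The dictionary layout and helper function are illustrative assumptions, and, as the text notes, the US hot-conductor colours are customary practice rather than an explicit NEC requirement.

```python
# Colour conventions summarised from the text above (illustrative only).
EU_HARMONISED = {
    "protective earth": "green/yellow stripes",
    "neutral": "blue",
    "single-phase line": "brown",
}

US_DE_FACTO = {
    "120/208 V": {"phases": ("black", "red", "blue"), "neutral": "white"},
    "277/480 V": {"phases": ("brown", "orange or violet", "yellow"), "neutral": "grey"},
}

def neutral_colour(system: str) -> str:
    """Look up the customary US neutral colour for a given voltage system."""
    return US_DE_FACTO[system]["neutral"]

print(neutral_colour("277/480 V"))  # -> grey
```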
Electrical code
[ "Engineering" ]
1,852
[ "Building engineering", "Fire protection" ]
7,202,520
https://en.wikipedia.org/wiki/Aluminium%E2%80%93lithium%20alloys
Aluminium–lithium alloys (Al–Li alloys) are a set of alloys of aluminium and lithium, often also including copper and zirconium. Since lithium is the least dense elemental metal, these alloys are significantly less dense than aluminium. Commercial Al–Li alloys contain up to 2.45% lithium by mass. Crystal structure Alloying with lithium reduces structural mass by three effects: Displacement A lithium atom is lighter than an aluminium atom; each lithium atom then displaces one aluminium atom from the crystal lattice while maintaining the lattice structure. Every 1% by mass of lithium added to aluminium reduces the density of the resulting alloy by 3% and increases the stiffness by 5%. This effect works up to the solubility limit of lithium in aluminium, which is 4.2%. Strain hardening Introducing another type of atom into the crystal strains the lattice, which helps block dislocations. The resulting material is thus stronger, which allows less of it to be used. Precipitation hardening When properly aged, lithium forms a metastable Al3Li phase (δ') with a coherent crystal structure. These precipitates strengthen the metal by impeding dislocation motion during deformation. The precipitates are not stable, however, and care must be taken to prevent overaging with the formation of the stable AlLi (β) phase. This also produces precipitate-free zones (PFZs), typically at grain boundaries, and can reduce the corrosion resistance of the alloy. The crystal structures of Al3Li and Al–Li, while both based on the FCC crystal system, are very different. Al3Li shows almost the same lattice dimensions as pure aluminium, except that lithium atoms are present in the corners of the unit cell. The Al3Li structure is known as the AuCu3, L12, or Pm3m structure and has a lattice parameter of 4.01 Å. The Al–Li structure is known as the NaTl, B32, or Fd3m structure, which is made of both lithium and aluminium assuming diamond structures and has a lattice parameter of 6.37 Å. The interatomic spacing for Al–Li (3.19 Å) is smaller than that of either pure lithium or aluminium. Usage Al–Li alloys are primarily of interest to the aerospace industry for their weight advantage. On narrow-body airliners, Arconic (formerly Alcoa) claims up to 10% weight reduction compared to composites, leading to up to 20% better fuel efficiency, at a lower cost than titanium or composites. Aluminium–lithium alloys were first used in the wings and horizontal stabilizer of the North American A-5 Vigilante military aircraft. Other Al–Li alloys have been employed in the lower wing skins of the Airbus A380, the inner wing structure of the Airbus A350, the fuselage of the Bombardier CSeries (where the alloys make up 24% of the fuselage), the cargo floor of the Boeing 777X, and the fan blades of the Pratt & Whitney PurePower geared turbofan aircraft engine. They are also used in the fuel and oxidizer tanks in the SpaceX Falcon 9 launch vehicle, Formula One brake calipers, and the AgustaWestland EH101 helicopter. The third and final version of the US Space Shuttle's external tank was principally made of Al–Li 2195 alloy. In addition, Al–Li alloys are also used in the Centaur Forward Adapter in the Atlas V rocket, in the Orion Spacecraft, and were to be used in the planned Ares I and Ares V rockets (part of the cancelled Constellation program). Al–Li alloys are generally joined by friction stir welding. 
Some Al–Li alloys, such as Weldalite 049, can be welded conventionally; however, this property comes at the price of density; Weldalite 049 has about the same density as 2024 aluminium and a 5% higher elastic modulus. Al–Li is also produced in rolls as wide as , which can reduce the number of joins. Although aluminium–lithium alloys are generally superior to aluminium–copper or aluminium–zinc alloys in ultimate strength-to-weight ratio, their poor fatigue strength under compression remains a problem, which is only partially solved as of 2016. Also, high costs (around 3 times or more than for conventional aluminium alloys), poor corrosion resistance, and strong anisotropy of mechanical properties of rolled aluminium–lithium products have resulted in a paucity of applications. Al–Li alloy powder is used in the production of lightweight sporting goods, including bicycles, tennis rackets, golf clubs, and baseball bats. Its high strength combined with reduced weight significantly enhances performance, speed, and maneuverability. It is also used in the automobile industry in body panels, chassis parts, and suspension components. List of aluminium–lithium alloys Aside from its formal four-digit designation derived from its element composition, an aluminium–lithium alloy is also associated with a particular generation, based primarily on when it was first produced, but secondarily on its lithium content. The first generation lasted from the initial background research in the early 20th century to their first aircraft application in the middle 20th century. Consisting of alloys that were meant to replace the popular 2024 and 7075 alloys directly, the second generation of Al–Li had a high lithium content of at least 2%; this characteristic produced a large reduction in density but resulted in some negative effects, particularly in fracture toughness. The third generation is the current generation of Al–Li product that is available, and it has gained wide acceptance by aircraft manufacturers, unlike the previous two generations. This generation has reduced lithium content to 0.75–1.8% to mitigate those negative characteristics while retaining some of the density reduction; third-generation Al–Li densities range from . First-generation alloys (1920s–1960s) Second-generation alloys (1970s–1980s) Third-generation alloys (1990s–2010s) Other alloys 1424 aluminium alloy 1429 aluminium alloy 1441K aluminium alloy 1445 aluminium alloy V-1461 aluminium alloy V-1464 aluminium alloy V-1469 aluminium alloy V-1470 aluminium alloy 2094 aluminium alloy 2095 aluminium alloy (Weldalite 049) 2097 aluminium alloy 2197 aluminium alloy 8025 aluminium alloy 8091 aluminium alloy 8093 aluminium alloy CP 276 Production sites Key world producers of aluminium–lithium alloy products are Arconic, Constellium, and Kamensk-Uralsky Metallurgical Works. 
Arconic Technical Center (Upper Burrell, Pennsylvania, USA) Arconic Lafayette (Indiana, USA); annual capacity of of aluminium–lithium and capable of casting round and rectangular ingot for rolled, extruded and forged applications Arconic Kitts Green (United Kingdom) Rio Tinto Alcan Dubuc Plant (Canada); capacity Constellium Issoire (Puy-de-Dôme), France; annual capacity of Kamensk-Uralsky Metallurgical Works (KUMZ) Aleris (Koblenz, Germany) FMC Corporation - FMC spun off its lithium division into Livent, which has now (2024) merged to form Arcadium (https://arcadiumlithium.com/) Southwest Aluminium (PRC) See also Aluminium alloy Magnesium–lithium alloys GLARE Carbon fiber reinforced plastic (CFRP) References Bibliography External links Lithium Lithium
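As a rough numerical illustration of the displacement rule quoted in the Crystal structure section above (each 1% lithium by mass lowering density by about 3% and raising stiffness by about 5%, up to the ~4.2% solubility limit), here is a minimal sketch. The 2.70 g/cm3 base density of pure aluminium is a standard handbook value, and the 2% example composition is an assumption chosen only for illustration.

```python
AL_DENSITY = 2.70  # g/cm^3, pure aluminium (handbook value)

def al_li_estimate(li_wt_percent: float, base_density: float = AL_DENSITY):
    """Apply the rule of thumb: per 1 wt% Li, density -3% and stiffness +5%.

    Valid only up to the ~4.2 wt% solubility limit quoted in the text.
    """
    if not 0 <= li_wt_percent <= 4.2:
        raise ValueError("rule of thumb only applies up to the solubility limit")
    density = base_density * (1 - 0.03 * li_wt_percent)
    stiffness_gain = 0.05 * li_wt_percent
    return density, stiffness_gain

rho, e_gain = al_li_estimate(2.0)
print(f"2 wt% Li: density ~ {rho:.2f} g/cm^3, stiffness ~ +{e_gain:.0%}")
# -> roughly 2.54 g/cm^3 and about a 10% stiffness increase
```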
Aluminium–lithium alloys
[ "Chemistry" ]
1,515
[ "Alloys", "Aluminium alloys" ]
7,202,576
https://en.wikipedia.org/wiki/List%20of%20Internet%20exchange%20points%20by%20size
This is a list of Internet exchange networks by size, measured by peak data rate (throughput), with additional data on location, establishment and average throughput. Generally, only exchanges with more than ten gigabits per second of peak throughput have been taken into consideration. The numbers in the list represent switched traffic only (no private interconnects) and are rounded to whole gigabits. Note that traffic on each exchange point can change quickly and can be seasonal. This list is not exhaustive, as it includes only exchanges willing to make traffic data public on their website. Data on IXPs from the United States and China in particular is hard to come by. Examples of large peering points without public data are NAP of the Americas or PacketExchange. Neither is the list any longer authoritative, as companies aggregate their data capacity. For example, as of 2024, the top two entries each contained data for about 40 separate locations, in one case on four different continents. Table See also Autonomous system (Internet) Border Gateway Protocol Internet exchange point List of Internet exchange points Peering Notes References External links Packet Clearing House – Internet Exchange Directory (automatically updated daily list) TeleGeography: The Internet Exchange Points Directory Euro-IX Member IXPs IXP Database (IXPDB) - collects data directly from IXPs through a recurring automated process (2022-11-30) LookinGlass.Org: IXP's with BGP LG service in the World Routing exchange points de:Internet-Knoten#Tabelle internationaler Internet-Knoten (GIX) (nach Traffic sortiert)
List of Internet exchange points by size
[ "Technology" ]
329
[ "Computing-related lists", "Internet-related lists" ]
7,203,159
https://en.wikipedia.org/wiki/Excinuclease
Excision endonuclease, also known as excinuclease or UV-specific endonuclease, is a nuclease (enzyme) which excises a fragment of nucleotides during DNA repair. The excinuclease cuts out a fragment by hydrolyzing two phosphodiester bonds, one on either side of the lesion in the DNA. This process is part of "nucleotide excision repair", a mechanism that can fix specific types of damage to the DNA in the G1 phase of the eukaryotic cell cycle. Such damage may include thymine dimers created by UV rays as well as the bulky distortions in DNA caused by oxidized benzopyrenes from sources such as cigarette smoke. A deficiency of excinuclease occurs in a rare autosomal recessive disease called xeroderma pigmentosum. This disease can cause light skin, extreme freckling and facial lesions, as well as preventing the repair of pyrimidine dimers. Diagnosis of this disease is done by measuring the enzyme's level in white blood cells in a blood sample. Symptoms in children include extreme UV sensitivity, excessive freckling, multiple skin cancers and corneal ulcerations. Typically, these symptoms are seen during a child's first sun exposure. Notes DNA repair
Excinuclease
[ "Biology" ]
282
[ "Molecular genetics", "Cellular processes", "DNA repair" ]
7,203,215
https://en.wikipedia.org/wiki/Tiopronin
Tiopronin, sold under the brand name Thiola, is a medication used to control the rate of cystine precipitation and excretion in the disease cystinuria. It is available as a generic medication. Medical uses Tiopronin is indicated, in combination with high fluid intake, alkali, and diet modification, for the prevention of cystine stone formation in people with severe homozygous cystinuria who are not responsive to these measures alone. Side effects Tiopronin may present a variety of side effects, which are broadly similar to those of D-penicillamine and other compounds containing active sulfhydryl groups. Its pharmacokinetics have been studied. Pharmacology Mechanism of action Tiopronin works by reacting with urinary cystine to form a more soluble, disulfide-linked tiopronin–cysteine complex. Society and culture In the U.S., the drug was marketed by Mission Pharmacal at $1.50 per pill, but in 2014 the rights were bought by Retrophin, owned by Martin Shkreli, and the price was increased to $30 per pill for a 100 mg capsule. In 2016 Imprimis Pharmaceuticals introduced a lower-cost version marketed as a compounded drug. Research It may also be used for Wilson's disease (an overload of copper in the body), and has also been investigated for the treatment of arthritis, though tiopronin is not an anti-inflammatory. Tiopronin is also sometimes used as a stabilizing agent for metal nanoparticles. The thiol group binds to the nanoparticles, preventing coagulation. References Alpha-Amino acids Amino acid derivatives Orphan drugs Thiols
Tiopronin
[ "Chemistry" ]
374
[ "Organic compounds", "Thiols" ]
7,203,375
https://en.wikipedia.org/wiki/Harmonic%20coordinate%20condition
The harmonic coordinate condition is one of several coordinate conditions in general relativity, which make it possible to solve the Einstein field equations. A coordinate system is said to satisfy the harmonic coordinate condition if each of the coordinate functions xα (regarded as scalar fields) satisfies d'Alembert's equation. The parallel notion of a harmonic coordinate system in Riemannian geometry is a coordinate system whose coordinate functions satisfy Laplace's equation. Since d'Alembert's equation is the generalization of Laplace's equation to space-time, its solutions are also called "harmonic". Motivation The laws of physics can be expressed in a generally invariant form. In other words, the real world does not care about our coordinate systems. However, for us to be able to solve the equations, we must fix upon a particular coordinate system. A coordinate condition selects one (or a smaller set of) such coordinate system(s). The Cartesian coordinates used in special relativity satisfy d'Alembert's equation, so a harmonic coordinate system is the closest approximation available in general relativity to an inertial frame of reference in special relativity. Derivation In general relativity, we have to use the covariant derivative instead of the partial derivative in d'Alembert's equation, so we get: Since the coordinate xα is not actually a scalar, this is not a tensor equation. That is, it is not generally invariant. But coordinate conditions must not be generally invariant because they are supposed to pick out (only work for) certain coordinate systems and not others. Since the partial derivative of a coordinate is the Kronecker delta, we get: And thus, dropping the minus sign, we get the harmonic coordinate condition (also known as the de Donder gauge after Théophile de Donder): This condition is especially useful when working with gravitational waves. Alternative form Consider the covariant derivative of the density of the reciprocal of the metric tensor: The last term emerges because is not an invariant scalar, and so its covariant derivative is not the same as its ordinary derivative. Rather, because , while Contracting ν with ρ and applying the harmonic coordinate condition to the second term, we get: Thus, we get that an alternative way of expressing the harmonic coordinate condition is: More variant forms If one expresses the Christoffel symbol in terms of the metric tensor, one gets Discarding the factor of and rearranging some indices and terms, one gets In the context of linearized gravity, this is indistinguishable from these additional forms: However, the last two are a different coordinate condition when you go to the second order in h. Effect on the wave equation For example, consider the wave equation applied to the electromagnetic vector potential: Let us evaluate the right hand side: Using the harmonic coordinate condition we can eliminate the right-most term and then continue evaluation as follows: See also Christoffel symbols Covariant derivative Gauge theory General relativity General covariance Holonomic basis Kronecker delta Laplace's equation Laplace operator Ricci calculus Wave equation References P.A.M.Dirac (1975), General Theory of Relativity, Princeton University Press, , chapter 22 External links http://mathworld.wolfram.com/HarmonicCoordinates.html Coordinate charts in general relativity
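For reference, the condition that the Derivation section above arrives at, together with the alternative form in terms of the metric density, can be summarised in standard textbook notation as follows; this is a compact restatement of the well-known de Donder condition rather than a reproduction of this article's own displayed equations:

```latex
g^{\mu\nu}\,\Gamma^{\alpha}{}_{\mu\nu} = 0
\qquad\Longleftrightarrow\qquad
\partial_{\nu}\!\left(\sqrt{-g}\,g^{\mu\nu}\right) = 0 .
```

The equivalence follows from the identity that the ordinary divergence of the metric density equals minus the metric-contracted Christoffel symbol times the density factor.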
Harmonic coordinate condition
[ "Mathematics" ]
687
[ "Coordinate charts in general relativity", "Coordinate systems" ]
7,203,729
https://en.wikipedia.org/wiki/D%27Alembert%27s%20equation
In mathematics, d'Alembert's equation, sometimes also known as Lagrange's equation, is a first order nonlinear ordinary differential equation, named after the French mathematician Jean le Rond d'Alembert. The equation reads as y = x f(p) + g(p), where p = dy/dx and f and g are given functions. After differentiating once and rearranging, we have (p - f(p)) dx/dp = x f'(p) + g'(p). The above equation is linear in x, with p as the independent variable. When f(p) = p, d'Alembert's equation is reduced to Clairaut's equation. References Eponymous equations of physics Mathematical physics Ordinary differential equations
D'Alembert's equation
[ "Physics", "Mathematics" ]
102
[ "Mathematical analysis", "Equations of physics", "Mathematical analysis stubs", "Applied mathematics", "Theoretical physics", "Eponymous equations of physics", "Mathematical physics" ]
4,139,219
https://en.wikipedia.org/wiki/T/TCP
T/TCP (Transactional Transmission Control Protocol) was a variant of the Transmission Control Protocol (TCP). It was an experimental TCP extension for efficient transaction-oriented (request/response) service, developed by Bob Braden in 1994 to fill the gap between TCP and UDP. Its definition can be found in RFC 1644 (which obsoletes RFC 1379). It is faster than TCP, and its delivery reliability is comparable to that of TCP. T/TCP suffers from several major security problems, as described by Charles Hannum in September 1996. It has not gained widespread popularity. RFC 1379 and RFC 1644, which define T/TCP, were moved to Historic status in May 2011 by RFC 6247 for security reasons. Alternatives TCP Fast Open is a more recent alternative. See also TCP Cookie Transactions Further reading Richard Stevens, Gary Wright, "TCP/IP Illustrated: TCP for transactions, HTTP, NNTP, and the UNIX domain protocols" (Volume 3 of TCP/IP Illustrated), Addison-Wesley, 1996; 2000. Part 1 "TCP for Transactions", Chapters 1-12, pages 1–159. References External links Example exploit of T/TCP in a post to Bugtraq by Vasim Valejev TCP extensions Internet Standards
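Since the article points to TCP Fast Open as the modern alternative for the same request/response use case, a minimal client-side sketch follows. It assumes Linux with TFO enabled in the kernel (net.ipv4.tcp_fastopen) and uses an example address and request; it illustrates the handshake-saving idea behind TFO and is not a description of T/TCP itself.

```python
import socket

# Send the request on the SYN using TCP Fast Open (Linux-only flag).
# On the first contact the kernel falls back to a normal three-way handshake
# and caches a TFO cookie, so later connections can carry data immediately.
request = b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n"

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Example address only; replace with a TFO-capable server.
    s.sendto(request, socket.MSG_FASTOPEN, ("93.184.216.34", 80))
    reply = s.recv(4096)
    print(reply[:80])
finally:
    s.close()
```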
T/TCP
[ "Technology" ]
274
[ "Computing stubs", "Computer network stubs" ]
4,139,340
https://en.wikipedia.org/wiki/Federation%20of%20German%20Scientists
The Federation of German Scientists - VDW (Vereinigung Deutscher Wissenschaftler e. V.) is a German non-governmental organization. History Since its founding in 1959 by Carl Friedrich von Weizsäcker, Otto Hahn, Max Born and further prominent nuclear scientists, known as the Göttinger 18, who had previously publicly declared their position against the nuclear armament of the German Bundeswehr, the Federation has been committed to the ideal of responsible science. The founders were almost identical to the "Göttinger 18" (compare the historical Göttingen Seven). Both the "Göttingen Manifesto" and the formation of the VDW were an expression of the new sense of responsibility felt by Otto Hahn and some scientists after the dropping of atomic bombs on Hiroshima and Nagasaki. The VDW tried to mirror the American Federation of Atomic Scientists. VDW has been identified as Western Germany's Pugwash group. Members of VDW feel committed to taking into consideration the possible military, political and economic implications and possibilities of atomic misuse when carrying out their scientific research and teaching. The Federation of German Scientists comprises around 400 scholars of different fields. The Federation of German Scientists addresses both interested members of the public and decision-makers on all levels of politics and society with its work. The politician Egon Bahr was a longstanding member. Georg Picht presented a radio series about the Limits of growth on behalf of the VDW in the 1970s. In 2005/2006, the VDW was the patron and main contributor to the Potsdam Manifesto, 'We have to learn to think in a new way', and the Potsdam Denkschrift under co-authorship of Hans Peter Duerr and Daniel Dahm, together with Rudolf zur Lippe. Since 2022, Ulrike Beisiegel and Götz Neuneck have been co-chairs of the VDW. VDW was closely connected with the German Friedensbewegung (peace movement) in the 1980s. After 1999 VDW tried to regain public interest with the establishment of the Whistleblower Prize, awarded together with the German branch of the International Association of Lawyers Against Nuclear Arms (IALANA). Whistleblower Prize The Whistleblower Prize, worth 3,000 euros, was established in 1999 and is awarded every two years. In 2015, the selection of Gilles-Éric Séralini generated some controversy. Ulrich Bahnsen in Die Zeit described VDW and IALANA as busybodies with the best intentions and the worst possible outcome in the case of this award. The opinion piece, featured in Zeit Online, described the awarding of Séralini as a failure, and viewed his status as a "whistleblower" as questionable, in light of his use of "junk science" to support anti-GMO activism. Recipients 1999 Alexander Nikitin 2001 Margrit Herbst 2003 Daniel Ellsberg 2005 Theodore A. Postol and Arpad Puztai 2007 Brigitte Heinisch and Liv Bode, in relation to the alleged Bornavirus 2009 Rudolf Schmenger and Frank Wehrheim, taxation experts in the state of Hessen 2011 Chelsea Manning and Rainer Moormann 2013 Edward Snowden 2015 Gilles-Éric Séralini and Brandon Bryant References External links Website of the Federation of German Scientists - VDW Website Goettinger 18 Anti–nuclear weapons movement Organizations established in 1959 1959 establishments in West Germany Pacifism in Germany Science advocacy organizations Pugwash Conferences on Science and World Affairs Ethics of science and technology
Federation of German Scientists
[ "Technology" ]
720
[ "Ethics of science and technology" ]
4,139,830
https://en.wikipedia.org/wiki/Depensation
In population dynamics, depensation is the effect on a population (such as a fish stock) whereby, due to certain causes, a decrease in the breeding population (mature individuals) leads to reduced production and survival of eggs or offspring. The causes may include predation levels rising per offspring (given the same level of overall predator pressure) and the Allee effect, particularly the reduced likelihood of finding a mate. Critical depensation When the level of depensation is high enough that the population is no longer able to sustain itself, it is said to be a critical depensation. This occurs when the population has a tendency to decline once its size drops below a certain level (known as the "critical depensation level"). Ultimately this may lead to the population or fishery's collapse (resource depletion), or even local extinction. The phenomenon of critical depensation may be modelled or defined by a negative second order derivative of population growth rate with respect to population biomass, which describes a situation where a decline in population biomass is not compensated by a corresponding increase in marginal growth per unit of biomass. See also Abundance (ecology) Conservation biology Local extinction Overexploitation Overfishing Small population size Threatened species References External links Optimal harvesting in the presence of critical depensation Online source of definitions and other fish info Extinction Ecological processes Population dynamics
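One common way to write a growth model exhibiting critical depensation (a standard textbook form, given here as an illustration rather than a formula taken from the article) is a logistic equation with an Allee threshold K_0 below which growth turns negative:

\frac{dB}{dt} = r\,B\left(\frac{B}{K_0} - 1\right)\left(1 - \frac{B}{K}\right), \qquad 0 < K_0 < K,

so the biomass B declines whenever it falls below the critical level K_0 and grows only between K_0 and the carrying capacity K.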
Depensation
[ "Physics" ]
280
[ "Physical phenomena", "Ecological processes", "Earth phenomena" ]
4,140,063
https://en.wikipedia.org/wiki/Symbolic%20power
The concept of symbolic power, also known as symbolic domination (domination symbolique in French language) or symbolic violence, was first introduced by French sociologist Pierre Bourdieu to account for the tacit, almost unconscious modes of cultural/social domination occurring within the social habits maintained over conscious subjects. Symbolic power accounts for discipline used against another to confirm that individual's placement in a social hierarchy, at times in individual relations but most basically through system institutions also. Also referred to as soft power, symbolic power includes actions that have discriminatory or injurious meaning or implications, such as gender dominance and racism. Symbolic power maintains its effect through the mis-recognition of power relations situated in the social matrix of a given field. While symbolic power requires a dominator, it also requires the dominated to accept their position in the exchange of social value that occurs between them. History The concept of symbolic power may be seen as grounded in Friedrich Engels' concept of false consciousness. To Engels, under capitalism, objects and social relationships themselves are embedded with societal value that is dependent upon the actors who engage in interactions themselves. Without the illusion of natural law governing such transactions of social and physical worth, the proletariat would be unwilling to consciously support social relations that counteract their own interests. Dominant actors in a society must consciously accept that such an ideological order exists for unequal social relationships to take place. Louis Althusser further developed it in his writing on what he called Ideological State Apparatuses, arguing that the latter's power is partly based on symbolic repression. The concept of symbolic power was first introduced by Pierre Bourdieu in La Distinction. Bourdieu suggested that cultural roles are more dominant than economic forces in determining how hierarchies of power are situated and reproduced across societies. Status and economic capital are both necessary to maintain dominance in a system, rather than just ownership over the means of production alone. The idea that one could possess symbolic capital in addition and set apart from financial capital played a critical role in Bourdieu's analysis of hierarchies of power. For example, in the process of reciprocal gift exchange in the Kabyle society of Algeria, when there is an asymmetry in wealth between the two parties, the better-endowed giver "can impose a strict relation of hierarchy and debt upon the receiver." Symbolic power, therefore, is fundamentally the imposition of categories of thought and perception upon dominated social agents who, once they begin observing and evaluating the world in terms of those categories—and without necessarily being aware of the change in their perspective—then perceive the existing social order as just. This, in turn, perpetuates a social structure favored by and serving the interests of those agents who are already dominant. Symbolic power differs from physical violence in that it is embedded in the modes of action and structures of cognition of individuals, and imposes the specter of legitimacy of the social order. See also Caciquism Power (social and political) Social dominance theory Structural violence Slavoj Žižek References External links Sociological terminology Violence Pierre Bourdieu
Symbolic power
[ "Biology" ]
634
[ "Behavior", "Aggression", "Human behavior", "Violence" ]
4,140,245
https://en.wikipedia.org/wiki/Operation%20%28mathematics%29
In mathematics, an operation is a function from a set to itself. For example, an operation on real numbers will take in real numbers and return a real number. An operation can take zero or more input values (also called "operands" or "arguments") to a well-defined output value. The number of operands is the arity of the operation. The most commonly studied operations are binary operations (i.e., operations of arity 2), such as addition and multiplication, and unary operations (i.e., operations of arity 1), such as additive inverse and multiplicative inverse. An operation of arity zero, or nullary operation, is a constant. The mixed product is an example of an operation of arity 3, also called ternary operation. Generally, the arity is taken to be finite. However, infinitary operations are sometimes considered, in which case the "usual" operations of finite arity are called finitary operations. A partial operation is defined similarly to an operation, but with a partial function in place of a function. Types of operation There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation. Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution. Operations may not be defined for every possible value of its domain. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain of definition or active domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its codomain of definition, active codomain, image or range. For example, in the real numbers, the squaring operation only produces non-negative numbers; the codomain is the set of real numbers, but the range is the non-negative numbers. Operations can involve dissimilar objects: a vector can be multiplied by a scalar to form another vector (an operation known as scalar multiplication), and the inner product operation on two vectors produces a quantity that is scalar. An operation may or may not have certain properties, for example it may be associative, commutative, anticommutative, idempotent, and so on. The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs (including the case of zero input and infinitely many inputs). An operator is similar to an operation in that it refers to the symbol or the process used to denote the operation. Hence, their point of view is different. 
For instance, one often speaks of "the operation of addition" or "the addition operation," when focusing on the operands and result, but one switches to "addition operator" (rarely "operator of addition"), when focusing on the process, or from the more symbolic viewpoint, the function +: X × X → X (where X is a set such as the set of real numbers). Definition An n-ary operation ω from a set X to a set Y is a function ω: X^n → Y. The set X^n is called the domain of the operation, the output set Y is called the codomain of the operation, and the fixed non-negative integer n (the number of operands) is called the arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two. An operation of arity zero, called a nullary operation, is simply an element of the codomain Y. An n-ary operation can also be viewed as an (n + 1)-ary relation that is total on its n input domains and unique on its output domain. An n-ary partial operation ω from X^n to Y is a partial function ω: X^n → Y, that is, a function that need not be defined on all of X^n. An n-ary partial operation can also be viewed as an (n + 1)-ary relation that is unique on its output domain. The above describes what is usually called a finitary operation, referring to the finite number of operands (the value n). There are obvious extensions where the arity is taken to be an infinite ordinal or cardinal, or even an arbitrary set indexing the operands. Often, the use of the term operation implies that the domain of the function includes a power of the codomain (i.e. the Cartesian product of one or more copies of the codomain), although this is by no means universal, as in the case of dot product, where vectors are multiplied and the result is a scalar. An n-ary operation ω: Y^n → Y, whose domain is a power of its codomain, is called an internal operation. An n-ary operation in which some of the operands are taken from a scalar set S distinct from the codomain is called an external operation by the scalar set or operator set S. In particular for a binary operation, ω: S × Y → Y is called a left-external operation by S, and ω: Y × S → Y is called a right-external operation by S. An example of an internal operation is vector addition, where two vectors are added and the result is a vector. An example of an external operation is scalar multiplication, where a vector is multiplied by a scalar and the result is a vector. An n-ary multifunction ω is a mapping from a Cartesian power of a set into the set of subsets of that set, formally ω: X^n → P(X). See also Finitary relation Hyperoperation Infix notation Operator (mathematics) Order of operations References Elementary mathematics
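As a small illustration of the distinction between internal and external operations discussed above (the helper names are made up for this sketch), vector addition is an internal binary operation on the set of vectors, scalar multiplication is a left-external operation by the set of scalars, and the dot product is an operation whose result lies outside the set of vectors:

from typing import List

Vector = List[float]

def vector_add(u: Vector, v: Vector) -> Vector:
    # Internal binary operation: Vector x Vector -> Vector
    return [a + b for a, b in zip(u, v)]

def scalar_multiply(c: float, v: Vector) -> Vector:
    # Left-external binary operation by the scalar set: float x Vector -> Vector
    return [c * a for a in v]

def dot(u: Vector, v: Vector) -> float:
    # The codomain (a scalar) is not a power of the domain sets.
    return sum(a * b for a, b in zip(u, v))

print(vector_add([1.0, 2.0], [3.0, 4.0]))
print(scalar_multiply(2.0, [1.0, 2.0]))
print(dot([1.0, 2.0], [3.0, 4.0]))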
Operation (mathematics)
[ "Mathematics" ]
1,227
[ "Elementary mathematics", "Arithmetic", "Operations on numbers" ]
4,140,304
https://en.wikipedia.org/wiki/Bonox
Bonox is a beef extract made in Australia, currently owned by Bega Cheese after it acquired the brand from Kraft Heinz in 2017. It is primarily a drink but can also be used as stock in cooking. History Bonox was invented by Camron Thomas for Fred Walker of Fred Walker & Co. in 1918. Bonox was launched the following year. The Walker company was purchased by Kraft Foods Inc. sometime after Walker's death in 1935. The product was produced by Kraft (from 2012 Kraft Foods, from 2015 Kraft Heinz) until 2017, when Bonox, along with other brands, was sold to Bega Cheese. It kept the same recipe and jar designs. , Bonox continues to be produced by Bega. Nutritional information This concentrated beef extract contains iron and niacin. It is a thick dark brown liquid paste which can be added to soups or stews for flavoring and can also be added to hot water and served as a beverage. Approximate per 100g Energy, including dietary fibre 401 kJ Moisture 56.6 g Protein 16.6 g Nitrogen 2.66 g Fat 0.2 g Ash 19.8 g Starch 6.5 g Available carbohydrate, without sugar alcohols 6.5 g Available carbohydrate, with sugar alcohols 6.5 g Minerals Calcium (Ca) 110 mg Copper (Cu) 0.11 mg Fluoride (F) 190 ug Iron (Fe) 2 mg Magnesium (Mg) 60 mg Manganese (Mn) 0.13 mg Phosphorus (P) 360 mg Potassium (K) 690 mg Selenium (Se) 4 ug Sodium (Na) 6660 mg Sulphur (S) 160 mg Zinc (Zn) 1.5 mg Vitamins Thiamin (B1) 0.36 mg Riboflavin (B2) 0.27 mg Niacin (B3) 5.4 mg Niacin Equivalents 8.17 mg Pantothenic acid (B5) 0.38 mg Pyridoxine (B6) 0.23 mg Biotin (B7) 12 ug See also Bovril References Products introduced in 1919 Food ingredients Australian brands
Bonox
[ "Technology" ]
452
[ "Food ingredients", "Components" ]
4,140,679
https://en.wikipedia.org/wiki/Biodemography%20of%20human%20longevity
Biodemography is a multidisciplinary approach, integrating biological knowledge (studies on human biology and animal models) with demographic research on human longevity and survival. Biodemographic studies are important for understanding the driving forces of the current longevity revolution (the dramatic increase in human life expectancy), forecasting the future of human longevity, and identifying new strategies for further increases in healthy and productive life span. Theory Biodemographic studies have found a remarkable similarity in survival dynamics between humans and laboratory animals. Specifically, three general biodemographic laws of survival are found: Gompertz–Makeham law of mortality Compensation law of mortality Late-life mortality deceleration (now disputed) The Gompertz–Makeham law states that the death rate is the sum of an age-independent component (Makeham term) and an age-dependent component (Gompertz function), which increases exponentially with age. The compensation law of mortality (late-life mortality convergence) states that the relative differences in death rates between different populations of the same biological species decrease with age, because higher initial death rates are compensated by a lower pace of their increase with age. The disputed late-life mortality deceleration law states that death rates stop increasing exponentially at advanced ages and level off to the late-life mortality plateau. A consequence of this deceleration is that there would be no fixed upper limit to human longevity, that is, no fixed number which separates possible and impossible values of lifespan. If true, this would challenge the common belief in the existence of a fixed maximal human life span. Biodemographic studies have found that even genetically identical laboratory animals kept in a constant environment have very different lengths of life, suggesting a crucial role of chance and early-life developmental noise in longevity determination. This leads to new approaches in understanding the causes of exceptional human longevity. As for the future of human longevity, biodemographic studies have found that the evolution of human lifespan had two very distinct stages: the initial stage of mortality decline at younger ages has now been replaced by a new trend of preferential improvement of oldest-old survival. This phenomenon invalidates methods of longevity forecasting based on extrapolation of long-term historical trends. A general explanation of these biodemographic laws of aging and longevity has been suggested based on system reliability theory. See also Stress Modeling Demography Biodemography Longevity Life extension List of life extension-related topics Reliability theory of aging and longevity References Further reading External links Biodemography of Human Longevity – abstract of keynote lecture, p. 42. In: Inaugural International Conference on Longevity. Final Programme and Abstracts. Sydney Convention & Exhibition Centre. Sydney, Australia, March 5–7, 2004, 94 pp Ageing Gerontology Medical aspects of death Longevity
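For reference, the Gompertz–Makeham law mentioned above is commonly written (standard notation, given here as a sketch) with the death rate μ(x) at age x as

\mu(x) = A + R\,e^{\alpha x},

where A is the age-independent Makeham term and R e^{αx} is the exponentially increasing Gompertz term. In this notation the compensation law corresponds to the observation that populations with a larger initial level R tend to show a smaller slope α, so their mortality trajectories converge at advanced ages.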
Biodemography of human longevity
[ "Biology" ]
565
[ "Gerontology" ]
4,140,772
https://en.wikipedia.org/wiki/Travel%20cost%20analysis
The travel cost method of economic valuation, travel cost analysis, or Clawson method is a revealed preference method of economic valuation used in cost–benefit analysis to calculate the value of something that cannot be obtained through market prices (e.g. national parks, beaches, ecosystems). The aim of the method is to calculate willingness to pay for a facility whose price is constant. The technique was first suggested by the statistician Harold Hotelling in a 1947 letter to the director of the National Park Service of the United States, as a method to measure the benefit of National Parks to the public. The method was further refined by Trice and Wood (1958) and Clawson (1959). The technique is one approach to the estimation of a shadow price. Methodology The general principle is that individual visitors spend varying amounts of time and money to access a particular resource. The further away an individual is from the resource, the more time and money they spend to reach it and the less frequently they visit. Individuals closer to the resource tend to visit more often and spend less. By fitting the distribution of visits across this spectrum of costs, an average of the transport costs and the opportunity cost of the time spent travelling to a recreational site is used to determine the value of the site. Various approaches may be used in the actual collection of data and the estimation. The travel cost method of economic valuation is a revealed preference method because it looks at actual human behavior to try to define the value people place on something. A sample of visitors to the facility is selected. These visitors are split into "zones" depending on their distance travelled to the facility. The average distance to the facility and the average travel cost to the facility from each zone are calculated. The visit rate from each zone is calculated, i.e. visit rate = (number of visitors from a given zone) / (population of that zone). The visit rate is regressed against travel cost in order to create a visit rate curve: visit rate from a given zone = f(cost from that zone), for example VR = a + b·C. This curve can then be used to obtain estimates of visit rates given differing levels of total costs. This enables estimates of numbers of visitors from each zone to be made given differing levels of facility price. The sum of the number of visitors from each zone can be plotted/regressed against these differing levels of facility price in order to create a demand curve for the facility. The area under this demand curve is the willingness to pay for the facility, which can be used as a valuation for CBA purposes. References External links Ecosystem valuation Curated bibliography at IDEAS/RePEc Environmental economics
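As a rough illustration of the zonal steps just described, here is a small Python sketch; the zone populations, visitor counts and travel costs are invented numbers, and a simple linear visit-rate curve is assumed:

import numpy as np

# Hypothetical zonal data: population, observed visitors per year, travel cost per visit
zones = {
    "A": {"pop": 10000, "visitors": 400, "cost": 5.0},
    "B": {"pop": 20000, "visitors": 600, "cost": 10.0},
    "C": {"pop": 30000, "visitors": 600, "cost": 20.0},
    "D": {"pop": 40000, "visitors": 400, "cost": 30.0},
}

pops = np.array([z["pop"] for z in zones.values()], dtype=float)
costs = np.array([z["cost"] for z in zones.values()])
visit_rate = np.array([z["visitors"] for z in zones.values()]) / pops

# Regress visit rate on travel cost: VR = a + b*C
slope, intercept = np.polyfit(costs, visit_rate, 1)

# Simulate hypothetical entry fees added to each zone's travel cost and predict
# total visits, tracing out a demand curve for the site.
fees = np.arange(0.0, 31.0, 1.0)
total_visits = np.array([
    np.sum(np.clip(intercept + slope * (costs + fee), 0, None) * pops)
    for fee in fees
])

# The area under the demand curve (visits vs. fee) approximates total
# willingness to pay, usable as the site's value in a cost-benefit analysis.
area = np.sum((total_visits[:-1] + total_visits[1:]) / 2 * np.diff(fees))
print(round(float(area), 2))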
Travel cost analysis
[ "Environmental_science" ]
520
[ "Environmental economics", "Environmental social science" ]
4,141,406
https://en.wikipedia.org/wiki/Solar%20dynamo
The solar dynamo is a physical process that generates the Sun's magnetic field. It is explained with a variant of the dynamo theory. A naturally occurring electric generator in the Sun's interior produces electric currents and a magnetic field, following the laws of Ampère, Faraday and Ohm, as well as the laws of fluid dynamics, which together form the laws of magnetohydrodynamics. The detailed mechanism of the solar dynamo is not known and is the subject of current research. Mechanism A dynamo converts kinetic energy into electric-magnetic energy. An electrically conducting fluid with shear or more complicated motion, such as turbulence, can temporarily amplify a magnetic field through Lenz's law: fluid motion relative to a magnetic field induces electric currents in the fluid that distort the initial field. If the fluid motion is sufficiently complicated, it can sustain its own magnetic field, with advective fluid amplification essentially balancing diffusive or ohmic decay. Such systems are called self-sustaining dynamos. The Sun is a self-sustaining dynamo that converts convective motion and differential rotation within the Sun to electric-magnetic energy. Currently, the geometry and width of the tachocline are hypothesized to play an important role in models of the solar dynamo by winding up the weaker poloidal field to create a much stronger toroidal field. However, recent radio observations of cooler stars and brown dwarfs, which do not have a radiative core and only have a convection zone, have demonstrated that they maintain large-scale, solar-strength magnetic fields and display solar-like activity despite the absence of tachoclines. This suggests that the convection zone alone may be responsible for the function of the solar dynamo. Solar cycle The most prominent time variation of the solar magnetic field is related to the quasi-periodic 11-year solar cycle, characterized by an increasing and decreasing number and size of sunspots. Sunspots are visible as dark patches on the Sun's photosphere and correspond to concentrations of magnetic field. At a typical solar minimum, few or no sunspots are visible. Those that do appear are at high solar latitudes. As the solar cycle progresses towards its maximum, sunspots tend to form closer to the solar equator, following Spörer's law. The 11-year sunspot cycle is half of a 22-year Babcock–Leighton solar dynamo cycle, which corresponds to an oscillatory exchange of energy between toroidal and poloidal solar magnetic fields. At solar-cycle maximum, the external poloidal dipolar magnetic field is near its dynamo-cycle minimum strength, but an internal toroidal quadrupolar field, generated through differential rotation within the tachocline, is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convection zone forces emergence of the toroidal magnetic field through the photosphere, giving rise to pairs of sunspots, roughly aligned east–west with opposite magnetic polarities. The magnetic polarity of sunspot pairs alternates every solar cycle, a phenomenon known as the Hale cycle. During the solar cycle's declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number. At solar minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are relatively rare and the poloidal field is at maximum strength. 
During the next cycle, differential rotation converts magnetic energy back from the poloidal to the toroidal field, with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11-year sunspot cycle corresponds to a change in the polarity of the Sun's large-scale magnetic field. Long minima of solar activity can be associated with the interaction between double dynamo waves of the solar magnetic field caused by the beating effect of the wave interference. See also Stellar magnetic field Solar phenomena Atmospheric dynamo References Dynamo Magnetism in astronomy
Solar dynamo
[ "Astronomy" ]
831
[ "Magnetism in astronomy" ]
4,141,476
https://en.wikipedia.org/wiki/Greenberger%E2%80%93Horne%E2%80%93Zeilinger%20state
In physics, in the area of quantum information theory, a Greenberger–Horne–Zeilinger (GHZ) state is an entangled quantum state that involves at least three subsystems (particle states, qubits, or qudits). Named for the three authors who first described this state, the GHZ state leads to experimental outcomes that directly contradict the predictions of every classical local hidden-variable theory. The state has applications in quantum computing. History The four-particle version was first studied by Daniel Greenberger, Michael Horne and Anton Zeilinger in 1989. The following year Abner Shimony joined them, and they published a three-particle version based on suggestions by N. David Mermin. Experimental measurements on such states contradict intuitive notions of locality and causality. GHZ states for large numbers of qubits are theorized to give enhanced performance for metrology compared to other qubit superposition states. Definition The GHZ state is an entangled quantum state for 3 qubits and it can be written |GHZ⟩ = (|000⟩ + |111⟩)/√2, where the |0⟩ and |1⟩ values of a qubit correspond to any two physical states. For example, the two states may correspond to spin-down and spin-up along some physical axis. In physics applications the state may be written in the corresponding spin basis, where the numbering of the states represents spin eigenvalues. Another example of a GHZ state is three photons in an entangled state, with the photons being in a superposition of being all horizontally polarized (HHH) or all vertically polarized (VVV), with respect to some coordinate system. The GHZ state can be written in bra–ket notation as |GHZ⟩ = (|HHH⟩ + |VVV⟩)/√2. Prior to any measurements being made, the polarizations of the photons are indeterminate. If a measurement is made on one of the photons using a two-channel polarizer aligned with the axes of the coordinate system, each orientation will be observed, with 50% probability. However, the three measurements give correlated results: all three polarizations are observed along the same axis. Generalization The generalized GHZ state is an entangled quantum state of M subsystems. If each system has dimension d, i.e., the local Hilbert space is isomorphic to C^d, then the total Hilbert space of an M-partite system is the M-fold tensor product of C^d. This GHZ state is also called an M-partite qudit GHZ state. Its formula as a tensor product is |GHZ⟩ = (1/√d)(|0⟩⊗...⊗|0⟩ + ... + |d−1⟩⊗...⊗|d−1⟩), with M factors in each term. In the case of each of the subsystems being two-dimensional, that is for a collection of M qubits, it reads |GHZ⟩ = (|0⟩^⊗M + |1⟩^⊗M)/√2. The results of actual experiments agree with the predictions of quantum mechanics, not those of local realism. GHZ experiment In the language of quantum computation, the polarization state of each photon is a qubit, the basis of which can be chosen to be {|H⟩, |V⟩}. With appropriately chosen phase factors for |H⟩ and |V⟩, both types of measurements used in the experiment become Pauli measurements, with the two possible results represented as +1 and −1 respectively: The 45° linear polarizer implements a Pauli σx measurement, distinguishing between the eigenstates (|H⟩ ± |V⟩)/√2. The circular polarizer implements a Pauli σy measurement, distinguishing between the eigenstates (|H⟩ ± i|V⟩)/√2. A combination of those measurements on each of the three qubits can be regarded as a destructive multi-qubit Pauli measurement, the result of which is the product of each single-qubit Pauli measurement. For example, the combination "circular polarizer on photons 1 and 2, 45° linear polarizer on photon 3" corresponds to a σy ⊗ σy ⊗ σx measurement, and the four possible result combinations (RL+, LR+, RR−, LL−) are exactly the ones corresponding to an overall result of −1.
The quantum mechanical predictions of the GHZ experiment can then be summarized as which is consistent in quantum mechanics because all these multi-qubit Paulis commute with each other, and due to the anticommutativity between and . These results lead to a contradiction in any local hidden variable theory, where each measurement must have definite (classical) values determined by hidden variables, because must equal +1, not −1. Properties There is no standard measure of multi-partite entanglement because different, not mutually convertible, types of multi-partite entanglement exist. Nonetheless, many measures define the GHZ state to be a maximally entangled state. Another important property of the GHZ state is that taking the partial trace over one of the three systems yields which is an unentangled mixed state. It has certain two-particle (qubit) correlations, but these are of a classical nature. On the other hand, if we were to measure one of the subsystems in such a way that the measurement distinguishes between the states 0 and 1, we will leave behind either or , which are unentangled pure states. This is unlike the W state, which leaves bipartite entanglements even when we measure one of its subsystems. A pure state of parties is called biseparable, if one can find a partition of the parties in two nonempty disjoint subsets and with such that , i.e. is a product state with respect to the partition . The GHZ state is non-biseparable and is the representative of one of the two non-biseparable classes of 3-qubit states which cannot be transformed (not even probabilistically) into each other by local quantum operations, the other being the W state, . Thus and represent two very different kinds of entanglement for three or more particles. The W state is, in a certain sense "less entangled" than the GHZ state; however, that entanglement is, in a sense, more robust against single-particle measurements, in that, for an N-qubit W state, an entangled (N − 1)-qubit state remains after a single-particle measurement. By contrast, certain measurements on the GHZ state collapse it into a mixture or a pure state. Experiments on the GHZ state lead to striking non-classical correlations (1989). Particles prepared in this state lead to a version of Bell's theorem, which shows the internal inconsistency of the notion of elements-of-reality introduced in the famous Einstein–Podolsky–Rosen article. The first laboratory observation of GHZ correlations was by the group of Anton Zeilinger (1998), who was awarded a share of the 2022 Nobel Prize in physics for this work. Many more accurate observations followed. The correlations can be utilized in some quantum information tasks. These include multipartner quantum cryptography (1998) and communication complexity tasks (1997, 2004). Pairwise entanglement Although a measurement of the third particle of the GHZ state that distinguishes the two states results in an unentangled pair, a measurement along an orthogonal direction can leave behind a maximally entangled Bell state. This is illustrated below. The 3-qubit GHZ state can be written as where the third particle is written as a superposition in the X basis (as opposed to the Z basis) as and . A measurement of the GHZ state along the X basis for the third particle then yields either , if was measured, or , if was measured. In the later case, the phase can be rotated by applying a Z quantum gate to give , while in the former case, no additional transformations are applied. 
In either case, the result of the operations is a maximally entangled Bell state. This example illustrates that, depending on which measurement is made of the GHZ state is more subtle than it first appears: a measurement along an orthogonal direction, followed by a quantum transform that depends on the measurement outcome, can leave behind a maximally entangled state. Applications GHZ states are used in several protocols in quantum communication and cryptography, for example, in secret sharing or in the quantum Byzantine agreement. See also Bell's theorem Local hidden-variable theory NOON state Quantum pseudo-telepathy Dicke state References Quantum information theory Quantum states
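A small numpy sketch of two of the properties discussed above, written here as an illustration with the standard state vectors and qubit ordering: tracing out one qubit of the GHZ state leaves a classically correlated mixed state, while projecting the third qubit onto the X-basis state |+⟩ leaves a Bell state on the first two qubits.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def kron_all(*states):
    out = np.array([1.0])
    for s in states:
        out = np.kron(out, s)
    return out

ghz = (kron_all(ket0, ket0, ket0) + kron_all(ket1, ket1, ket1)) / np.sqrt(2)
rho = np.outer(ghz, ghz)

# Partial trace over the third qubit: the remaining two-qubit state is the
# mixture (|00><00| + |11><11|)/2, which carries only classical correlations.
rho12 = np.trace(rho.reshape(4, 2, 4, 2), axis1=1, axis2=3)
print(np.round(rho12, 3))

# Projecting the third qubit onto |+> = (|0> + |1>)/sqrt(2) leaves, after
# renormalization, the Bell state (|00> + |11>)/sqrt(2) on the first two
# qubits, in product with |+> on the third.
plus = (ket0 + ket1) / np.sqrt(2)
proj = np.kron(np.eye(4), np.outer(plus, plus))
post = proj @ ghz
post = post / np.linalg.norm(post)
print(np.round(post, 3))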
Greenberger–Horne–Zeilinger state
[ "Physics" ]
1,621
[ "Quantum states", "Quantum mechanics" ]
4,141,488
https://en.wikipedia.org/wiki/Triflic%20acid
Triflic acid, the short name for trifluoromethanesulfonic acid, TFMS, TFSA, HOTf or TfOH, is a sulfonic acid with the chemical formula CF3SO3H. It is one of the strongest known acids. Triflic acid is mainly used in research as a catalyst for esterification. It is a hygroscopic, colorless, slightly viscous liquid and is soluble in polar solvents. Synthesis Trifluoromethanesulfonic acid is produced industrially by electrochemical fluorination (ECF) of methanesulfonic acid: CH3SO3H + 4 HF ->CF3SO2F + H2O + 3 H2 The resulting CF3SO2F is hydrolyzed, and the resulting triflate salt is reprotonated. Alternatively, trifluoromethanesulfonic acid arises by oxidation of trifluoromethylsulfenyl chloride: CF3SCl + 2 Cl2 + 3 H2O -> CF3SO3H + 5 HCl Triflic acid is purified by distillation from triflic anhydride. Historical Trifluoromethanesulfonic acid was first synthesized in 1954 by Robert Haszeldine and Kidd by the following reaction: Reactions As an acid In the laboratory, triflic acid is useful in protonations because the conjugate base of triflic acid is nonnucleophilic. It is also used as an acidic titrant in nonaqueous acid-base titration because it behaves as a strong acid in many solvents (acetonitrile, acetic acid, etc.) where common mineral acids (such as HCl or H2SO4) are only moderately strong. With a Ka = , pKa = , triflic acid qualifies as a superacid. It owes many of its useful properties to its great thermal and chemical stability. Both the acid and its conjugate base CF3SO, known as triflate, resist oxidation/reduction reactions, whereas many strong acids are oxidizing, such as perchloric or nitric acid. Further recommending its use, triflic acid does not sulfonate substrates, which can be a problem with sulfuric acid, fluorosulfuric acid, and chlorosulfonic acid. Below is a prototypical sulfonation, which triflic acid does not undergo: C6H6 + H2SO4 ->[\ce{SO3}] C6H5(SO3H) + H2O Triflic acid fumes in moist air and forms a stable solid monohydrate, CF3SO3H·H2O, melting point 34 °C. Salt and complex formation The triflate ligand is labile, reflecting its low basicity. Trifluoromethanesulfonic acid reacts exothermically with metal carbonates, hydroxides, and oxides. Illustrative is the synthesis of Cu(OTf)2. Cu2CO3(OH)2 + 4 CF3SO3H -> 2 Cu(O3SCF3)2 + 3 H2O + CO2 Chloride ligands can be converted to the corresponding triflates: 3 CF3SO3H + [Co(NH3)5Cl]Cl2 -> [Co(NH3)5O3SCF3](O3SCF3)2 + 3 HCl This conversion is conducted in neat HOTf at 100 °C, followed by precipitation of the salt upon the addition of ether. Organic chemistry Triflic acid reacts with acyl halides to give mixed triflate anhydrides, which are strong acylating agents, e.g. in Friedel–Crafts reactions. CH3C(O)Cl + CF3SO3H -> CH3C(O)OSO2CF3 + HCl CH3C(O)OSO2CF3 + C6H6 -> CH3C(O)C6H5 + CF3SO3H Triflic acid catalyzes the reaction of aromatic compounds with sulfonyl chlorides, probably also through the intermediacy of a mixed anhydride of the sulfonic acid. Triflic acid promotes other Friedel–Crafts-like reactions including the cracking of alkanes and alkylation of alkenes, which are very important to the petroleum industry. These triflic acid derivative catalysts are very effective in isomerizing straight chain or slightly branched hydrocarbons that can increase the octane rating of a particular petroleum-based fuel. Triflic acid reacts exothermically with alcohols to produce ethers and olefins. Dehydration gives the acid anhydride, trifluoromethanesulfonic anhydride, (CF3SO2)2O. Safety Triflic acid is one of the strongest acids. 
Contact with skin causes severe burns with delayed tissue destruction. On inhalation it causes fatal spasms, inflammation and edema. Like sulfuric acid, triflic acid must be slowly added to polar solvents to prevent thermal runaway. References Inorganic carbon compounds Reagents for organic chemistry Superacids Perfluorosulfonic acids
Triflic acid
[ "Chemistry" ]
1,133
[ "Acids", "Inorganic compounds", "Superacids", "Inorganic carbon compounds", "Reagents for organic chemistry" ]
4,141,563
https://en.wikipedia.org/wiki/Predictive%20analytics
Predictive analytics, or predictive AI, encompasses a variety of statistical techniques from data mining, predictive modeling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events. In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision-making for candidate transactions. The defining functional effect of these technical approaches is that predictive analytics provides a predictive score (probability) for each individual (customer, employee, healthcare patient, product SKU, vehicle, component, machine, or other organizational unit) in order to determine, inform, or influence organizational processes that pertain across large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, healthcare, and government operations including law enforcement. Definition Predictive analytics is a set of business intelligence (BI) technologies that uncovers relationships and patterns within large volumes of data that can be used to predict behavior and events. Unlike other BI technologies, predictive analytics is forward-looking, using past events to anticipate the future. Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown whether it be in the past, present or future. For example, identifying suspects after a crime has been committed, or credit card fraud as it occurs. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions. Predictive analytics is often defined as predicting at a more detailed level of granularity, i.e., generating predictive scores (probabilities) for each individual organizational element. This distinguishes it from forecasting. For example, "Predictive analytics—Technology that learns from experience (data) to predict the future behavior of individuals in order to drive better decisions." In future industrial systems, the value of predictive analytics will be to predict and prevent potential issues to achieve near-zero break-down and further be integrated into prescriptive analytics for decision optimization. Analytical techniques The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques. Machine learning Machine learning can be defined as the ability of a machine to learn and then mimic human behavior that requires intelligence. This is accomplished through artificial intelligence, algorithms, and models. Autoregressive Integrated Moving Average (ARIMA) ARIMA models are a common example of time series models. These models use autoregression, which means the model can be fitted with a regression software that will use machine learning to do most of the regression analysis and smoothing. 
ARIMA models are known to have no overall trend, but instead have a variation around the average that has a constant amplitude, resulting in statistically similar time patterns. Through this, variables are analyzed and data is filtered in order to better understand and predict future values. One example of an ARIMA method is exponential smoothing models. Exponential smoothing takes into account the difference in importance between older and newer data sets, as the more recent data is more accurate and valuable in predicting future values. In order to accomplish this, exponents are utilized to give newer data sets a larger weight in the calculations than the older sets. Time series models Time series models are a subset of machine learning that utilize time series in order to understand and forecast data using past values. A time series is the sequence of a variable's value over equally spaced periods, such as years or quarters in business applications. To accomplish this, the data must be smoothed, or the random variance of the data must be removed in order to reveal trends in the data. There are multiple ways to accomplish this. Single moving average Single moving average methods utilize smaller and smaller numbered sets of past data to decrease error that is associated with taking a single average, making it a more accurate average than it would be to take the average of the entire data set. Centered moving average Centered moving average methods utilize the data found in the single moving average methods by taking an average of the median-numbered data set. However, as the median-numbered data set is difficult to calculate with even-numbered data sets, this method works better with odd-numbered data sets than even. Predictive modeling Predictive modeling is a statistical technique used to predict future behavior. It utilizes predictive models to analyze a relationship between a specific unit in a given sample and one or more features of the unit. The objective of these models is to assess the possibility that a unit in another sample will display the same pattern. Predictive model solutions can be considered a type of data mining technology. The models can analyze both historical and current data and generate a model in order to predict potential future outcomes. Regardless of the methodology used, in general, the process of creating predictive models involves the same steps. First, it is necessary to determine the project objectives and desired outcomes and translate these into predictive analytic objectives and tasks. Then, analyze the source data to determine the most appropriate data and model building approach (models are only as useful as the applicable data used to build them). Select and transform the data in order to create models. Create and test models in order to evaluate if they are valid and will be able to meet project goals and metrics. Apply the model's results to appropriate business processes (identifying patterns in the data doesn't necessarily mean a business will understand how to take advantage or capitalize on it). Afterward, manage and maintain models in order to standardize and improve performance (demand will increase for model management in order to meet new compliance regulations). Regression analysis Generally, regression analysis uses structural data along with the past values of independent variables and the relationship between them and the dependent variable to form predictions. 
Linear regression In linear regression, a plot is constructed with the previous values of the dependent variable plotted on the Y-axis and the independent variable that is being analyzed plotted on the X-axis. A regression line is then constructed by a statistical program representing the relationship between the independent and dependent variables which can be used to predict values of the dependent variable based only on the independent variable. With the regression line, the program also shows a slope intercept equation for the line which includes an addition for the error term of the regression, where the higher the value of the error term the less precise the regression model is. In order to decrease the value of the error term, other independent variables are introduced to the model, and similar analyses are performed on these independent variables. Applications Analytical Review and Conditional Expectations in Auditing An important aspect of auditing includes analytical review. In analytical review, the reasonableness of reported account balances being investigated is determined. Auditors accomplish this process through predictive modeling to form predictions called conditional expectations of the balances being audited using autoregressive integrated moving average (ARIMA) methods and general regression analysis methods, specifically through the Statistical Technique for Analytical Review (STAR) methods. The ARIMA method for analytical review uses time-series analysis on past audited balances in order to create the conditional expectations. These conditional expectations are then compared to the actual balances reported on the audited account in order to determine how close the reported balances are to the expectations. If the reported balances are close to the expectations, the accounts are not audited further. If the reported balances are very different from the expectations, there is a higher possibility of a material accounting error and a further audit is conducted. Regression analysis methods are deployed in a similar way, except the regression model used assumes the availability of only one independent variable. The materiality of the independent variable contributing to the audited account balances are determined using past account balances along with present structural data. Materiality is the importance of an independent variable in its relationship to the dependent variable. In this case, the dependent variable is the account balance. Through this the most important independent variable is used in order to create the conditional expectation and, similar to the ARIMA method, the conditional expectation is then compared to the account balance reported and a decision is made based on the closeness of the two balances. The STAR methods operate using regression analysis, and fall into two methods. The first is the STAR monthly balance approach, and the conditional expectations made and regression analysis used are both tied to one month being audited. The other method is the STAR annual balance approach, which happens on a larger scale by basing the conditional expectations and regression analysis on one year being audited. Besides the difference in the time being audited, both methods operate the same, by comparing expected and reported balances to determine which accounts to further investigate. 
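As a minimal sketch of the regression step described above (the data are invented, a single independent variable is used, and scikit-learn is assumed to be available):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Invented historical data: X = independent variable, y = dependent variable.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.1])

model = LinearRegression().fit(X, y)
predictions = model.predict(X)

# The residual error of the fit: the larger it is, the less precise the model,
# which is what motivates adding further independent variables.
print(model.intercept_, model.coef_[0])
print(mean_squared_error(y, predictions))
print(model.predict([[7.0]]))  # prediction for a new value of the independent variable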
Business Value As we move into a world of technological advances where more and more data is created and stored digitally, businesses are looking for ways to take advantage of this opportunity and use this information to help generate profits. Predictive analytics can be used and is capable of providing many benefits to a wide range of businesses, including asset management firms, insurance companies, communication companies, and many other firms. In a study conducted by IDC Analyze the Future, Dan Vasset and Henry D. Morris explain how an asset management firm used predictive analytics to develop a better marketing campaign. They went from a mass marketing approach to a customer-centric approach, where instead of sending the same offer to each customer, they would personalize each offer based on their customer. Predictive analytics was used to predict the likelihood that a possible customer would accept a personalized offer. Due to the marketing campaign and predictive analytics, the firm's acceptance rate skyrocketed, with three times the number of people accepting their personalized offers. Technological advances in predictive analytics have increased its value to firms. One technological advancement is more powerful computers, and with this predictive analytics has become able to create forecasts on large data sets much faster. With the increased computing power also comes more data and applications, meaning a wider array of inputs to use with predictive analytics. Another technological advance includes a more user-friendly interface, allowing a smaller barrier of entry and less extensive training required for employees to utilize the software and applications effectively. Due to these advancements, many more corporations are adopting predictive analytics and seeing the benefits in employee efficiency and effectiveness, as well as profits. Cash-flow Prediction ARIMA univariate and multivariate models can be used in forecasting a company's future cash flows, with its equations and calculations based on the past values of certain factors contributing to cash flows. Using time-series analysis, the values of these factors can be analyzed and extrapolated to predict the future cash flows for a company. For the univariate models, past values of cash flows are the only factor used in the prediction. Meanwhile the multivariate models use multiple factors related to accrual data, such as operating income before depreciation. Another model used in predicting cash-flows was developed in 1998 and is known as the Dechow, Kothari, and Watts model, or DKW (1998). DKW (1998) uses regression analysis in order to determine the relationship between multiple variables and cash flows. Through this method, the model found that cash-flow changes and accruals are negatively related, specifically through current earnings, and using this relationship predicts the cash flows for the next period. The DKW (1998) model derives this relationship through the relationships of accruals and cash flows to accounts payable and receivable, along with inventory. Child protection Some child welfare agencies have started using predictive analytics to flag high risk cases. For example, in Hillsborough County, Florida, the child welfare agency's use of a predictive modeling tool has prevented abuse-related child deaths in the target population. Predicting outcomes of legal decisions The predicting of the outcome of juridical decisions can be done by AI programs. 
These programs can be used as assistive tools for professions in this industry. Portfolio, product or economy-level prediction Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see below). They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power. Underwriting Many businesses have to account for risk exposure due to their different services and determine the costs needed to cover the risk. Predictive analytics can help underwrite these quantities by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application level data. Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default. Predictive analytics can be used to mitigate moral hazard and prevent accidents from occurring. See also Actuarial science Artificial intelligence in healthcare Analytical procedures (finance auditing) Big data Computational sociology Criminal Reduction Utilising Statistical History Decision management Disease surveillance Learning analytics Odds algorithm Pattern recognition Predictive inference Predictive policing Social media analytics References Further reading Financial crime prevention Statistical analysis Business intelligence Actuarial science analytics Types of analytics Management cybernetics
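As a small illustration of the smoothing techniques described in the time series models section above (the sample series, window length and smoothing constant are invented):

import numpy as np

series = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119], dtype=float)

def moving_average(x, window):
    # Average each run of `window` consecutive observations.
    return np.convolve(x, np.ones(window) / window, mode="valid")

def exponential_smoothing(x, alpha):
    # Newer observations receive geometrically larger weight (0 < alpha <= 1).
    smoothed = [x[0]]
    for value in x[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return np.array(smoothed)

print(moving_average(series, 3))
print(exponential_smoothing(series, 0.4))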
Predictive analytics
[ "Mathematics", "Technology" ]
2,832
[ "Applied mathematics", "Actuarial science", "Data", "Big data" ]
4,141,655
https://en.wikipedia.org/wiki/W%20state
The W state is an entangled quantum state of three qubits which in bra-ket notation has the following shape: |W⟩ = (|001⟩ + |010⟩ + |100⟩)/√3. It is remarkable for representing a specific type of multipartite entanglement and for occurring in several applications in quantum information theory. Particles prepared in this state reproduce the correlations underlying Bell's theorem, which states that no classical theory of local hidden variables can produce the predictions of quantum mechanics. The state is named after Wolfgang Dür, who first reported the state together with Guifré Vidal and Ignacio Cirac in 2000. Properties The W state is the representative of one of the two non-biseparable classes of three-qubit states, the other being the Greenberger–Horne–Zeilinger state |GHZ⟩ = (|000⟩ + |111⟩)/√2; the two classes cannot be transformed (not even probabilistically) into each other by local quantum operations. Thus the W state and the GHZ state represent two very different kinds of tripartite entanglement. This difference is, for example, illustrated by the following interesting property of the W state: if one of the three qubits is lost, the state of the remaining 2-qubit system is still entangled. This robustness of W-type entanglement contrasts strongly with the GHZ state, which is fully separable after loss of one qubit. The states in the W class can be distinguished from all other 3-qubit states by means of multipartite entanglement measures. In particular, W states have non-zero entanglement across any bipartition, while their 3-tangle vanishes; the 3-tangle is non-zero for GHZ-type states. Generalization The notion of the W state has been generalized for N qubits and then refers to the quantum superposition with equal expansion coefficients of all possible pure states in which exactly one of the qubits is in an "excited state" |1⟩, while all other ones are in the "ground state" |0⟩: |W_N⟩ = (|100...0⟩ + |010...0⟩ + ... + |000...1⟩)/√N. Both the robustness against particle loss and the LOCC-inequivalence with the (generalized) GHZ state also hold for the N-qubit W state. Applications In systems in which a single qubit is stored in an ensemble of many two-level systems the logical "1" is often represented by the W state, while the logical "0" is represented by the state |00...0⟩ with all ensemble qubits in the ground state. Here the W state's robustness against particle loss is a very beneficial property ensuring good storage properties of these ensemble-based quantum memories. See also NOON state References Quantum information theory Quantum states
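A small numpy sketch of the robustness property just mentioned, written as an illustration with the standard state vectors: after tracing out one qubit, the reduced two-qubit state of the W state fails the positive-partial-transpose test (so it is still entangled), while the corresponding reduced state of the GHZ state passes it (it is separable).

import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron_all(*states):
    out = np.array([1.0])
    for s in states:
        out = np.kron(out, s)
    return out

w = (kron_all(ket0, ket0, ket1) + kron_all(ket0, ket1, ket0) + kron_all(ket1, ket0, ket0)) / np.sqrt(3)
ghz = (kron_all(ket0, ket0, ket0) + kron_all(ket1, ket1, ket1)) / np.sqrt(2)

def reduced_two_qubit(state):
    # Trace out the third qubit of a three-qubit pure state.
    rho = np.outer(state, state).reshape(4, 2, 4, 2)
    return np.trace(rho, axis1=1, axis2=3)

def min_partial_transpose_eig(rho2):
    # Partial transpose on the second qubit; a negative eigenvalue certifies
    # entanglement for two qubits (Peres-Horodecki criterion).
    pt = rho2.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

print(min_partial_transpose_eig(reduced_two_qubit(w)))    # negative -> still entangled
print(min_partial_transpose_eig(reduced_two_qubit(ghz)))  # >= 0 -> separable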
W state
[ "Physics" ]
507
[ "Quantum states", "Quantum mechanics" ]
4,141,703
https://en.wikipedia.org/wiki/Market%20abuse
In economics and finance, market abuse may arise in circumstances in which investors in a financial market have been unreasonably disadvantaged, directly or indirectly, by others who: have used information which is not publicly available (insider dealing) have distorted the price-setting mechanism of financial instruments have disseminated false or misleading information (market manipulation) Market abuse is split into two different aspects (under EU definitions): Insider dealing: where a person who has information not available to other investors (for example, a director with knowledge of a takeover bid) makes use of that information for personal gain Market manipulation: where a person knowingly gives out false or misleading information (for instance, about a company's financial circumstances) in order to influence the price of a share for personal gain In 2013/2014, the EU updated its legislation on market abuse, and harmonised criminal sanctions. In the 2015 Danish European Union opt-out referendum, the Danish population rejected adoption of the 2014 market abuse directive (2014/57/EU) and much other legislation. In the UK, the market abuse directive (MAD) was implemented in 2003 to reduce market abuse. It applied to any financial instrument admitted to trading on a regulated market or in respect of which a request for admission to trading had been made. MAD was subsequently replaced by the Market Abuse Regulation (MAR) in 2016. See also Anti-competitive practices Insider trading Financial Services and Markets Act 2000 EU law ISO 37001 Anti-bribery management systems Group of States Against Corruption International Anti-Corruption Academy United Nations Convention against Corruption OECD Anti-Bribery Convention References Further reading Abuse Anti-competitive practices Corruption Financial crimes Insider trading Stock market
Market abuse
[ "Biology" ]
335
[ "Abuse", "Behavior", "Aggression", "Human behavior" ]