Dataset schema (column · type · observed min – max): id · int64 · 39 – 79M; url · string · 31 – 227 chars; text · string · 6 – 334k chars; source · string · 1 – 150 chars; categories · list · 1 – 6 items; token_count · int64 · 3 – 71.8k; subcategories · list · 0 – 30 items
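The fields above describe the dataset's schema (column name, dtype, and observed minimum and maximum). Assuming the records below come from a Hugging Face dataset, a row with this schema could be read as in the following minimal sketch; the dataset identifier is a placeholder, since the real name is not given here.

```python
# A minimal sketch of reading records with this schema, assuming the source is
# a Hugging Face dataset; "org/dataset-name" is a placeholder, not the real id.
from datasets import load_dataset

ds = load_dataset("org/dataset-name", split="train")
row = ds[0]
print(row["id"], row["url"])                     # e.g. 17428977 and a Wikipedia URL
print(row["categories"], row["subcategories"])   # list-valued label columns
print(row["token_count"])                        # int64 token count for the text
print(row["text"][:200])                         # first 200 characters of the article
```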
17,428,977
https://en.wikipedia.org/wiki/Devitrification
Devitrification is the process of crystallization in a formerly crystal-free (amorphous) glass. The term is derived from the Latin vitreus, meaning glassy and transparent. Devitrification in glass art Devitrification occurs in glass art during the firing process of fused glass, whereby the surface of the glass develops a whitish scum, crazing, or wrinkles instead of a smooth glossy shine, as the molecules in the glass change their structure into that of crystalline solids. While this condition is normally undesired, in glass art it is possible to use devitrification as a deliberate artistic technique. Causes of devitrification, commonly referred to as "devit", can include holding a high temperature for too long, which causes the nucleation of crystals. The presence of foreign residue such as dust on the surface of the glass or inside the kiln prior to firing can provide nucleation points where crystals can propagate easily. The chemical compositions of some types of glass can make them more vulnerable to devitrification than others; for example, a high lime content can be a factor in inducing this condition. In general, opaque glass devitrifies more easily, since crystals are already present in the glass to give it its opaque appearance, which raises the chance that it will devitrify further. Techniques for avoiding devitrification include cleaning the glass surfaces of dust or unwanted residue, and allowing rapid cooling once the piece reaches the desired temperature, until the temperature approaches the annealing temperature. Devit spray can be purchased to apply to the surfaces of the glass pieces prior to firing, and is supposed to help prevent devitrification; however, there is disagreement over the long-term effectiveness of this solution and over whether it should be used as a substitute for proper firing techniques. Once devit has occurred, there are techniques that can be attempted to fix it, with varying degrees of success. One technique is to cover the surface with a sheet of clear glass and refire. Since devitrification can change the coefficient of expansion (COE) somewhat, and devitrified glass tends to be somewhat harder to melt again, this technique may result in a less stable piece; however, it has also been used effectively with full-fused pieces with no apparent problems. Applying devit spray and refiring can also be effective. Alternatively, sandblasting, an acid bath, or polishing with a pumice stone or rotary brush can be used to remove the unwanted surface. Devitrification in geology In a general sense, any crystallization from a magma could be considered devitrification, but the term is most commonly used for the formation of spherulites in otherwise glassy rocks such as obsidian; spherulites are evidence of this conversion of glassy material to crystallized material. Perlite results from hydration of glass, which causes expansion, and is not necessarily a product of devitrification. Glass wool Devitrification can occur in glass wool used in high-temperature applications, resulting in the formation of potentially carcinogenic mineral powders. References External links Encyclopædia Britannica Online WarmTIPS: Devitrification Troubleshooting Fusing and Slumping Problems Tech Report: Devitrification of glass Glass engineering and science Glass art Glass physics
Devitrification
[ "Physics", "Materials_science", "Engineering" ]
661
[ "Glass engineering and science", "Glass physics", "Condensed matter physics", "Materials science" ]
17,430,672
https://en.wikipedia.org/wiki/Benefits%20Supervisor%20Sleeping
Benefits Supervisor Sleeping is a 1995 oil on canvas painting by the British artist Lucian Freud depicting a naked woman lying on a couch. It is a portrait of Sue Tilley, a Jobcentre supervisor. Tilley is the author of a biography of the Australian performer Leigh Bowery titled Leigh Bowery, The Life and Times of an Icon. Tilley was introduced to Freud by Bowery, who was already modelling for him. Freud painted a number of large portraits of her around the period 1994–96, and came to call her "Big Sue". He said of her body: "It's flesh without muscle and it has developed a different kind of texture through being such a weight-bearing thing." The painting held the world record for the highest price paid for a painting by a living artist when it was sold by Guy Naggar for US$33.6 million (£17.2 million) at Christie's in New York City in May 2008 to Roman Abramovich. Freud's painting The Brigadier was sold for £35.8 million ($56.2 million) in 2015, four years after his death, replacing Benefits Supervisor Sleeping as the most expensive Freud painting sold at auction. The painting was exhibited twice at Flowers Gallery: 1996: Naked – Flowers East at London Fields 1997: British Figurative Art – Part 1: Painting at Flowers East References External links Lot Details on Christies.com Paintings by Lucian Freud Nude art 1995 paintings Portraits of women Oil on canvas paintings Sleep
Benefits Supervisor Sleeping
[ "Biology" ]
304
[ "Behavior", "Sleep" ]
17,430,761
https://en.wikipedia.org/wiki/Debt%20ratio
The debt ratio or debt-to-assets ratio is a financial ratio which indicates the percentage of a company's assets that are funded by debt. It is measured as the ratio of total debt to total assets, which is also equal to the ratio of total liabilities to total assets: Debt ratio = Total debt / Total assets = Total liabilities / Total assets. Financial analysts and financial managers use the ratio in assessing the financial position of the firm. Companies with high debt-to-asset ratios are said to be highly leveraged, and are associated with greater risk. A high debt-to-asset ratio may also indicate a low borrowing capacity, which in turn will limit the firm's financial flexibility. See also Equity ratio Debt-to-income ratio, for households Debt-to-GDP ratio, for governments Hamada's equation References Corporate Finance: European Edition, by D. Hillier, S. Ross, R. Westerfield, J. Jaffe, and B. Jordan. McGraw-Hill, 1st Edition, 2010. Financial ratios
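As a worked illustration of the formula above (a minimal sketch; the balance-sheet figures are invented for the example, not taken from any real company):

```python
# Worked example of the debt ratio defined above.
# The figures are hypothetical, chosen only to illustrate the arithmetic.
total_debt = 8_000_000      # total debt (= total liabilities here), in dollars
total_assets = 20_000_000   # total assets, in dollars

debt_ratio = total_debt / total_assets
print(f"Debt ratio: {debt_ratio:.2f} ({debt_ratio:.0%} of assets funded by debt)")
# Debt ratio: 0.40 (40% of assets funded by debt)
```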
Debt ratio
[ "Mathematics" ]
194
[ "Financial ratios", "Quantity", "Metrics" ]
17,430,950
https://en.wikipedia.org/wiki/Dynamic%20device%20mapping
Dynamic device mapping is a technology for USB KVM switches which is sometimes implemented as an alternative to standard USB keyboard and mouse emulation. Design With DDM (Dynamic Device Mapping) technology, communication between the shared peripherals and all connected systems is maintained 100% of the time, even as a user switches between the KVM ports. This makes generic device emulation unnecessary, as DDM allows each connected computer system to believe that all connected I/O devices remain connected even as the KVM switch moves to another port. KVM device emulation Many USB KVM devices provide peripheral emulation, sending signals to the computers that are not currently selected to simulate a keyboard, mouse and monitor being connected. The emulation is used to avoid problems with machines which may reboot in unattended operation. Peripheral emulation services embedded in the hardware also provide continuous support where computers require constant communication with the peripherals. In addition, some types of computer systems do not treat USB devices as hot-pluggable, which means the keyboard and mouse will not be re-detected when switching back to a particular KVM port. For these types of systems, it is necessary to implement device emulation. Standard device emulation has its limitations. When emulating a USB keyboard, mouse, and monitor, it is impossible for most KVMs to simulate specific types of I/O devices. As a result, KVM switches will sometimes exhibit inconsistent performance, and even unresolved compatibility issues, with the shared keyboard, mouse, and other devices. The intent of Dynamic Device Mapping is to resolve the issues that standard device emulation sometimes faces. Applications for USB DDM include sharing a touchscreen monitor among connected systems; integrated multi-vendor self-service kiosk systems; and secured user login by sharing a USB smart card or biometrics reader (e.g., a fingerprint scanner). See also KVM switch Display data channel Display Control Channel Reverse DDM References USB and USB Device Details U.S. Patent Information External links Computer peripherals Input/output Out-of-band management
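The contrast between generic emulation and DDM can be sketched as a toy model; every name below is hypothetical and illustrative, not a real KVM firmware API. The point is that under emulation an unselected port sees a generic stand-in device, while under DDM every port keeps seeing the real device's identity, so no re-enumeration occurs on a switch.

```python
# Toy model contrasting generic emulation with dynamic device mapping (DDM).
# All names here are hypothetical illustrations, not a real KVM firmware API.

class Keyboard:
    def __init__(self, vendor_id, product_id):
        self.vendor_id, self.product_id = vendor_id, product_id

GENERIC = Keyboard(0x0000, 0x0000)   # placeholder device used by emulation
REAL = Keyboard(0x046D, 0xC31C)      # the physical keyboard on the console

def visible_device(port, selected_port, ddm_enabled):
    """What the computer on `port` sees while `selected_port` is active."""
    if ddm_enabled or port == selected_port:
        return REAL      # DDM: every port keeps the real device's identity
    return GENERIC       # emulation: unselected ports get a generic stand-in

# With emulation, port 2 sees a generic keyboard while port 1 is selected;
# with DDM it continues to see the real device, so no re-enumeration occurs.
print(hex(visible_device(port=2, selected_port=1, ddm_enabled=False).product_id))  # 0x0
print(hex(visible_device(port=2, selected_port=1, ddm_enabled=True).product_id))   # 0xc31c
```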
Dynamic device mapping
[ "Technology" ]
417
[ "Computer peripherals", "Components" ]
17,433,076
https://en.wikipedia.org/wiki/Ken%20Caldeira
Kenneth Caldeira (born 1960) is an American atmospheric scientist. His areas of research include ocean acidification, climate effects of trees, intentional climate modification, interactions in the global carbon cycle/climate system, and sustainable energy. As of 2021, Caldeira is Senior Scientist in the energy research company Breakthrough Energy, Senior Staff Scientist (emeritus) in the Carnegie Institution for Science's Department of Global Ecology, and Professor (by courtesy) in the Stanford University Department of Earth System Science. Early life and education In the 1980s, Caldeira worked as a software developer. He received his Ph.D. in Atmospheric Sciences in 1991 from the New York University Department of Applied Science. From 1991 to 1993, Caldeira worked at Penn State University as a post-doctoral researcher. He then worked as an Environmental Scientist and Physicist at Lawrence Livermore National Laboratory until 2005. Climate change research In 2005, Caldeira joined the Carnegie Institution for Science Department of Global Ecology as a senior scientist, where his job is "to make important scientific discoveries." He also serves as a Professor (by courtesy) in the Stanford University Department of Earth System Science. Caldeira served as a member of the committee producing the 2015 U.S. National Academy of Sciences report Geoengineering Climate: Technical Evaluation and Discussion of Impacts. He was a contributing author to the Intergovernmental Panel on Climate Change (IPCC) AR5 report Climate Change 2013: The Physical Science Basis. He was a co-author of the 2010 US National Academy report America's Climate Choices. He participated in the UK Royal Society geoengineering panel in 2009 and ocean acidification panel in 2005. Caldeira was coordinating lead author of the oceans chapter for the 2005 IPCC report on Carbon Capture and Storage. In 2007, Caldeira began advising Bill Gates on climate change and energy issues. In his 2016 end-of-year blog post, Gates referred to Caldeira as "my amazing teacher". In 2021, Caldeira began working for the energy research company Breakthrough Energy, which was founded by Gates. Press Caldeira's work was featured in a 14 May 2012 article in The New Yorker, entitled "The Climate Fixers", and in a 20 November 2006 article in The New Yorker, entitled "The Darkening Sea." In 2007, he contributed two op-ed pieces on the subject of global warming to The New York Times. In response to the controversy caused by the book SuperFreakonomics over Caldeira's view on climate engineering, Caldeira rejected the suggestion that he had said, "Carbon dioxide is not the right villain". He responded by posting on his website, "Carbon dioxide is the right villain...insofar as inanimate objects can be villains." He said that while the other statements attributed to him by authors Steven Levitt and Stephen Dubner are "based in fact", the casual reader could come away with a misimpression of what he [Caldeira] believes. Views In 2011, Caldeira resigned as a lead author of an IPCC AR5 chapter, stating "Again, I think the IPCC has been extremely useful in the past, and I believe the IPCC could be extremely useful in the future. [...] My resignation was made possible because I believe that the chapter team that I was part of was on the right track and doing an excellent job without my contribution. Had I had a scientific criticism of my chapter team, you can be assured that I would have stayed involved. 
So, my resignation was a vote of confidence in my scientific peers, not a critique." Caldeira has argued for a policy goal of zero carbon dioxide emissions. In 2005, he said, "If you're talking about mugging little old ladies, you don't say, 'What's our target for the rate of mugging little old ladies?' You say, 'Mugging little old ladies is bad, and we're going to try to eliminate it.' You recognize you might not be a hundred per cent successful, but your goal is to eliminate the mugging of little old ladies. And I think we need to eventually come around to looking at carbon dioxide emissions the same way." In 2014, he said, "It is time to stop building things with tailpipes and smokestacks. It is time to stop using the sky as a waste dump for our carbon dioxide pollution." In 2013, with other leading experts, he co-authored an open letter to policy makers, which stated that "continued opposition to nuclear power threatens humanity's ability to avoid dangerous climate change." Awards and recognition 2008 – Hero Scientist of 2008 list, New Scientist magazine 2009 – Number 36 out of 100 Agents of Change in Rolling Stone magazine 2010 – Fellow of the American Geophysical Union References External links Living people American climatologists Intergovernmental Panel on Climate Change contributing authors New York University alumni Lawrence Livermore National Laboratory staff 1960 births Climate change mitigation researchers Stanford University School of Earth Sciences faculty
Ken Caldeira
[ "Engineering" ]
1,043
[ "Geoengineering", "Climate change mitigation researchers" ]
17,434,355
https://en.wikipedia.org/wiki/Arthur%20W%20Graham%20III
Arthur "Art" W. Graham III (Nov 20, 1940 - May 12, 2008) was the Director of Timing & Scoring for the Indianapolis 500 from 1978-1998 A native of Columbus, IN, but a longtime resident of Cincinnati, OH and then Brownsburg, IN. Graham designed and implemented the first fully automated electronic race timing and scoring system and introduced many of the timing-and-scoring innovations now used in American and International open-wheel racing. Graham was also a Computer Engineer for IBM for 30 years from 1962-1992, overseeing the PC Divisions unprecedented growth in home computers. His dual roles with IBM and Indy, birthed a partnership with "Big Blue" and USAC that enabled innovations not seen in other Motorsports. Indy Racing League A lifelong racing enthusiast who recalled watching the first live television coverage of the "500" in 1949 on a tiny screen through an appliance store window, Graham first became involved with the United States Auto Club in 1965 while living in Cincinnati. It wasn't long before he was serving on USAC's various competition commissions, eventually becoming Chairman of the Rules Committee. In 1982 he was named to USAC's Board of Directors, remaining there until 1997 as the Director of Corporate Development. Computers were being used at Indianapolis when Graham first came onto the scene, but he revolutionized their use into timing & scoring procedures. He designed and installed the first automated system that tracked and communicated drivers position and speed in Real-time. It simultaneously displayed race leaders and laps on the position board. Utilizing proprietary in-track antenna loops and on-car position transponders, the information was automatically fed to live TV broadcasts allowing home viewers to follow the race and position of their favorite drivers. For many years prior, it was traditional for an all-night audit of individual manual scoring sheets and DOS-based computers to verify race results, with the results not being officially posted until the following day. By the late 1980s, under Graham's leadership, they would be posted within an hour of the race finish. Graham has been recognized as the "Father of Autosport Timing Technology". In the early 1990s, Graham began championing the cause of the National Midget Auto Racing Hall of Fame, and later served for several years as the organization's Secretary. Interests A great lover of big-band music, Graham was the Indiana representative of the Four Freshmen Society, and he had put in a considerable amount of effort toward the planning of a 60th anniversary celebration of the group's formation, to be held in Indianapolis in August, 2008. Family Graham is a member of Sigma Alpha Epsilon fraternity, Indiana Beta '62. His family includes wife Dina, daughter Susan L. Moore, sons Daniel A. and Matthew S. Graham, brother Andrew S. Graham, mother Martha S. Graham, and four grandchildren, Sydney, Reagan, Taylor and Kyle. References External links United States Auto Club Indianapolis Motor Speedway Indianapolis 500 National Midget Auto Racing Hall of Fame Indy Racing League 1940 births 2008 deaths Auto racing executives Indianapolis 500 Systems engineers fi:USAC
Arthur W Graham III
[ "Engineering" ]
619
[ "Systems engineers", "Systems engineering" ]
17,435,122
https://en.wikipedia.org/wiki/Feryal%20%C3%96zel
Feryal Özel (born May 27, 1975) is a Turkish-American astrophysicist born in Istanbul, Turkey, specializing in the physics of compact objects and high-energy astrophysical phenomena. As of 2022, Özel is the department chair and a professor at the Georgia Institute of Technology School of Physics in Atlanta. She was previously a professor at the University of Arizona in Tucson, in the Astronomy Department and Steward Observatory. Özel graduated summa cum laude from Columbia University's Fu Foundation School of Engineering and Applied Science and received her PhD at Harvard University with Ramesh Narayan as her thesis advisor. She was a Hubble Fellow and member at the Institute for Advanced Study in Princeton, New Jersey. She was a Fellow at the Harvard-Radcliffe Institute and a visiting professor at the Miller Institute at UC Berkeley. Özel is widely recognized for her contributions to the field of neutron stars, black holes, and magnetars. She is the modeling lead and a member of the Event Horizon Telescope (EHT) collaboration that released the first image of a black hole. Özel received the Maria Goeppert Mayer Award from the American Physical Society in 2013 for her outstanding contributions to neutron star astrophysics. Özel has appeared on numerous TV documentaries, including Big Ideas on PBS and the Universe series on the History Channel. Along with Alexey Vikhlinin, Özel is the Science and Technology Definition Team Community Co-chair for the Lynx X-ray Observatory NASA Large Mission Concept Study. Education The following list summarizes Özel's educational path: 1992 - Üsküdar American Academy, İstanbul, Turkey 1996 - BSc in Physics and Applied Mathematics, Columbia University, New York City 1997 - MSc in Physics, Niels Bohr Institute, Copenhagen 2002 - PhD in Astrophysics, Harvard University, Cambridge, USA Honors and awards Breakthrough Prize, 2020 Chair, Astrophysics Advisory Committee (APAC), NASA, 2019 Fellowship, John Simon Guggenheim Memorial Foundation, 2016 Visiting Miller Professorship, University of California Berkeley, 2014 Maria Goeppert Mayer Award, American Physical Society, 2013 Fellowship, Radcliffe Institute for Advanced Studies, 2012-2013 Bart J. 
Bok Prize, Harvard University, 2010 Lucas Award, San Diego Astronomy Association, 2010 Visiting Scholar Fellowship, Turkish Scientific and Technical Research Foundation, 2007 Hubble Postdoctoral Fellowship, 2002–2005 Distinguished Scholar Award, Daughters of Atatürk Foundation, 2003 Keck Fellowship, Institute for Advanced Study, 2002 Van Vleck Fellowship, Harvard University, 1999 Kostrup Prize, Niels Bohr Institute, 1997 Niels Bohr Institute Graduate Fellowship, 1996–1997 Applied Mathematics Faculty Award, Columbia University, 1996 Fu Foundation Scholarship, Columbia University, 1994–1996 Research Fellowship, CERN, 1995 Turkish Health and Education Foundation Scholarship, 1992-1994 References External links "Big Ideas" Website (Resume) Personal webpage at the University of Arizona Nature Magazine online service List of published articles according to IOP Publishing List of published articles according to NASA/ADS Georgia Tech faculty University of Arizona faculty American women astronomers Columbia School of Engineering and Applied Science alumni Harvard University alumni Living people 1975 births Turkish women academics Academics from Istanbul People associated with CERN American astrophysicists American academics of Turkish descent Harvard–Smithsonian Center for Astrophysics people Turkish astronomers Black holes Hubble Fellows Aspen Center for Physics people Fellows of the American Physical Society
Feryal Özel
[ "Physics", "Astronomy" ]
677
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
17,436,354
https://en.wikipedia.org/wiki/Entoloma%20hochstetteri
Entoloma hochstetteri, also known as the blue pinkgill, sky-blue mushroom or similar names, is a species of mushroom that is endemic to New Zealand. The small mushroom is a distinctive all-blue colour, while the gills have a slight reddish tint from the spores. The blue colouring of the fruit body is due to azulene pigments. Whether Entoloma hochstetteri is poisonous or not is unknown. This species was one of six native fungi featured in a set of fungal stamps issued in New Zealand in 2002. It is also featured on the New Zealand fifty-dollar note, making that note the only banknote in the world to feature a mushroom. In a 2018 poll run by Manaaki Whenua – Landcare Research, E. hochstetteri was ranked first as the pick for New Zealand's national fungus. Naming The Māori name for the mushroom is werewere-kōkako, because its colour is similar to the blue wattle of the kōkako bird. Taxonomy The species was first described as Cortinarius hochstetteri in 1866 by the Austrian mycologist Erwin Reichardt, before being given its current binomial in 1962 by Greta Stevenson. It is named after the German-Austrian naturalist Ferdinand von Hochstetter. In 1976 Egon Horak combined Entoloma hochstetteri and Entoloma aeruginosum from Japan with Entoloma virescens, first described from the Bonin Islands in Japan. In 1989 S. Dhancholia recorded E. hochstetteri in India. In 1990 Tsuguo Hongo from Japan examined E. hochstetteri and E. aeruginosum and concluded that they were different taxa, because of differences in the size of the spores and the shape of the pseudocystidia. Hongo, Tsuguo (1990). "New and noteworthy agarics from New Zealand". Reports of the Tottori Mycological Institute. 28: 129–34. In 2008 Horak recognized E. hochstetteri as a different species from E. virescens, while noting that "it is open to speculation" whether taxa such as E. virescens are the same species. A similar mushroom is found in Australia, and mycologists differ as to whether it is E. hochstetteri, E. virescens or a separate species. Description Entoloma hochstetteri has a small delicate epigeous (above-ground) fruit body (basidiocarp). The cap may be up to 4 cm (1.6 in) in diameter and conical in shape. The cap colour is indigo-blue with a green tint, and is fibrillose. The cap margin is striate and rolled inwards. The gill attachment is adnexed or emarginate; gills are thin, 3–5 mm wide, and essentially the same colour as the cap, sometimes with a yellow tint. The cylindrical stipe (stalk) is up to 5 cm (2 in) long by 0.5 cm thick, fibrillose and stuffed. The spore print is reddish-pink. The spores are 9.9–13.2 by 11.8–13.2 μm, tetrahedric in shape, hyaline, smooth and thin-walled. The basidia are 35.2–44.2 by 8.8–13.2 μm, club-shaped, hyaline, and with two or four sterigmata. Mythology Ngāi Tūhoe tradition holds that the kōkako bird (Callaeas wilsoni) got its blue wattles by rubbing its cheek against the mushroom, thus giving the mushroom the name werewere-kōkako. Habitat and distribution Entoloma hochstetteri is common in forests throughout New Zealand, where it grows on soil among litter in broadleaf/podocarp forest. It fruits from January to July. It was also reported from India in 1989 and from Australia, though it is unclear whether these are the same species or whether E. hochstetteri is endemic to New Zealand. Attempts at laboratory cultivation of Entoloma hochstetteri have so far been unsuccessful. Toxicity Although many members of the genus Entoloma are poisonous, the toxicity of this species is unknown. 
Researchers are investigating whether the gene cluster responsible for its blue colouring could be used to manufacture a natural blue food dye. See also List of Entoloma species References External links More information from the Landcare Research NZFUNGI database Entoloma hochstetteri discussed on RNZ Critter of the Week, 2 December 2016 Entolomataceae Fungi of New Zealand Fungi of India Fungus species
Entoloma hochstetteri
[ "Biology" ]
1,023
[ "Fungi", "Fungus species" ]
17,437,456
https://en.wikipedia.org/wiki/Construction%20law
Construction law is a branch of law that deals with matters relating to building construction, engineering, and related fields. It is in essence an amalgam of contract law, commercial law, planning law, employment law and tort. Construction law covers a wide range of legal issues including contract, negligence, bonds and bonding, guarantees and sureties, liens and other security interests, tendering, construction claims, and related consultancy contracts. Construction law affects many participants in the construction industry, including financial institutions, surveyors, quantity surveyors, architects, carpenters, engineers, construction workers, and planners. Specific practice areas Construction law builds upon general legal principles and methodologies and incorporates the regulatory framework (including security of payment, planning, environmental and building regulations); contract methodologies and selection (including traditional and alternative forms of contracting); subcontract issues; causes of action and liability arising in contract, negligence and on other grounds; insurance and performance security; dispute resolution and avoidance. Construction law has evolved into a practice discipline in its own right, distinct from its traditional locations as a subpractice of project finance, real estate or corporate law. There are often strong links between construction law and energy law and oil and gas law. Some of the major areas a construction lawyer covers are: Alternative Dispute Resolution Arbitration Dispute review boards (or other third party reviews) Mediation Structured negotiations Bankruptcy issues for contractors, owners, suppliers, etc. Bidding (tendering) disputes Building and other permits Building information modeling Contract law Change Orders (Variations) Construction claims Construction liens Wage requirements (Davis-Bacon Act of 1931, etc.) Payment and Prompt payments acts Extensions of time Drafting construction contracts Industry-standard construction contracts Negotiating construction contracts Negotiating a termination claim, whether for convenience or for default Defective design or construction Delays and acceleration Employment Law including Immigration Environmental matters in construction False Claims Act(s) Fire codes and regulations Fulfilling regulations for non-discrimination or other social impact legislation Insurance issues Damage, liability Indemnification Surety Law (Payment and Performance Bonds) Labor issues and strikes Licensing construction professionals OSHA and other federal agencies Overinspection Project delivery systems, such as design-bid-build, Design-Build, Construction Manager (CM) at Risk or Agency CM Provide defense to businesses facing administrative actions such as delisting (loss of bid listing) Provide legal counsel Public construction Federal construction under FAR or other regulated procurements State contracting procedures State and local building codes Sustainable construction, e.g. LEED Litigation: trying construction cases in court Violations, safety or other regulatory Construction contracts Although no special contract formalities are required, it is normal practice to use standard-form contracts such as, in the UK, the Joint Contracts Tribunal (JCT) form. In order to expedite dispute resolution, standard forms have often provided for arbitration by a "board of arbitration" or professional arbitrator, although many now offer a choice between arbitration and litigation. 
Construction law has been affected by the requirements in public contracts, which include surety bonds and other procedures. In private contracts, the requirements are negotiated between the parties. As of 1998, the principles of construction law were "well established". Remedies for breach of contract are the same as in the ordinary law, and include damages, repudiation, rescission, and specific performance. Country-specific contract practice Australia The standard form construction contracts used in Australia include the Australian Building Industry Contracts (ABIC), the Standards Australia contracts, the Australian Defence Contracting Suite of Tendering and Contracting (AUSDEFCON) and the GC21 government contracts form. Canada In Canada, the law requires that money for work done be held in trust. South Africa The standard form construction contracts in use in South Africa include FIDIC, the New Engineering Contract (NEC), the General Conditions of Contract for Construction Works (GCC) and Joint Building Contracts Committee (JBCC) agreements. United Kingdom The JCT produces the most popular types of standard construction contract; the latest suite of contracts from the JCT is the 2016 edition. The form of contract most favoured by public bodies is the NEC contract suite. In the UK, specific requirements relating to payments and adjudication provisions were introduced by the Housing Grants, Construction and Regeneration Act 1996, and were subsequently amended in Part 8 of the Local Democracy, Economic Development and Construction Act 2009. These requirements are generally known as the Construction Act requirements. The requirements set out certain minimum provisions which must be included in any construction contract (as defined within the Act), and failure to comply with these requirements will cause the relevant provisions to be deleted and compliant provisions to be inserted in their place, which can lead to unexpected consequences for unsuspecting parties to a construction contract. Although some see construction law as another form of general contract law, it is a very specialised area, and most people requiring advice on construction law in the UK would seek advice from construction law specialists. United States Standard form contracts promulgated by the American Institute of Architects have been the standard in the industry (insofar as building construction); the organization first published a form in 1888, and has over 200 forms, with revisions to selected forms happening typically every ten years. However, these forms have been criticized as unfair to contractors in favor of owners and architects, which led to the publication of ConsensusDocs standard contracts in September 2007. The ConsensusDocs Coalition includes 41 trade associations representing design professionals, owners, contractors, subcontractors and sureties in the design and construction industry. ConsensusDocs publishes more than 100 contract documents, addressing all methods of project delivery, and these are written in the project's best interest rather than that of one particular party. Engineering-led projects such as horizontal infrastructure use other standard form contracts such as those developed by the Engineers Joint Contract Documents Committee (EJCDC). Recently, several other organizations have developed contracts, such as those from the CMAA (for projects using agency construction management) and the Design-Build Institute of America (for projects using design-build). 
Deviation When a plan has been adopted for a building, and in the progress of the work a change is made from the original plan, the change is called a "deviation". When the contract is to build a house according to the original plan, and a deviation takes place, the contract shall be traced as far as possible, and the additions, if any have been made, shall be paid for according to the usual rate of charging. Construction law organizations United States The Forum on Construction Law of the American Bar Association, established in 1973, is the largest organization of construction lawyers in the United States. The group includes law firms of every size, solo practitioners, in-house and government counsel, non-lawyers such as construction professionals, and public-sector representatives. Forum members include representatives of owners, developers, design professionals, contractors, subcontractors, suppliers, construction managers, lenders, insurers and sureties. United Kingdom and other In the United Kingdom, there has been an active Society of Construction Law since 1983, and there is now a European Society of Construction Law, as well as Societies of Construction Law in Australia, Hong Kong, Singapore, and the UAE. See also Mechanic's lien Construction management Planning permission References External links American construction news and resources website
Construction law
[ "Engineering" ]
1,477
[ "Construction", "Construction law" ]
17,437,880
https://en.wikipedia.org/wiki/Dr.%20Paul%20Janssen%20Award%20for%20Biomedical%20Research
The Dr. Paul Janssen Award for Biomedical Research is given annually by Johnson & Johnson to honor the work of an active scientist in academia, industry or a scientific institute in the field of biomedical research. It was established in 2004 and perpetuates the memory of Paul Janssen, the founder of Janssen Pharmaceutica, a Johnson & Johnson subsidiary. The Award The Dr. Paul Janssen Award includes a $200,000 prize and acknowledges the work of an individual who has made a significant, transformational contribution toward the improvement of human health. Johnson & Johnson created the award in 2004 with the following goals: To honor the memory of Janssen, his dedication to excellence and his leadership of young scientists To promote, recognize and reward passion and creativity in biomedical research To underscore Johnson & Johnson's commitment to scientific excellence in the advancement of healthcare knowledge, while fulfilling its responsibility in the community Paul Adriaan Jan Janssen (1926–2003) Known to his colleagues as “Dr. Paul,” Janssen was the founder of Janssen Pharmaceutica, N.V., a pharmaceutical research laboratory based in Beerse, Belgium, and a physician-scientist who helped save millions of lives through his contribution to the discovery and development of more than 80 medicines. His work was responsible for many breakthroughs in several fields of disease, including pain management, psychiatry, infectious disease and gastroenterology. In addition, he held more than 100 patents. Recipients Source: Janssen 2006: Craig C. Mello, a professor of Molecular Medicine at the University of Massachusetts Medical School, Worcester, MA, and an investigator at the Howard Hughes Medical Institute, for his role in the discovery of RNA interference (RNAi) and the elucidation of its biological functions 2008: Professor Marc Feldmann and Emeritus Professor Sir Ravinder N. Maini of The Kennedy Institute of Rheumatology, Imperial College London, for their role in the discovery of tumor necrosis factor-alpha, or TNF-alpha, as an effective therapeutic target for rheumatoid arthritis and other autoimmune diseases. 2009: Axel Ullrich, director of the Department of Molecular Biology, Max Planck Institute of Biochemistry in Germany, for his pioneering work in applying molecular biology and molecular cloning to the discovery of protein therapeutics for the treatment of a wide range of diseases, including diabetes and cancer. 2010: Anthony S. Fauci, Director of the National Institute of Allergy and Infectious Diseases (NIAID) and Erik De Clercq, Professor Emeritus, Rega Institute for Medical Research. Dr. Fauci received the award for his pioneering contributions to basic and clinical research in the areas of AIDS and other immunodeficiencies, both as a scientist and through his service as the Director of the NIAID. Dr. De Clercq was recognized for his landmark discoveries in anti-HIV medications, including nucleotide analogues, and inventions or co-inventions of several approved drugs for anti-viral therapy. 2011: Napoleone Ferrara, Genentech Fellow, for his research on angiogenesis, the process of new blood vessel formation that plays a key role in cancer proliferation and a number of other diseases. Dr. Ferrara’s discoveries opened the door to the development of a new class of therapeutics to combat a serious eye disorder and contributed to the development of new oncology therapeutics. 
2012: Victor Ambros, of the University of Massachusetts Medical School, and Gary Ruvkun of Massachusetts General Hospital and Harvard Medical School, for their collaborative discovery of microRNAs (miRNAs) as central regulators of gene expression and development. 2013: David Julius, chair of the Department of Physiology at the University of California, San Francisco for his discovery of the molecular mechanism that controls thermosensation. 2014: Emmanuelle Charpentier, Professor at the Hannover Medical School and Helmholtz Centre for Infection Research (HZI), Germany and The Laboratory for Molecular Infection Medicine Sweden (MIMS), Umeå University, Sweden and Jennifer Doudna, a Howard Hughes Medical Institute Investigator and Li Ka Shing Professor of Biochemistry, Biophysics and Structural Biology, University of California, Berkeley, for their work on a new method for precise and facile genomic editing. 2015: Bert Vogelstein, Johns Hopkins University, Johns Hopkins Kimmel Cancer Center and the Howard Hughes Medical Institute, for his breakthroughs in oncology research. 2016: Yoshinori Ohsumi, Professor, Frontier Research Center, Tokyo Institute of Technology, Yokohama, Japan, for his pioneering discoveries in the field of autophagy. 2017: Douglas C. Wallace, Founder and Director, Center for Mitochondrial and Epigenomic Medicine, Children’s Hospital of Philadelphia; Professor of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, for pioneering the field of human mitochondrial genetics and its application to the study of disease, aging, and patterns of human migration. 2018: James P. Allison, Professor and Chair, Department of Immunology, University of Texas MD Anderson Cancer Center, for pioneering a novel and effective strategy to harness the immune system for treating solid tumor cancers. 2019: Franz-Ulrich Hartl, Director, Max Planck Institute of Biochemistry and Arthur Horwich, Sterling Professor of Genetics and Professor of Pediatrics, Yale School of Medicine and Investigator, Howard Hughes Medical Institute, for their revolutionary insights into chaperone-mediated protein folding. See also List of medicine awards List of prizes named after people References External links Dr. Paul Janssen Award for Biomedical Research Medicine awards Johnson & Johnson American science and technology awards Awards established in 2004
Dr. Paul Janssen Award for Biomedical Research
[ "Technology" ]
1,158
[ "Science and technology awards", "Medicine awards" ]
17,439,110
https://en.wikipedia.org/wiki/Positive%20organizational%20behavior
Positive organizational behavior (POB) is defined as "the study and application of positively oriented human resource strengths and psychological capacities that can be measured, developed, and effectively managed for performance improvement in today's workplace" (Luthans, 2002a, p. 59). For a positive psychological capacity to qualify for inclusion in POB, it must be positive and must have extensive theory and research foundations and valid measures. In addition, it must be state-like, which would make it open to development and manageable for performance improvement. Finally, positive states that meet the POB definitional criteria are primarily researched, measured, developed, and managed at the individual, micro level. The state-like criterion distinguishes POB from other positive approaches that focus on positive traits, whereas its emphasis on micro, individual-level constructs separates it from positive perspectives that address positive organizations and their related macro-level variables and measures. Meeting the inclusion criteria for POB are the state-like psychological resource capacities of self-efficacy, hope, optimism, and resiliency and, when combined, the underlying higher-order core construct of positive psychological capital, or PsyCap. General overview POB is the application of positive psychology to the workplace. Its focus is on strengths and on building the best in the workplace, under the basic assumption that goodness and excellence can be analyzed and achieved. Origins Although POB research is relatively new, its core ideas are based on those of earlier scholars. POB's origins lie in the positive psychology movement, initiated in 1998 by Martin Seligman and colleagues. Positive psychology aims to shift the focus in psychology from dysfunctional mental illness to mental health, calling for an increased focus on the building of human strength. The levels of analysis of positive psychology have been summarized as the subjective level (i.e., positive subjective experience such as well-being and contentment with the past, flow and happiness in the present, and hope and optimism for the future); the micro, individual level (i.e., positive traits such as the capacity for love, courage, aesthetic sensibility, perseverance, forgiveness, spirituality, high talent, and wisdom); and the macro, group and institutional level (i.e., positive civic virtues and the institutions that move individuals toward better citizenship, such as responsibility, altruism, civility, moderation, tolerance, and a strong work ethic). By integrating positive psychology into organizational settings, Fred Luthans pioneered positive organizational behavior research in 1999. Since then, Luthans and colleagues have been attempting to find ways of designing work settings that emphasize people's strengths, where they can be both their best selves and at their best with each other. Thus far, research has shown that employees who are satisfied and find fulfillment in their work are more productive, absent less, and demonstrate greater organizational loyalty. Despite initial studies and conceptualizations, the field of POB is still in its infancy. Further research regarding the precise antecedents, processes, and consequences of positive psychological behavior is needed. The challenge currently awaiting POB is to bring about a more profound understanding of the real impact of positive states on organizational functioning and of how these states can be enhanced within the workplace. 
See also Positive organizational scholarship Positive psychological capital Positive psychology References External links Fred Luthans, profile at the University of Nebraska-Lincoln Institute of Applied Positive Psychology (IAPPI) - A not-for-profit, research-based educational institution dedicated to advancing the use of positive psychology in organizations. Industrial and organizational psychology Organizational behavior Positive psychology
Positive organizational behavior
[ "Biology" ]
725
[ "Behavior", "Organizational behavior", "Human behavior" ]
53,885
https://en.wikipedia.org/wiki/EuroWordNet
EuroWordNet is a system of semantic networks for European languages, based on WordNet. Each language develops its own wordnet, but they are interconnected with interlingual links stored in the Interlingual Index (ILI). Unlike the original Princeton WordNet, most of the other wordnets are not freely available. Languages The original EuroWordNet project dealt with Dutch, Italian, Spanish, German, French, Czech, and Estonian. These wordnets are now frozen, but wordnets for other languages have been developed to varying degrees. License Some examples of EuroWordNet are available for free; access to the full database, however, requires payment. In some cases, OpenThesaurus and BabelNet may serve as a free alternative. See also vidby Babbel External links Lexical databases Computational linguistics Online dictionaries
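Because most EuroWordNet wordnets are not freely available, the interlingual idea can instead be illustrated with NLTK's Open Multilingual Wordnet, a freely accessible member of the wordnet family. This sketch shows linking across languages through shared Princeton WordNet synset identifiers; it does not access EuroWordNet's own ILI.

```python
# Interlingual linking sketch using NLTK's Open Multilingual Wordnet (OMW),
# a free relative of EuroWordNet; this does not query EuroWordNet itself.
import nltk
nltk.download("wordnet")
nltk.download("omw-1.4")
from nltk.corpus import wordnet as wn

synset = wn.synsets("dog", pos=wn.NOUN)[0]    # a Princeton WordNet synset
print(synset.definition())
print("Italian:", synset.lemma_names("ita"))  # lemmas linked via the shared synset id
print("Spanish:", synset.lemma_names("spa"))
```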
EuroWordNet
[ "Technology" ]
171
[ "Natural language and computing", "Computational linguistics" ]
53,887
https://en.wikipedia.org/wiki/Text%20corpus
In linguistics and natural language processing, a corpus (plural: corpora) or text corpus is a dataset consisting of natively digital and older, digitized language resources, either annotated or unannotated. Annotated, they have been used in corpus linguistics for statistical hypothesis testing, checking occurrences or validating linguistic rules within a specific language territory. Overview A corpus may contain texts in a single language (monolingual corpus) or text data in multiple languages (multilingual corpus). In order to make the corpora more useful for linguistic research, they are often subjected to a process known as annotation. An example of annotating a corpus is part-of-speech tagging, or POS-tagging, in which information about each word's part of speech (verb, noun, adjective, etc.) is added to the corpus in the form of tags. Another example is indicating the lemma (base) form of each word. When the language of the corpus is not a working language of the researchers who use it, interlinear glossing is used to make the annotation bilingual. Some corpora have further structured levels of analysis applied. In particular, smaller corpora may be fully parsed. Such corpora are usually called treebanks or parsed corpora. The difficulty of ensuring that the entire corpus is completely and consistently annotated means that these corpora are usually smaller, containing around one to three million words. Other levels of linguistic structured analysis are possible, including annotations for morphology, semantics and pragmatics. Applications Corpora are the main knowledge base in corpus linguistics. Other notable areas of application include: Language technology, natural language processing, computational linguistics The analysis and processing of various types of corpora are also the subject of much work in computational linguistics, speech recognition and machine translation, where they are often used to create hidden Markov models for part-of-speech tagging and other purposes. Corpora and frequency lists derived from them are useful for language teaching. Corpora can be considered a type of foreign-language writing aid, as the contextualised grammatical knowledge acquired by non-native language users through exposure to authentic texts in corpora allows learners to grasp the manner of sentence formation in the target language, enabling effective writing. Machine translation Multilingual corpora that have been specially formatted for side-by-side comparison are called aligned parallel corpora. There are two main types of parallel corpora which contain texts in two languages. In a translation corpus, the texts in one language are translations of texts in the other language. In a comparable corpus, the texts are of the same kind and cover the same content, but they are not translations of each other. To exploit a parallel text, some kind of text alignment identifying equivalent text segments (phrases or sentences) is a prerequisite for analysis. Machine translation algorithms for translating between two languages are often trained using parallel fragments comprising a first-language corpus and a second-language corpus, which is an element-for-element translation of the first-language corpus. Philologies Text corpora are also used in the study of historical documents, for example in attempts to decipher ancient scripts, or in Biblical scholarship. Some archaeological corpora can be of such short duration that they provide a snapshot in time. 
One of the shortest corpora in time may be the 15–30 year Amarna letters texts (1350 BC). The corpus of an ancient city (for example the "Kültepe Texts" of Turkey) may go through a series of corpora, determined by their find-site dates. Some notable text corpora See also Concordance Corpus linguistics Culturomics Distributional–relational database Linguistic Data Consortium Natural language processing Natural Language Toolkit Parallel text Speech corpus Translation memory Treebank Zipf's law References External links ACL SIGLEX Resource Links: Text Corpora Developing Linguistic Corpora: a Guide to Good Practice Free samples of (not free) web-based corpora (45–425 million words each): American (COCA, COHA, TIME), British (BNC), Spanish, Portuguese Intercorp Building synchronous parallel corpora of the languages taught at the Faculty of Arts of Charles University. Sketch Engine: Open corpora with free access TS Corpus – A Turkish Corpus freely available for academic research. Turkish National Corpus – A general-purpose corpus for contemporary Turkish Corpus of Political Speeches, Free access to political speeches by American and Chinese politicians, developed by Hong Kong Baptist University Library Russian National Corpus Discourse analysis Corpus linguistics Computational linguistics Works based on multiple works Test items
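As a concrete illustration of the annotation described in the Overview above, here is a minimal part-of-speech tagging sketch using NLTK (one of several toolkits that could serve; the sentence is an arbitrary example):

```python
# Minimal POS-tagging sketch with NLTK, illustrating the kind of annotation
# (part-of-speech tagging) a corpus might carry.
import nltk
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

tokens = nltk.word_tokenize("The quick brown fox jumps over the lazy dog.")
print(nltk.pos_tag(tokens))
# [('The', 'DT'), ('quick', 'JJ'), ('brown', 'JJ'), ('fox', 'NN'), ...]
```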
Text corpus
[ "Technology" ]
953
[ "Natural language and computing", "Computational linguistics" ]
53,916
https://en.wikipedia.org/wiki/Herbicide
Herbicides, also commonly known as weed killers, are substances used to control undesired plants, also known as weeds. Selective herbicides control specific weed species while leaving the desired crop relatively unharmed, while non-selective herbicides (sometimes called "total weed killers") kill plants indiscriminately. The combined effects of herbicides, nitrogen fertilizer, and improved cultivars have increased yields (per acre) of major crops by three to six times from 1900 to 2000. In the United States in 2012, about 91% of all herbicide usage, determined by weight applied, was in agriculture. In 2012, world pesticide expenditures totaled nearly $24.7 billion; herbicides were about 44% of those sales and constituted the biggest portion, followed by insecticides, fungicides, and fumigants. Herbicide is also used in forestry, where certain formulations have been found to suppress hardwood varieties in favor of conifers after clearcutting, as well as in pasture systems. History Prior to the widespread use of herbicides, cultural controls, such as altering soil pH, salinity, or fertility levels, were used to control weeds. Mechanical controls including tillage and flooding were also used to control weeds. In the late 19th and early 20th centuries, inorganic chemicals such as sulfuric acid, arsenic, copper salts, kerosene and sodium chlorate were used to control weeds, but these chemicals were either toxic, flammable or corrosive, and were expensive and ineffective at controlling weeds. First herbicides The major breakthroughs occurred during the Second World War as the result of research conducted independently in the United Kingdom and the United States into the potential use of herbicides in war. The compound 2,4-D was first synthesized by W. G. Templeman at Imperial Chemical Industries. In 1940, his work with indoleacetic acid and naphthaleneacetic acid indicated that "growth substances applied appropriately would kill certain broad-leaved weeds in cereals without harming the crops," though these substances were too expensive and too short-lived in soil, due to degradation by microorganisms, to be of practical agricultural use; by 1941, his team succeeded in synthesizing a wide range of chemicals to achieve the same effect at lower cost and better efficacy, including 2,4-D. In the same year, R. Pokorny in the US achieved this as well. Independently, a team under Juda Hirsch Quastel, working at the Rothamsted Experimental Station, made the same discovery. Quastel was tasked by the Agricultural Research Council (ARC) to discover methods for improving crop yield. By analyzing soil as a dynamic system, rather than an inert substance, he was able to apply techniques such as perfusion. Quastel was able to quantify the influence of various plant hormones, inhibitors, and other chemicals on the activity of microorganisms in the soil and assess their direct impact on plant growth. While the full work of the unit remained secret, certain discoveries were developed for commercial use after the war, including the 2,4-D compound. When 2,4-D was commercially released in 1946, it became the first successful selective herbicide, triggering a worldwide revolution in agricultural output. It allowed for greatly enhanced weed control in wheat, maize (corn), rice, and similar cereal grass crops, because it kills dicots (broadleaf plants), but not most monocots (grasses). The low cost of 2,4-D has led to continued usage today, and it remains one of the most commonly used herbicides in the world. 
Like other acid herbicides, current formulations use either an amine salt (often trimethylamine) or one of many esters of the parent compound. Further discoveries The triazine family of herbicides, which includes atrazine, was introduced in the 1950s; they have the current distinction of being the herbicide family of greatest concern regarding groundwater contamination. Atrazine does not break down readily (within a few weeks) after being applied to soils of above-neutral pH. Under alkaline soil conditions, atrazine may be carried into the soil profile as far as the water table by soil water following rainfall, causing the aforementioned contamination. Atrazine is thus said to have "carryover", a generally undesirable property for herbicides. Glyphosate was first prepared in the 1950s, but its herbicidal activity was not recognized until 1970. It was marketed as Roundup in 1974. With the development of glyphosate-resistant crop plants, it is now used very extensively for selective weed control in growing crops. The pairing of the herbicide with the resistant seed contributed to the consolidation of the seed and chemistry industry in the late 1990s. Many modern herbicides used in agriculture and gardening are specifically formulated to degrade within a short period after application. Terminology Herbicides can be classified or grouped in various ways: for example, according to their activity, timing of application, method of application, mechanism of action, and chemical structure. Selectivity The chemical structure of a herbicide is a primary factor affecting efficacy. 2,4-D, mecoprop, and dicamba control many broadleaf weeds but remain ineffective against turf grasses. Chemical additives influence selectivity. Surfactants alter the physical properties of the spray solution and the overall phytotoxicity of the herbicide, increasing translocation. Herbicide safeners enhance selectivity by boosting the crop's resistance to the herbicide while still allowing the herbicide to damage the weed. Selectivity is also determined by the circumstances and technique of application. Climatic factors affecting absorption include humidity, light, precipitation, and temperature. Foliage-applied herbicides will enter the leaf more readily at high humidity, which lengthens the drying time of the spray droplet and increases cuticle hydration. Light of high intensity may break down some herbicides and cause the leaf cuticle to thicken, which can interfere with absorption. Precipitation may wash away or remove some foliage-applied herbicides, but it will increase root absorption of soil-applied herbicides. Drought-stressed plants are less likely to translocate herbicides. As temperature increases, herbicides' performance may decrease. Absorption and translocation may be reduced in very cold weather. Non-selective herbicides Non-selective herbicides, generally known as defoliants, are used to clear industrial sites, waste grounds, railways, and railway embankments. Paraquat, glufosinate, and glyphosate are non-selective herbicides. Timing of application Preplant: Preplant herbicides are nonselective herbicides applied to the soil before planting. Some preplant herbicides may be mechanically incorporated into the soil. The objective of incorporation is to prevent dissipation through photodecomposition and/or volatility. The herbicides kill weeds as they grow through the herbicide-treated zone. Volatile herbicides have to be incorporated into the soil before planting the pasture. 
Crops grown in soil treated with a preplant herbicide include tomatoes, corn, soybeans, and strawberries. Soil fumigants like metam-sodium and dazomet are in use as preplant herbicides. Preemergence: Preemergence herbicides are applied before the weed seedlings emerge through the soil surface. These herbicides do not prevent weeds from germinating, but kill weeds as they grow through the herbicide-treated zone by affecting cell division in the emerging seedling. Dithiopyr and pendimethalin are preemergence herbicides. Weeds that have already emerged before application or activation are not affected by preemergence herbicides, as their primary growing point escapes the treatment. Postemergence: These herbicides are applied after weed seedlings have emerged through the soil surface. They can be foliar- or root-absorbed, selective or nonselective, and contact or systemic. Application of these herbicides is avoided during rain, since rain can wash the herbicide off the foliage and render it ineffective. 2,4-D is a selective, systemic, foliar-absorbed postemergence herbicide. Method of application Soil applied: Herbicides applied to the soil are usually taken up by the root or shoot of the emerging seedlings and are used as preplant or preemergence treatments. Several factors influence the effectiveness of soil-applied herbicides. Weeds absorb herbicides by both passive and active mechanisms. Herbicide adsorption to soil colloids or organic matter often reduces the amount available for weed absorption. Positioning of the herbicide in the correct layer of soil is very important, and can be achieved mechanically or by rainfall. Herbicides on the soil surface are subjected to several processes that reduce their availability; volatility and photolysis are two common ones. Many soil-applied herbicides are absorbed through plant shoots while they are still underground, leading to death or injury. EPTC and trifluralin are soil-applied herbicides. Foliar applied: These are applied to a portion of the plant above the ground and are absorbed by exposed tissues. They are generally postemergence herbicides and can either be translocated (systemic) throughout the plant or remain at a specific site (contact). External barriers of plants, such as cuticles, waxes, and cell walls, affect herbicide absorption and action. Glyphosate, 2,4-D, and dicamba are foliar-applied herbicides. Persistence A herbicide is described as having low residual activity if it is neutralized within a short time of application (a few weeks or months), typically through rainfall or reactions in the soil. A herbicide described as having high residual activity remains potent in the soil for the long term. For some compounds, the residual activity can leave the ground almost permanently barren. Mechanism of action Herbicides interfere with the biochemical machinery that supports plant growth. They often mimic natural plant hormones, enzyme substrates, or cofactors, interfering with the metabolism of the target plants. Herbicides are often classified according to their site of action, because as a general rule herbicides within the same site-of-action class produce similar symptoms on susceptible plants. Classification based on site of action is also preferable because herbicide resistance management can then be handled more effectively.
Classification by mechanism of action (MOA) indicates the first enzyme, protein, or biochemical step affected in the plant following application: ACCase inhibitors: Acetyl coenzyme A carboxylase (ACCase) catalyzes part of the first step of lipid synthesis. Thus, ACCase inhibitors affect cell membrane production in the meristems of the grass plant. The ACCases of grasses are sensitive to these herbicides, whereas the ACCases of dicot plants are not. ALS inhibitors: Acetolactate synthase (ALS; also known as acetohydroxyacid synthase, or AHAS) catalyzes part of the first step in the synthesis of the branched-chain amino acids (valine, leucine, and isoleucine). These herbicides slowly starve affected plants of these amino acids, which eventually leads to the inhibition of DNA synthesis. They affect grasses and dicots alike. The ALS inhibitor family includes various sulfonylureas (SUs) (such as flazasulfuron and metsulfuron-methyl), imidazolinones (IMIs), triazolopyrimidines (TPs), pyrimidinyl oxybenzoates (POBs), and sulfonylamino carbonyl triazolinones (SCTs). The ALS biological pathway exists only in plants and microorganisms (not animals), making the ALS inhibitors among the safest herbicides. EPSPS inhibitors: Enolpyruvylshikimate 3-phosphate synthase (EPSPS) is an enzyme used in the synthesis of the amino acids tryptophan, phenylalanine, and tyrosine. These herbicides affect grasses and dicots alike. Glyphosate (Roundup) is a systemic EPSPS inhibitor that is inactivated by soil contact. Auxin-like herbicides: The discovery of synthetic auxins inaugurated the era of organic herbicides. They were discovered in the 1940s after a long study of the plant growth regulator auxin, which synthetic auxins mimic. They have several points of action on the cell membrane and are effective in the control of dicot plants. 2,4-D, 2,4,5-T, and aminopyralid are examples of synthetic auxin herbicides. Photosystem II inhibitors reduce electron flow from water to NADP+ at the photochemical step in photosynthesis. They bind to the Qb site on the D1 protein and prevent quinone from binding to this site. This group of compounds therefore causes electrons to accumulate on chlorophyll molecules. As a consequence, oxidation reactions in excess of those normally tolerated by the cell occur, killing the plant. The triazine herbicides (including simazine, cyanazine, and atrazine) and urea derivatives (diuron) are photosystem II inhibitors. Other members of this class are chlorbromuron, pyrazon, isoproturon, bromacil, and terbacil. Photosystem I inhibitors divert electrons from the normal pathway through FeS to Fdx to NADP+, leading to the direct discharge of electrons onto oxygen. As a result, reactive oxygen species are produced and oxidation reactions in excess of those normally tolerated by the cell occur, leading to plant death. Bipyridinium herbicides (such as diquat and paraquat) inhibit the FeS to Fdx step of that chain, while diphenyl ether herbicides (such as nitrofen, nitrofluorfen, and acifluorfen) inhibit the Fdx to NADP+ step. HPPD inhibitors inhibit 4-hydroxyphenylpyruvate dioxygenase, an enzyme involved in tyrosine breakdown. Tyrosine breakdown products are used by plants to make carotenoids, which protect chlorophyll from being destroyed by sunlight. When chlorophyll loses this protection and is destroyed, the plants turn white and die.
Mesotrione and sulcotrione are herbicides in this class; a drug, nitisinone, was discovered in the course of developing this class of herbicides. Complementary to mechanism-based classifications, herbicides are often classified according to their chemical structures or motifs. Similar structural types work in similar ways. For example, the aryloxyphenoxypropionate herbicides (diclofop, chlorazifop, fluazifop) appear to all act as ACCase inhibitors. The so-called cyclohexanedione herbicides, which are used against grasses, include the commercial products cycloxydim, clethodim, tralkoxydim, butroxydim, sethoxydim, and profoxydim. Knowledge of herbicide chemical family groupings serves as a short-term strategy for managing resistance to a site of action. The phenoxyacetic acids mimic the natural auxin indoleacetic acid (IAA); this family includes MCPA, 2,4-D, and 2,4,5-T. Other auxin-mimicking herbicides include picloram, dicamba, clopyralid, and triclopyr. WSSA and HRAC classification Using the Weed Science Society of America (WSSA) and Herbicide Resistance Action Committee (HRAC) systems, herbicides are classified by mode of action. The HRAC and the WSSA eventually developed a joint classification system. Groups in the WSSA and HRAC systems are designated by numbers and letters; they make users aware of a herbicide's mode of action and support more accurate recommendations for resistance management. Use and application Most herbicides are applied as water-based sprays using ground equipment. Ground equipment varies in design, but large areas can be sprayed using self-propelled sprayers equipped with long booms fitted with regularly spaced spray nozzles. Towed, handheld, and even horse-drawn sprayers are also used. On large areas, herbicides may also at times be applied aerially using helicopters or airplanes, or through irrigation systems (known as chemigation). Weed-wiping may also be used, where a wick wetted with herbicide is suspended from a boom and dragged or rolled across the tops of the taller weed plants. This allows treatment of taller grassland weeds by direct contact, without affecting related but desirable shorter plants in the grassland sward beneath. The method has the benefit of avoiding spray drift. In Wales, a scheme offering free weed-wiper hire was launched in 2015 in an effort to reduce the levels of MCPA in water courses. Forestry use differs little in the early growth stages, when the height similarity between growing trees and growing annual crops produces a similar problem with weed competition. Unlike with annuals, however, application is mostly unnecessary thereafter, and herbicides are thus mainly used to shorten the delay between productive economic cycles of lumber crops. Misuse and misapplication Herbicide volatilisation or spray drift may result in herbicide affecting neighboring fields or plants, particularly in windy conditions. Sometimes, the wrong field or plants may be sprayed due to error. Use politically, militarily, and in conflict Although herbicidal warfare uses chemical substances, its main purpose is to disrupt agricultural food production or to destroy plants which provide cover or concealment to the enemy. During the Malayan Emergency, British Commonwealth forces deployed herbicides and defoliants in the Malayan countryside in order to deprive Malayan National Liberation Army (MNLA) insurgents of cover and potential sources of food, and to flush them out of the jungle.
Deployment of herbicides and defoliants served the dual purpose of thinning jungle trails to prevent ambushes and destroying crop fields in regions where the MNLA was active, in order to deprive the insurgents of potential food sources. As part of this process, herbicides and defoliants were also sprayed from Royal Air Force aircraft. The use of herbicides as a chemical weapon by the U.S. military during the Vietnam War has left tangible, long-term impacts upon the Vietnamese people and the U.S. soldiers who handled the chemicals. More than 20% of South Vietnam's forests and 3.2% of its cultivated land were sprayed at least once during the war. The government of Vietnam says that up to four million people in Vietnam were exposed to the defoliant Agent Orange and that as many as three million people have suffered illness because of it, while the Viet Nam Red Cross Society estimates that up to one million people were disabled or have health problems as a result of exposure. The United States government has described these figures as unreliable. Health and environmental effects Human health Many questions exist about herbicides' health and environmental effects, because of the many kinds of herbicide and the myriad potential targets, mostly unintended. For example, a 1995 panel of 13 scientists reviewing studies on the carcinogenicity of 2,4-D had divided opinions on the likelihood that 2,4-D causes cancer in humans. At the time, studies on phenoxy herbicides were too few to accurately assess the risk of many types of cancer from these herbicides, although evidence was stronger that exposure to them is associated with increased risk of soft tissue sarcoma and non-Hodgkin lymphoma. Toxicity Herbicides have widely variable toxicity: acute toxicity covers short-term exposure effects, while chronic toxicity arises from long-term environmental or occupational exposure. Much public suspicion of herbicides confuses valid statements of acute toxicity with equally valid statements of lack of chronic toxicity at the recommended levels of usage. For instance, while glyphosate formulations with tallowamine adjuvants are acutely toxic, their use was found to be uncorrelated with health issues such as cancer in a massive US Department of Health study of 90,000 members of farmer families over a period of 23 years. That is, the study shows a lack of chronic toxicity, but does not address the herbicide's acute toxicity. Health effects Some herbicides cause a range of health effects, from skin rashes to death. The pathway of attack can arise from intentional or unintentional direct consumption, improper application resulting in the herbicide coming into direct contact with people or wildlife, inhalation of aerial sprays, or food consumption prior to the labelled preharvest interval. Under some conditions, certain herbicides can be transported via leaching or surface runoff to contaminate groundwater or distant surface water sources. Generally, the conditions that promote herbicide transport include intense storm events (particularly shortly after application) and soils with limited capacity to adsorb or retain the herbicides. Herbicide properties that increase the likelihood of transport include persistence (resistance to degradation) and high water solubility. Contamination Cases have been reported where phenoxy herbicides were contaminated with dioxins such as TCDD; research has suggested such contamination results in a small rise in cancer risk after occupational exposure to these herbicides.
Triazine exposure has been implicated in a likely relationship to increased risk of breast cancer, although a causal relationship remains unclear. False claims Herbicide manufacturers have at times made false or misleading claims about the safety of their products. Chemical manufacturer Monsanto Company agreed to change its advertising after pressure from New York attorney general Dennis Vacco; Vacco complained about misleading claims that its spray-on glyphosate-based herbicides, including Roundup, were safer than table salt and "practically non-toxic" to mammals, birds, and fish (though proof that this was ever said is hard to find). Roundup is toxic and has resulted in death after being ingested in quantities ranging from 85 to 200 ml, although it has also been ingested in quantities as large as 500 ml with only mild or moderate symptoms. The manufacturer of Tordon 101 (Dow AgroSciences, owned by the Dow Chemical Company) has claimed Tordon 101 has no effects on animals and insects, in spite of evidence of strong carcinogenic activity of the active ingredient, picloram, in studies on rats. Ecological effects Herbicide use generally has negative impacts on many aspects of the environment. Insects, non-targeted plants, animals, and aquatic systems are subject to serious damage from herbicides, though impacts are highly variable. Aquatic life Atrazine has often been blamed for affecting the reproductive behavior of aquatic life, but the data do not support this assertion. Bird populations Bird populations are one of many indicators of herbicide damage. Most observed effects are due not to toxicity but to habitat changes and decreases in the abundance of species on which birds rely for food or shelter. Herbicide use in silviculture, used to favor certain types of growth following clearcutting, can cause significant drops in bird populations. Even when herbicides with low toxicity to birds are used, they decrease the abundance of many types of vegetation on which the birds rely. Herbicide use in agriculture in the UK has been linked to a decline in seed-eating bird species that rely on the weeds killed by the herbicides. Heavy use of herbicides in neotropical agricultural areas has been one of many factors implicated in limiting the usefulness of such agricultural land for wintering migratory birds. Resistance One major complication to the use of herbicides for weed control is the ability of plants to evolve herbicide resistance, rendering the herbicides ineffective against target plants. Out of 31 known herbicide modes of action, weeds have evolved resistance to 21, and 268 plant species are known to have evolved herbicide resistance at least once. Herbicide resistance was first observed in 1957 and has since evolved repeatedly in weed species from 30 families across the globe. Weed resistance to herbicides has become a major concern in crop production worldwide. Resistance is often attributed to overuse as well as the strong evolutionary pressure placed on the affected weeds. Three agricultural practices account for the evolutionary pressure on weeds to evolve resistance: monoculture, neglect of non-herbicide weed control practices, and reliance on a single herbicide for weed control. To minimize resistance, rotational programs of herbicide application, in which herbicides with multiple modes of action are used, have been widely promoted. In particular, glyphosate resistance evolved rapidly in part because, when glyphosate use first began, it was continuously and heavily relied upon for weed control.
This caused extremely strong selective pressure on weeds, allowing mutations that confer glyphosate resistance to persist and spread. However, in 2015, an expansive study showed an increase in herbicide resistance as a result of rotation, and instead recommended mixing multiple herbicides for simultaneous application. As of 2023, the effectiveness of combining herbicides is also questioned, particularly in light of the rise of non-target-site resistance. Plants developed resistance to atrazine and to ALS inhibitors relatively early; more recently, glyphosate resistance has risen dramatically. Marestail is one weed that has developed glyphosate resistance. Glyphosate-resistant weeds are present in the vast majority of soybean, cotton, and corn farms in some U.S. states, and weeds that can resist multiple other herbicides are spreading. Few new herbicides are near commercialization, and none with a molecular mode of action for which there is no resistance. Because most herbicides cannot kill all weeds, farmers rotate crops and herbicides to slow the development of resistant weeds. A 2008–2009 survey of 144 populations of waterhemp in 41 Missouri counties revealed glyphosate resistance in 69%. Weeds from some 500 sites throughout Iowa in 2011 and 2012 revealed glyphosate resistance in approximately 64% of waterhemp samples. As of 2023, 58 weed species have developed glyphosate resistance. Weeds resistant to multiple herbicides with completely different biological modes of action are on the rise. In Missouri, 43% of waterhemp samples were resistant to two different herbicides, 6% resisted three, and 0.5% resisted four. In Iowa, 89% of waterhemp samples resist two or more herbicides, 25% resist three, and 10% resist five. As of 2023, Palmer amaranth with resistance to six different herbicide modes of action has emerged. Annual bluegrass collected from a golf course in the U.S. state of Tennessee was found in 2020 to be resistant to seven herbicides at once. Rigid ryegrass and annual bluegrass share the distinction of being the species with confirmed resistance to the largest number of herbicide modes of action, both with confirmed resistance to 12 different modes of action; however, this number references how many forms of herbicide resistance are known to have emerged in the species at some point, not how many have been found simultaneously in a single plant. In 2015, Monsanto released crop seed varieties resistant to both dicamba and glyphosate, allowing for the use of a greater variety of herbicides on fields without harming the crops. By 2020, five years after the release of dicamba-resistant seed, the first example of dicamba-resistant Palmer amaranth was found in one location. Evolutionary insights When mutations occur in the genes responsible for the biological mechanisms that herbicides interfere with, these mutations may cause the herbicide mode of action to work less effectively. This is called target-site resistance. Specific mutations that have the most beneficial effect for the plant have been shown to occur in separate instances and then dominate throughout resistant weed populations; this is an example of convergent evolution. Some mutations conferring herbicide resistance may have fitness costs, reducing the plant's ability to survive in other ways, but over time the least costly mutations tend to dominate in weed populations.
Recently, instances of non-target-site resistance have increasingly emerged, such as cases where plants produce enzymes that neutralize herbicides before they can enter the plant's cells – metabolic resistance. This form of resistance is particularly challenging, since plants can develop non-target-site resistance to herbicides their ancestors were never directly exposed to. Biochemistry of resistance Resistance to herbicides can be based on one of the following biochemical mechanisms: Target-site resistance: In target-site resistance, the genetic change that causes the resistance directly alters the chemical mechanism the herbicide targets. The mutation may relate to an enzyme with a crucial function in a metabolic pathway, or to a component of an electron-transport system. For example, ALS-resistant weeds developed through genetic mutations that produce an altered enzyme. Such changes render the herbicide impotent. Target-site resistance may also be caused by over-expression of the target enzyme (via gene amplification or changes in a gene promoter). A related mechanism is that an adaptable enzyme such as cytochrome P450 evolves to neutralize the herbicide itself. Non-target-site resistance: In non-target-site resistance, the genetic change giving resistance is not directly related to the target site, but causes the plant to be less susceptible by some other means. Some mechanisms include metabolic detoxification of the herbicide in the weed, reduced uptake and translocation, sequestration of the herbicide, or reduced penetration of the herbicide into the leaf surface. These mechanisms all cause less of the herbicide's active ingredient to reach the target site in the first place. The following terms are also used to describe cases where plants are resistant to multiple herbicides at once: Cross-resistance: In this case, a single resistance mechanism causes resistance to several herbicides. The term target-site cross-resistance is used when the herbicides bind to the same target site, whereas non-target-site cross-resistance is due to a single non-target-site mechanism (e.g., enhanced metabolic detoxification) that entails resistance across herbicides with different sites of action. Multiple resistance: In this situation, two or more resistance mechanisms are present within individual plants, or within a plant population. Resistance management Due to herbicide resistance – a major concern in agriculture – a number of products combine herbicides with different modes of action. Integrated pest management may use herbicides alongside other pest control methods. The integrated weed management (IWM) approach uses several tactics to combat weeds and forestall resistance. By relying on diverse weed control methods, including non-herbicide methods, it lowers the selection pressure on weeds to evolve resistance. Researchers warn that if herbicide resistance is combatted only with more herbicides, "evolution will most likely win." In 2017, the USEPA issued a revised Pesticide Registration Notice (PRN 2017-1), which provides guidance to pesticide registrants on required pesticide resistance management labeling. This requirement applies to all conventional pesticides and is meant to provide end-users with guidance on managing pesticide resistance.
An example of a fully executed label compliant with the USEPA resistance management labeling guidance can be seen on the specimen label for the herbicide cloransulam-methyl, updated in 2022. Optimising herbicide input to the economic threshold level should avoid the unnecessary use of herbicides and reduce selection pressure. Herbicides should be used to their greatest potential by ensuring that the timing, dose, application method, and soil and climatic conditions are optimal for good activity. In the UK, partially resistant grass weeds such as Alopecurus myosuroides (blackgrass) and the Avena genus (wild oat) can often be controlled adequately when herbicides are applied at the 2–3 leaf stage, whereas later applications at the 2–3 tiller stage can fail badly. Patch spraying, or applying herbicide to only the badly infested areas of fields, is another means of reducing total herbicide use. Approaches to treating resistant weeds Alternative herbicides When resistance is first suspected or confirmed, the efficacy of alternatives is likely to be the first consideration. If there is resistance to a single group of herbicides, then the use of herbicides from other groups may provide a simple and effective solution, at least in the short term. For example, many triazine-resistant weeds have been readily controlled by the use of alternative herbicides such as dicamba or glyphosate. Mixtures and sequences The use of two or more herbicides which have differing modes of action can reduce the selection for resistant genotypes. Ideally, each component in a mixture should: Be active at different target sites Have a high level of efficacy Be detoxified by different biochemical pathways Have similar persistence in the soil (if it is a residual herbicide) Exert negative cross-resistance Synergise the activity of the other component No mixture is likely to have all these attributes, but the first two listed are the most important. There is a risk that mixtures will select for resistance to both components in the longer term. One practical advantage of sequences of two herbicides compared with mixtures is that a better appraisal of the efficacy of each herbicide component is possible, provided that sufficient time elapses between each application. A disadvantage with sequences is that two separate applications have to be made, and it is possible that the later application will be less effective on weeds surviving the first application. If these survivors are resistant, then the second herbicide in the sequence may increase selection for resistant individuals by killing the susceptible plants which were damaged but not killed by the first application, while allowing the larger, less affected, resistant plants to survive. This has been cited as one reason why ALS-resistant Stellaria media evolved in Scotland around 2000, despite the regular use of a sequence incorporating mecoprop, a herbicide with a different mode of action. Natural herbicide The term organic herbicide has come to mean herbicides intended for organic farming. Few natural herbicides rival the effectiveness of synthetics. Some plants also produce their own herbicides, such as the genus Juglans (walnuts) and the tree of heaven; such action by natural herbicides, and other related chemical interactions, is called allelopathy. The applicability of these agents is unclear.
Farming practices and resistance: a case study Herbicide resistance became a critical problem in Australian agriculture after many Australian sheep farmers began to exclusively grow wheat in their pastures in the 1970s. Introduced varieties of ryegrass, while good for grazing sheep, compete intensely with wheat. Ryegrasses produce so many seeds that, if left unchecked, they can completely choke a field. Herbicides provided excellent control, while reducing soil disruption because of less need to plough. Within little more than a decade, however, ryegrass and other weeds began to develop resistance, and Australian farmers changed methods in response. By 1983, patches of ryegrass had become immune to Hoegrass (diclofop-methyl), a member of a family of herbicides that inhibit the enzyme acetyl coenzyme A carboxylase. Ryegrass populations were large and had substantial genetic diversity because farmers had planted many varieties. Ryegrass is cross-pollinated by wind, so genes shuffle frequently. Farmers sprayed inexpensive Hoegrass widely to control the weed's distribution, creating selection pressure; in addition, farmers sometimes diluted the herbicide to save money, which allowed some plants to survive application. When resistance appeared, farmers turned to a group of herbicides that block acetolactate synthase. Once again, ryegrass in Australia evolved a kind of "cross-resistance" that allowed it to break down a variety of herbicides rapidly. Four classes of herbicides became ineffective within a few years. By 2013, only two herbicide classes, photosystem II inhibitors and long-chain fatty acid inhibitors, remained effective against ryegrass. See also Bioherbicide Environmental impact assessment HRAC classification Index of pesticide articles Integrated pest management List of environmental health hazards Preemergent herbicide Soil contamination Surface runoff Weed Weed control Defoliant References Further reading A Brief History of On-track Weed Control in the N.S.W. SRA during the Steam Era, Longworth, Jim, Australian Railway Historical Society Bulletin, April 1996, pp. 99–116 External links General Information National Pesticide Information Center, Information about pesticide-related topics National Agricultural Statistics Service Regulatory policy US EPA UK Pesticides Safety Directorate European Commission pesticide information pmra Pest Management Regulatory Agency of Canada Pesticides Soil contamination Lawn care Toxicology Biocides Chemical anti-agriculture weapons
Herbicide
[ "Chemistry", "Biology", "Environmental_science" ]
7,603
[ "Herbicides", "Pesticides", "Toxicology", "Chemical weapons", "Environmental chemistry", "Soil contamination", "Chemical anti-agriculture weapons", "Biocides" ]
53,928
https://en.wikipedia.org/wiki/Greibach%20normal%20form
In formal language theory, a context-free grammar is in Greibach normal form (GNF) if the right-hand sides of all production rules start with a terminal symbol, optionally followed by some variables. A non-strict form allows one exception to this format restriction, so that the empty word (epsilon, ε) can be a member of the described language. The normal form was established by Sheila Greibach and it bears her name. More precisely, a context-free grammar is in Greibach normal form if all production rules are of the form A → aA₁A₂…Aₙ (and, in the non-strict form, S → ε, where S is the start symbol), where A is a nonterminal symbol, a is a terminal symbol, and A₁A₂…Aₙ is a (possibly empty) sequence of nonterminal symbols. Observe that the grammar does not have left recursions. Every context-free grammar can be transformed into an equivalent grammar in Greibach normal form. Various constructions exist. Some do not permit the second form of rule and cannot transform context-free grammars that can generate the empty word. For one such construction, the size of the constructed grammar is O(n⁴) in the general case and O(n³) if no derivation of the original grammar consists of a single nonterminal symbol, where n is the size of the original grammar. This conversion can be used to prove that every context-free language can be accepted by a real-time (non-deterministic) pushdown automaton, i.e., the automaton reads a letter from its input at every step. Given a grammar in GNF and a derivable string in the grammar of length n, any top-down parser will halt at depth n. See also Backus–Naur form Chomsky normal form Kuroda normal form References Formal languages
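Because the GNF restriction is purely syntactic, it can be checked mechanically. The sketch below is a hypothetical helper, not part of any standard library; it assumes a toy encoding in which a grammar is a Python dict from nonterminals to lists of right-hand-side strings, single uppercase letters are nonterminals, single lowercase letters are terminals, and the empty string stands for ε.

def is_gnf(grammar, start, strict=True):
    """Check that every rule has the shape A -> a B1...Bn.
    In the non-strict form, the rule S -> epsilon (encoded "")
    is also tolerated for the start symbol S."""
    for head, bodies in grammar.items():
        for body in bodies:
            if body == "":
                if strict or head != start:
                    return False
            elif not (body[0].islower() and all(s.isupper() for s in body[1:])):
                return False
    return True

# S -> aSB | b, B -> b is in GNF (it generates a^n b^(n+1)).
assert is_gnf({"S": ["aSB", "b"], "B": ["b"]}, "S")
# S -> Sa | b is left-recursive, hence not in GNF.
assert not is_gnf({"S": ["Sa", "b"]}, "S")

The check mirrors the definition directly: each right-hand side must begin with a terminal and continue with nonterminals only, with ε tolerated solely for the start symbol in the non-strict form.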
Greibach normal form
[ "Mathematics" ]
360
[ "Formal languages", "Mathematical logic" ]
53,932
https://en.wikipedia.org/wiki/Euclidean%20distance
In mathematics, the Euclidean distance between two points in Euclidean space is the length of the line segment between them. It can be calculated from the Cartesian coordinates of the points using the Pythagorean theorem, and therefore is occasionally called the Pythagorean distance. These names come from the ancient Greek mathematicians Euclid and Pythagoras. In the Greek deductive geometry exemplified by Euclid's Elements, distances were not represented as numbers but line segments of the same length, which were considered "equal". The notion of distance is inherent in the compass tool used to draw a circle, whose points all have the same distance from a common center point. The connection from the Pythagorean theorem to distance calculation was not made until the 18th century. The distance between two objects that are not points is usually defined to be the smallest distance among pairs of points from the two objects. Formulas are known for computing distances between different types of objects, such as the distance from a point to a line. In advanced mathematics, the concept of distance has been generalized to abstract metric spaces, and other distances than Euclidean have been studied. In some applications in statistics and optimization, the square of the Euclidean distance is used instead of the distance itself. Distance formulas One dimension The distance between any two points on the real line is the absolute value of the numerical difference of their coordinates, their absolute difference. Thus if p and q are two points on the real line, then the distance between them is given by d(p, q) = |p − q|. A more complicated formula, giving the same value, but generalizing more readily to higher dimensions, is d(p, q) = √((p − q)²). In this formula, squaring and then taking the square root leaves any positive number unchanged, but replaces any negative number by its absolute value. Two dimensions In the Euclidean plane, let point p have Cartesian coordinates (p₁, p₂) and let point q have coordinates (q₁, q₂). Then the distance between p and q is given by d(p, q) = √((q₁ − p₁)² + (q₂ − p₂)²). This can be seen by applying the Pythagorean theorem to a right triangle with horizontal and vertical sides, having the line segment from p to q as its hypotenuse. The two squared formulas inside the square root give the areas of squares on the horizontal and vertical sides, and the outer square root converts the area of the square on the hypotenuse into the length of the hypotenuse. It is also possible to compute the distance for points given by polar coordinates. If the polar coordinates of p are (r, θ) and the polar coordinates of q are (s, ψ), then their distance is given by the law of cosines: d(p, q) = √(r² + s² − 2rs cos(θ − ψ)). When p and q are expressed as complex numbers in the complex plane, the same formula for one-dimensional points expressed as real numbers can be used, although here the absolute value sign indicates the complex norm: d(p, q) = |p − q|. Higher dimensions In three dimensions, for points given by their Cartesian coordinates, the distance is d(p, q) = √((q₁ − p₁)² + (q₂ − p₂)² + (q₃ − p₃)²). In general, for points given by Cartesian coordinates in n-dimensional Euclidean space, the distance is d(p, q) = √((q₁ − p₁)² + (q₂ − p₂)² + ⋯ + (qₙ − pₙ)²). The Euclidean distance may also be expressed more compactly in terms of the Euclidean norm of the Euclidean vector difference: d(p, q) = ‖p − q‖. Objects other than points For pairs of objects that are not both points, the distance can most simply be defined as the smallest distance between any two points from the two objects, although more complicated generalizations from points to sets such as Hausdorff distance are also commonly used.
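As a concrete illustration of the n-dimensional formula above, here is a minimal plain-Python version; nothing beyond the standard library is assumed (Python 3.8+ also ships math.dist, which performs the same computation).

import math

def euclidean_distance(p, q):
    """Length of the line segment between p and q, given as
    equal-length sequences of Cartesian coordinates."""
    if len(p) != len(q):
        raise ValueError("points must have the same dimension")
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

# The 3-4-5 right triangle: the distance from (0, 0) to (3, 4) is 5.
assert euclidean_distance((0, 0), (3, 4)) == 5.0
# The same function works in any dimension:
print(euclidean_distance((1, 2, 3), (4, 6, 3)))  # 5.0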
Formulas for computing distances between different types of objects include: The distance from a point to a line, in the Euclidean plane The distance from a point to a plane in three-dimensional Euclidean space The distance between two lines in three-dimensional Euclidean space The distance from a point to a curve can be used to define its parallel curve, another curve all of whose points have the same distance to the given curve. Properties The Euclidean distance is the prototypical example of the distance in a metric space, and obeys all the defining properties of a metric space: It is symmetric, meaning that for all points p and q, d(p, q) = d(q, p). That is (unlike road distance with one-way streets) the distance between two points does not depend on which of the two points is the start and which is the destination. It is positive, meaning that the distance between every two distinct points is a positive number, while the distance from any point to itself is zero. It obeys the triangle inequality: for every three points p, q, and r, d(p, q) ≤ d(p, r) + d(r, q). Intuitively, traveling from p to q via r cannot be any shorter than traveling directly from p to q. Another property, Ptolemy's inequality, concerns the Euclidean distances among four points p, q, r, and s. It states that d(p, q) · d(r, s) + d(q, r) · d(p, s) ≥ d(p, r) · d(q, s). For points in the plane, this can be rephrased as stating that for every quadrilateral, the products of opposite sides of the quadrilateral sum to at least as large a number as the product of its diagonals. However, Ptolemy's inequality applies more generally to points in Euclidean spaces of any dimension, no matter how they are arranged. For points in metric spaces that are not Euclidean spaces, this inequality may not be true. Euclidean distance geometry studies properties of Euclidean distance such as Ptolemy's inequality, and their application in testing whether given sets of distances come from points in a Euclidean space. According to the Beckman–Quarles theorem, any transformation of the Euclidean plane or of a higher-dimensional Euclidean space that preserves unit distances must be an isometry, preserving all distances. Squared Euclidean distance In many applications, and in particular when comparing distances, it may be more convenient to omit the final square root in the calculation of Euclidean distances, as the square root does not change the order (d(p, q) ≤ d(r, s) if and only if d(p, q)² ≤ d(r, s)²). The value resulting from this omission is the square of the Euclidean distance, and is called the squared Euclidean distance. For instance, the Euclidean minimum spanning tree can be determined using only the ordering between distances, and not their numeric values. Comparing squared distances produces the same result but avoids an unnecessary square-root calculation and sidesteps issues of numerical precision. As an equation, the squared distance can be expressed as a sum of squares: d²(p, q) = (q₁ − p₁)² + (q₂ − p₂)² + ⋯ + (qₙ − pₙ)². Beyond its application to distance comparison, squared Euclidean distance is of central importance in statistics, where it is used in the method of least squares, a standard method of fitting statistical estimates to data by minimizing the average of the squared distances between observed and estimated values, and as the simplest form of divergence to compare probability distributions. The addition of squared distances to each other, as is done in least squares fitting, corresponds to an operation on (unsquared) distances called Pythagorean addition. In cluster analysis, squared distances can be used to strengthen the effect of longer distances.
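To illustrate the ordering point above: a nearest-neighbor search needs only comparisons between distances, so the square root can be omitted entirely. A minimal sketch:

def squared_distance(p, q):
    return sum((qi - pi) ** 2 for pi, qi in zip(p, q))

def nearest(point, candidates):
    """Return the candidate closest to point. Comparing squared
    distances gives the same answer as comparing true distances,
    because squaring is monotonic on non-negative values."""
    return min(candidates, key=lambda c: squared_distance(point, c))

print(nearest((0, 0), [(1, 1), (3, 0), (0, 2)]))  # (1, 1)

This saves one square-root evaluation per comparison and avoids the rounding error that the square root would introduce.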
Squared Euclidean distance does not form a metric space, as it does not satisfy the triangle inequality. However, it is a smooth, strictly convex function of the two points, unlike the distance, which is non-smooth (near pairs of equal points) and convex but not strictly convex. The squared distance is thus preferred in optimization theory, since it allows convex analysis to be used. Since squaring is a monotonic function of non-negative values, minimizing squared distance is equivalent to minimizing the Euclidean distance, so the optimization problem is equivalent in terms of either, but easier to solve using squared distance. The collection of all squared distances between pairs of points from a finite set may be stored in a Euclidean distance matrix, and is used in this form in distance geometry. Generalizations In more advanced areas of mathematics, when viewing Euclidean space as a vector space, its distance is associated with a norm called the Euclidean norm, defined as the distance of each vector from the origin. One of the important properties of this norm, relative to other norms, is that it remains unchanged under arbitrary rotations of space around the origin. By Dvoretzky's theorem, every finite-dimensional normed vector space has a high-dimensional subspace on which the norm is approximately Euclidean; the Euclidean norm is the only norm with this property. It can be extended to infinite-dimensional vector spaces as the L² norm or L² distance. The Euclidean distance gives Euclidean space the structure of a topological space, the Euclidean topology, with the open balls (subsets of points at less than a given distance from a given point) as its neighborhoods. Other common distances in real coordinate spaces and function spaces: Chebyshev distance (L∞ distance), which measures distance as the maximum of the distances in each coordinate. Taxicab distance (L¹ distance), also called Manhattan distance, which measures distance as the sum of the distances in each coordinate. Minkowski distance (Lᵖ distance), a generalization that unifies Euclidean distance, taxicab distance, and Chebyshev distance. For points on surfaces in three dimensions, the Euclidean distance should be distinguished from the geodesic distance, the length of the shortest curve that belongs to the surface. In particular, for measuring great-circle distances on the Earth or other spherical or near-spherical surfaces, distances that have been used include the haversine distance, giving great-circle distances between two points on a sphere from their longitudes and latitudes, and Vincenty's formulae, also known as "Vincent distance", for distance on a spheroid. History Euclidean distance is the distance in Euclidean space. Both concepts are named after the ancient Greek mathematician Euclid, whose Elements became a standard textbook in geometry for many centuries. Concepts of length and distance are widespread across cultures, can be dated to the earliest surviving "protoliterate" bureaucratic documents from Sumer in the fourth millennium BC (far before Euclid), and have been hypothesized to develop in children earlier than the related concepts of speed and time. But the notion of a distance, as a number defined from two points, does not actually appear in Euclid's Elements. Instead, Euclid approaches this concept implicitly, through the congruence of line segments, through the comparison of lengths of line segments, and through the concept of proportionality.
The Pythagorean theorem is also ancient, but it could only take its central role in the measurement of distances after the invention of Cartesian coordinates by René Descartes in 1637. The distance formula itself was first published in 1731 by Alexis Clairaut. Because of this formula, Euclidean distance is also sometimes called Pythagorean distance. Although accurate measurements of long distances on the Earth's surface, which are not Euclidean, had again been studied in many cultures since ancient times (see history of geodesy), the idea that Euclidean distance might not be the only way of measuring distances between points in mathematical spaces came even later, with the 19th-century formulation of non-Euclidean geometry. The definition of the Euclidean norm and Euclidean distance for geometries of more than three dimensions also first appeared in the 19th century, in the work of Augustin-Louis Cauchy. References Distance Length Metric geometry Pythagorean theorem distance
Euclidean distance
[ "Physics", "Mathematics" ]
2,205
[ "Scalar physical quantities", "Planes (geometry)", "Physical quantities", "Distance", "Quantity", "Euclidean plane geometry", "Mathematical objects", "Equations", "Size", "Space", "Length", "Pythagorean theorem", "Spacetime", "Wikipedia categories named after physical quantities" ]
53,933
https://en.wikipedia.org/wiki/Permittivity
In electromagnetism, the absolute permittivity, often simply called permittivity and denoted by the Greek letter ε (epsilon), is a measure of the electric polarizability of a dielectric material. A material with high permittivity polarizes more in response to an applied electric field than a material with low permittivity, thereby storing more energy in the material. In electrostatics, the permittivity plays an important role in determining the capacitance of a capacitor. In the simplest case, the electric displacement field D resulting from an applied electric field E is D = εE. More generally, the permittivity is a thermodynamic function of state. It can depend on the frequency, magnitude, and direction of the applied field. The SI unit for permittivity is farad per meter (F/m). The permittivity is often represented by the relative permittivity εᵣ, which is the ratio of the absolute permittivity and the vacuum permittivity: εᵣ = ε/ε₀. This dimensionless quantity is also often and ambiguously referred to as the permittivity. Another common term encountered for both absolute and relative permittivity is the dielectric constant, which has been deprecated in physics and engineering as well as in chemistry. By definition, a perfect vacuum has a relative permittivity of exactly 1, whereas at standard temperature and pressure air has a relative permittivity of approximately 1.0006. Relative permittivity is directly related to electric susceptibility (χ) by εᵣ = 1 + χ, otherwise written as ε = ε₀(1 + χ). The term "permittivity" was introduced in the 1880s by Oliver Heaviside to complement Thomson's (1872) "permeability". The designation with ε has been in common use since the 1950s. Units The SI unit of permittivity is farad per meter (F/m or F·m⁻¹). Explanation In electromagnetism, the electric displacement field D represents the distribution of electric charges in a given medium resulting from the presence of an electric field E. This distribution includes charge migration and electric dipole reorientation. Its relation to permittivity in the very simple case of linear, homogeneous, isotropic materials with "instantaneous" response to changes in electric field is D = εE, where the permittivity ε is a scalar. If the medium is anisotropic, the permittivity is a second-rank tensor. In general, permittivity is not a constant, as it can vary with the position in the medium, the frequency of the field applied, humidity, temperature, and other parameters. In a nonlinear medium, the permittivity can depend on the strength of the electric field. Permittivity as a function of frequency can take on real or complex values. In SI units, permittivity is measured in farads per meter (F/m or A²·s⁴·kg⁻¹·m⁻³). The displacement field D is measured in units of coulombs per square meter (C/m²), while the electric field E is measured in volts per meter (V/m). D and E describe the interaction between charged objects: D is related to the charge densities associated with this interaction, while E is related to the forces and potential differences. Vacuum permittivity The vacuum permittivity ε₀ (also called permittivity of free space or the electric constant) is the ratio D/E in free space. It also appears in the Coulomb force constant, kₑ = 1/(4πε₀). Its value is ε₀ = 1/(μ₀c²) ≈ 8.854 × 10⁻¹² F/m, where c is the speed of light in free space and μ₀ is the vacuum permeability. The constants c and μ₀ were both defined in SI units to have exact numerical values until the 2019 revision of the SI. Therefore, until that date, ε₀ could also be stated exactly as a fraction, even if the result was irrational (because the fraction contained π).
In contrast, the ampere was a measured quantity before 2019, but since then the ampere is exactly defined, and it is μ₀ that is an experimentally measured quantity (with consequent uncertainty); therefore, under the 2019 definitions, so is ε₀ (c remains exactly defined before and since 2019). Relative permittivity The linear permittivity of a homogeneous material is usually given relative to that of free space, as a relative permittivity εᵣ (also called dielectric constant, although this term is deprecated and sometimes only refers to the static, zero-frequency relative permittivity). In an anisotropic material, the relative permittivity may be a tensor, causing birefringence. The actual permittivity is then calculated by multiplying the relative permittivity by ε₀: ε = εᵣε₀ = (1 + χ)ε₀, where χ (frequently written χₑ) is the electric susceptibility of the material. The susceptibility is defined as the constant of proportionality (which may be a tensor) relating an electric field E to the induced dielectric polarization density P such that P = ε₀χE, where ε₀ is the electric permittivity of free space. The susceptibility of a medium is related to its relative permittivity by χ = εᵣ − 1, so in the case of a vacuum χ = 0. The susceptibility is also related to the polarizability of individual particles in the medium by the Clausius-Mossotti relation. The electric displacement D is related to the polarization density P by D = ε₀E + P = ε₀(1 + χ)E = εE. The permittivity ε and permeability μ of a medium together determine the phase velocity v of electromagnetic radiation through that medium: v = 1/√(εμ). Practical applications Determining capacitance The capacitance of a capacitor is based on its design and architecture, meaning it will not change with charging and discharging. The formula for capacitance in a parallel plate capacitor is written as C = εA/d, where A is the area of one plate, d is the distance between the plates, and ε is the permittivity of the medium between the two plates. For a capacitor with relative permittivity εᵣ, it can be said that C = εᵣε₀A/d. Gauss's law Permittivity is connected to electric flux (and by extension electric field) through Gauss's law. Gauss's law states that for a closed Gaussian surface, Φ = Q/ε₀ = ∮ E · dA, where Φ is the net electric flux passing through the surface, Q is the charge enclosed in the Gaussian surface, E is the electric field vector at a given point on the surface, and dA is a differential area vector on the Gaussian surface. If the Gaussian surface uniformly encloses an insulated, symmetrical charge arrangement, the formula can be simplified to EA cos(φ) = Q/ε₀, where φ represents the angle between the electric field lines and the normal (perpendicular) to the surface area A. If all of the electric field lines cross the surface at 90°, the formula can be further simplified to EA = Q/ε₀. Because the surface area of a sphere is 4πr², the electric field a distance r away from a uniform, spherical charge arrangement is E = Q/(4πε₀r²). This formula applies to the electric field due to a point charge, outside of a conducting sphere or shell, outside of a uniformly charged insulating sphere, or between the plates of a spherical capacitor. Dispersion and causality In general, a material cannot polarize instantaneously in response to an applied field, and so the more general formulation as a function of time is P(t) = ε₀ ∫₋∞ᵗ χ(t − t′) E(t′) dt′. That is, the polarization is a convolution of the electric field at previous times with the time-dependent susceptibility χ(Δt). The upper limit of this integral can be extended to infinity as well if one defines χ(Δt) = 0 for Δt < 0. An instantaneous response would correspond to a Dirac delta function susceptibility χ(Δt) = χδ(Δt).
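Discretizing the integral above gives a simple numerical recipe: the polarization is the discrete convolution of a causal susceptibility with the field history. The sketch below assumes NumPy; the exponentially decaying susceptibility is an arbitrary illustrative choice, not a property of any particular material.

import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
E = np.where(t >= 1.0, 1.0, 0.0)   # step field switched on at t = 1 (arbitrary units)

tau = 0.5                          # illustrative relaxation time
chi = np.exp(-t / tau)             # causal susceptibility chi(t), defined for t >= 0

eps0 = 8.854e-12
# P(t) = eps0 * integral over t' of chi(t - t') E(t')  ->  discrete convolution
P = eps0 * np.convolve(chi, E)[: len(t)] * dt

The computed P(t) rises smoothly after the field switches on rather than jumping, which is exactly the delayed response described in the text; a Dirac delta susceptibility would instead reproduce the instantaneous relation P = ε₀χE.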
It is convenient to take the Fourier transform with respect to time and write this relationship as a function of frequency. Because of the convolution theorem, the integral becomes a simple product, P(ω) = ε₀χ(ω)E(ω). This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material. Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e., effectively χ(Δt) = 0 for Δt < 0), a consequence of causality, imposes Kramers–Kronig constraints on the susceptibility χ(ω). Complex permittivity As opposed to the response of a vacuum, the response of normal materials to external fields generally depends on the frequency of the field. This frequency dependence reflects the fact that a material's polarization does not change instantaneously when an electric field is applied. The response must always be causal (arising after the applied field), which can be represented by a phase difference. For this reason, permittivity is often treated as a complex function ε̂(ω) of the (angular) frequency ω of the applied field (since complex numbers allow specification of magnitude and phase). The definition of permittivity therefore becomes D₀ exp(−iωt) = ε̂(ω) E₀ exp(−iωt), where D₀ and E₀ are the amplitudes of the displacement and electric fields, respectively, and i is the imaginary unit, i² = −1. The response of a medium to static electric fields is described by the low-frequency limit of permittivity, also called the static permittivity εₛ: εₛ = lim(ω→0) ε̂(ω). At the high-frequency limit (meaning optical frequencies), the complex permittivity is commonly referred to as ε∞. At the plasma frequency and below, dielectrics behave as ideal metals, with electron gas behavior. The static permittivity is a good approximation for alternating fields of low frequencies, and as the frequency increases a measurable phase difference δ emerges between D and E. The frequency at which the phase shift becomes noticeable depends on temperature and the details of the medium. For moderate field strength E₀, D and E remain proportional, and ε̂ = D₀/E₀. Since the response of materials to alternating fields is characterized by a complex permittivity, it is natural to separate its real and imaginary parts, which is done by convention in the following way: ε̂(ω) = ε′(ω) − iε″(ω) = (D₀/E₀) exp(−iδ), where ε′ is the real part of the permittivity, ε″ is the imaginary part of the permittivity, and δ is the loss angle. The choice of sign for the time-dependence, exp(−iωt), dictates the sign convention for the imaginary part of permittivity. The signs used here correspond to those commonly used in physics, whereas for the engineering convention one should reverse all imaginary quantities. The complex permittivity is usually a complicated function of frequency ω, since it is a superimposed description of dispersion phenomena occurring at multiple frequencies. The dielectric function ε̂(ω) must have poles only for frequencies with positive imaginary parts, and therefore satisfies the Kramers–Kronig relations. However, in the narrow frequency ranges that are often studied in practice, the permittivity can be approximated as frequency-independent or by model functions. At a given frequency, the imaginary part ε″ leads to absorption loss if it is positive (in the above sign convention) and gain if it is negative. More generally, the imaginary parts of the eigenvalues of the anisotropic dielectric tensor should be considered. In the case of solids, the complex dielectric function is intimately connected to band structure.
The primary quantity that characterizes the electronic structure of any crystalline material is the probability of photon absorption, which is directly related to the imaginary part ε″(ω) of the optical dielectric function. The optical dielectric function is given by a fundamental expression in which the product of the Brillouin zone-averaged transition probability at the energy E and the joint density of states is convolved with a broadening function representing the role of scattering in smearing out the energy levels. In general, the broadening is intermediate between Lorentzian and Gaussian; for an alloy it is somewhat closer to Gaussian because of strong scattering from statistical fluctuations in the local composition on a nanometer scale. Tensorial permittivity According to the Drude model of magnetized plasma, a more general expression which takes into account the interaction of the carriers with an alternating electric field at millimeter and microwave frequencies in an axially magnetized semiconductor requires the expression of the permittivity as a non-diagonal tensor, with equal components on the diagonal transverse to the magnetization axis, a distinct component along it, and imaginary off-diagonal components ±iε₂ coupling the two transverse directions. If ε₂ vanishes, then the tensor is diagonal but not proportional to the identity and the medium is said to be a uniaxial medium, which has similar properties to a uniaxial crystal. Classification of materials Materials can be classified according to their complex-valued permittivity ε, upon comparison of its real (ε′) and imaginary (ε″) components (or, equivalently, the conductivity, σ, when accounted for in the latter). A perfect conductor has infinite conductivity, σ = ∞, while a perfect dielectric is a material that has no conductivity at all, σ = 0; this latter case, of real-valued permittivity (or complex-valued permittivity with zero imaginary component), is also associated with the name lossless media. Generally, when σ/(ωε′) ≪ 1 we consider the material to be a low-loss dielectric (although not exactly lossless), whereas σ/(ωε′) ≫ 1 is associated with a good conductor; such materials with non-negligible conductivity yield a large amount of loss that inhibits the propagation of electromagnetic waves, and thus they are also said to be lossy media. Those materials that do not fall under either limit are considered to be general media. Lossy media In the case of a lossy medium, i.e., when the conduction current is not negligible, the total current density flowing is Jtot = Jc + Jd = σE + iωε′E = iωε̂E, where σ is the conductivity of the medium, ε′ is the real part of the permittivity, and ε̂ = ε′ − i(σ/ω) is the complex permittivity. Note that this uses the electrical engineering convention of the complex conjugate ambiguity; the physics/chemistry convention involves the complex conjugate of these equations. The size of the displacement current is dependent on the frequency ω of the applied field; there is no displacement current in a constant field. In this formalism, the complex permittivity is thus defined as ε̂ = ε′ − i(σ/ω). In general, the absorption of electromagnetic energy by dielectrics is covered by a few different mechanisms that influence the shape of the permittivity as a function of frequency: First are the relaxation effects associated with permanent and induced molecular dipoles. At low frequencies the field changes slowly enough to allow dipoles to reach equilibrium before the field has measurably changed. For frequencies at which dipole orientations cannot follow the applied field because of the viscosity of the medium, absorption of the field's energy leads to energy dissipation. The mechanism of dipoles relaxing is called dielectric relaxation and for ideal dipoles is described by classic Debye relaxation.
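The classic Debye relaxation just mentioned has a simple closed form for the complex permittivity, ε̂(ω) = ε∞ + (εₛ − ε∞)/(1 + iωτ) in the engineering exp(+iωt) convention (negate the imaginary part for the physics convention used earlier in this article). The sketch below assumes NumPy, and its parameter values are illustrative round numbers rather than measured data for any material.

import numpy as np

def debye(omega, eps_s, eps_inf, tau):
    # Debye relaxation, engineering exp(+i*omega*t) convention:
    # eps(omega) = eps_inf + (eps_s - eps_inf) / (1 + i*omega*tau)
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)

omega = np.logspace(9, 13, 5)          # angular frequencies in rad/s
eps = debye(omega, eps_s=80.0, eps_inf=5.0, tau=1e-11)

eps_real = eps.real                    # eps' : storage
eps_loss = -eps.imag                   # eps'': loss (positive in this convention)
loss_tangent = eps_loss / eps_real     # tan(delta) for the loss angle delta

for w, er, el, tl in zip(omega, eps_real, eps_loss, loss_tangent):
    print(f"omega={w:.1e}  eps'={er:7.3f}  eps''={el:7.3f}  tan(delta)={tl:.3f}")

The loss ε″ peaks where ωτ = 1, reproducing the single broad absorption maximum characteristic of dipolar relaxation.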
Second are the resonance effects, which arise from the rotations or vibrations of atoms, ions, or electrons. These processes are observed in the neighborhood of their characteristic absorption frequencies. The above effects often combine to cause non-linear effects within capacitors. For example, dielectric absorption refers to the inability of a capacitor that has been charged for a long time to completely discharge when briefly discharged. Although an ideal capacitor would remain at zero volts after being discharged, real capacitors will develop a small voltage, a phenomenon that is also called soakage or battery action. For some dielectrics, such as many polymer films, the resulting voltage may be less than 1–2% of the original voltage. However, it can be as much as 15–25% in the case of electrolytic capacitors or supercapacitors. Quantum-mechanical interpretation In terms of quantum mechanics, permittivity is explained by atomic and molecular interactions. At low frequencies, molecules in polar dielectrics are polarized by an applied electric field, which induces periodic rotations. For example, at the microwave frequency, the microwave field causes the periodic rotation of water molecules, sufficient to break hydrogen bonds. The field does work against the bonds and the energy is absorbed by the material as heat. This is why microwave ovens work very well for materials containing water. There are two maxima of the imaginary component (the absorptive index) of water, one at the microwave frequency, and the other at far ultraviolet (UV) frequency. Both of these resonances are at higher frequencies than the operating frequency of microwave ovens. At moderate frequencies, the energy is too high to cause rotation, yet too low to affect electrons directly, and is absorbed in the form of resonant molecular vibrations. In water, this is where the absorptive index starts to drop sharply, and the minimum of the imaginary permittivity is at the frequency of blue light (optical regime). At high frequencies (such as UV and above), molecules cannot relax, and the energy is purely absorbed by atoms, exciting electron energy levels. Thus, these frequencies are classified as ionizing radiation. While carrying out a complete ab initio (that is, first-principles) modelling is now computationally possible, it has not been widely applied yet. Thus, a phenomenological model is accepted as being an adequate method of capturing experimental behaviors. The Debye model and the Lorentz model use a first-order and second-order (respectively) lumped system parameter linear representation (such as an RC and an LRC resonant circuit). Measurement The relative permittivity of a material can be found by a variety of static electrical measurements. The complex permittivity is evaluated over a wide range of frequencies by using different variants of dielectric spectroscopy, covering nearly 21 orders of magnitude from 10⁻⁶ to 10¹⁵ hertz. Also, by using cryostats and ovens, the dielectric properties of a medium can be characterized over an array of temperatures. In order to study systems for such diverse excitation fields, a number of measurement setups are used, each adequate for a special frequency range. Various microwave measurement techniques are outlined in Chen et al. Typical errors for the Hakki–Coleman method employing a puck of material between conducting planes are about 0.3%.
Low-frequency time domain measurements ( to  Hz) Low-frequency frequency domain measurements ( to  Hz) Reflective coaxial methods ( to  Hz) Transmission coaxial method ( to  Hz) Quasi-optical methods ( to  Hz) Terahertz time-domain spectroscopy ( to  Hz) Fourier-transform methods ( to  Hz) At infrared and optical frequencies, a common technique is ellipsometry. Dual polarisation interferometry is also used to measure the complex refractive index for very thin films at optical frequencies. For the 3D measurement of dielectric tensors at optical frequency, Dielectric tensor tomography can be used. See also Acoustic attenuation Density functional theory Electric-field screening Green–Kubo relations Green's function (many-body theory) Linear response function Permeability (electromagnetism) Rotational Brownian motion References Further reading (volume 2 publ. 1978) External links – a chapter from an online textbook Electric and magnetic fields in matter Physical quantities
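The Debye and Lorentz forms mentioned in the Quantum-mechanical interpretation section, and the conductor/dielectric classification above, can be made concrete with a short numerical sketch. The following Python snippet is illustrative only: the function names are ours, the material parameters are rough textbook-style values rather than measured data, and the sign conventions follow this article (ε̂ = ε′ − iε″ for the model functions, ε̂ = ε′ − jσ/ω for the lossy-medium form).

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye(omega, eps_s, eps_inf, tau):
    """First-order (RC-like) Debye relaxation:
    eps_hat = eps_inf + (eps_s - eps_inf) / (1 + i*omega*tau).
    Returns (eps', eps'') with eps_hat = eps' - i*eps''; eps'' >= 0 is loss."""
    eps_hat = eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)
    return eps_hat.real, -eps_hat.imag

def lorentz(omega, eps_inf, delta_eps, omega0, gamma):
    """Second-order (LRC-like) Lorentz resonance at angular frequency omega0
    with damping gamma."""
    eps_hat = eps_inf + delta_eps * omega0**2 / (omega0**2 - omega**2 + 1j * gamma * omega)
    return eps_hat.real, -eps_hat.imag

def loss_ratio(eps_prime_rel, sigma, f):
    """sigma / (omega * eps'), the ratio that separates low-loss dielectrics
    (<< 1) from good conductors (>> 1)."""
    return sigma / (2 * math.pi * f * eps_prime_rel * EPS0)

# Debye relaxation with illustrative, roughly water-like numbers: the loss
# peak sits at omega*tau = 1, i.e. f = 1/(2*pi*tau), about 20 GHz for 8 ps.
for f in (1e9, 2e10, 1e12):
    ep, epp = debye(2 * math.pi * f, eps_s=80.0, eps_inf=5.0, tau=8e-12)
    print(f"Debye  f = {f:.0e} Hz  eps' = {ep:6.1f}  eps'' = {epp:6.1f}  tan(delta) = {epp/ep:.3f}")

# A single Lorentz resonance evaluated below its center frequency.
ep, epp = lorentz(2 * math.pi * 5e14, eps_inf=1.0, delta_eps=1.1, omega0=2 * math.pi * 6e14, gamma=1e14)
print(f"Lorentz eps' = {ep:.2f}  eps'' = {epp:.2f}")

# Classification at 1 GHz with rough, illustrative material constants.
for name, eps_r, sigma in (("dry soil", 4.0, 1e-4), ("seawater", 80.0, 4.0), ("copper", 1.0, 5.8e7)):
    print(f"{name:8s} sigma/(omega*eps') = {loss_ratio(eps_r, sigma, f=1e9):9.3g}")
```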
Permittivity
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
3,787
[ "Physical phenomena", "Physical quantities", "Quantity", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", "Physical properties" ]
53,937
https://en.wikipedia.org/wiki/Real%20data%20type
A real data type is a data type used in a computer program to represent an approximation of a real number. Because the real numbers are not countable, computers cannot represent them exactly using a finite amount of information. Most often, a computer will use a rational approximation to a real number. Rational numbers The most general data type for a rational number (a number that can be expressed as a fraction) stores the numerator and the denominator as integers. For example, 1/3 can be stored exactly in this form and converted to a decimal approximation of any desired precision. Rational numbers are used, for example, in Interpress from Xerox Corporation. Fixed-point numbers A fixed-point data type uses the same implied denominator for all numbers. The denominator is usually a power of two. For example, in a hypothetical fixed-point system that uses the denominator 65,536 (2¹⁶), the hexadecimal number 0x12345678 (0x1234.5678 with sixteen fractional bits to the right of the assumed radix point) means 0x12345678/65536 = 305419896/65536, that is, 4660 plus the fractional value 22136/65536, or about 4660.33777. An integer is a fixed-point number with a fractional part of zero. Floating-point numbers A floating-point data type is a compromise between the flexibility of a general rational number data type and the speed of fixed-point arithmetic. It uses some of the bits in the data type to specify an exponent for the denominator, which today is usually a power of two, although both ten and sixteen have also been used. Decimal numbers The decimal type is similar to a fixed-point or floating-point data type, but with a denominator that is a power of 10 instead of a power of 2. See also Binary number Decimal number Hexadecimal number IEEE Standard for Floating-Point Arithmetic References Data types Computer arithmetic
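As a sketch of the fixed-point representation described above, the following Python snippet interprets integers in the Q16.16 format used in the article's example (sixteen fractional bits, implied denominator 2¹⁶ = 65,536). The helper names are ours, not from any particular library.

```python
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS          # implied denominator: 65,536 = 2**16

def to_fixed(x: float) -> int:
    """Encode a real number as a Q16.16 fixed-point integer (rounded)."""
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    """Decode a Q16.16 fixed-point integer back to a real value."""
    return n / SCALE

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q16.16 values; the raw product carries 32 fractional
    bits, so shift right by 16 to restore the implied denominator."""
    return (a * b) >> FRAC_BITS

# The article's example: 0x12345678 read as 0x1234.5678
print(from_fixed(0x12345678))   # 4660.337768554688 (= 4660 + 22136/65536)

# Fixed-point arithmetic keeps the same implied denominator throughout:
print(from_fixed(fixed_mul(to_fixed(2.5), to_fixed(4.0))))   # 10.0
```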
Real data type
[ "Mathematics" ]
415
[ "Computer arithmetic", "Arithmetic" ]
53,941
https://en.wikipedia.org/wiki/Triangle%20inequality
In mathematics, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This statement permits the inclusion of degenerate triangles, but some authors, especially those writing about elementary geometry, will exclude this possibility, thus leaving out the possibility of equality. If a, b, and c are the lengths of the sides of a triangle then the triangle inequality states that c ≤ a + b, with equality only in the degenerate case of a triangle with zero area. In Euclidean geometry and some other geometries, the triangle inequality is a theorem about vectors and vector lengths (norms): ‖u + v‖ ≤ ‖u‖ + ‖v‖, where the length of the third side has been replaced by the length of the vector sum u + v. When u and v are real numbers, they can be viewed as vectors in ℝ¹, and the triangle inequality expresses a relationship between absolute values. In Euclidean geometry, for right triangles the triangle inequality is a consequence of the Pythagorean theorem, and for general triangles, a consequence of the law of cosines, although it may be proved without these theorems. The inequality can be viewed intuitively in either ℝ² or ℝ³. The figure at the right shows three examples beginning with clear inequality (top) and approaching equality (bottom). In the Euclidean case, equality occurs only if the triangle has a 180° angle and two 0° angles, making the three vertices collinear, as shown in the bottom example. Thus, in Euclidean geometry, the shortest distance between two points is a straight line. In spherical geometry, the shortest distance between two points is an arc of a great circle, but the triangle inequality holds provided the restriction is made that the distance between two points on a sphere is the length of a minor spherical line segment (that is, one with central angle in [0, π]) with those endpoints. The triangle inequality is a defining property of norms and measures of distance. This property must be established as a theorem for any function proposed for such purposes for each particular space: for example, spaces such as the real numbers, Euclidean spaces, the Lp spaces (p ≥ 1), and inner product spaces. Euclidean geometry Euclid proved the triangle inequality for distances in plane geometry using the construction in the figure. Beginning with triangle ABC, an isosceles triangle is constructed with one side taken as AC and the other equal leg AD along the extension of side BA. It then is argued that angle ∠BCD has larger measure than angle ∠BDC, so side BD is longer than side BC. However, BD = BA + AD = BA + AC, so the sum of the lengths of sides BA and AC is larger than the length of BC. This proof appears in Euclid's Elements, Book 1, Proposition 20. Mathematical expression of the constraint on the sides of a triangle For a proper triangle, the triangle inequality, as stated in words, literally translates into three inequalities (given that a proper triangle has side lengths a, b, c that are all positive and excludes the degenerate case of zero area): a + b > c, b + c > a, c + a > b. A more succinct form of this inequality system can be shown to be |a − b| < c < a + b. Another way to state it is max(a, b, c) < a + b + c − max(a, b, c), implying 2 max(a, b, c) < a + b + c, and thus that the longest side length is less than the semiperimeter. A mathematically equivalent formulation is that the area of a triangle with sides a, b, c must be a real number greater than zero. Heron's formula for the area is 4·Area = √((a + b + c)(−a + b + c)(a − b + c)(a + b − c)). In terms of either area expression, the triangle inequality imposed on all sides is equivalent to the condition that the expression under the square root sign be real and greater than zero (so the area expression is real and greater than zero).
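The system of three inequalities and its succinct single-comparison form translate directly into a small validity check. A minimal sketch (the function name is ours):

```python
def is_triangle(a: float, b: float, c: float, allow_degenerate: bool = False) -> bool:
    """Can lengths a, b, c be the sides of a triangle?

    After sorting so that x <= y <= z, the three inequalities a + b > c,
    b + c > a, c + a > b reduce to the single test x + y > z
    (or x + y >= z if degenerate, zero-area triangles are permitted).
    """
    x, y, z = sorted((a, b, c))
    if x <= 0:
        return False                      # a proper triangle has positive sides
    return x + y >= z if allow_degenerate else x + y > z

print(is_triangle(3, 4, 5))                          # True: a right triangle
print(is_triangle(1, 2, 3))                          # False: zero area
print(is_triangle(1, 2, 3, allow_degenerate=True))   # True: degenerate case
print(is_triangle(1, 1, 5))                          # False
```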
The triangle inequality provides two further constraints for triangles whose sides are ordered a ≥ b ≥ c, expressed in terms of the golden ratio φ. Right triangle In the case of right triangles, the triangle inequality specializes to the statement that the hypotenuse is greater than either of the two sides and less than their sum. The second part of this theorem is already established above for any side of any triangle. The first part is established using the lower figure. In the figure, consider the right triangle ABC, with the right angle at C and hypotenuse AB. An isosceles triangle ACD is constructed with equal sides AC = AD, where D lies on the ray AB. From the triangle postulate, the angles in the right triangle ABC satisfy: α + β = 90°, where α is the angle at A and β the angle at B. Likewise, in the isosceles triangle ACD, the angles satisfy: ∠ACD = ∠ADC = 90° − α/2. Therefore, ∠ACD < 90° and so, in particular, ∠ACD < ∠ACB. That means the ray CD lies strictly inside the right angle ∠ACB, so D falls strictly between A and B on the hypotenuse, and AD is shorter than AB. But AD = AC. Hence: AC < AB. A similar construction shows BC < AB, establishing the theorem. An alternative proof (also based upon the triangle postulate) proceeds by considering three positions for point D: (i) strictly between A and B, as depicted (which is to be proved), or (ii) coincident with B (which would mean the isosceles triangle had two right angles as base angles plus the vertex angle α, which would violate the triangle postulate), or lastly, (iii) beyond B on the ray AB (in which case the base angle ∠ACD of the isosceles triangle exceeds the right angle ∠ACB, meaning the other base angle ∠ADC also is greater than 90° and their sum exceeds 180°, in violation of the triangle postulate). This theorem establishing inequalities is sharpened by Pythagoras' theorem to the equality that the square of the length of the hypotenuse equals the sum of the squares of the other two sides. Examples of use Consider a triangle whose sides are in an arithmetic progression and let the sides be a, a + d, a + 2d. Then the triangle inequality requires that 0 < a, 0 < a + d, 0 < a + 2d, a + (a + d) > a + 2d, a + (a + 2d) > a + d, (a + d) + (a + 2d) > a. To satisfy all these inequalities requires a > 0 and −a/3 < d < a. When d is chosen such that d = a/3, it generates a right triangle that is always similar to the Pythagorean triple with sides 3, 4, 5. Now consider a triangle whose sides are in a geometric progression and let the sides be a, ar, ar². Then the triangle inequality requires that 0 < a, 0 < ar, 0 < ar², a + ar > ar², a + ar² > ar, ar + ar² > a. The first inequality requires a > 0; consequently it can be divided through and eliminated. With a > 0, the middle inequality only requires r > 0. This now leaves the first and third inequalities needing to satisfy r² + r − 1 > 0 and r² − r − 1 < 0. The first of these quadratic inequalities requires r to range in the region beyond the value of the positive root of the quadratic equation r² + r − 1 = 0, i.e. r > φ − 1, where φ is the golden ratio. The second quadratic inequality requires r to range between 0 and the positive root of the quadratic equation r² − r − 1 = 0, i.e. 0 < r < φ. The combined requirements result in r being confined to the range φ − 1 < r < φ. When the common ratio r is chosen such that r = √φ it generates a right triangle that is always similar to the Kepler triangle. Generalization to any polygon The triangle inequality can be extended by mathematical induction to arbitrary polygonal paths, showing that the total length of such a path is no less than the length of the straight line between its endpoints. Consequently, the length of any polygon side is always less than the sum of the other polygon side lengths. Example of the generalized polygon inequality for a quadrilateral Consider a quadrilateral whose sides are in a geometric progression and let the sides be a, ar, ar², ar³.
Then the generalized polygon inequality requires that a + ar + ar² > ar³ and ar + ar² + ar³ > a (the remaining two inequalities are implied by these). These inequalities for a > 0 reduce to the following: r³ − r² − r − 1 < 0 and r³ + r² + r − 1 > 0. The left-hand side polynomials of these two inequalities have roots that are the tribonacci constant and its reciprocal. Consequently, r is limited to the range 1/t < r < t, where t is the tribonacci constant. Relationship with shortest paths This generalization can be used to prove that the shortest curve between two points in Euclidean geometry is a straight line. No polygonal path between two points is shorter than the line between them. This implies that no curve can have an arc length less than the distance between its endpoints. By definition, the arc length of a curve is the least upper bound of the lengths of all polygonal approximations of the curve. The result for polygonal paths shows that the straight line between the endpoints is the shortest of all the polygonal approximations. Because the arc length of the curve is greater than or equal to the length of every polygonal approximation, the curve itself cannot be shorter than the straight line path. Converse The converse of the triangle inequality theorem is also true: if three real numbers are such that each is less than the sum of the others, then there exists a triangle with these numbers as its side lengths and with positive area; and if one number equals the sum of the other two, there exists a degenerate triangle (that is, with zero area) with these numbers as its side lengths. In either case, if the side lengths are a, b, c, we can attempt to place a triangle in the Euclidean plane as shown in the diagram. We need to prove that there exists a real number h consistent with the values a, b, and c, in which case this triangle exists. By the Pythagorean theorem we have b² = h² + d² and a² = h² + (c − d)² according to the figure at the right. Subtracting these yields a² − b² = c² − 2cd. This equation allows us to express d in terms of the sides of the triangle: d = (−a² + b² + c²)/(2c). For the height of the triangle we have that h² = b² − d². By replacing d with the formula given above, we have h² = b² − ((−a² + b² + c²)/(2c))². For a real number h to satisfy this, h² must be non-negative: b² − ((−a² + b² + c²)/(2c))² ≥ 0, which holds if the triangle inequality is satisfied for all sides. Therefore, there does exist a real number h consistent with the sides a, b, c, and the triangle exists. If each triangle inequality holds strictly, h > 0 and the triangle is non-degenerate (has positive area); but if one of the inequalities holds with equality, so h = 0, the triangle is degenerate. Generalization to higher dimensions The area of a triangular face of a tetrahedron is less than or equal to the sum of the areas of the other three triangular faces. More generally, in Euclidean space the hypervolume of an (n − 1)-facet of an n-simplex is less than or equal to the sum of the hypervolumes of the other facets. Much as the triangle inequality generalizes to a polygon inequality, the inequality for a simplex of any dimension generalizes to a polytope of any dimension: the hypervolume of any facet of a polytope is less than or equal to the sum of the hypervolumes of the remaining facets. In some cases the tetrahedral inequality is stronger than several applications of the triangle inequality. For example, the triangle inequality appears to allow the possibility of four points A, B, C, and Z in Euclidean space such that distances AB = BC = CA = 26 and AZ = BZ = CZ = 14. However, points with such distances cannot exist: the area of the equilateral triangle ABC is 169√3, which is larger than three times 39√3, the area of a 26–14–14 isosceles triangle (all by Heron's formula), and so the arrangement is forbidden by the tetrahedral inequality.
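The constructions above are easy to check numerically: the coordinate construction from the Converse section gives the apex height h directly, and the progression examples confine the common ratio to the stated ranges. A minimal sketch (function names and tolerances are ours; the tribonacci constant is quoted to limited precision):

```python
import math

def apex(a, b, c):
    """Place the side of length c on the x-axis, as in the Converse section.
    The apex is at (d, h) with d = (-a^2 + b^2 + c^2) / (2c) and
    h^2 = b^2 - d^2; h is real exactly when the triangle inequality holds."""
    d = (-a * a + b * b + c * c) / (2 * c)
    h2 = b * b - d * d
    return (d, math.sqrt(h2)) if h2 >= 0 else None

print(apex(3, 4, 5))        # apex of the 3-4-5 right triangle: (3.2, 2.4)
print(apex(1, 1, 5))        # None: 1 + 1 < 5

# Geometric-progression sides 1, r, r^2 form a triangle iff phi - 1 < r < phi:
phi = (1 + math.sqrt(5)) / 2
for r in (phi - 1 + 1e-6, 1.0, math.sqrt(phi), phi - 1e-6, phi + 1e-6):
    print(f"r = {r:.6f}  triangle: {apex(1.0, r, r * r) is not None}")

def is_polygon(sides):
    """Generalized polygon inequality: each side is shorter than the sum
    of all of the others (equivalently, than total - side)."""
    total = sum(sides)
    return all(0 < s < total - s for s in sides)

# Quadrilateral sides 1, r, r^2, r^3 require 1/t < r < t (t = tribonacci):
t = 1.8392867552141612
for r in (1 / t - 1e-6, 1 / t + 1e-6, t - 1e-6, t + 1e-6):
    print(f"r = {r:.6f}  quadrilateral: {is_polygon([1, r, r**2, r**3])}")
```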
Normed vector space In a normed vector space V, one of the defining properties of the norm is the triangle inequality: ‖u + v‖ ≤ ‖u‖ + ‖v‖ for all u, v in V. That is, the norm of the sum of two vectors is at most as large as the sum of the norms of the two vectors. This is also referred to as subadditivity. For any proposed function to behave as a norm, it must satisfy this requirement. If the normed space is Euclidean, or, more generally, strictly convex, then ‖u + v‖ = ‖u‖ + ‖v‖ if and only if the triangle formed by u, v, and u + v, is degenerate, that is, u and v are on the same ray, i.e., u = 0 or v = 0, or u = αv for some α > 0. This property characterizes strictly convex normed spaces such as the ℓp spaces with 1 < p < ∞. However, there are normed spaces in which this is not true. For instance, consider the plane with the ℓ1 norm (the Manhattan distance) and denote u = (1, 0) and v = (0, 1). Then the triangle formed by u, v, and u + v, is non-degenerate but ‖u + v‖ = ‖(1, 1)‖ = |1| + |1| = 2 = ‖u‖ + ‖v‖. Example norms The absolute value is a norm for the real line; as required, the absolute value satisfies the triangle inequality for any real numbers x and y: |x + y| ≤ |x| + |y|. If x and y have the same sign or either of them is zero, then |x + y| = |x| + |y|. If x and y have opposite signs, then without loss of generality assume |x| > |y|. Then |x + y| = |x| − |y| < |x| + |y|. The triangle inequality is useful in mathematical analysis for determining the best upper estimate on the size of the sum of two numbers, in terms of the sizes of the individual numbers. There is also a lower estimate, which can be found using the reverse triangle inequality, which states that for any real numbers x and y, |x − y| ≥ ||x| − |y||. The taxicab norm or 1-norm is one generalization of absolute value to higher dimensions. To find the norm of a vector v = (v₁, v₂, …, vₙ), just add the absolute value of each component separately: ‖v‖₁ = |v₁| + |v₂| + ⋯ + |vₙ|. The Euclidean norm or 2-norm defines the length of translation vectors in an n-dimensional Euclidean space in terms of a Cartesian coordinate system. For a vector v = (v₁, v₂, …, vₙ), its length is defined using the n-dimensional Pythagorean theorem: ‖v‖₂ = √(v₁² + v₂² + ⋯ + vₙ²). The inner product induces a norm, ‖u‖ = √⟨u, u⟩, in any inner product space, a generalization of Euclidean vector spaces including infinite-dimensional examples. The triangle inequality follows from the Cauchy–Schwarz inequality as follows: Given vectors u and v, and denoting the inner product as ⟨u, v⟩: ‖u + v‖² = ⟨u + v, u + v⟩ = ‖u‖² + ⟨u, v⟩ + ⟨v, u⟩ + ‖v‖² ≤ ‖u‖² + 2|⟨u, v⟩| + ‖v‖² ≤ ‖u‖² + 2‖u‖‖v‖ + ‖v‖² (by the Cauchy–Schwarz inequality) = (‖u‖ + ‖v‖)². The Cauchy–Schwarz inequality turns into an equality if and only if u and v are linearly dependent. The middle inequality turns into an equality for linearly dependent u and v if and only if one of the vectors u or v is a nonnegative scalar multiple of the other. Taking the square root of the final result gives the triangle inequality. The p-norm is a generalization of the taxicab and Euclidean norms, using an arbitrary positive integer exponent: ‖x‖_p = (|x₁|^p + |x₂|^p + ⋯ + |xₙ|^p)^(1/p), where the xᵢ are the components of vector x. Except for the case p = 2, the p-norm is not an inner product norm, because it does not satisfy the parallelogram law. The triangle inequality for general values of p is called Minkowski's inequality. It takes the form: ‖x + y‖_p ≤ ‖x‖_p + ‖y‖_p. Metric space In a metric space M with metric d, the triangle inequality is a requirement upon distance: d(x, z) ≤ d(x, y) + d(y, z), for all points x, y, and z in M. That is, the distance from x to z is at most as large as the sum of the distance from x to y and the distance from y to z. The triangle inequality is responsible for most of the interesting structure on a metric space, namely, convergence. This is because the remaining requirements for a metric are rather simplistic in comparison.
For example, the fact that any convergent sequence in a metric space is a Cauchy sequence is a direct consequence of the triangle inequality, because if we choose any xₙ and xₘ such that d(xₙ, x) < ε/2 and d(xₘ, x) < ε/2, where ε > 0 is given and arbitrary (as in the definition of a limit in a metric space), then by the triangle inequality, d(xₙ, xₘ) ≤ d(xₙ, x) + d(x, xₘ) < ε/2 + ε/2 = ε, so that the sequence (xₙ) is a Cauchy sequence, by definition. This version of the triangle inequality reduces to the one stated above in case of normed vector spaces where a metric is induced via d(u, v) = ‖u − v‖, with u − v being the vector pointing from point v to point u. Reverse triangle inequality The reverse triangle inequality is an equivalent alternative formulation of the triangle inequality that gives lower bounds instead of upper bounds. For plane geometry, the statement is: Any side of a triangle is greater than or equal to the difference between the other two sides. In the case of a normed vector space, the statement is: |‖u‖ − ‖v‖| ≤ ‖u − v‖, or for metric spaces, |d(x, y) − d(x, z)| ≤ d(y, z). This implies that the norm ‖·‖ as well as the distance-from-z function d(·, z) are Lipschitz continuous with Lipschitz constant 1, and therefore are in particular uniformly continuous. The proof of the reverse triangle inequality from the usual one uses ‖u‖ = ‖(u − v) + v‖ ≤ ‖u − v‖ + ‖v‖ to find: ‖u‖ − ‖v‖ ≤ ‖u − v‖, and likewise, interchanging the roles of u and v, ‖v‖ − ‖u‖ ≤ ‖u − v‖. Combining these two statements gives: |‖u‖ − ‖v‖| ≤ ‖u − v‖. In the converse, the proof of the triangle inequality from the reverse triangle inequality works in two cases: If ‖u + v‖ − ‖v‖ ≥ 0, then by the reverse triangle inequality, ‖u + v‖ − ‖v‖ = |‖u + v‖ − ‖v‖| ≤ ‖(u + v) − v‖ = ‖u‖, and if ‖u + v‖ − ‖v‖ < 0, then trivially ‖u + v‖ < ‖v‖ ≤ ‖u‖ + ‖v‖ by the nonnegativity of the norm. Thus, in both cases, we find that ‖u + v‖ ≤ ‖u‖ + ‖v‖. For metric spaces, the proof of the reverse triangle inequality is found similarly by: d(x, y) ≤ d(x, z) + d(z, y), so that d(x, y) − d(x, z) ≤ d(y, z), and d(x, z) ≤ d(x, y) + d(y, z), so that d(x, z) − d(x, y) ≤ d(y, z). Putting these equations together we find: |d(x, y) − d(x, z)| ≤ d(y, z). And in the converse, beginning from the reverse triangle inequality, we can again use two cases: If d(x, y) − d(x, z) ≥ 0, then d(x, y) ≤ d(x, z) + d(y, z) follows directly, and if d(x, y) − d(x, z) < 0, then d(x, y) < d(x, z) ≤ d(x, z) + d(y, z) again by the nonnegativity of the metric. Thus, in both cases, we find that the triangle inequality holds. Triangle inequality for cosine similarity By applying the cosine function to the triangle inequality and reverse triangle inequality for arc lengths and employing the angle addition and subtraction formulas for cosines, it follows immediately that cos(A, C) ≥ cos(A, B)·cos(B, C) − √(1 − cos²(A, B))·√(1 − cos²(B, C)) and cos(A, C) ≤ cos(A, B)·cos(B, C) + √(1 − cos²(A, B))·√(1 − cos²(B, C)). With these formulas, one needs to compute a square root for each triple of vectors that is examined rather than for each pair of vectors examined, and could be a performance improvement when the number of triples examined is less than the number of pairs examined. Reversal in Minkowski space The Minkowski space metric is not positive-definite, which means that the squared norm ‖u‖² can have either sign or vanish, even if the vector u is non-zero. Moreover, if u and v are both timelike vectors lying in the future light cone, the triangle inequality is reversed: ‖u + v‖ ≥ ‖u‖ + ‖v‖. A physical example of this inequality is the twin paradox in special relativity. The same reversed form of the inequality holds if both vectors lie in the past light cone, and if one or both are null vectors. The result holds in n + 1 dimensions for any n ≥ 1. If the plane defined by u and v is space-like (and therefore a Euclidean subspace) then the usual triangle inequality holds. See also Subadditivity Minkowski inequality Ptolemy's inequality Notes References Geometric inequalities Linear algebra Metric geometry Articles containing proofs Theorems in geometry
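The norm and metric inequalities in the last two sections lend themselves to quick numerical spot-checks. A minimal sketch (plain Python lists stand in for vectors; a small tolerance absorbs floating-point rounding):

```python
import math
import random

def p_norm(x, p):
    """The p-norm: (|x1|^p + ... + |xn|^p)^(1/p), a norm for p >= 1."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

random.seed(0)
for p in (1, 1.5, 2, 3):
    for _ in range(1000):
        x = [random.uniform(-1, 1) for _ in range(4)]
        y = [random.uniform(-1, 1) for _ in range(4)]
        s = [xi + yi for xi, yi in zip(x, y)]   # x + y
        d = [xi - yi for xi, yi in zip(x, y)]   # x - y
        # Minkowski's inequality (the triangle inequality for the p-norm):
        assert p_norm(s, p) <= p_norm(x, p) + p_norm(y, p) + 1e-12
        # Reverse triangle inequality:
        assert abs(p_norm(x, p) - p_norm(y, p)) <= p_norm(d, p) + 1e-12

# Equality without collinearity in the 1-norm (the example from the text):
u, v = (1, 0), (0, 1)
print(p_norm([1, 1], 1), p_norm(u, 1) + p_norm(v, 1))   # 2.0 2.0
```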
Triangle inequality
[ "Mathematics" ]
3,439
[ "Mathematical theorems", "Articles containing proofs", "Geometric inequalities", "Geometry", "Theorems in geometry", "Linear algebra", "Inequalities (mathematics)", "Mathematical problems", "Algebra" ]
53,951
https://en.wikipedia.org/wiki/Diarrhea
Diarrhea (American English), also spelled diarrhoea or diarrhœa (British English), is the condition of having at least three loose, liquid, or watery bowel movements in a day. It often lasts for a few days and can result in dehydration due to fluid loss. Signs of dehydration often begin with loss of the normal stretchiness of the skin and irritable behaviour. This can progress to decreased urination, loss of skin color, a fast heart rate, and a decrease in responsiveness as it becomes more severe. Loose but non-watery stools in babies who are exclusively breastfed, however, are normal. The most common cause is an infection of the intestines due to a virus, bacterium, or parasite—a condition also known as gastroenteritis. These infections are often acquired from food or water that has been contaminated by feces, or directly from another person who is infected. The three types of diarrhea are: short duration watery diarrhea, short duration bloody diarrhea, and persistent diarrhea (lasting more than two weeks, which can be either watery or bloody). The short duration watery diarrhea may be due to cholera, although this is rare in the developed world. If blood is present, it is also known as dysentery. A number of non-infectious causes can result in diarrhea. These include lactose intolerance, irritable bowel syndrome, non-celiac gluten sensitivity, celiac disease, inflammatory bowel disease such as ulcerative colitis, hyperthyroidism, bile acid diarrhea, and a number of medications. In most cases, stool cultures to confirm the exact cause are not required. Diarrhea can be prevented by improved sanitation, clean drinking water, and hand washing with soap. Breastfeeding for at least six months and vaccination against rotavirus are also recommended. Oral rehydration solution (ORS)—clean water with modest amounts of salts and sugar—is the treatment of choice. Zinc tablets are also recommended. These treatments have been estimated to have saved 50 million children in the past 25 years. When people have diarrhea it is recommended that they continue to eat healthy food, and babies continue to be breastfed. If commercial ORS is not available, homemade solutions may be used. In those with severe dehydration, intravenous fluids may be required. Most cases, however, can be managed well with fluids by mouth. Antibiotics, while rarely used, may be recommended in a few cases such as those who have bloody diarrhea and a high fever, those with severe diarrhea following travelling, and those who grow specific bacteria or parasites in their stool. Loperamide may help decrease the number of bowel movements but is not recommended in those with severe disease. About 1.7 to 5 billion cases of diarrhea occur per year. It is most common in developing countries, where young children get diarrhea on average three times a year. Total deaths from diarrhea are estimated at 1.53 million in 2019—down from 2.9 million in 1990. In 2012, it was the second most common cause of deaths in children younger than five (0.76 million or 11%). Frequent episodes of diarrhea are also a common cause of malnutrition and the most common cause in those younger than five years of age. Other long term problems that can result include stunted growth and poor intellectual development. Terminology The word diarrhea is from the Ancient Greek διάρροια (diárrhoia), from διά (diá) "through" and ῥέω (rhéō) "flow". Diarrhea is the spelling in American English, whereas diarrhoea is the spelling in British English.
Slang terms for the condition include "the runs", "the squirts" (or "squits" in Britain) and "the trots". The word is often pronounced /ˌdaɪəˈriːə/. Definition Diarrhea is defined by the World Health Organization as having three or more loose or liquid stools per day, or as having more stools than is normal for that person. Acute diarrhea is defined as an abnormally frequent discharge of semisolid or fluid fecal matter from the bowel, lasting less than 14 days, by the World Gastroenterology Organization. Acute diarrhea that is watery may be known as AWD (Acute Watery Diarrhoea). Secretory Secretory diarrhea means that there is an increase in the active secretion, or there is an inhibition of absorption. There is little to no structural damage. The most common cause of this type of diarrhea is a cholera toxin that stimulates the secretion of anions, especially chloride ions (Cl⁻). Therefore, to maintain a charge balance in the gastrointestinal tract, sodium (Na⁺) is carried with it, along with water. In this type of diarrhea intestinal fluid secretion is isotonic with plasma even during fasting. It continues even when there is no oral food intake. Osmotic Osmotic diarrhea occurs when too much water is drawn into the bowels. If a person drinks solutions with excessive sugar or excessive salt, these can draw water from the body into the bowel and cause osmotic diarrhea. Osmotic diarrhea can also result from maldigestion (e.g., pancreatic disease or coeliac disease) in which the nutrients are left in the lumen to pull in water. Or it can be caused by osmotic laxatives (which work to alleviate constipation by drawing water into the bowels). In healthy individuals, too much magnesium, vitamin C or undigested lactose can produce osmotic diarrhea and distention of the bowel. A person who has lactose intolerance can have difficulty absorbing lactose after an extraordinarily high intake of dairy products. In persons who have fructose malabsorption, excess fructose intake can also cause diarrhea. High-fructose foods that also have a high glucose content are more absorbable and less likely to cause diarrhea. Sugar alcohols such as sorbitol (often found in sugar-free foods) are difficult for the body to absorb and, in large amounts, may lead to osmotic diarrhea. In most of these cases, osmotic diarrhea stops when the offending agent (e.g., milk or sorbitol) is stopped. Exudative Exudative diarrhea occurs with the presence of blood and pus in the stool. This occurs with inflammatory bowel diseases, such as Crohn's disease or ulcerative colitis, and other severe infections such as E. coli or other forms of food poisoning. Inflammatory Inflammatory diarrhea occurs when there is damage to the mucosal lining or brush border, which leads to a passive loss of protein-rich fluids and a decreased ability to absorb these lost fluids. Features of all three of the other types of diarrhea can be found in this type of diarrhea. It can be caused by bacterial infections, viral infections, parasitic infections, or autoimmune problems such as inflammatory bowel diseases. It can also be caused by tuberculosis, colon cancer, and enteritis. Dysentery If there is blood visible in the stools, it is also known as dysentery. The blood is a trace of an invasion of bowel tissue. Dysentery is a symptom of, among others, Shigella, Entamoeba histolytica, and Salmonella. Health effects Diarrheal disease may have a negative impact on both physical fitness and mental development.
"Early childhood malnutrition resulting from any cause reduces physical fitness and work productivity in adults", and diarrhea is a primary cause of childhood malnutrition. Further, evidence suggests that diarrheal disease has significant impacts on mental development and health; it has been shown that, even when controlling for helminth infection and early breastfeeding, children who had experienced severe diarrhea had significantly lower scores on a series of tests of intelligence. Diarrhea can cause electrolyte imbalances, kidney impairment, dehydration, and defective immune system responses. When oral drugs are administered, the efficiency of the drug is to produce a therapeutic effect and the lack of this effect may be due to the medication travelling too quickly through the digestive system, limiting the time that it can be absorbed. Clinicians try to treat the diarrheas by reducing the dosage of medication, changing the dosing schedule, discontinuation of the drug, and rehydration. The interventions to control the diarrhea are not often effective. Diarrhea can have a profound effect on the quality of life because fecal incontinence is one of the leading factors for placing older adults in long term care facilities (nursing homes). Causes In the latter stages of human digestion, ingested materials are inundated with water and digestive fluids such as gastric acid, bile, and digestive enzymes in order to break them down into their nutrient components, which are then absorbed into the bloodstream via the intestinal tract in the small intestine. Prior to defecation, the large intestine reabsorbs the water and other digestive solvents in the waste product in order to maintain proper hydration and overall equilibrium. Diarrhea occurs when the large intestine is prevented, for any number of reasons, from sufficiently absorbing the water or other digestive fluids from fecal matter, resulting in a liquid, or "loose", bowel movement. Acute diarrhea is most commonly due to viral gastroenteritis with rotavirus, which accounts for 40% of cases in children under five. In travelers, however, bacterial infections predominate. Various toxins such as mushroom poisoning and drugs can also cause acute diarrhea. Chronic diarrhea can be the part of the presentations of a number of chronic medical conditions affecting the intestine. Common causes include ulcerative colitis, Crohn's disease, microscopic colitis, celiac disease, irritable bowel syndrome, and bile acid malabsorption. Infections There are many causes of infectious diarrhea, which include viruses, bacteria and parasites. Infectious diarrhea is frequently referred to as gastroenteritis. Norovirus is the most common cause of viral diarrhea in adults, but rotavirus is the most common cause in children under five years old. Adenovirus types 40 and 41, and astroviruses cause a significant number of infections. Shiga-toxin producing Escherichia coli, such as E coli o157:h7, are the most common cause of infectious bloody diarrhea in the United States. Campylobacter spp. are a common cause of bacterial diarrhea, but infections by Salmonella spp., Shigella spp. and some strains of Escherichia coli are also a frequent cause. In the elderly, particularly those who have been treated with antibiotics for unrelated infections, a toxin produced by Clostridioides difficile often causes severe diarrhea. 
Parasites, particularly protozoa e.g., Cryptosporidium spp., Giardia spp., Entamoeba histolytica, Blastocystis spp., Cyclospora cayetanensis, are frequently the cause of diarrhea that involves chronic infection. The broad-spectrum antiparasitic agent nitazoxanide has shown efficacy against many diarrhea-causing parasites. Other infectious agents, such as parasites or bacterial toxins, may exacerbate symptoms. In sanitary living conditions where there is ample food and a supply of clean water, an otherwise healthy person usually recovers from viral infections in a few days. However, for ill or malnourished individuals, diarrhea can lead to severe dehydration and can become life-threatening. Sanitation Open defecation is a leading cause of infectious diarrhea leading to death. Poverty is a good indicator of the rate of infectious diarrhea in a population. This association does not stem from poverty itself, but rather from the conditions under which impoverished people live. The absence of certain resources compromises the ability of the poor to defend themselves against infectious diarrhea. "Poverty is associated with poor housing, crowding, dirt floors, lack of access to clean water or to sanitary disposal of fecal waste (sanitation), cohabitation with domestic animals that may carry human pathogens, and a lack of refrigerated storage for food, all of which increase the frequency of diarrhea... Poverty also restricts the ability to provide age-appropriate, nutritionally balanced diets or to modify diets when diarrhea develops so as to mitigate and repair nutrient losses. The impact is exacerbated by the lack of adequate, available, and affordable medical care." One of the most common causes of infectious diarrhea is a lack of clean water. Often, improper fecal disposal leads to contamination of groundwater. This can lead to widespread infection among a population, especially in the absence of water filtration or purification. Human feces contains a variety of potentially harmful human pathogens. Nutrition Proper nutrition is important for health and functioning, including the prevention of infectious diarrhea. It is especially important to young children who do not have a fully developed immune system. Zinc deficiency, a condition often found in children in developing countries can, even in mild cases, have a significant impact on the development and proper functioning of the human immune system. Indeed, this relationship between zinc deficiency and reduced immune functioning corresponds with an increased severity of infectious diarrhea. Children who have lowered levels of zinc have a greater number of instances of diarrhea, severe diarrhea, and diarrhea associated with fever. Similarly, vitamin A deficiency can cause an increase in the severity of diarrheal episodes. However, there is some discrepancy when it comes to the impact of vitamin A deficiency on the rate of disease. While some argue that a relationship does not exist between the rate of disease and vitamin A status, others suggest an increase in the rate associated with deficiency. Given that estimates suggest 127 million preschool children worldwide are vitamin A deficient, this population has the potential for increased risk of disease contraction. Malabsorption Malabsorption is the inability to absorb food fully, mostly from disorders in the small bowel, but also due to maldigestion from diseases of the pancreas. Causes include: enzyme deficiencies or mucosal abnormality, as in food allergy and food intolerance, e.g. 
celiac disease (gluten intolerance), lactose intolerance (intolerance to milk sugar, common in non-Europeans), and fructose malabsorption. pernicious anemia, or impaired bowel function due to the inability to absorb vitamin B12, loss of pancreatic secretions, which may be due to cystic fibrosis or pancreatitis, structural defects, like short bowel syndrome (surgically removed bowel) and radiation fibrosis, such as usually follows cancer treatment and other drugs, including agents used in chemotherapy; and certain drugs, like orlistat, which inhibits the absorption of fat. Inflammatory bowel disease The two overlapping types here are of unknown origin: Ulcerative colitis is marked by chronic bloody diarrhea and inflammation mostly affects the distal colon near the rectum. Crohn's disease typically affects fairly well demarcated segments of bowel in the colon and often affects the end of the small bowel. Irritable bowel syndrome Another possible cause of diarrhea is irritable bowel syndrome (IBS), which usually presents with abdominal discomfort relieved by defecation and unusual stool (diarrhea or constipation) for at least three days a week over the previous three months. Symptoms of diarrhea-predominant IBS can be managed through a combination of dietary changes, soluble fiber supplements and medications such as loperamide or codeine. About 30% of patients with diarrhea-predominant IBS have bile acid malabsorption diagnosed with an abnormal SeHCAT test. Other diseases Diarrhea can be caused by other diseases and conditions, namely: Chronic ethanol ingestion Hyperthyroidism Certain medications Bile acid malabsorption Ischemic bowel disease: This usually affects older people and can be due to blocked arteries. Microscopic colitis, a type of inflammatory bowel disease where changes are seen only on histological examination of colonic biopsies. Bile salt malabsorption (primary bile acid diarrhea) where excessive bile acids in the colon produce a secretory diarrhea. Hormone-secreting tumors: some hormones, e.g. serotonin, can cause diarrhea if secreted in excess (usually from a tumor). Chronic mild diarrhea in infants and toddlers may occur with no obvious cause and with no other ill effects; this condition is called toddler's diarrhea. Environmental enteropathy Radiation enteropathy following treatment for pelvic and abdominal cancers. Medications Over 700 medications, such as penicillin, are known to cause diarrhea. The classes of medications that are known to cause diarrhea are laxatives, antacids, heartburn medications, antibiotics, anti-neoplastic drugs, anti-inflammatories as well as many dietary supplements. Pathophysiology Evolution According to two researchers, Nesse and Williams, diarrhea may function as an evolved expulsion defense mechanism. As a result, if it is stopped, there might be a delay in recovery. They cite in support of this argument research published in 1973 that found that treating Shigella with the anti-diarrhea drug (Co-phenotrope, Lomotil) caused people to stay feverish twice as long as those not so treated. The researchers indeed themselves observed that: "Lomotil may be contraindicated in shigellosis. Diarrhea may represent a defense mechanism". Diagnostic approach The following types of diarrhea may indicate further investigation is needed: In infants Moderate or severe diarrhea in young children Associated with blood Continues for more than two days Associated non-cramping abdominal pain, fever, weight loss, etc. 
In travelers In food handlers, because of the potential to infect others; In institutions such as hospitals, child care centers, or geriatric and convalescent homes. A severity score is used to aid diagnosis in children. When diarrhea lasts for more than four weeks a number of further tests may be recommended including: Complete blood count and a ferritin if anemia is present Thyroid stimulating hormone Tissue transglutaminase for celiac disease Fecal calprotectin to exclude inflammatory bowel disease Stool tests for ova and parasites as well as for Clostridioides difficile A colonoscopy or fecal immunochemical testing for cancer, including biopsies to detect microscopic colitis Testing for bile acid diarrhea with SeHCAT, 7α-hydroxy-4-cholesten-3-one or fecal bile acids depending on availability Hydrogen breath test looking for lactose intolerance Further tests if immunodeficiency, pelvic radiation disease or small intestinal bacterial overgrowth suspected. A 2019 guideline recommended that testing for ova and parasites was only needed in people who are at high risk though they recommend routine testing for giardia. Erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) were not recommended. Epidemiology Worldwide in 2004, approximately 2.5 billion cases of diarrhea occurred, which resulted in 1.5 million deaths among children under the age of five. Greater than half of these were in Africa and South Asia. This is down from a death rate of 4.5 million in 1980 for gastroenteritis. Diarrhea remains the second leading cause of infant mortality (16%) after pneumonia (17%) in this age group. The majority of such cases occur in the developing world, with over half of the recorded cases of childhood diarrhea occurring in Africa and Asia, with 696 million and 1.2 billion cases, respectively, compared to only 480 million in the rest of the world. Infectious diarrhea resulted in about 0.7 million deaths in children under five years old in 2011 and 250 million lost school days. In the Americas, diarrheal disease accounts for a total of 10% of deaths among children aged 1–59 months while in South East Asia, it accounts for 31.3% of deaths. It is estimated that around 21% of child mortalities in developing countries are due to diarrheal disease. The World Health Organization has reported that "deaths due to diarrhoeal diseases have dropped by 45%, from sixth leading cause of death in 2000 to thirteenth in 2021." Even though diarrhea is best known in humans, it affects many other species, notably among primates. The cecal appendix, when present, appears to afford some protection against diarrhea to young primates. Prevention Sanitation Numerous studies have shown that improvements in drinking water and sanitation (WASH) lead to decreased risks of diarrhoea. Such improvements might include for example use of water filters, provision of high-quality piped water and sewer connections. In institutions, communities, and households, interventions that promote hand washing with soap lead to significant reductions in the incidence of diarrhea. The same applies to preventing open defecation at a community-wide level and providing access to improved sanitation. This includes use of toilets and implementation of the entire sanitation chain connected to the toilets (collection, transport, disposal or reuse of human excreta). There is limited evidence that safe disposal of child or adult feces can prevent diarrheal disease. 
Hand washing Basic sanitation techniques can have a profound effect on the transmission of diarrheal disease. The implementation of hand washing using soap and water, for example, has been experimentally shown to reduce the incidence of disease by approximately 30–48%. Hand washing in developing countries, however, is compromised by poverty as acknowledged by the CDC: "Handwashing is integral to disease prevention in all parts of the world; however, access to soap and water is limited in a number of less developed countries. This lack of access is one of many challenges to proper hygiene in less developed countries." Solutions to this barrier require the implementation of educational programs that encourage sanitary behaviours. Water Given that water contamination is a major means of transmitting diarrheal disease, efforts to provide clean water supply and improved sanitation have the potential to dramatically cut the rate of disease incidence. In fact, it has been proposed that we might expect an 88% reduction in child mortality resulting from diarrheal disease as a result of improved water sanitation and hygiene. Similarly, a meta-analysis of numerous studies on improving water supply and sanitation shows a 22–27% reduction in disease incidence, and a 21–30% reduction in mortality rate associated with diarrheal disease. Chlorine treatment of water, for example, has been shown to reduce both the risk of diarrheal disease, and of contamination of stored water with diarrheal pathogens. Vaccination Immunization against the pathogens that cause diarrheal disease is a viable prevention strategy, however it does require targeting certain pathogens for vaccination. In the case of Rotavirus, which was responsible for around 6% of diarrheal episodes and 20% of diarrheal disease deaths in the children of developing countries, use of a Rotavirus vaccine in trials in 1985 yielded a slight (2–3%) decrease in total diarrheal disease incidence, while reducing overall mortality by 6–10%. Similarly, a Cholera vaccine showed a strong reduction in morbidity and mortality, though the overall impact of vaccination was minimal as Cholera is not one of the major causative pathogens of diarrheal disease. Since this time, more effective vaccines have been developed that have the potential to save many thousands of lives in developing nations, while reducing the overall cost of treatment, and the costs to society. Rotavirus vaccine decreases the rates of diarrhea in a population. New vaccines against rotavirus, Shigella, Enterotoxigenic Escherichia coli (ETEC), and cholera are under development, as well as other causes of infectious diarrhea. Nutrition Dietary deficiencies in developing countries can be combated by promoting better eating practices. Zinc supplementation proved successful showing a significant decrease in the incidence of diarrheal disease compared to a control group. The majority of the literature suggests that vitamin A supplementation is advantageous in reducing disease incidence. Development of a supplementation strategy should take into consideration the fact that vitamin A supplementation was less effective in reducing diarrhea incidence when compared to vitamin A and zinc supplementation, and that the latter strategy was estimated to be significantly more cost effective. Breastfeeding Breastfeeding practices have been shown to have a dramatic effect on the incidence of diarrheal disease in poor populations. 
Studies across a number of developing nations have shown that those who receive exclusive breastfeeding during their first 6 months of life are better protected against infection with diarrheal diseases. One study in Brazil found that non-breastfed infants were 14 times more likely to die from diarrhea than exclusively breastfed infants. Exclusive breastfeeding is currently recommended for the first six months of an infant's life by the WHO, with continued breastfeeding until at least two years of age. Others Probiotics decrease the risk of diarrhea in those taking antibiotics. Insecticide spraying may reduce fly numbers and the risk of diarrhea in children in a setting where there is seasonal variations in fly numbers throughout the year. Management In many cases of diarrhea, replacing lost fluid and salts is the only treatment needed. This is usually by mouth – oral rehydration therapy – or, in severe cases, intravenously. Diet restrictions such as the BRAT diet are no longer recommended. Research does not support the limiting of milk to children as doing so has no effect on duration of diarrhea. To the contrary, WHO recommends that children with diarrhea continue to eat as sufficient nutrients are usually still absorbed to support continued growth and weight gain, and that continuing to eat also speeds up recovery of normal intestinal functioning. CDC recommends that children and adults with cholera also continue to eat. There is no evidence that early refeeding in children can cause an increase in inappropriate use of intravenous fluid, episodes of vomiting, and risk of having persistent diarrhea. Medications such as loperamide (Imodium) and bismuth subsalicylate may be beneficial; however they may be contraindicated in certain situations. Fluids Oral rehydration solution (ORS) (a slightly sweetened and salty water) can be used to prevent dehydration. Standard home solutions such as salted rice water, salted yogurt drinks, vegetable and chicken soups with salt can be given. Home solutions such as water in which cereal has been cooked, unsalted soup, green coconut water, weak tea (unsweetened), and unsweetened fresh fruit juices can have from half a teaspoon to full teaspoon of salt (from one-and-a-half to three grams) added per liter. Clean plain water can also be one of several fluids given. There are commercial solutions such as Pedialyte, and relief agencies such as UNICEF widely distribute packets of salts and sugar. A WHO publication for physicians recommends a homemade ORS consisting of one liter water with one teaspoon salt (3 grams) and two tablespoons sugar (18 grams) added (approximately the "taste of tears"). Rehydration Project recommends adding the same amount of sugar but only one-half a teaspoon of salt, stating that this more dilute approach is less risky with very little loss of effectiveness. Both agree that drinks with too much sugar or salt can make dehydration worse. Appropriate amounts of supplemental zinc and potassium should be added if available. But the availability of these should not delay rehydration. As WHO points out, the most important thing is to begin preventing dehydration as early as possible. In another example of prompt ORS hopefully preventing dehydration, CDC recommends for the treatment of cholera continuing to give Oral Rehydration Solution during travel to medical treatment. 
Vomiting often occurs during the first hour or two of treatment with ORS, especially if a child drinks the solution too quickly, but this seldom prevents successful rehydration since most of the fluid is still absorbed. WHO recommends that if a child vomits, to wait five or ten minutes and then start to give the solution again more slowly. Drinks especially high in simple sugars, such as soft drinks and fruit juices, are not recommended in children under five as they may increase dehydration. A too rich solution in the gut draws water from the rest of the body, just as if the person were to drink sea water. Plain water may be used if more specific and effective ORT preparations are unavailable or are not palatable. Additionally, a mix of both plain water and drinks perhaps too rich in sugar and salt can alternatively be given to the same person, with the goal of providing a medium amount of sodium overall. A nasogastric tube can be used in young children to administer fluids if warranted. Eating The WHO recommends a child with diarrhea continue to be fed. Continued feeding speeds the recovery of normal intestinal function. In contrast, children whose food is restricted have diarrhea of longer duration and recover intestinal function more slowly. The WHO states "Food should never be withheld and the child's usual foods should not be diluted. Breastfeeding should always be continued." In the specific example of cholera, the CDC makes the same recommendation. Breast-fed infants with diarrhea often choose to breastfeed more, and should be encouraged to do so. In young children who are not breast-fed and live in the developed world, a lactose-free diet may be useful to speed recovery. Eating food containing soluble fibre may help, but insoluble fibre might make it worse. Medications Antidiarrheal agents can be classified into four different groups: antimotility, antisecretory, adsorbent, and anti-infectious. While antibiotics are beneficial in certain types of acute diarrhea, they are usually not used except in specific situations. There are concerns that antibiotics may increase the risk of hemolytic uremic syndrome in people infected with Escherichia coli O157:H7. In resource-poor countries, treatment with antibiotics may be beneficial. However, some bacteria are developing antibiotic resistance, particularly Shigella. Antibiotics can also cause diarrhea, and antibiotic-associated diarrhea is the most common adverse effect of treatment with general antibiotics. While bismuth compounds (Pepto-Bismol) decreased the number of bowel movements in those with travelers' diarrhea, they do not decrease the length of illness. Anti-motility agents like loperamide are also effective at reducing the number of stools but not the duration of disease. These agents should be used only if bloody diarrhea is not present. Diosmectite, a natural aluminomagnesium silicate clay, is effective in alleviating symptoms of acute diarrhea in children, and also has some effects in chronic functional diarrhea, radiation-induced diarrhea, and chemotherapy-induced diarrhea. Another absorbent agent used for the treatment of mild diarrhea is kaopectate. Racecadotril an antisecretory medication may be used to treat diarrhea in children and adults. It has better tolerability than loperamide, as it causes less constipation and flatulence. However, it has little benefit in improving acute diarrhea in children. Bile acid sequestrants such as cholestyramine can be effective in chronic diarrhea due to bile acid malabsorption. 
Therapeutic trials of these drugs are indicated in chronic diarrhea if bile acid malabsorption cannot be diagnosed with a specific test, such as SeHCAT retention. Alternative therapies Zinc supplementation may benefit children over six months old with diarrhea in areas with high rates of malnourishment or zinc deficiency. This supports the World Health Organization guidelines for zinc, but not in the very young. A Cochrane Review from 2020 concludes that probiotics make little or no difference to people who have diarrhea lasting 2 days or longer and that there is no proof that they reduce its duration. The probiotic lactobacillus can help prevent antibiotic-associated diarrhea in adults but possibly not children. For those with lactose intolerance, taking digestive enzymes containing lactase when consuming dairy products often improves symptoms. See also References External links WHO fact sheet on diarrhoeal disease Intestinal infectious diseases Waterborne diseases Diseases of intestines Conditions diagnosed by stool test Symptoms and signs: Digestive system and abdomen Feces Wikipedia medicine articles ready to translate Sanitation Wikipedia emergency medicine articles ready to translate Articles containing video clips
Diarrhea
[ "Biology" ]
7,009
[ "Excretion", "Feces", "Animal waste products" ]
53,953
https://en.wikipedia.org/wiki/Sokoban
is a puzzle video game in which the player pushes boxes around in a warehouse, trying to get them to storage locations. The game was designed in 1981 by Hiroyuki Imabayashi, and first published in December 1982. Gameplay The warehouse is depicted as a grid of squares, each one representing either a floor section or a wall section. Some floor squares contain boxes and some are marked as storage locations. The player, often represented as a worker character, can move one square at a time horizontally or vertically onto empty floor squares, but cannot pass through walls or boxes. To move a box, the player walks up to it and pushes it to an empty square directly beyond the box. Boxes cannot be pushed to squares with walls or other boxes, and they cannot be pulled. The number of boxes matches the number of storage locations. The puzzle is solved when all boxes occupy the storage locations. Challenges and strategy Progressing through the game requires careful planning and precise maneuvering. A single mistake, such as pushing a box into a corner or obstructing the path of others, can render the puzzle unsolvable, forcing the player to backtrack or restart. Anticipating the consequences of each push and considering the overall layout of the puzzle are crucial to avoid deadlocks and complete the puzzle successfully. Development Sokoban was created in 1981 by Hiroyuki Imabayashi. The first commercial game was published in December 1982 by his company, Thinking Rabbit, based in Takarazuka, Japan. Sokoban was a hit in Japan, selling over 400,000 copies before being released in the United States. In 1988, Spectrum HoloByte published Sokoban in the U.S. for the IBM PC, Commodore 64, and Apple II as Soko-Ban. In 2001, the Japanese software company Falcon acquired the trademarks for Sokoban and Thinking Rabbit. Since then, Falcon has continued to develop and license official Sokoban games. Implementations Sokoban has been implemented for almost all home computers, personal computers, video game consoles and even some TVs. Versions also exist for mobile phones, graphing calculators, digital cameras and electronic organizers. Scientific research Sokoban has been studied using the theory of computational complexity. The computational problem of solving Sokoban puzzles was first shown to be NP-hard. Further work proved it is also PSPACE-complete. Solving non-trivial Sokoban puzzles is difficult for computers because of the high branching factor (many legal pushes at each turn) and the large search depth (many pushes needed to reach a solution). Even small puzzles can require lengthy solutions. The Sokoban game provides a challenging testbed for developing and evaluating planning techniques. The first documented automated solver was Rolling Stone, developed at the University of Alberta. Its core principles laid the groundwork for many newer solvers. It employed a conventional search algorithm enhanced with domain-specific knowledge. Festival, utilizing its FESS algorithm, was the first automatic solver to complete all 90 puzzles in the widely used XSokoban test suite. However, even the best automated solvers cannot solve many of the more challenging puzzles that humans can solve with time and effort. Variants Several puzzles can be considered variants of the original Sokoban game in the sense that they all make use of a controllable character pushing boxes around in a maze. Alternative tilings: In the standard game, the mazes are laid out on a square grid. 
Several variants apply the rules of Sokoban to mazes laid out on other tilings. Hexoban uses regular hexagons, and Trioban uses equilateral triangles. Multiple pushers: In the variant Multiban, the puzzle contains more than one pusher. In the game Sokoboxes Duo, exactly two pushers collaborate to solve the puzzle. Designated storage locations: In Sokomind Plus, some boxes and target squares are uniquely numbered. In Block-o-Mania, the boxes have different colours, and the goal is to push them onto squares with matching colours. Alternative game objectives: Several variants feature different objectives from the traditional Sokoban gameplay. For instance, in Interlock and Sokolor, the boxes have different colours, but the objective is to move them so that similarly coloured boxes are adjacent. In CyberBox, each level has a designated exit square, and the objective is to reach that exit by pushing boxes, potentially more than one simultaneously. In a variant called Beanstalk, the objective is to push the elements of the level onto a target square in a fixed sequence. Additional game elements: Push Crate, Sokonex, Xsok, Cyberbox and Block-o-Mania all add new elements to the basic puzzle. Examples include holes, teleports, moving blocks and one-way passages. Character actions: In Pukoban, the character can pull boxes in addition to pushing them. Reverse mode: In this variant, the player solves the standard puzzle backward, starting with all boxes on goal squares. Then the player pulls the boxes to reach the initial position. Solutions obtained in reverse mode can be directly applied to solve the standard puzzle by reversing the order of the moves. This makes reverse mode a useful tool for players, allowing them to develop strategies for solving puzzles in the standard game. Selected official releases Prominent official Sokoban releases, organized by release date, have marked milestones such as expansion to new platforms or periods of widespread popularity. See also Logic puzzle Sliding puzzle Transport puzzle Motion planning References External links Official Sokoban site (in Japanese) The University of Alberta Sokoban page 1982 video games ASCII Corporation games Cancelled Atari Jaguar games Commodore 64 games DOS games FM-7 games GP2X games Japanese inventions Linux games Logic puzzles MacOS games Maze games MSX games NEC PC-6001 games NEC PC-8001 games NEC PC-8801 games NEC PC-9801 games PSPACE-complete problems Puzzle video games SG-1000 games Sharp MZ games Sharp X1 games X68000 games Single-player video games Thinking Rabbit games Video games developed in Japan Windows games Windows Mobile Professional games ZX Spectrum games NP-complete problems
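To make the push rule described under Gameplay concrete, here is a minimal sketch in Python. The grid encoding ('#' wall, '$' box, ' ' floor) follows a common Sokoban text format, but the function names and the simplified handling of storage targets are this sketch's own assumptions, not taken from any official implementation:

```python
# Minimal sketch of the Sokoban push rule and a simple corner-deadlock
# test. Grid: list of lists of single characters; '#' = wall, '$' = box,
# ' ' = open floor. Storage targets are tracked separately as a set of
# (row, col) pairs, so pushing a box over a target needs no special case.
WALL, BOX, FLOOR = "#", "$", " "

def try_move(grid, player, direction):
    """Attempt one step; mutates grid when a box is pushed.
    Returns the new player position, or None if the move is illegal."""
    dr, dc = direction                      # e.g. (0, 1) means "move right"
    r, c = player
    nr, nc = r + dr, c + dc                 # square the player steps onto
    if grid[nr][nc] == WALL:
        return None                         # cannot walk through walls
    if grid[nr][nc] == BOX:
        br, bc = nr + dr, nc + dc           # square directly beyond the box
        if grid[br][bc] in (WALL, BOX):
            return None                     # boxes cannot be pushed into walls or boxes
        grid[br][bc] = BOX                  # push the box one square forward
        grid[nr][nc] = FLOOR
    return (nr, nc)

def is_corner_deadlock(grid, box, targets):
    """A box in a corner that is not a storage target can never be moved
    again (boxes cannot be pulled), so the puzzle becomes unsolvable."""
    r, c = box
    if (r, c) in targets:
        return False
    blocked_vert = grid[r - 1][c] == WALL or grid[r + 1][c] == WALL
    blocked_horiz = grid[r][c - 1] == WALL or grid[r][c + 1] == WALL
    return blocked_vert and blocked_horiz
```

A solver built on this state representation would explore pushes with a search such as breadth-first search or A*, pruning states where is_corner_deadlock holds; this is, in much simplified form, the spirit of the domain-specific pruning used by solvers such as Rolling Stone mentioned above.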
Sokoban
[ "Mathematics" ]
1,265
[]
53,954
https://en.wikipedia.org/wiki/Work%20function
In solid-state physics, the work function (sometimes spelled workfunction) is the minimum thermodynamic work (i.e., energy) needed to remove an electron from a solid to a point in the vacuum immediately outside the solid surface. Here "immediately" means that the final electron position is far from the surface on the atomic scale, but still too close to the solid to be influenced by ambient electric fields in the vacuum. The work function is not a characteristic of a bulk material, but rather a property of the surface of the material (depending on crystal face and contamination). Definition The work function W for a given surface is defined by the difference W = −eϕ − EF, where −e is the charge of an electron, ϕ is the electrostatic potential in the vacuum nearby the surface, and EF is the Fermi level (electrochemical potential of electrons) inside the material. The term −eϕ is the energy of an electron at rest in the vacuum nearby the surface. In practice, one directly controls EF by the voltage applied to the material through electrodes, and the work function is generally a fixed characteristic of the surface material. Consequently, this means that when a voltage is applied to a material, the electrostatic potential ϕ produced in the vacuum will be somewhat lower than the applied voltage, the difference depending on the work function of the material surface. Rearranging the above equation, one has ϕ = V − W/e, where V = −EF/e is the voltage of the material (as measured by a voltmeter, through an attached electrode), relative to an electrical ground that is defined as having zero Fermi level. The fact that W depends on the material surface means that the space between two dissimilar conductors will have a built-in electric field, when those conductors are in total equilibrium with each other (electrically shorted to each other, and with equal temperatures). The work function refers to removal of an electron to a position that is far enough from the surface (many nm) that the force between the electron and its image charge in the surface can be neglected. The electron must also be close to the surface compared to the nearest edge of a crystal facet, or to any other change in the surface structure, such as a change in the material composition, surface coating or reconstruction. The built-in electric field that results from these structures, and any other ambient electric field present in the vacuum, are excluded in defining the work function. Applications Thermionic emission In thermionic electron guns, the work function and temperature of the hot cathode are critical parameters in determining the amount of current that can be emitted. Tungsten, the common choice for vacuum tube filaments, can survive to high temperatures but its emission is somewhat limited due to its relatively high work function (approximately 4.5 eV). By coating the tungsten with a substance of lower work function (e.g., thorium or barium oxide), the emission can be greatly increased. This prolongs the lifetime of the filament by allowing operation at lower temperatures (for more information, see hot cathode). Band bending models in solid-state electronics The behavior of a solid-state device is strongly dependent on the size of various Schottky barriers and band offsets in the junctions of differing materials, such as metals, semiconductors, and insulators.
Some commonly used heuristic approaches to predict the band alignment between materials, such as Anderson's rule and the Schottky–Mott rule, are based on the thought experiment of two materials coming together in vacuum, such that the surfaces charge up and adjust their work functions to become equal just before contact. In reality these work function heuristics are inaccurate due to their neglect of numerous microscopic effects. However, they provide a convenient estimate until the true value can be determined by experiment. Equilibrium electric fields in vacuum chambers Variation in work function between different surfaces causes a non-uniform electrostatic potential in the vacuum. Even on an ostensibly uniform surface, variations in W known as patch potentials are always present due to microscopic inhomogeneities. Patch potentials have disrupted sensitive apparatus that relies on a perfectly uniform vacuum, such as Casimir force experiments and the Gravity Probe B experiment. Critical apparatus may have surfaces covered with molybdenum, which shows low variations in work function between different crystal faces. Contact electrification If two conducting surfaces are moved relative to each other, and there is a potential difference in the space between them, then an electric current will be driven. This is because the surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the distance between the surfaces. The externally observed electrical effects are largest when the conductors are separated by the smallest distance without touching (once brought into contact, the charge will instead flow internally through the junction between the conductors). Since two conductors in equilibrium can have a built-in potential difference due to work function differences, this means that bringing dissimilar conductors into contact, or pulling them apart, will drive electric currents. These contact currents can damage sensitive microelectronic circuitry and occur even when the conductors would be grounded in the absence of motion. Measurement Certain physical phenomena are highly sensitive to the value of the work function. The observed data from these effects can be fitted to simplified theoretical models, allowing one to extract a value of the work function. These phenomenologically extracted work functions may be slightly different from the thermodynamic definition given above. For inhomogeneous surfaces, the work function varies from place to place, and different methods will yield different values of the typical "work function" as they average or select differently among the microscopic work functions. Many techniques have been developed based on different physical effects to measure the electronic work function of a sample. One may distinguish between two groups of experimental methods for work function measurements: absolute and relative. Absolute methods employ electron emission from the sample induced by photon absorption (photoemission), by high temperature (thermionic emission), due to an electric field (field electron emission), or using electron tunnelling. Relative methods make use of the contact potential difference between the sample and a reference electrode. Experimentally, either an anode current of a diode is used or the displacement current between the sample and reference, created by an artificial change in the capacitance between the two, is measured (the Kelvin probe method, Kelvin probe force microscope).
However, absolute work function values can be obtained if the tip is first calibrated against a reference sample. Methods based on thermionic emission The work function is important in the theory of thermionic emission, where thermal fluctuations provide enough energy to "evaporate" electrons out of a hot material (called the 'emitter') into the vacuum. If these electrons are absorbed by another, cooler material (called the collector) then a measurable electric current will be observed. Thermionic emission can be used to measure the work function of both the hot emitter and cold collector. Generally, these measurements involve fitting to Richardson's law, and so they must be carried out in a low temperature and low current regime where space charge effects are absent. In order to move from the hot emitter to the vacuum, an electron's energy must exceed the emitter Fermi level by an amount determined simply by the thermionic work function of the emitter. If an electric field is applied towards the surface of the emitter, then all of the escaping electrons will be accelerated away from the emitter and absorbed into whichever material is applying the electric field. According to Richardson's law the emitted current density (per unit area of emitter), Je (A/m2), is related to the absolute temperature Te of the emitter by the equation Je = Ae Te^2 exp(−We / kTe), where k is the Boltzmann constant and the proportionality constant Ae is the Richardson constant of the emitter. In this case, the dependence of Je on Te can be fitted to yield We (a brief numerical illustration is given at the end of the Measurement discussion below). Work function of cold electron collector The same setup can be used to instead measure the work function in the collector, simply by adjusting the applied voltage. If an electric field is applied away from the emitter instead, then most of the electrons coming from the emitter will simply be reflected back to the emitter. Only the highest energy electrons will have enough energy to reach the collector, and the height of the potential barrier in this case depends on the collector's work function, rather than the emitter's. The current is still governed by Richardson's law. However, in this case the barrier height does not depend on We. The barrier height now depends on the work function of the collector, as well as any additional applied voltages: Ebarrier = Wc − eΔVce − eΔVS, where Wc is the collector's thermionic work function, ΔVce is the applied collector–emitter voltage, and ΔVS is the Seebeck voltage in the hot emitter (the influence of ΔVS is often omitted, as it is a small contribution of order 10 mV). The resulting current density Jc through the collector (per unit of collector area) is again given by Richardson's law, except now Jc = A Te^2 exp(−Ebarrier / kTe), where A is a Richardson-type constant that depends on the collector material but may also depend on the emitter material, and the diode geometry. In this case, the dependence of Jc on Te, or on ΔVce, can be fitted to yield Wc. This retarding potential method is one of the simplest and oldest methods of measuring work functions, and is advantageous since the measured material (collector) is not required to survive high temperatures. Methods based on photoemission The photoelectric work function is the minimum photon energy required to liberate an electron from a substance, in the photoelectric effect. If the photon's energy is greater than the substance's work function, photoelectric emission occurs and the electron is liberated from the surface.
Similar to the thermionic case described above, the liberated electrons can be extracted into a collector and produce a detectable current, if an electric field is applied into the surface of the emitter. Excess photon energy results in a liberated electron with non-zero kinetic energy. It is expected that the minimum photon energy hν required to liberate an electron (and generate a current) is hν = We, where We is the work function of the emitter. Photoelectric measurements require a great deal of care, as an incorrectly designed experimental geometry can result in an erroneous measurement of work function. This may be responsible for the large variation in work function values in scientific literature. Moreover, the minimum energy can be misleading in materials where there are no actual electron states at the Fermi level that are available for excitation. For example, in a semiconductor the minimum photon energy would actually correspond to the valence band edge rather than work function. Of course, the photoelectric effect may be used in the retarding mode, as with the thermionic apparatus described above. In the retarding case, the dark collector's work function is measured instead. Kelvin probe method The Kelvin probe technique relies on the detection of an electric field (gradient in ϕ) between a sample material and probe material. The electric field can be varied by the voltage ΔVsp that is applied to the probe relative to the sample. If the voltage is chosen such that the electric field is eliminated (the flat vacuum condition), then Wsample − Wprobe = −eΔVsp. Since the experimenter controls and knows ΔVsp, finding the flat vacuum condition gives directly the work function difference between the two materials. The only question is how to detect the flat vacuum condition. Typically, the electric field is detected by varying the distance between the sample and probe. When the distance is changed but ΔVsp is held constant, a current will flow due to the change in capacitance. This current is proportional to the vacuum electric field, and so when the electric field is neutralized no current will flow. Although the Kelvin probe technique only measures a work function difference, it is possible to obtain an absolute work function by first calibrating the probe against a reference material (with known work function) and then using the same probe to measure a desired sample. The Kelvin probe technique can be used to obtain work function maps of a surface with extremely high spatial resolution, by using a sharp tip for the probe (see Kelvin probe force microscope). Work functions of elements The work function depends on the configurations of atoms at the surface of the material. For example, on polycrystalline silver the work function is 4.26 eV, but on silver crystals it varies for different crystal faces as (100) face: 4.64 eV, (110) face: 4.52 eV, (111) face: 4.74 eV. Ranges for typical surfaces are on the order of a few electronvolts, roughly 2–6 eV. Physical factors that determine the work function Due to the complications described in the modelling section below, it is difficult to theoretically predict the work function with accuracy. However, various trends have been identified. The work function tends to be smaller for metals with an open lattice, and larger for metals in which the atoms are closely packed. It is somewhat higher on dense crystal faces than open crystal faces, also depending on surface reconstructions for the given crystal face.
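As a rough numerical illustration of Richardson's law as quoted above, the sketch below evaluates the emitted current density of a bare tungsten filament at a few temperatures. The specific constants (We ≈ 4.5 eV, Ae ≈ 6 × 10^5 A·m^−2·K^−2) are typical textbook values assumed for this example, not figures taken from this article:

```python
import math

k_B = 8.617333262e-5  # Boltzmann constant in eV/K

def richardson_current_density(T, W, A):
    """Richardson's law: J = A * T^2 * exp(-W / (k_B * T)).

    T : emitter temperature in kelvin
    W : thermionic work function in eV
    A : Richardson constant in A m^-2 K^-2
    """
    return A * T**2 * math.exp(-W / (k_B * T))

W_e = 4.5    # eV, approximate work function of tungsten (assumed)
A_e = 6.0e5  # A m^-2 K^-2, effective Richardson constant (assumed)

for T in (1500.0, 2000.0, 2500.0):
    J = richardson_current_density(T, W_e, A_e)
    print(f"T = {T:6.0f} K  ->  J ≈ {J:.3e} A/m^2")
```

The steep exponential dependence on We/kTe is why a modest reduction of the work function, for example by coating the filament as described earlier, raises the emission by orders of magnitude at a given temperature.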
Surface dipole The work function is not simply dependent on the "internal vacuum level" inside the material (i.e., its average electrostatic potential), because of the formation of an atomic-scale electric double layer at the surface. This surface electric dipole gives a jump in the electrostatic potential between the material and the vacuum. A variety of factors are responsible for the surface electric dipole. Even with a completely clean surface, the electrons can spread slightly into the vacuum, leaving behind a slightly positively charged layer of material. This primarily occurs in metals, where the bound electrons do not encounter a hard wall potential at the surface but rather a gradual ramping potential due to image charge attraction. The amount of surface dipole depends on the detailed layout of the atoms at the surface of the material, leading to the variation in work function for different crystal faces. Doping and electric field effect (semiconductors) In a semiconductor, the work function is sensitive to the doping level at the surface of the semiconductor. Since the doping near the surface can also be controlled by electric fields, the work function of a semiconductor is also sensitive to the electric field in the vacuum. The reason for the dependence is that, typically, the vacuum level and the conduction band edge retain a fixed spacing independent of doping. This spacing is called the electron affinity (note that this has a different meaning than the electron affinity of chemistry); in silicon for example the electron affinity is 4.05 eV. If the electron affinity EEA and the surface's band-referenced Fermi level EF − EC are known, then the work function is given by W = EEA + EC − EF, where EC is taken at the surface. From this one might expect that by doping the bulk of the semiconductor, the work function can be tuned. In reality, however, the energies of the bands near the surface are often pinned to the Fermi level, due to the influence of surface states. If there is a large density of surface states, then the work function of the semiconductor will show a very weak dependence on doping or electric field. Theoretical models of metal work functions Theoretical modeling of the work function is difficult, as an accurate model requires a careful treatment of both electronic many body effects and surface chemistry; both of these topics are already complex in their own right. One of the earliest successful models for metal work function trends was the jellium model, which allowed for oscillations in electronic density near the abrupt surface (these are similar to Friedel oscillations) as well as the tail of electron density extending outside the surface. This model showed why the density of conduction electrons (as represented by the Wigner–Seitz radius rs) is an important parameter in determining work function. The jellium model is only a partial explanation, as its predictions still show significant deviation from real work functions. More recent models have focused on including more accurate forms of electron exchange and correlation effects, as well as including the crystal face dependence (this requires the inclusion of the actual atomic lattice, something that is neglected in the jellium model). Temperature dependence of the electron work function The electron behavior in metals varies with temperature and is largely reflected by the electron work function. A theoretical model for predicting the temperature dependence of the electron work function, developed by Rahemi et al.
explains the underlying mechanism and predicts this temperature dependence for various crystal structures via calculable and measurable parameters. In general, as the temperature increases, the electron work function (EWF) decreases via φ(T) = φ0 − γ (kB T)^2 / φ0, where γ is a calculable material property which is dependent on the crystal structure (for example, BCC, FCC), and φ0 is the electron work function at T = 0, which is constant throughout the change. References Further reading External links Work function of polymeric insulators (Table 2.1) Work function of diamond and doped carbon Work functions of common metals Work functions of various metals for the photoelectric effect Physics of free surfaces of semiconductors Condensed matter physics Physical quantities Vacuum Vacuum tubes
Work function
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
3,509
[ "Physical phenomena", "Physical quantities", "Quantity", "Vacuum tubes", "Phases of matter", "Vacuum", "Materials science", "Condensed matter physics", "Physical properties", "Matter" ]
53,982
https://en.wikipedia.org/wiki/Euchromatin
Euchromatin (also called "open chromatin") is a lightly packed form of chromatin (DNA, RNA, and protein) that is enriched in genes, and is often (but not always) under active transcription. Euchromatin stands in contrast to heterochromatin, which is tightly packed and less accessible for transcription. 92% of the human genome is euchromatic. In eukaryotes, euchromatin comprises the most active portion of the genome within the cell nucleus. In prokaryotes, euchromatin is the only form of chromatin present; this indicates that the heterochromatin structure evolved later along with the nucleus, possibly as a mechanism to handle increasing genome size. Structure Euchromatin is composed of repeating subunits known as nucleosomes, reminiscent of an unfolded set of beads on a string, that are approximately 11 nm in diameter. At the core of these nucleosomes are four pairs of histone proteins: H3, H4, H2A, and H2B. Each core histone protein possesses a 'tail' structure, which can vary in several ways; it is thought that these variations act as "master control switches" through different methylation and acetylation states, which determine the overall arrangement of the chromatin. Approximately 147 base pairs of DNA are wound around each histone octamer, a little less than two turns of the helix. Nucleosomes along the strand are linked together via the linker histone H1 and a short stretch of open linker DNA, around 0–80 base pairs long. The key distinction between the structure of euchromatin and heterochromatin is that the nucleosomes in euchromatin are much more widely spaced, which allows easier access of different protein complexes to the DNA strand and thus increased gene transcription. Appearance Euchromatin resembles a set of beads on a string at large magnifications. From farther away, it can resemble a ball of tangled thread, such as in some electron microscope visualizations. In both optical and electron microscopic visualizations, euchromatin appears lighter in color than heterochromatin, which is also present in the nucleus and appears dark, owing to euchromatin's less compact structure. When visualizing chromosomes, such as in a karyogram, cytogenetic banding is used to stain the chromosomes. Cytogenetic banding makes it possible to see which parts of the chromosome are made up of euchromatin or heterochromatin, in order to differentiate chromosomal subsections, irregularities or rearrangements. One such example is G banding, otherwise known as Giemsa staining, in which euchromatin appears lighter than heterochromatin. Function Transcription Euchromatin participates in the active transcription of DNA to mRNA products. The unfolded structure allows gene regulatory proteins and RNA polymerase complexes to bind to the DNA sequence, which can then initiate the transcription process. While not all euchromatin is necessarily transcribed, as euchromatin is divided into transcriptionally active and inactive domains, euchromatin is still generally associated with active gene transcription. There is therefore a direct link between how transcriptionally productive a cell is and the amount of euchromatin found in its nucleus. It is thought that the cell uses transformation from euchromatin into heterochromatin as a method of controlling gene expression and replication, since such processes behave differently on densely compacted chromatin. This is known as the 'accessibility hypothesis'.
One example of constitutive euchromatin that is 'always turned on' is the housekeeping genes, which code for the proteins needed for basic functions of cell survival. Epigenetics Epigenetics involves changes in the phenotype that can be inherited without changing the DNA sequence. This can occur through many types of environmental interactions. Regarding euchromatin, post-translational modifications of the histones can alter the structure of chromatin, resulting in altered gene expression without changing the DNA. Additionally, a loss of heterochromatin and increase in euchromatin has been shown to correlate with an accelerated aging process, especially in diseases known to resemble premature aging. Research has shown epigenetic markers on histones for a number of additional diseases. Regulation Euchromatin is primarily regulated by post-translational modifications to its nucleosomes' histones, conducted by many histone-modifying enzymes. These modifications occur on the histones' N-terminal tails that protrude from the nucleosome structure, and are thought to recruit enzymes that either keep the chromatin in its open form, as euchromatin, or in its closed form, as heterochromatin. Histone acetylation, for instance, is typically associated with euchromatin structure, whereas histone methylation promotes heterochromatin remodeling. Acetylation neutralizes the positive charge of lysine residues on the histone tails, making the histones effectively more negatively charged, which in turn disrupts their interaction with the negatively charged DNA strand, essentially "opening" the strand for easier access. Acetylation can occur on multiple lysine residues of a histone's N-terminal tail and in different histones of the same nucleosome, which is thought to further increase DNA accessibility for transcription factors. Phosphorylation of histones is another method by which euchromatin is regulated. This tends to occur on the N-terminal tails of the histones; however, some sites are present in the core. Phosphorylation is controlled by kinases and phosphatases, which add and remove the phosphate groups respectively. It can occur at serine, threonine, or tyrosine residues present in euchromatin. Since the phosphate groups added to the structure incorporate a negative charge, phosphorylation promotes the more relaxed "open" form, similar to acetylation. In terms of function, histone phosphorylation is involved in gene expression, DNA damage repair, and chromatin remodeling. Another method of regulation that incorporates a negative charge, thereby favoring the "open" form, is ADP-ribosylation. This process adds one or more ADP-ribose units to the histone, and is involved in the DNA damage response pathway. See also Histone Modifying Enzymes Constitutive Heterochromatin References Further reading Heterochromatin formation involves changes in histone modifications over multiple cell generations – Chromatin Velocity reveals epigenetic dynamics by single-cell profiling of heterochromatin and euchromatin – Epigenetic inheritance and the missing heritability – Histone epigenetic marks in heterochromatin and euchromatin of the Chagas' disease vector, Triatoma infestans – Molecular genetics Nuclear organization
Euchromatin
[ "Chemistry", "Biology" ]
1,447
[ "Nuclear organization", "Molecular genetics", "Cellular processes", "Molecular biology" ]
53,983
https://en.wikipedia.org/wiki/Heterochromatin
Heterochromatin is a tightly packed form of DNA or condensed DNA, which comes in multiple varieties. These varieties lie on a continuum between the two extremes of constitutive heterochromatin and facultative heterochromatin. Both play a role in the expression of genes. Because it is tightly packed, it was thought to be inaccessible to polymerases and therefore not transcribed; however, according to Volpe et al. (2002), and many other papers since, much of this DNA is in fact transcribed, but it is continuously turned over via RNA-induced transcriptional silencing (RITS). Recent studies with electron microscopy and OsO4 staining reveal that the dense packing is not due to the chromatin. Constitutive heterochromatin can affect the genes near itself (e.g. position-effect variegation). It is usually repetitive and forms structural functions such as centromeres or telomeres, in addition to acting as an attractor for other gene-expression or repression signals. Facultative heterochromatin is the result of genes that are silenced through a mechanism such as histone deacetylation or Piwi-interacting RNA (piRNA) through RNAi. It is not repetitive and shares the compact structure of constitutive heterochromatin. However, under specific developmental or environmental signaling cues, it can lose its condensed structure and become transcriptionally active. Heterochromatin has been associated with the di- and tri-methylation of H3K9 in certain portions of the human genome. H3K9me3-related methyltransferases appear to have a pivotal role in modifying heterochromatin during lineage commitment at the onset of organogenesis and in maintaining lineage fidelity. Structure Chromatin is found in two varieties: euchromatin and heterochromatin. Originally, the two forms were distinguished cytologically by how intensely they stain – euchromatin is less intense, while heterochromatin stains intensely, indicating tighter packing. Heterochromatin was given its name for this reason by the botanist Emil Heitz, who discovered that heterochromatin remained darkly stained throughout the entire cell cycle, unlike euchromatin, whose stain disappeared during interphase. Heterochromatin is usually localized to the periphery of the nucleus. Despite this early dichotomy, recent evidence in both animals and plants has suggested that there are more than two distinct heterochromatin states, and it may in fact exist in four or five 'states', each marked by different combinations of epigenetic marks. Heterochromatin mainly consists of genetically inactive satellite sequences, and many genes are repressed to various extents, although some cannot be expressed in euchromatin at all. Both centromeres and telomeres are heterochromatic, as is the Barr body of the second, inactivated X-chromosome in a female. Function Heterochromatin has been associated with several functions, from gene regulation to the protection of chromosome integrity; some of these roles can be attributed to the dense packing of DNA, which makes it less accessible to protein factors that usually bind DNA or its associated factors. For example, naked double-stranded DNA ends would usually be interpreted by the cell as damaged or viral DNA, triggering cell cycle arrest, DNA repair or destruction of the fragment, such as by endonucleases in bacteria. Some regions of chromatin are very densely packed with fibers that display a condition comparable to that of the chromosome at mitosis.
Heterochromatin is generally clonally inherited; when a cell divides, the two daughter cells typically contain heterochromatin within the same regions of DNA, resulting in epigenetic inheritance. Variations cause heterochromatin to encroach on adjacent genes or recede from genes at the extremes of domains. Transcribable material may be repressed by being positioned (in cis) at these boundary domains. This gives rise to expression levels that vary from cell to cell, which may be demonstrated by position-effect variegation. Insulator sequences may act as a barrier in rare cases where constitutive heterochromatin and highly active genes are juxtaposed (e.g. the 5'HS4 insulator upstream of the chicken β-globin locus, and loci in two Saccharomyces spp.). Constitutive heterochromatin All cells of a given species package the same regions of DNA in constitutive heterochromatin, and thus in all cells, any genes contained within the constitutive heterochromatin will be poorly expressed. For example, human chromosomes 1, 9, and 16, and the Y chromosome, all contain large regions of constitutive heterochromatin. In most organisms, constitutive heterochromatin occurs around the chromosome centromere and near telomeres. Facultative heterochromatin The regions of DNA packaged in facultative heterochromatin will not be consistent between the cell types within a species, and thus a sequence in one cell that is packaged in facultative heterochromatin (and the genes within are poorly expressed) may be packaged in euchromatin in another cell (and the genes within are no longer silenced). However, the formation of facultative heterochromatin is regulated, and is often associated with morphogenesis or differentiation. An example of facultative heterochromatin is X chromosome inactivation in female mammals: one X chromosome is packaged as facultative heterochromatin and silenced, while the other X chromosome is packaged as euchromatin and expressed. Among the molecular components that appear to regulate the spreading of heterochromatin are the Polycomb-group proteins and non-coding genes such as Xist. The mechanism for such spreading is still a matter of controversy. The polycomb repressive complexes PRC1 and PRC2 regulate chromatin compaction and gene expression and have a fundamental role in developmental processes. PRC-mediated epigenetic aberrations are linked to genome instability and malignancy and play a role in the DNA damage response, DNA repair and in the fidelity of replication. Yeast heterochromatin Saccharomyces cerevisiae, or budding yeast, is a model eukaryote and its heterochromatin has been characterized thoroughly. Although most of its genome can be characterized as euchromatin, S. cerevisiae has regions of DNA that are transcribed very poorly. These loci are the so-called silent mating type loci (HML and HMR), the rDNA (encoding ribosomal RNA), and the sub-telomeric regions. Fission yeast (Schizosaccharomyces pombe) uses another mechanism for heterochromatin formation at its centromeres. Gene silencing at this location depends on components of the RNAi pathway. Double-stranded RNA is believed to result in silencing of the region through a series of steps. In the fission yeast Schizosaccharomyces pombe, two RNAi complexes, the RITS complex and the RNA-directed RNA polymerase complex (RDRC), are part of an RNAi machinery involved in the initiation, propagation and maintenance of heterochromatin assembly.
These two complexes localize in a siRNA-dependent manner on chromosomes, at the site of heterochromatin assembly. RNA polymerase II synthesizes a transcript that serves as a platform to recruit RITS, RDRC and possibly other complexes required for heterochromatin assembly. Both RNAi and an exosome-dependent RNA degradation process contribute to heterochromatic gene silencing. These mechanisms of Schizosaccharomyces pombe may occur in other eukaryotes. A large RNA structure called RevCen has also been implicated in the production of siRNAs to mediate heterochromatin formation in some fission yeast. See also Centric heterochromatin References External links Molecular genetics Nuclear organization
Heterochromatin
[ "Chemistry", "Biology" ]
1,719
[ "Nuclear organization", "Molecular genetics", "Cellular processes", "Molecular biology" ]
53,993
https://en.wikipedia.org/wiki/Sylow%20theorems
In mathematics, specifically in the field of finite group theory, the Sylow theorems are a collection of theorems named after the Norwegian mathematician Peter Ludwig Sylow that give detailed information about the number of subgroups of fixed order that a given finite group contains. The Sylow theorems form a fundamental part of finite group theory and have very important applications in the classification of finite simple groups. For a prime number p, a Sylow p-subgroup (sometimes p-Sylow subgroup) of a finite group G is a maximal p-subgroup of G, i.e., a subgroup of G that is a p-group (meaning its cardinality is a power of p, or equivalently: for each group element, its order is some power of p) that is not a proper subgroup of any other p-subgroup of G. The set of all Sylow p-subgroups of G for a given prime p is sometimes written Sylp(G). The Sylow theorems assert a partial converse to Lagrange's theorem. Lagrange's theorem states that for any finite group G the order (number of elements) of every subgroup of G divides the order of G. The Sylow theorems state that for every prime factor p of the order of a finite group G, there exists a Sylow p-subgroup of G of order p^n, the highest power of p that divides the order of G. Moreover, every subgroup of order p^n is a Sylow p-subgroup of G, and the Sylow p-subgroups of a group (for a given prime p) are conjugate to each other. Furthermore, the number of Sylow p-subgroups of a group for a given prime p is congruent to 1 (mod p). Theorems Motivation The Sylow theorems are a powerful statement about the structure of groups in general, but are also powerful in applications of finite group theory. This is because they give a method for using the prime decomposition of the cardinality of a finite group to give statements about the structure of its subgroups: essentially, it gives a technique to transport basic number-theoretic information about a group to its group structure. From this observation, classifying finite groups becomes a game of finding which combinations/constructions of groups of smaller order can be applied to construct a group. For example, a typical application of these theorems is in the classification of finite groups of some fixed cardinality, e.g. |G| = 60. Statement Collections of subgroups that are each maximal in one sense or another are common in group theory. The surprising result here is that in the case of Sylp(G), all members are actually isomorphic to each other and have the largest possible order: if |G| = p^n m with n > 0, where p does not divide m, then every Sylow p-subgroup P has order |P| = p^n. That is, P is a p-group and gcd(|G : P|, p) = 1. These properties can be exploited to further analyze the structure of G. The following theorems were first proposed and proven by Ludwig Sylow in 1872, and published in Mathematische Annalen. The following weaker version of theorem 1 was first proved by Augustin-Louis Cauchy, and is known as Cauchy's theorem: if a prime p divides the order of a finite group G, then G contains an element (and hence a subgroup) of order p. Consequences The Sylow theorems imply that for a prime number p every Sylow p-subgroup is of the same order, p^n. Conversely, if a subgroup has order p^n, then it is a Sylow p-subgroup, and so is conjugate to every other Sylow p-subgroup. Due to the maximality condition, if H is any p-subgroup of G, then H is a subgroup of a p-subgroup of order p^n. An important consequence of Theorem 2 is that the condition np = 1 is equivalent to the condition that the Sylow p-subgroup of G is a normal subgroup (Theorem 3 can often show np = 1). However, there are groups that have proper, non-trivial normal subgroups but no normal Sylow subgroups, such as the symmetric group S4. Groups that are of prime-power order have no proper Sylow p-subgroups.
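For reference, a standard modern formulation of the three theorems (a paraphrase of common textbook statements, for a finite group G with |G| = p^n m, where the prime p does not divide m) is:

```latex
% A standard (paraphrased) statement of the Sylow theorems.
% G is a finite group, |G| = p^n m, with p prime and p \nmid m.

\textbf{Theorem 1.} $G$ has at least one subgroup of order $p^{n}$
(a Sylow $p$-subgroup).

\textbf{Theorem 2.} Any two Sylow $p$-subgroups $P$ and $Q$ of $G$ are
conjugate: $Q = gPg^{-1}$ for some $g \in G$.

\textbf{Theorem 3.} Let $n_p$ denote the number of Sylow $p$-subgroups
of $G$. Then:
\begin{itemize}
  \item $n_p$ divides $m$, the index of a Sylow $p$-subgroup in $G$;
  \item $n_p \equiv 1 \pmod{p}$;
  \item $n_p = |G : N_G(P)|$, where $N_G(P)$ is the normalizer of any
        Sylow $p$-subgroup $P$ of $G$.
\end{itemize}
```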
The third bullet point of the third theorem has as an immediate consequence that np divides m. Sylow theorems for infinite groups There is an analogue of the Sylow theorems for infinite groups. One defines a Sylow p-subgroup in an infinite group to be a p-subgroup (that is, every element in it has p-power order) that is maximal for inclusion among all p-subgroups in the group. Let Cl(K) denote the set of conjugates of a subgroup K of G. The analogue states that if K is a Sylow p-subgroup of G and np = |Cl(K)| is finite, then every Sylow p-subgroup is conjugate to K, and np ≡ 1 (mod p). Examples A simple illustration of Sylow subgroups and the Sylow theorems is the dihedral group of the n-gon, D2n. For n odd, 2 = 2^1 is the highest power of 2 dividing the order, and thus subgroups of order 2 are Sylow subgroups. These are the groups generated by a reflection, of which there are n, and they are all conjugate under rotations; geometrically the axes of symmetry pass through a vertex and a side. By contrast, if n is even, then 4 divides the order of the group, and the subgroups of order 2 are no longer Sylow subgroups, and in fact they fall into two conjugacy classes, geometrically according to whether they pass through two vertices or two faces. These are related by an outer automorphism, which can be represented by rotation through π/n, half the minimal rotation in the dihedral group. Another example is given by the Sylow p-subgroups of GL2(Fq), where p and q are primes ≥ 3 and q ≡ 1 (mod p), which are all abelian. The order of GL2(Fq) is (q^2 − 1)(q^2 − q). Writing q = p^n m + 1 with p not dividing m, the order of GL2(Fq) is p^(2n) m′, where p does not divide m′. Thus by Theorem 1, the order of the Sylow p-subgroups is p^(2n). One such subgroup P is the set of diagonal matrices diag(x^(am), x^(bm)) for integers a and b, where x is any primitive root of Fq. Since the multiplicative group of Fq has order q − 1, its primitive roots have order q − 1, which implies that x^m and all its powers have orders which are powers of p. So P is a subgroup all of whose elements have orders that are powers of p. There are p^n choices for both a and b, making |P| = p^(2n). This means P is a Sylow p-subgroup, which is abelian, as all diagonal matrices commute, and because Theorem 2 states that all Sylow p-subgroups are conjugate to each other, the Sylow p-subgroups of GL2(Fq) are all abelian. Example applications Since Sylow's theorem ensures the existence of p-subgroups of a finite group, it is worthwhile to study groups of prime power order more closely. Most of the examples use Sylow's theorem to prove that a group of a particular order is not simple. For groups of small order, the congruence condition of Sylow's theorem is often sufficient to force the existence of a normal subgroup (a short script enumerating the possibilities appears after the order-15 example below). Example 1: groups of order pq, where p and q are primes with p < q. Example 2: groups of order 30, groups of order 20, and groups of order p^2 q, with p and q distinct primes, are some of the applications. Example 3 (groups of order 60): if the order |G| = 60 and G has more than one Sylow 5-subgroup, then G is simple. Cyclic group orders Some non-prime numbers n are such that every group of order n is cyclic. One can show that n = 15 is such a number using the Sylow theorems: Let G be a group of order 15 = 3 · 5 and n3 be the number of Sylow 3-subgroups. Then n3 must divide 5, and n3 ≡ 1 (mod 3). The only value satisfying these constraints is 1; therefore, there is only one subgroup of order 3, and it must be normal (since it has no distinct conjugates). Similarly, n5 must divide 3, and n5 ≡ 1 (mod 5); thus it too has a single normal subgroup of order 5. Since 3 and 5 are coprime, the intersection of these two subgroups is trivial, and so G must be the internal direct product of groups of order 3 and 5, that is, the cyclic group of order 15. Thus, there is only one group of order 15 (up to isomorphism).
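As flagged above, the divisor-and-congruence bookkeeping in these examples is mechanical, and a short script can enumerate the possibilities. The following minimal Python sketch (its function name and structure are this example's own, not from the article) lists the values of np permitted by Theorem 3 for a given group order:

```python
def sylow_counts(order, p):
    """Candidates for n_p in a group of the given order: the divisors of
    m = order / p^n (the part of the order coprime to p) that are
    congruent to 1 modulo p."""
    if order % p != 0:
        raise ValueError("p must divide the group order")
    m = order
    while m % p == 0:   # strip off the full power of p, leaving m
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

# |G| = 15: n_3 and n_5 are both forced to be 1, so both Sylow
# subgroups are normal and G is cyclic, as argued above.
print(sylow_counts(15, 3), sylow_counts(15, 5))   # [1] [1]

# |G| = 30: n_3 may be 1 or 10 and n_5 may be 1 or 6; the element
# count in the next section then rules out a simple group of order 30.
print(sylow_counts(30, 3), sylow_counts(30, 5))   # [1, 10] [1, 6]
```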
Small groups are not simple A more complex example involves the order of the smallest simple group that is not cyclic. Burnside's p^a q^b theorem states that if the order of a group is the product of one or two prime powers, then it is solvable, and so the group is not simple, or is of prime order and is cyclic. This rules out every group up to order 30. If G is simple and |G| = 30, then n3 must divide 10 (= 2 · 5), and n3 ≡ 1 (mod 3). Therefore, n3 = 10, since neither 4 nor 7 divides 10, and if n3 = 1 then, as above, G would have a normal subgroup of order 3, and could not be simple. G then has 10 distinct cyclic subgroups of order 3, each of which has 2 elements of order 3 (plus the identity). This means G has at least 20 distinct elements of order 3. As well, n5 = 6, since n5 must divide 6 (= 2 · 3), and n5 ≡ 1 (mod 5). So G also has 24 distinct elements of order 5. But the order of G is only 30, so a simple group of order 30 cannot exist. Next, suppose |G| = 42 = 2 · 3 · 7. Here n7 must divide 6 (= 2 · 3) and n7 ≡ 1 (mod 7), so n7 = 1. So, as before, G cannot be simple. On the other hand, for |G| = 60 = 2^2 · 3 · 5, n3 = 10 and n5 = 6 is perfectly possible. And in fact, the smallest simple non-cyclic group is A5, the alternating group over 5 elements. It has order 60, and has 24 cyclic permutations of order 5, and 20 of order 3. Wilson's theorem Part of Wilson's theorem states that (p − 1)! ≡ −1 (mod p) for every prime p. One may easily prove this theorem by Sylow's third theorem. Indeed, observe that the number np of Sylow p-subgroups in the symmetric group Sp is 1/(p − 1) times the number of p-cycles in Sp, i.e. np = (p − 1)!/(p − 1) = (p − 2)!. On the other hand, np ≡ 1 (mod p). Hence, (p − 2)! ≡ 1 (mod p). So, (p − 1)! ≡ −1 (mod p). Fusion results Frattini's argument shows that a Sylow subgroup of a normal subgroup provides a factorization of a finite group. A slight generalization known as Burnside's fusion theorem states that if G is a finite group with Sylow p-subgroup P and two subsets A and B normalized by P, then A and B are G-conjugate if and only if they are NG(P)-conjugate. The proof is a simple application of Sylow's theorem: If B = A^g, then the normalizer of B contains not only P but also P^g (since P^g is contained in the normalizer of A^g). By Sylow's theorem P and P^g are conjugate not only in G, but in the normalizer of B. Hence gh^(−1) normalizes P for some h that normalizes B, and then A^(gh^(−1)) = B^(h^(−1)) = B, so that A and B are NG(P)-conjugate. Burnside's fusion theorem can be used to give a more powerful factorization called a semidirect product: if G is a finite group whose Sylow p-subgroup P is contained in the center of its normalizer, then G has a normal subgroup K of order coprime to |P|, with G = PK and P ∩ K = {1}; that is, G is p-nilpotent. Less trivial applications of the Sylow theorems include the focal subgroup theorem, which studies the control a Sylow p-subgroup of the derived subgroup has on the structure of the entire group. This control is exploited at several stages of the classification of finite simple groups, and for instance defines the case divisions used in the Alperin–Brauer–Gorenstein theorem classifying finite simple groups whose Sylow 2-subgroup is a quasi-dihedral group. These rely on J. L. Alperin's strengthening of the conjugacy portion of Sylow's theorem to control what sorts of elements are used in the conjugation.
Proof of the Sylow theorems The Sylow theorems have been proved in a number of ways, and the history of the proofs themselves is the subject of many papers, including Waterhouse, Scharlau, Casadio and Zappa, Gow, and to some extent Meo. One proof of the Sylow theorems exploits the notion of group action in various creative ways. The group G acts on itself or on the set of its p-subgroups in various ways, and each such action can be exploited to prove one of the Sylow theorems. The following proofs are based on combinatorial arguments of Wielandt. In the following, we use a | b as notation for "a divides b" and a ∤ b for the negation of this statement. Algorithms The problem of finding a Sylow subgroup of a given group is an important problem in computational group theory. One proof of the existence of Sylow p-subgroups is constructive: if H is a p-subgroup of G and the index [G:H] is divisible by p, then the normalizer N = NG(H) of H in G is also such that [N : H] is divisible by p. In other words, a polycyclic generating system of a Sylow p-subgroup can be found by starting from any p-subgroup H (including the identity) and taking elements of p-power order contained in the normalizer of H but not in H itself. The algorithmic version of this (and many improvements) is described in textbook form in Butler, including the algorithm described in Cannon. These versions are still used in the GAP computer algebra system. In permutation groups, it has been proven, in Kantor and in Kantor and Taylor, that a Sylow p-subgroup and its normalizer can be found in time polynomial in the input size (the degree of the group times the number of generators). These algorithms are described in textbook form in Seress, and are now becoming practical as the constructive recognition of finite simple groups becomes a reality. In particular, versions of this algorithm are used in the Magma computer algebra system. See also Frattini's argument Hall subgroup Maximal subgroup p-group Notes References Proofs Algorithms External links Theorems about finite groups P-groups Articles containing proofs
Sylow theorems
[ "Mathematics" ]
3,023
[ "Articles containing proofs" ]
54,000
https://en.wikipedia.org/wiki/Biophysics
Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from the molecular scale up to whole organisms and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology. The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms, including molecular biology, cell biology, chemical biology, and biochemistry. Overview Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions. Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) with both X-rays and neutrons (SAXS/SANS), are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational changes in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules. In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology, involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as from mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as in neural circuit analysis in both tissue and whole brain. Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines).
Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom. History The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller. William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century. He was a leader in developing electrosurgery. The popularity of the field rose when the book What Is Life?, by Erwin Schrödinger, was published. Since 1957, biophysicists have organized themselves into the Biophysical Society, which now has about 9,000 members around the world. Some authors, such as Robert Rosen, criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena. Focus as a subfield While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university, differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all-inclusive, nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments. Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics. Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof. Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships. Computer science – Neural networks, biomolecular and drug databases. Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry Bioinformatics – sequence alignment, structural alignment, protein structure prediction Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics. Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe. Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity. Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides.
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation as well as their application. Quantum biology – The field of quantum biology applies quantum mechanics to biological objects and problems. One line of research examines how decohered isomers can yield time-dependent base substitutions; such studies suggest applications in quantum computing. Agronomy and agriculture Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training. See also Biophysical Society Index of biophysics articles List of publications in biology – Biophysics List of publications in physics – Biophysics List of biophysicists Outline of biophysics Biophysical chemistry European Biophysical Societies' Association Mathematical and theoretical biology Medical biophysics Membrane biophysics Molecular biophysics Neurophysics Physiomics Virophysics Single-particle trajectory References Sources External links Biophysical Society Journal of Physiology: 2012 virtual issue Biophysics and Beyond bio-physics-wiki Link archive of learning resources for students: biophysika.de (60% English, 40% German) Applied and interdisciplinary physics
Biophysics
[ "Physics", "Biology" ]
1,568
[ "Applied and interdisciplinary physics", "Biophysics" ]
54,034
https://en.wikipedia.org/wiki/Misnay%E2%80%93Schardin%20effect
The Misnay–Schardin effect, or platter effect, is a characteristic of the detonation of a broad sheet of explosive. Description Explosive blasts expand directly away from, and perpendicular to, the surface of an explosive. Unlike the blast from a rounded explosive charge, which expands in all directions, the blast produced by an explosive sheet expands primarily perpendicular to its plane, in both directions. However, if one side is backed by a heavy or fixed mass, most of the blast (i.e. most of the rapidly expanding gas and its kinetic energy) will be reflected in the direction away from the mass. Uses The Misnay–Schardin effect was studied and experimented with by the explosives experts József Misnay (sometimes incorrectly spelled Misznay), a Hungarian, and Hubert Schardin, a German, who initially sought to develop a more effective antitank mine for Nazi Germany. Some sources claim that World War II ended before their design became usable, but they and others continued their work. Misnay designed two weapons: the 43M TAK antitank mine and the 44M LŐTAK side-attack mine. The Hungarian army used these weapons in 1944–1945. The later AT2 and M18 Claymore mines rely on this effect. See also High-explosive squash head Explosively formed penetrator Munroe effect M93 Hornet mine References Explosives
Misnay–Schardin effect
[ "Chemistry" ]
282
[ "Explosives", "Explosions" ]
54,044
https://en.wikipedia.org/wiki/Gothic%20architecture
Gothic architecture is an architectural style that was prevalent in Europe from the late 12th to the 16th century, during the High and Late Middle Ages, surviving into the 17th and 18th centuries in some areas. It evolved from Romanesque architecture and was succeeded by Renaissance architecture. It originated in the Île-de-France and Picardy regions of northern France. The style at the time was sometimes known as opus Francigenum ("French work"); the term Gothic was first applied contemptuously during the later Renaissance, by those ambitious to revive the architecture of classical antiquity.

The defining design element of Gothic architecture is the pointed arch. The use of the pointed arch in turn led to the development of the pointed rib vault and flying buttresses, combined with elaborate tracery and stained glass windows. At the Abbey of Saint-Denis, near Paris, the choir was reconstructed between 1140 and 1144, drawing together for the first time the developing Gothic architectural features. In doing so, a new architectural style emerged that emphasized verticality and the effect created by the transmission of light through stained glass windows.

Common examples are found in Christian ecclesiastical architecture, Gothic cathedrals and churches, as well as abbeys and parish churches. It is also the architecture of many castles, palaces, town halls, guildhalls, universities and, less prominently today, private dwellings. Many of the finest examples of medieval Gothic architecture are listed by UNESCO as World Heritage Sites.

With the development of Renaissance architecture in Italy during the mid-15th century, the Gothic style was supplanted by the new style, but in some regions, notably England and Belgium, Gothic continued to flourish and develop into the 16th century. A series of Gothic revivals began in mid-18th century England, spread through 19th-century Europe and continued, largely for churches and university buildings, into the 20th century.

Name

Medieval contemporaries described the style as opus Francigenum ("French work") or opus modernum ("modern work"). The term "Gothic architecture" originated as a pejorative description. Giorgio Vasari used the term "barbarous German style" in his Lives of the Artists to describe what is now considered the Gothic style, and in the introduction to the Lives he attributes various architectural features to the Goths, whom he held responsible for destroying the ancient buildings after they conquered Rome, and erecting new ones in this style. When Vasari wrote, Italy had experienced a century of building in the Vitruvian architectural vocabulary of classical orders revived in the Renaissance and seen as evidence of a new Golden Age of learning and refinement. Thus the Gothic style, being in opposition to classical architecture, from that point of view was associated with the destruction of advancement and sophistication. The assumption that classical architecture was better than Gothic architecture was widespread and proved difficult to defeat. Vasari was echoed in the 16th century by François Rabelais, who referred to Goths and Ostrogoths (Gotz and Ostrogotz).

The polymath architect Christopher Wren disapproved of the name Gothic for pointed architecture. He compared it to Islamic architecture, which he called the 'Saracen style', pointing out that the pointed arch's sophistication was not owed to the Goths but to the Islamic Golden Age. Wren was the first to popularize the belief that it was not the Europeans, but the Saracens, that had created the Gothic style.
The term 'Saracen' was still in use in the 18th century and typically referred to all Muslims, including the Arabs and Berbers. Wren mentions Europe's architectural debt to the Saracens no fewer than twelve times in his writings. He also decidedly broke with tradition in his assumption that Gothic architecture did not merely represent a violent and bothersome mistake, as suggested by Vasari. Rather, he saw that the Gothic style had developed over time along the lines of a changing society, and that it was thus a legitimate architectural style of its own.

It was no secret that Wren strongly disliked the building practices of the Gothic style. When he was appointed Surveyor of the Fabric at Westminster Abbey in 1698, he expressed his distaste for the Gothic style in a letter to the bishop of Rochester. The chaos of the Gothic left much to be desired in Wren's eyes. His aversion to the style was so strong that he refused to put a Gothic roof on the new St Paul's, despite being pressured to do so. Wren much preferred symmetry and straight lines in architecture, which is why he constantly praised the classic architecture of 'the Ancients' in his writings.

Even though he openly expressed his distaste for the Gothic style, Wren did not blame the Saracens for the apparent lack of ingenuity. Quite the opposite: he praised the Saracens for their 'superior' vaulting techniques and their widespread use of the pointed arch. Wren claimed the inventors of the Gothic had seen Saracen architecture during the Crusades, also called the Religious War or Holy War, organised by the Kingdom of France in the year 1095.

There are several chronological issues with this claim, which is one of the reasons why Wren's theory is rejected by many. The earliest examples of the pointed arch in Europe date from before the Holy War of 1095; this is widely regarded as proof that the Gothic style could not have been derived from Saracen architecture. Several authors have taken a stance against this allegation, claiming that the Gothic style had most likely filtered into Europe in other ways, for example through Spain or Sicily. The Spanish architecture of the Moors could have favoured the emergence of the Gothic style long before the Crusades took place. This could have happened gradually through merchants, travellers and pilgrims.

According to a 19th-century correspondent in the London journal Notes and Queries, Gothic was a derisive misnomer; the pointed arches and architecture of the later Middle Ages were quite different from the rounded arches prevalent in late antiquity and the period of the Ostrogothic Kingdom in Italy:

There can be no doubt that the term 'Gothic' as applied to pointed styles of ecclesiastical architecture was used at first contemptuously, and in derision, by those who were ambitious to imitate and revive the Grecian orders of architecture, after the revival of classical literature. But, without citing many authorities, such as Christopher Wren, and others, who lent their aid in depreciating the old mediaeval style, which they termed Gothic, as synonymous with every thing that was barbarous and rude, it may be sufficient to refer to the celebrated Treatise of Sir Henry Wotton, entitled The Elements of Architecture, ... printed in London so early as 1624. ... But it was a strange misapplication of the term to use it for the pointed style, in contradistinction to the circular, formerly called Saxon, now Norman, Romanesque, &c.
These latter styles, like Lombardic, Italian, and the Byzantine, of course belong more to the Gothic period than the light and elegant structures of the pointed order which succeeded them.

Influences

The Gothic style of architecture was strongly influenced by the Romanesque architecture which preceded it; by the growing population and wealth of European cities; and by the desire to express local grandeur. It was influenced by theological doctrines which called for more light, and by technical improvements in vaults and buttresses that allowed much greater height and larger windows. It was also influenced by the necessity of many churches, such as Chartres Cathedral and Canterbury Cathedral, to accommodate growing numbers of pilgrims. It adapted features from earlier styles.

According to Charles Texier (a French historian, architect, and archaeologist) and Josef Strzygowski (a Polish-Austrian art historian), lengthy research and study of cathedrals in the medieval city of Ani, the capital of the medieval kingdom of Armenia, led them to conclude that they had discovered the oldest Gothic arch. According to these historians, the Saint Hripsime Church near the Armenian religious seat of Etchmiadzin was built in the fourth century A.D. and was repaired in 618. The cathedral of Ani was built in 980–1012 A.D. However, many of the elements of Islamic and Armenian architecture that have been cited as influences on Gothic architecture also appeared in Late Roman and Byzantine architecture, the most noticeable examples being the pointed arch and the flying buttress. Another notable example is the capitals, which are forerunners of the Gothic style and deviated from the Classical standards of ancient Greece and Rome with serpentine lines and naturalistic forms.

Periods

Architecture "became a leading form of artistic expression during the late Middle Ages". Gothic architecture began in the earlier 12th century in northwest France and England and spread throughout Latin Europe in the 13th century; by 1300, a first "international style" of Gothic had developed, with common design features and formal language. A second "international style" emerged by 1400, alongside innovations in England and central Europe that produced both the perpendicular and flamboyant varieties. Typically, these typologies are identified as:

c.1130–c.1240 Early to High Gothic and Early English
c.1240–c.1350 Rayonnant and Decorated Style
c.1350–c.1500 Late Gothic: flamboyant and perpendicular
c.16th–18th century Post-Gothic

History

Early Gothic

Norman architecture on either side of the English Channel developed in parallel towards Early Gothic. Gothic features, such as the rib vault, had appeared in England, Sicily and Normandy in the 11th century. Rib-vaults were employed in some parts of the cathedral at Durham (1093–) and in Lessay Abbey in Normandy (1098). However, the first buildings to be considered fully Gothic are the royal funerary abbey of the French kings, the Abbey of Saint-Denis (1135–1144), and the archiepiscopal cathedral at Sens (1135–1164). They were the first buildings to systematically combine rib vaulting, buttresses, and pointed arches. Most of the characteristics of later Early English were already present in the lower chevet of Saint-Denis.

The Duchy of Normandy, part of the Angevin Empire until the 13th century, developed its own version of Gothic. One of its features was the Norman chevet, a small apse or chapel attached to the choir at the east end of the church, which typically had a half-dome.
The lantern tower was another common feature in Norman Gothic. One example of early Norman Gothic is Bayeux Cathedral (1060–1070), where the Romanesque cathedral nave and choir were rebuilt in the Gothic style. Lisieux Cathedral was begun in 1170. Rouen Cathedral (begun 1185) was rebuilt from Romanesque to Gothic with distinct Norman features, including a lantern tower, deeply moulded decoration, and high pointed arcades. Coutances Cathedral was remade into Gothic beginning about 1220. Its most distinctive feature is the octagonal lantern on the crossing of the transept, decorated with ornamental ribs and surrounded by sixteen bays and sixteen lancet windows.

Saint-Denis was the work of the Abbot Suger, a close adviser of Kings Louis VI and Louis VII. Suger reconstructed portions of the old Romanesque church with the rib vault in order to remove walls and to make more space for windows. He described the new ambulatory as "a circular ring of chapels, by virtue of which the whole church would shine with the wonderful and uninterrupted light of most luminous windows, pervading the interior beauty." To support the vaults he also introduced columns with capitals of carved vegetal designs, modelled upon the classical columns he had seen in Rome. In addition, he installed a circular rose window over the portal on the façade. These also became a common feature of Gothic cathedrals.

Some elements of the Gothic style appeared very early in England. Durham Cathedral was the first cathedral to employ a rib vault, built between 1093 and 1104. The first cathedral built entirely in the new style was Sens Cathedral, begun between 1135 and 1140 and consecrated in 1160. Sens Cathedral features a Gothic choir; six-part rib vaults over the nave and collateral aisles; alternating pillars and doubled columns to support the vaults; and buttresses to offset the outward thrust from the vaults. One of the builders believed to have worked on Sens Cathedral, William of Sens, later travelled to England and became the architect who, between 1175 and 1180, reconstructed the choir of Canterbury Cathedral in the new Gothic style. Sens Cathedral was influential in its strongly vertical appearance and in its three-part elevation, typical of subsequent Gothic buildings, with a clerestory at the top supported by a triforium, all carried on high arcades of pointed arches. In the following decades flying buttresses began to be used, allowing the construction of lighter, higher walls.

French Gothic churches were heavily influenced both by the ambulatory and side-chapels around the choir at Saint-Denis, and by the paired towers and triple doors on the western façade. Sens was quickly followed by Senlis Cathedral (begun 1160) and Notre-Dame de Paris (begun 1160). Their builders abandoned the traditional plans and introduced the new Gothic elements from Saint-Denis. The builders of Notre-Dame went further by introducing the flying buttress, heavy columns of support outside the walls connected by arches to the upper walls. The buttresses counterbalanced the outward thrust from the rib vaults. This allowed the builders to construct higher, thinner walls and larger windows.

Early English and High Gothic

Following the destruction by fire of the choir of Canterbury Cathedral in 1174, a group of master builders was invited to propose plans for the reconstruction. The master-builder William of Sens, who had worked on Sens Cathedral, won the competition.
Work began that same year, but in 1178 William was badly injured in a fall from the scaffolding and returned to France, where he died. His work was continued by William the Englishman, who replaced his French namesake in 1178. The resulting structure of the choir of Canterbury Cathedral is considered the first work of Early English Gothic. The cathedral churches of Worcester (1175–), Wells (c.1180–), Lincoln (1192–), and Salisbury (1220–) are all, with Canterbury, major examples. Tiercerons – decorative vaulting ribs – seem first to have been used in vaulting at Lincoln Cathedral, installed c.1200. Instead of a triforium, Early English churches usually retained a gallery.

High Gothic (c.1194–1250) was a brief but very productive period, which produced some of the great landmarks of Gothic art. The first building in the High Gothic style was Chartres Cathedral, an important pilgrimage church south of Paris. The Romanesque cathedral was destroyed by fire in 1194, but was swiftly rebuilt in the new style, with contributions from King Philip II of France, Pope Celestine III, local gentry, merchants, craftsmen, and Richard the Lionheart, king of England. The builders simplified the elevation used at Notre-Dame, eliminated the tribune galleries, and used flying buttresses to support the upper walls. The walls were filled with stained glass, mainly depicting the story of the Virgin Mary but also, in a small corner of each window, illustrating the crafts of the guilds who donated those windows.

The model of Chartres was followed by a series of new cathedrals of unprecedented height and size. These were Reims Cathedral (begun 1211), where coronations of the kings of France took place; Amiens Cathedral (begun 1220); Bourges Cathedral (1195–1230), which, unlike the others, continued to use six-part rib vaults; and Beauvais Cathedral (1225–). In central Europe, the High Gothic style appeared in the Holy Roman Empire, first at Toul (1220–), whose Romanesque cathedral was rebuilt in the style of Reims Cathedral; then in Trier's Liebfrauenkirche parish church (1228–); and then throughout the Reich, beginning with the Elisabethkirche at Marburg (1235–) and the cathedral at Metz (c.1235–).

In High Gothic, the whole surface of the clerestory was given over to windows. At Chartres Cathedral, plate tracery was used for the rose window, but at Reims the bar-tracery was free-standing. Lancet windows were supplanted by multiple lights separated by geometrical bar-tracery. Tracery of this kind distinguishes the Middle Pointed style from the simpler First Pointed. Inside, the nave was divided into regular bays, each covered by a quadripartite rib vault. Other characteristics of the High Gothic were the development of rose windows of greater size, using bar-tracery; higher and longer flying buttresses, which could reach up to the highest windows; and walls of sculpture illustrating biblical stories filling the façade and the fronts of the transept. Reims Cathedral had two thousand three hundred statues on the front and back sides of the façade.

The new High Gothic churches competed to be the tallest, with increasingly ambitious structures lifting the vault yet higher. Chartres Cathedral's vault height of about 37 metres was exceeded by Beauvais Cathedral's 48 metres, but on account of the latter's partial collapse in 1284, no further attempt was made to build higher. Attention turned from achieving greater height to creating more awe-inspiring decoration.
Rayonnant Gothic and Decorated Style

Rayonnant Gothic maximized the coverage of stained glass windows, such that the walls are effectively entirely glazed; examples are the nave of Saint-Denis (1231–) and the royal chapel of Louis IX of France on the Île de la Cité in the Seine – the Sainte-Chapelle (c.1241–1248). The high and thin walls of French Rayonnant Gothic, made possible by the flying buttresses, enabled increasingly ambitious expanses of glass and decorated tracery, reinforced with ironwork. Shortly after Saint-Denis, in the 1250s, Louis IX commissioned the rebuilt transepts and enormous rose windows of Notre-Dame de Paris (1250s for the north transept, 1258 for the beginning of the south transept). This first 'international style' was also used in the clerestory of Metz Cathedral (c.1245–), then in the choir of Cologne's cathedral (c.1250–), and again in the nave of the cathedral at Strasbourg (c.1250–).

Masons elaborated a series of tracery patterns for windows – from the basic geometrical to the reticulated and the curvilinear – which superseded the lancet window. Bar-tracery of the curvilinear, flowing, and reticulated types distinguishes the Second Pointed style. Decorated Gothic similarly sought to emphasize the windows, but excelled in the ornamentation of their tracery. Churches with features of this style include Westminster Abbey (1245–), the cathedrals at Lichfield (after 1257–) and Exeter (1275–), Bath Abbey (1298–), and the retro-choir at Wells Cathedral (c.1320–).

The Rayonnant developed its second 'international style' with increasingly autonomous and sharp-edged tracery mouldings, apparent in the cathedral at Clermont-Ferrand (1248–), the papal collegiate church of Saint-Urbain at Troyes (1262–), and the west façade of Strasbourg Cathedral (1276–1439). By 1300, there were examples influenced by Strasbourg in the cathedrals of Limoges (1273–), Regensburg (c.1275–), and in the cathedral nave at York (1292–).

Late Gothic: flamboyant and perpendicular

Central Europe began to lead the emergence of a new, international flamboyant style with the construction of a new cathedral at Prague (1344–) under the direction of Peter Parler. This model of rich and variegated tracery and intricate reticulated rib-vaulting was definitive in the Late Gothic of continental Europe, emulated not only by collegiate churches and cathedrals, but by urban parish churches which rivalled them in size and magnificence. The minster at Ulm and other parish churches like the Heilig-Kreuz-Münster at Schwäbisch Gmünd (c.1320–), St Barbara's Church at Kutná Hora (1389–), and the Heilig-Geist-Kirche (1407–) and St Martin's Church (c.1385–) in Landshut are typical. Use of ogees was especially common.

The flamboyant style was characterised by the multiplication of the ribs of the vaults, with new purely decorative ribs, called tiercerons and liernes, and additional diagonal ribs. One common ornament of the flamboyant in France is the arc-en-accolade, an arch over a window topped by a pinnacle, which was itself topped with a fleuron and flanked by other pinnacles. Examples of French flamboyant building include the west façade of Rouen Cathedral, and especially the façades of the Sainte-Chapelle de Vincennes (1370s) and the choir of Mont-Saint-Michel's abbey church (1448).

In England, the ornamental rib-vaulting and tracery of Decorated Gothic co-existed with, and then gave way to, the perpendicular style from the 1320s, with straightened, orthogonal tracery topped with fan-vaulting.
Perpendicular Gothic was unknown in continental Europe and, unlike earlier styles, had no equivalent in Scotland or Ireland. It first appeared in the cloisters and chapter-house (begun c.1332) of Old St Paul's Cathedral in London, by William de Ramsey. The chancel of Gloucester Cathedral (1357) and its later 14th-century cloisters are early examples. Four-centred arches were often used, and the lierne vaults seen in early buildings were developed into fan vaults, first at the later 14th-century chapter-house of Hereford Cathedral (demolished 1769) and the cloisters at Gloucester, and then at Reginald Ely's King's College Chapel, Cambridge (1446–1461), and the brothers William and Robert Vertue's Henry VII Chapel (1512) at Westminster Abbey. Perpendicular is sometimes called Third Pointed and was employed over three centuries; the fan-vaulted staircase at Christ Church, Oxford, was built around 1640.

Lacey patterns of tracery continued to characterize continental Gothic building, with very elaborate and articulated vaulting, as at Saint Barbara's, Kutná Hora (1512). In certain areas, Gothic architecture continued to be employed until the 17th and 18th centuries, especially in provincial and ecclesiastical contexts, notably at Oxford.

Decline and transition

Beginning in the mid-15th century, the Gothic style gradually lost its dominance in Europe. It had never been popular in Italy, and in the mid-15th century the Italians, drawing upon ancient Roman ruins, returned to classical models. The dome of Florence Cathedral (1420–1436) by Filippo Brunelleschi, inspired by the Pantheon in Rome, was one of the first Renaissance landmarks, but it also employed Gothic technology: the outer skin of the dome was supported by a framework of twenty-four ribs.

In the 16th century, as Renaissance architecture from Italy began to appear in France and other countries in Europe, the Gothic style began to be described as outdated, ugly and even barbaric. The term "Gothic" was first used as a pejorative description. Giorgio Vasari used the term "barbarous German style" in his 1550 Lives of the Artists to describe what is now considered the Gothic style. In the introduction to the Lives he attributed various architectural features to the Goths, whom he held responsible for destroying the ancient buildings after they conquered Rome, and erecting new ones in this style. In the 17th century, Molière also mocked the Gothic style in the 1669 poem La Gloire: "...the insipid taste of Gothic ornamentation, these odious monstrosities of an ignorant age, produced by the torrents of barbarism..." The dominant styles in Europe became in turn Italian Renaissance architecture, Baroque architecture, and the grand classicism of the style Louis XIV.

The Kings of France had first-hand knowledge of the new Italian style, because of the military campaign of Charles VIII to Naples and Milan (1494), and especially the campaigns of Louis XII and Francis I (1500–1505) to restore French control over Milan and Genoa. They brought back Italian paintings, sculpture and building plans, and, more importantly, Italian craftsmen and artists. The Cardinal Georges d'Amboise, chief minister of Louis XII, built the Château de Gaillon near Rouen (1502–1510) with the assistance of Italian craftsmen. The Château de Blois (1515–1524) introduced the Renaissance loggia and open stairway. King Francis I installed Leonardo da Vinci at his Château de Chambord in 1516, and introduced a Renaissance long gallery at the Palace of Fontainebleau in 1528–1540.
In 1546 Francis I began building the first example of French classicism: the square courtyard of the Louvre Palace, designed by Pierre Lescot. Nonetheless, new Gothic buildings, particularly churches, continued to be built. New Gothic churches built in Paris in this period included Saint-Merri (1520–1552) and Saint-Germain-l'Auxerrois. The first signs of classicism in Paris churches did not appear until 1540, at Saint-Gervais-Saint-Protais. The largest new church, Saint-Eustache (1532–1560), rivalled Notre-Dame in size. As construction of this church continued, elements of Renaissance decoration, including the system of classical orders of columns, were added to the design, making it a Gothic-Renaissance hybrid.

In Germany, some Italian elements were introduced at the Fugger Chapel of St. Anne's Church, Augsburg (1510–1512), combined with Gothic vaults; others appeared in the Church of St. Michael in Munich, but in Germany Renaissance elements were used primarily for decoration. Some Renaissance elements also appeared in Spain, in the new palace begun by Emperor Charles V in Granada, within the Alhambra (1485–1550), inspired by Bramante and Raphael, but it was never completed. The first major Renaissance work in Spain was El Escorial, the monastery-palace built by Philip II of Spain.

Under Henry VIII and Elizabeth I, England was largely isolated from architectural developments on the continent. The first classical building in England was Old Somerset House in London (1547–1552; since demolished), built by Edward Seymour, 1st Duke of Somerset, who was regent as Lord Protector for the young Edward VI from 1547. Somerset's successor, John Dudley, 1st Duke of Northumberland, sent the architectural scholar John Shute to Italy to study the style. Shute published the first book in English on classical architecture in 1570. The first English houses in the new style were Burghley House (1550s–1580s) and Longleat, built by associates of Somerset. With those buildings, a new age of architecture began in England.

Gothic architecture, usually for churches or university buildings, continued to be built. Ireland was an island of Gothic architecture in the 17th and 18th centuries; Derry Cathedral (completed 1633), Sligo Cathedral, and Down Cathedral (1790–1818) are examples. In the 17th and 18th centuries several important Gothic buildings were constructed at Oxford University and Cambridge University, including Tom Tower (1681–82) at Christ Church, Oxford, by Christopher Wren. Gothic also appeared, in a whimsical fashion, in Horace Walpole's Twickenham villa, Strawberry Hill (1749–1776). The two western towers of Westminster Abbey were constructed between 1722 and 1745 by Nicholas Hawksmoor, opening a new period of Gothic Revival. Gothic architecture survived the early modern period and flourished again in a revival from the late 18th century and throughout the 19th. Perpendicular was the first Gothic style revived in the 18th century.

Survival, rediscovery and revival

In England, partly in response to a philosophy propounded by the Oxford Movement and others associated with the emerging revival of 'high church' or Anglo-Catholic ideas during the second quarter of the 19th century, neo-Gothic began to be promoted by influential establishment figures as the preferred style for ecclesiastical, civic and institutional architecture.
The appeal of this Gothic revival (which after 1837, in Britain, is sometimes termed Victorian Gothic) gradually widened to encompass "low church" as well as "high church" clients. This period of more universal appeal, spanning 1855–1885, is known in Britain as High Victorian Gothic.

The Palace of Westminster in London by Sir Charles Barry, with interiors by a major exponent of the early Gothic Revival, Augustus Welby Pugin, is an example of the Gothic revival style from its earlier period in the second quarter of the 19th century. Examples from the High Victorian Gothic period include George Gilbert Scott's design for the Albert Memorial in London, and William Butterfield's chapel at Keble College, Oxford. From the second half of the 19th century onwards, it became more common in Britain for neo-Gothic to be used in the design of non-ecclesiastical and non-governmental building types. Gothic details even began to appear in working-class housing schemes subsidised by philanthropy, though, given the expense, less frequently than in the design of upper- and middle-class housing.

The middle of the 19th century was a period marked by the restoration, and in some cases modification, of ancient monuments and the construction of neo-Gothic edifices such as the nave of Cologne Cathedral and the Sainte-Clotilde of Paris, as speculation about mediaeval architecture turned to technical consideration. London's Palace of Westminster, St Pancras railway station, New York's Trinity Church and St Patrick's Cathedral are also famous examples of Gothic Revival buildings. The style also reached the Far East in the period, for instance the Anglican St John's Cathedral located at the centre of Victoria City in Central, Hong Kong.

Structural elements

Pointed arches

The defining characteristic of the Gothic style is the pointed arch, which was widely used in both structure and decoration. The pointed arch did not originate in Gothic architecture; it had been employed for centuries in the Near East in pre-Islamic as well as Islamic architecture for arches, arcades, and ribbed vaults. In Gothic architecture, particularly in the later Gothic styles, pointed arches became the most visible and characteristic element, giving a sensation of verticality and pointing upward, like the spires. Gothic rib vaults covered the nave, and pointed arches were commonly used for the arcades, windows, doorways, in the tracery, and, especially in the later Gothic styles, in decorating the façades. They were also sometimes used for more practical purposes, such as to bring transverse vaults to the same height as diagonal vaults, as in the nave and aisles of Durham Cathedral, built in 1093.

The earliest Gothic pointed arches were lancet lights or lancet windows, narrow windows terminating in a lancet arch. A lancet arch is struck with a radius longer than its breadth (width) and resembles the blade of a lancet. In the 12th-century First Pointed phase of Gothic architecture (also called the Lancet style), before the introduction of tracery in the windows of later styles, lancet windows predominated in Gothic building.

The Flamboyant style of Gothic architecture is particularly known for lavish pointed details such as the arc-en-accolade, where a pointed arch over a doorway was topped by a pointed sculptural ornament called a fleuron and by pointed pinnacles on either side. The arches of the doorway were further decorated with small cabbage-shaped sculptures called chou-frisés.
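The two-centred geometry described above lends itself to a short numerical sketch. The following is a minimal illustration (the function name and sample values are my own, not from the source), assuming the standard construction in which each half of the arch is a circular arc struck from a centre on the springing line: a radius equal to the span gives the equilateral pointed arch, while a longer radius gives the taller, narrower lancet profile.

```python
# Sketch of a two-centred pointed arch: each half is a circular arc struck
# from a centre on the springing line. radius = span gives an equilateral
# arch; radius > span gives a lancet (taller apex relative to its width).
import math

def pointed_arch_half(span: float, radius: float, steps: int = 8):
    """Points on the left half of a two-centred pointed arch.

    The left arc rises from the springing point (-span/2, 0) and is struck
    from a centre on the springing line at (radius - span/2, 0).
    """
    c = radius - span / 2                         # centre of the left-hand arc
    apex = math.atan2(math.sqrt(radius**2 - c**2), -c)
    pts = []
    for i in range(steps + 1):
        a = math.pi + (apex - math.pi) * i / steps   # sweep springing -> apex
        pts.append((c + radius * math.cos(a), radius * math.sin(a)))
    return pts

for name, r in (("equilateral", 1.0), ("lancet", 1.5)):
    apex_y = pointed_arch_half(span=1.0, radius=r)[-1][1]
    print(f"{name}: apex height {apex_y:.3f} x span")
# equilateral: ~0.866, lancet: ~1.118 -- the lancet is taller and sharper,
# matching the "radius longer than its breadth" description above.
```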
Rib vaults

The Gothic rib vault was one of the essential elements that made the great height and large windows of Gothic architecture possible. Unlike the semi-circular barrel vault of Roman and Romanesque buildings, where the weight pressed directly downward and required thick walls and small windows, the Gothic rib vault was made of diagonal crossing arched ribs. These ribs directed the thrust outwards to the corners of the vault, and downwards, via slender colonnettes and bundled columns, to the pillars and columns below. The space between the ribs was filled with thin panels of small pieces of stone, which were much lighter than earlier groin vaults. The outward thrust against the walls was countered by the weight of buttresses and, later, flying buttresses. As a result, the massive, thick walls of Romanesque buildings were no longer needed; since the vaults were supported by columns and piers, the walls could be made thinner and higher, and filled with windows.

The earlier Gothic rib vaults, used at Sens Cathedral (begun between 1135 and 1140) and Notre-Dame de Paris (begun 1163), were divided by the ribs into six compartments. They were very difficult to build and could only cross a limited space. Since each vault covered two bays, they needed support on the ground floor from alternating columns and piers. In later construction, the design was simplified, and the rib vaults were divided into only four compartments. The rows of alternating columns and piers receiving the vaults' weight were replaced by simple pillars, each receiving the same weight. A single vault could cross the nave. This method was used at Chartres Cathedral (1194–1220), Amiens Cathedral (begun 1220), and Reims Cathedral. The four-part vaults made it possible for taller buildings to be constructed. Notre-Dame, which had begun with six-part vaults, reached a vault height of about 35 metres, while Amiens Cathedral, which had begun with the newer four-part ribs, reached about 42 metres at the transept.

Later vaults (13th–15th century)

In France, the four-part rib vault, with two diagonals crossing at the center of the traverse, was the type used almost exclusively until the end of the Gothic period. However, in England, several imaginative new vaults were invented which had more elaborate decorative features. They became a signature of the later English Gothic styles. The first of these new vaults had an additional rib, called a tierceron, which ran down the median of the vault. It first appeared in the vaults of the choir of Lincoln Cathedral at the end of the 12th century, then at Worcester Cathedral in 1224, and then in the south transept of Lichfield Cathedral.

The 14th century brought the invention of several new types of vaults which were more and more decorative. These vaults often copied the forms of the elaborate tracery of the Late Gothic styles. They included the stellar vault, where a group of additional ribs between the principal ribs forms a star design. The oldest vaults of this kind were found in the crypt of Saint Stephen at Westminster Palace, built about 1320. A second type was the reticulated vault, which had a network of additional decorative ribs, in triangles and other geometric forms, placed between or over the traverse ribs. These were first used in the choir of Bristol Cathedral in about 1311. Another late Gothic form, the fan vault, with ribs spreading upwards and outwards, appeared later in the 14th century. An example is the cloister of Gloucester Cathedral.
Another new form was the skeleton vault, which appeared in the English Decorated style. It has an additional network of ribs, like the ribs of an umbrella, which criss-cross the vault but are only directly attached to it at certain points. It appeared in a chapel of Lincoln Cathedral in about 1300, and then in several other English churches. This style of vault was adopted in the 14th century particularly by German architects, notably Peter Parler, and in other parts of central Europe. Another example exists in the south porch of Prague Cathedral.

Elaborate vaults also appeared in civic architecture. An example is the ceiling of the Vladislav Hall in Prague Castle in Bohemia, designed by Benedikt Ried in 1493. The ribs twist and intertwine in fantasy patterns, which later critics called "Rococo Gothic".

Columns and piers

In early French Gothic architecture, the capitals of the columns were modeled after Roman columns of the Corinthian order, with finely-sculpted leaves. They were used in the ambulatory of the Abbey church of Saint-Denis. According to its builder, the Abbot Suger, they were inspired by the columns he had seen in the ancient baths in Rome. They were used later at Sens, at Notre-Dame de Paris and at Canterbury in England. In early Gothic churches with six-part rib vaults, the columns in the nave alternated with more massive piers to provide support for the vaults.

With the introduction of the four-part rib vault, all of the piers or columns in the nave could have the same design. In the High Gothic period, a new form was introduced, composed of a central core surrounded by several attached slender columns, or colonnettes, going up to the vaults. These clustered columns were used at Chartres, Amiens, Reims and Bourges, Westminster Abbey and Salisbury Cathedral. Another variation was the quadrilobe column, shaped like a clover, formed of four attached columns.

In England, the clustered columns were often ornamented with stone rings, as well as with carved leaves. Later styles added further variations. Sometimes the piers were rectangular and fluted, as at Seville Cathedral. In England, parts of columns sometimes had contrasting colours, combining white stone with dark Purbeck marble. In place of the Corinthian capital, some columns used a stiff-leaf design. In later Gothic, the piers became much taller, reaching up more than half of the nave. Another variation, particularly popular in eastern France, was a column which continued upward without capitals or other interruption all the way to the vaults, giving a dramatic display of verticality.

Flying buttresses

An important feature of Gothic architecture was the flying buttress, a half-arch outside the building which carried the thrust of the weight of the roof or vaults inside, over a roof or an aisle, to a heavy stone column. The buttresses were placed in rows on either side of the building, and were often topped by heavy stone pinnacles, both to give extra weight and for additional decoration. Buttresses had existed since Roman times, usually set directly against the building, but the Gothic flying buttresses were more sophisticated. In later structures, the buttresses often had several arches, each reaching to a different level of the structure. The buttresses permitted the buildings to be both taller and to have thinner walls, with greater space for windows. Over time, the buttresses and pinnacles became more elaborate, supporting statues and other decoration, as at Beauvais Cathedral and Reims Cathedral.
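The structural logic of the buttress and its pinnacle can be illustrated with a toy rigid-body check (the numbers are assumed for illustration, not measurements from any cathedral): a free-standing masonry pier resists a horizontal vault thrust so long as the restoring moment of its own weight about its outer toe exceeds the overturning moment of the thrust, which is one reason the added weight of pinnacles was structurally useful.

```python
# A toy statics sketch (illustrative numbers, not from the source) of why
# pinnacles helped: a buttress pier of weight W and width b resists a
# horizontal vault thrust H applied at height h as long as the restoring
# moment about the outer toe, W*b/2, exceeds the overturning moment, H*h.
def pier_is_stable(thrust_kN: float, thrust_height_m: float,
                   pier_weight_kN: float, pier_width_m: float) -> bool:
    overturning = thrust_kN * thrust_height_m          # H * h
    restoring = pier_weight_kN * pier_width_m / 2      # W * b/2
    return restoring >= overturning

H, h, b = 150.0, 18.0, 2.5        # assumed thrust, thrust height, pier width
for W in (1500.0, 2500.0):        # pier weight without / with added pinnacle mass
    print(f"W = {W:6.0f} kN -> stable: {pier_is_stable(H, h, W, b)}")
# 150*18 = 2700 kN*m overturning; W = 1500 kN gives 1875 kN*m (unstable),
# W = 2500 kN gives 3125 kN*m (stable) -- extra weight keeps the pier upright.
```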
The arches had an additional practical purpose: they contained lead channels which carried rainwater off the roof; it was expelled from the mouths of stone gargoyles placed in rows on the buttresses.

Flying buttresses were used less frequently in England, where the emphasis was more on length than height. One example of English buttresses is Canterbury Cathedral, whose choir and buttresses were rebuilt in the Gothic style by William of Sens and William the Englishman. However, they were very popular in Germany: at Cologne Cathedral the buttresses were lavishly decorated with statuary and other ornament, and were a prominent feature of the exterior.

Towers and spires

Towers, spires and flèches were an important feature of Gothic churches. They presented a dramatic spectacle of great height, helped make their churches the tallest and most visible buildings in their city, and symbolised the aspirations of their builders toward heaven. They also had a practical purpose: they often served as bell towers supporting belfries, whose bells told the time by announcing religious services, warned of fire or enemy attack, and celebrated special occasions like military victories and coronations. Sometimes the bell tower was built separate from a church; the best-known example of this is the Leaning Tower of Pisa.

The towers of cathedrals were usually the last part of the structure to be built. Since cathedral construction usually took many years and was extremely expensive, by the time the towers were to be built, public enthusiasm had waned and tastes had changed. Many projected towers were never built, or were built in different styles than other parts of the cathedral, or with different styles on each level of the tower. At Chartres Cathedral, the south tower was built in the 12th century, in the simpler Early Gothic, while the north tower is in the more highly decorated Flamboyant style. Chartres would have been even more exuberant if the second plan had been followed; it called for seven towers around the transept and sanctuary.

In the Île-de-France, cathedral towers followed the Romanesque tradition of two identical towers, one on either side of the portals. The west front of Saint-Denis became the model for the Early Gothic and High Gothic cathedrals of northern France, including Notre-Dame de Paris, Reims Cathedral, and Amiens Cathedral. The early and High Gothic Laon Cathedral has a square lantern tower over the crossing of the transept, two towers on the western front, and two towers on the ends of the transepts. Laon's towers, with the exception of the central tower, are built with two stacked vaulted chambers pierced by lancet openings. The two western towers contain life-size stone statues of sixteen oxen in their upper arcades, said to honour the animals who hauled the stone during the cathedral's construction.

In Normandy, cathedrals and major churches often had multiple towers, built over the centuries; the Abbaye aux Hommes in Caen (begun 1066) has nine towers and spires, placed on the façade, the transepts, and the centre. A lantern tower was often placed over the centre of the nave, at the meeting point with the transept, to give light to the church below.

In later periods of Gothic, pointed needle-like spires were often added to the towers, giving them much greater height. A variation of the spire was the flèche, a slender, spear-like spire, usually placed on the transept where it crossed the nave. Flèches were often made of wood covered with lead or other metal.
They sometimes had open frames, and were decorated with sculpture. Amiens Cathedral has a flèche. The most famous example was that of Notre-Dame de Paris. The original flèche of Notre-Dame was built on the crossing of the transept in the middle of the 13th century, and housed five bells. It was removed in 1786 during a program to modernize the cathedral, but was put back in a new form designed by Eugène Viollet-le-Duc. The new flèche, of wood covered with lead, was decorated with statues of the Apostles; the figure of St Thomas resembled Viollet-le-Duc. The flèche was destroyed in the 2019 fire, but is being restored in the same design.

In English Gothic, the major tower was often placed at the crossing of the transept and nave, and was much higher than the others. The most famous example is the tower of Salisbury Cathedral, completed in 1320 by William of Farleigh. It was a remarkable feat of construction, since it was built upon the pillars of the much earlier church. A crossing tower was constructed at Canterbury Cathedral in 1493–1501 by John Wastell, who had previously worked on King's College at Cambridge. It was finished by Henry Yevele, who also built the present nave of Canterbury. The new central tower at Wells Cathedral caused a problem; it was too heavy for the original structure. An unusual double arch had to be constructed in the centre of the crossing to give the tower the extra support it needed.

England's Gothic parish churches and collegiate churches generally have a single western tower. A number of the finest churches have masonry spires of exceptional height, among them those of St James Church, Louth; St Wulfram's Church, Grantham; St Mary Redcliffe in Bristol; and Coventry Cathedral. Westminster Abbey's crossing tower has for centuries remained unbuilt, and numerous architects have proposed various ways of completing it since the 1250s, when work began on the tower under Henry III. A century and a half later, an octagonal roof lantern resembling that of Ely Cathedral was installed instead, which was then demolished in the 16th century. Construction began again in 1724 to the design of Nicholas Hawksmoor, after Christopher Wren had first proposed a design in 1710, but stopped again in 1727. The crossing remains covered by the stub of the lantern and a 'temporary' roof.

Later Gothic towers in Central Europe often followed the French model, but added even denser decorative tracery. Cologne Cathedral had been started in the 13th century, following the plan of Amiens Cathedral, but only the apse and the base of one tower were finished in the Gothic period. The original plans were conserved and rediscovered in 1817, and the building was completed in the 19th century following the original design. It has two spectacularly ornamented towers, covered with arches, gables, pinnacles and openwork spires pointing upwards. The tower of Ulm Minster has a similar history: begun in 1377, stopped in 1543, and not completed until the 19th century.

Regional variants of Gothic towers appeared in Spain and Italy. Burgos Cathedral was inspired by Northern Europe. It has an exceptional cluster of openwork spires, towers, and pinnacles, drenched with ornament. It was begun in 1444 by a German architect, Juan de Colonia (John of Cologne), and was eventually completed with a central tower (1540) built by his grandson. In Italy the towers were sometimes separate from the cathedral, and the architects usually kept their distance from the Northern European style.
The leaning tower of Pisa Cathedral, built between 1173 and 1372, is the best-known example. The Campanile of Florence Cathedral was built by Giotto in the Florentine Gothic style, decorated with encrustations of polychrome marble. It was originally designed to have a spire.

Tracery

Tracery is an architectural solution by which windows (or screens, panels, and vaults) are divided into sections of various proportions by stone bars or ribs of moulding. Pointed-arch windows of Gothic buildings were initially (late 12th–late 13th centuries) lancet windows, a solution typical of the Early Gothic or First Pointed style and of Early English Gothic. Plate tracery was the first type of tracery to be developed, emerging in the later phase of Early Gothic or First Pointed. Second Pointed is distinguished from First by the appearance of bar-tracery, allowing the construction of much larger window openings, and the development of Curvilinear, Flowing, and Reticulated tracery, ultimately contributing to the Flamboyant style. Late Gothic in most of Europe saw tracery patterns resembling lace develop, while in England Perpendicular Gothic or Third Pointed preferred plainer vertical mullions and transoms.

Tracery is practical as well as decorative, because the increasingly large windows of Gothic buildings needed maximum support against the wind. Plate tracery, in which lights were pierced in a thin wall of ashlar, allowed a window arch to have more than one light – typically two side by side and separated by flat stone spandrels. The spandrels were then sculpted into figures like a roundel or a quatrefoil. Plate tracery reached the height of its sophistication with the 12th-century windows of Chartres Cathedral and in the "Dean's Eye" rose window at Lincoln Cathedral.

At the beginning of the 13th century, plate tracery was superseded by bar-tracery. Bar-tracery divides the large lights from one another with moulded mullions. Stone bar-tracery, an important decorative element of Gothic styles, was first used at Reims Cathedral shortly after 1211, in the chevet built by Jean d'Orbais. It was employed in England around 1240. After 1220, master builders in England had begun to treat the window openings as a series of openings divided by thin stone bars, while before 1230 the apse chapels of Reims Cathedral were decorated with bar-tracery with cusped circles (with bars radiating from the centre). Bar-tracery became common after c.1240, with increasing complexity and decreasing weight. The lines of the mullions continued beyond the tops of the window lights and subdivided the open spandrels above the lights into a variety of decorative shapes.

The Rayonnant style was enabled by the development of bar-tracery in Continental Europe and is named for the radiation of lights around a central point in circular rose windows. Rayonnant also deployed mouldings of two different types in tracery, where earlier styles had used moulding of a single size, with different sizes of mullions. The rose windows of Notre-Dame de Paris (c.1270) are typical.

The early phase of the Middle Pointed style (late 13th century) is characterized by Geometrical tracery – simple bar-tracery forming patterns of foiled arches and circles interspersed with triangular lights. The mullions of the Geometrical style typically had capitals with curved bars emerging from them. Intersecting bar-tracery (c.1300) deployed mullions without capitals which branched off equidistant to the window-head.
The window-heads themselves were formed of equal curves forming a pointed arch, and the tracery-bars were curved by drawing curves with differing radii from the same centres as the window-heads. The mullions were in consequence branched into Y-shaped designs, further ornamented with cusps. The intersecting branches produced an array of lozenge-shaped lights in between numerous lancet-arched lights. Y-tracery was often employed in two-light windows c.1300.

Second Pointed (14th century) saw Intersecting tracery elaborated with ogees, creating a complex reticular (net-like) design known as Reticulated tracery. Second Pointed architecture deployed tracery in a highly decorated fashion known as Curvilinear and Flowing (Undulating). These types of bar-tracery were developed further throughout Europe in the 15th century into the Flamboyant style, named for the characteristic flame-shaped spaces between the tracery-bars. These shapes are known as daggers, fish-bladders, or mouchettes.

Third Pointed or Perpendicular Gothic developed in England from the later 14th century and is typified by Rectilinear tracery (panel-tracery). The mullions are often joined together by transoms and continue up their straight vertical lines to the top of the window's main arch, some branching off into lesser arches, creating a series of panel-like lights. Perpendicular strove for verticality and dispensed with the Curvilinear style's sinuous lines in favour of unbroken straight mullions from top to bottom, transected by horizontal transoms and bars. Four-centred arches were used in the 15th and 16th centuries to create windows of increasing size with flatter window-heads, often filling the entire wall of the bay between the buttresses. The windows were themselves divided into panels of lights topped by pointed arches struck from four centres. The transoms were often topped by miniature crenellations. The windows of King's College Chapel, Cambridge (1446–1515) represent the heights of Perpendicular tracery.

Tracery was used on both the interior and exterior of buildings. It frequently covered the façades, and the interior walls of the nave and choir were covered with blind arcades. It also often picked up and repeated the designs in the stained glass windows. Strasbourg Cathedral has a west front lavishly ornamented with bar-tracery matching the windows.

Elements of Romanesque and Gothic architecture compared

Plans

The plan of Gothic cathedrals and churches was usually based on the Latin cross (or "cruciform") plan, taken from the ancient Roman basilica and from the later Romanesque churches. They have a long nave making the body of the church; a transverse arm called the transept; and, beyond it to the east, the choir, also known as a chancel or presbytery, which was usually reserved for the clergy. The eastern end of the church was rounded in French churches and was occupied by several radiating chapels, which allowed multiple ceremonies to go on simultaneously. In English churches the eastern end also had chapels, but was usually rectangular. A passage called the ambulatory circled the choir. This allowed parishioners, and especially pilgrims, to walk past the chapels to see the relics displayed there without disturbing other services going on. Each vault of the nave formed a separate cell, with its own supporting piers or columns.
The early cathedrals, like Notre-Dame, had six-part rib vaults, with alternating columns and piers, while later cathedrals had the simpler and stronger four-part vaults, with identical columns. Following the model of Romanesque architecture and the Basilica of Saint-Denis, cathedrals usually had two towers flanking the west façade. Towers over the crossing were common in England (Salisbury Cathedral, York Minster) but rarer in France. Transepts were usually short in early French Gothic architecture, but became longer and were given large rose windows in the Rayonnant period. The choirs became more important. The choir was often flanked by a double ambulatory, which was crowned by a ring of small chapels.

In England, transepts were more important, and the floor plans were usually much more complex than in French cathedrals, with the addition of attached Lady Chapels, an octagonal chapter house, and other structures (see the plans of Salisbury Cathedral and York Minster below). This reflected a tendency in France to carry out multiple functions in the same space, while English cathedrals compartmentalized them. This contrast is visible in the difference between Amiens Cathedral, with its minimal transepts and semicircular apse filled with chapels on the east end, and the double transepts, projecting north porch, and rectangular east end of Salisbury and York.

Elevations and the search for height

Gothic architecture was a continual search for greater height, thinner walls, and more light. This was clearly illustrated in the evolving elevations of the cathedrals. In Early Gothic architecture, following the model of the Romanesque churches, the buildings had thick, solid walls with a minimum of windows in order to give enough support for the vaulted roofs. An elevation typically had four levels. On the ground floor was an arcade with massive piers alternating with thinner columns, which supported the six-part rib vaults. Above that was a gallery, called the tribune, which provided stability to the walls and was sometimes used to provide seating for the nuns. Above that was a narrower gallery, called the triforium, which also helped provide additional thickness and support. At the top, just beneath the vaults, was the clerestory, where the high windows were placed. The upper level was supported from the outside by the flying buttresses. This system was used at Noyon Cathedral, Sens Cathedral, and other early structures.

In the High Gothic period, thanks to the introduction of the four-part rib vault, a simplified elevation appeared at Chartres Cathedral and elsewhere. The alternating piers and columns on the ground floor were replaced by rows of identical circular piers wrapped in four engaged columns. The tribune disappeared, which meant that the arcades could be higher. This created more space at the top for the upper windows, which were expanded to include a smaller circular window above a group of lancet windows. The new walls gave a stronger sense of verticality and brought in more light. A similar arrangement was adopted in England, at Salisbury Cathedral, Lincoln Cathedral, and Ely Cathedral.

An important characteristic of Gothic church architecture is its height, both absolute and in proportion to its width, the verticality suggesting an aspiration to Heaven. The increasing height of cathedrals over the Gothic period was accompanied by an increasing proportion of the wall devoted to windows, until, by the late Gothic, the interiors became like cages of glass.
This was made possible by the development of the flying buttress, which transferred the thrust of the weight of the roof to supports outside the walls. As a result, the walls gradually became thinner and higher, and masonry was replaced with glass. The four-part elevation of the naves of early cathedrals such as Notre-Dame (arcade, tribune, triforium, clerestory) was transformed in the choir of Beauvais Cathedral into very tall arcades, a thin triforium, and soaring windows up to the roof.

Beauvais Cathedral reached the limit of what was possible with Gothic technology. A portion of the choir collapsed in 1284, causing alarm in all of the cities with very tall cathedrals. Panels of experts were created in Siena and Chartres to study the stability of those structures. Only the transept and choir of Beauvais were completed, and in the 21st century the transept walls were reinforced with cross-beams. No cathedral built since has exceeded the height of the choir of Beauvais.

West front

Churches traditionally face east, with the altar at the east, and the west front, or façade, was considered the most important entrance. Gothic façades were adapted from the model of the Romanesque façades. The façades usually had three portals, or doorways, leading into the nave. Over each doorway was a tympanum, a work of sculpture crowded with figures. The sculpture of the central tympanum was devoted to the Last Judgement, that to the left to the Virgin Mary, and that to the right to the saints honoured at that particular cathedral. In the early Gothic, the columns of the doorways took the form of statues of saints, making them literally "pillars of the church".

In the early Gothic, the façades were characterized by height, elegance, harmony, unity, and a balance of proportions. They followed the doctrine expressed by Saint Thomas Aquinas that beauty was a "harmony of contrasts". Following the model of Saint-Denis and later Notre-Dame de Paris, the façade was flanked by two towers proportional to the rest of the façade, which balanced the horizontal and vertical elements. Early Gothic façades often had a small rose window placed above the central portal. In England the rose window was often replaced by several lancet windows.

In the High Gothic period, the façades grew higher, and had more dramatic architecture and sculpture. At Amiens Cathedral, the porches were deeper, and the niches and pinnacles were more prominent. The portals were crowned with high arched gables, composed of concentric arches filled with sculpture. The rose windows became enormous, filling an entire wall above the central portal, and were themselves covered by a large pointed arch. The rose windows were pushed upwards by the growing profusion of decoration below. The towers were adorned with their own arches, often crowned with pinnacles. The towers themselves were crowned with spires, often of open-work sculpture. One of the finest examples of a Flamboyant façade is Notre-Dame de l'Épine (1405–1527).

While French cathedrals emphasized the height of the façade, English cathedrals, particularly in the earlier Gothic, often emphasized the width. The west front of Wells Cathedral is 146 feet across, compared with 116 feet at the nearly contemporary Amiens Cathedral, though Amiens is twice as high.
The west front of Wells was almost entirely covered with statuary, like Amiens, and was given even further emphasis by its colours; traces of blue, scarlet, and gold are found on the sculpture, as well as painted stars against the dark background on other sections.

Italian Gothic façades have the three traditional portals and rose windows, or sometimes simply a large circular window without tracery, plus an abundance of flamboyant elements, including sculpture, pinnacles and spires. However, they added distinctive Italian elements, as seen in the façades of Siena Cathedral and Orvieto Cathedral. The Orvieto façade was largely the work of a master mason, Lorenzo Maitani, who worked on the façade from 1308 until his death in 1330. He broke away from the French emphasis on height, eliminated the column statues and statuary in the arched entries, and covered the façade with colourful mosaics of biblical scenes (the current mosaics are of a later date). He also added sculpture in relief on the supporting buttresses.

Another important feature of the Italian Gothic portal was the sculpted bronze door. The sculptor Andrea Pisano made the celebrated bronze doors for the Florence Baptistery (1330–1336). They were not the first; Abbot Suger had commissioned bronze doors for Saint-Denis in 1140, but they were replaced with wooden doors when the Abbey was enlarged. Pisano's work, with its realism and emotion, pointed toward the coming Renaissance.

East end

Cathedrals and churches were traditionally constructed with the altar at the east end, so that the priest and congregation faced the rising sun during the morning liturgy. The sun was considered the symbol of Christ and the Second Coming, a major theme in cathedral sculpture. The portion of the church east of the altar is the choir, reserved for members of the clergy. There is usually a single or double ambulatory, or aisle, around the choir and east end, so that parishioners and pilgrims could walk easily around the east end.

In Romanesque churches, the east end was very dark, due to the thick walls and small windows. In the ambulatory of the Basilica of Saint-Denis, Abbot Suger first used the novel combination of rib vaults and buttresses to eliminate the thick walls and replace them with stained glass, opening up that portion of the church to what he considered "divine light".

In French Gothic churches, the east end, or chevet, often had an apse, a semi-circular projection with a vaulted or domed roof. The chevet of large cathedrals frequently had a ring of radiating chapels, placed between the buttresses to get maximum light. There are three such chapels at Chartres Cathedral; seven at Notre Dame de Paris, Amiens Cathedral, Prague Cathedral and Cologne Cathedral; and nine at the Basilica of Saint Anthony of Padua in Italy. In England, the east end is more often rectangular, and gives access to a separate and large Lady Chapel, dedicated to the Virgin Mary. Lady Chapels were also common in Italy.

Sculpture

Portals and tympanum

Sculpture was an important element of Gothic architecture. Its intent was to present the stories of the Bible in vivid and understandable fashion to the great majority of the faithful, who could not read. The iconography of the sculptural decoration on the façade was not left to the sculptors. An edict of the Second Council of Nicaea in 787 had declared: "The composition of religious images is not to be left to the inspiration of artists; it is derived from the principles put in place by the Catholic Church and religious tradition.
Only the art belongs to the artist; the composition belongs to the Fathers."

In Early Gothic churches, following the Romanesque tradition, sculpture appeared on the façade or west front in the triangular tympanum over the central portal. Gradually, as the style evolved, the sculpture became more and more prominent, taking over the columns of the portal, and gradually climbing above the portals, until statues in niches covered the entire façade, as in Wells Cathedral, then spread to the transepts, and, as at Amiens Cathedral, even to the interior of the façade.

Some of the earliest examples are found at Chartres Cathedral, where the three portals of the west front illustrate the three epiphanies in the Life of Christ. At Amiens, the tympanum over the central portal depicted the Last Judgement, the right portal showed the Coronation of the Virgin, and the left portal showed the lives of saints who were important in the diocese. This set a pattern of complex iconography which was followed at other churches.

The columns below the tympanum are in the form of statues of saints, literally representing them as "the pillars of the church". Each saint had his own symbol at his feet so that viewers could recognize him: a winged lion meant Saint Mark, an eagle with four wings meant Saint John the Apostle, and a winged bull symbolized Saint Luke. Floral and vegetal decoration was also very common, representing the Garden of Eden; grapes represented the wine of the Eucharist.

The tympanum over the central portal on the west façade of Notre-Dame de Paris vividly illustrates the Last Judgement, with figures of sinners being led off to hell, and good Christians taken to heaven. The sculpture of the right portal shows the coronation of the Virgin Mary, and the left portal shows the lives of saints who were important to Parisians, particularly Saint Anne, the mother of the Virgin Mary.

To make the message even more prominent, the sculpture of the tympanum was painted in bright colours, following a system of colours codified in the 12th century: yellow, called gold, symbolized intelligence, grandeur and virtue; white, called argent, symbolized purity, wisdom, and correctness; black, or sable, meant sadness, but also will; green, or sinople, represented hope, liberty and joy; red, or gueules (see gules), meant charity or victory; blue, or azure, symbolised the sky, faithfulness and perseverance; and violet, or pourpre, was the colour of royalty and sovereignty.

In the later Gothic, the sculpture became more naturalistic; the figures were separated from the walls, and had much more expressive faces, showing emotion and personality. The drapery was very skilfully carved, and the torments of hell were even more vividly depicted. The late Gothic sculpture at Siena Cathedral, by Nino Pisano, pointing toward the Renaissance, is particularly notable; much of it is now kept in a museum to protect it from deterioration.

Grotesques and Labyrinths

Besides saints and apostles, the exteriors of Gothic churches were also decorated with sculptures of a variety of fabulous and frightening grotesques or monsters. These included the chimera, a mythical hybrid creature which usually had the body of a lion and the head of a goat, and the strix or stryge, a creature resembling an owl or bat, which was said to eat human flesh. The strix appeared in classical Roman literature; it was described by the Roman poet Ovid, who was widely read in the Middle Ages, as a large-headed bird with transfixed eyes, rapacious beak, and greyish white wings.
They were part of the visual message for the illiterate worshippers, symbols of the evil and danger that threatened those who did not follow the teachings of the church.

The gargoyles, which were added to Notre-Dame in about 1240, had a more practical purpose. They were the rain spouts of the church, designed to divide the torrent of water which poured from the roof after rain, and to project it outwards as far as possible from the buttresses and the walls and windows, so that it would not erode the mortar binding the stone. To produce many thin streams rather than a torrent of water, a large number of gargoyles were used, so they were also designed to be a decorative element of the architecture. The rainwater ran from the roof into lead gutters, then down channels on the flying buttresses, then along a channel cut in the back of the gargoyle and out of the mouth, away from the church.

Many of the statues at Notre-Dame, particularly the grotesques, were removed from the façade in the 17th and 18th centuries, or were destroyed during the French Revolution. They were replaced with figures in the Gothic style, designed by Eugène Viollet-le-Duc during the 19th-century restoration. Similar figures appear on the other major Gothic churches of France and England.

Another common feature of Gothic cathedrals in France was a labyrinth or maze on the floor of the nave near the choir, which symbolised the difficult and often complicated journey of a Christian life before attaining paradise. Most labyrinths were removed by the 18th century, but a few, like the one at Amiens Cathedral, have been reconstructed, and the labyrinth at Chartres Cathedral still exists essentially in its original form.

Windows and stained glass

Increasing the amount of light in the interior was a primary objective of the founders of the Gothic movement. Abbot Suger described the new kind of architecture he had created in the east end of Saint-Denis: "a circular ring of chapels, by virtue of which the whole church would shine with the wonderful and uninterrupted light of most luminous windows, pervading the interior beauty."

Religious teachings in the Middle Ages, particularly the writings of Pseudo-Dionysius the Areopagite, a 6th-century mystic whose book, De Coelesti Hierarchia, was popular among monks in France, taught that all light was divine. When Abbot Suger ordered the reconstruction of the choir of the abbey church at Saint-Denis, he had the builders create seventy windows, admitting as much light as possible, as the means by which the faithful could be elevated from the material world to the immaterial world.

The placement of the windows was also determined by religious doctrine. The windows on the north side, frequently in the shade, depicted the Old Testament. The windows on the east, corresponding to the direction of the sunrise, had images of Christ and scenes from the New Testament.

In the Early Gothic period, the glass was particularly thick and was deeply coloured with metal oxides: cobalt for blue, copper for a ruby red, iron for green, and antimony for yellow. The process of making the windows was described in detail by the 12th-century monk known as Theophilus Presbyter. The glass of each colour was melted with the oxide, blown, shaped into small sheets, cracked with a hot iron into small pieces, and assembled on a large table. The details were painted onto the glass in vitreous enamel, then baked in a kiln to fuse the enamel on the glass.
The pieces were fitted into a framework of thin lead strips, and then set into a more solid frame of iron armatures between the panels. The finished window was set into the stone opening. Thin vertical and horizontal bars of iron, called vergettes or barlotierres, were placed inside the window to reinforce the glass against the wind.

The use of iron rods between the panels of glass and a framework of stone mullions, or ribs, made it possible to create much larger windows, such as the three great rose windows at Chartres (1203–1240). Larger windows also appeared at York Minster (1140–1160) and Canterbury Cathedral (1178–1200).

The stained glass windows were extremely complex and expensive to create. King Louis IX paid for the rose windows in the transept of Notre-Dame de Paris, but other windows were financed by the contributions of the professions or guilds of the city. These windows usually had a panel which illustrated the work of the guild which funded it, such as the drapers, stonemasons, or coopers.

The 13th century saw the introduction of a new kind of window, with grisaille, or white glass, with a geometric pattern, usually joined with medallions of stained glass. These windows allowed much more light into the cathedral, but diminished the vividness of the stained glass, since there was less contrast between the dark interior and bright exterior.

The most remarkable and influential work of stained glass in the 13th century was the royal chapel, Sainte-Chapelle (1243–1248), where the windows of the upper chapel occupied all of the walls on three sides, with 1,134 individual scenes. Sainte-Chapelle became the model for other chapels across Europe.

The 14th century brought a variety of new colours, and the use of more realistic shading and half-toning. This was made possible by the development of flashed glass: clear glass was dipped into coloured glass, then portions of the coloured glass were ground away to give exactly the right shade. In the 15th century, artists began painting directly onto the glass with enamel colours. Gradually the art of glass came closer and closer to traditional painting.

One of the most celebrated Flamboyant buildings was the Sainte-Chapelle de Vincennes (1370s), with walls of glass from floor to ceiling. The original glass was destroyed and has been replaced by grisaille glass. King's College Chapel (15th century) also followed the model of walls entirely filled with glass.

In England, the stained glass windows also grew in size and importance; major examples were the Becket Windows at Canterbury Cathedral (1200–1230) and the windows of Lincoln Cathedral (1200–1220). Enormous windows were also an important element of York Minster and Gloucester Cathedral. Much of the stained glass in Gothic churches today dates from later restorations, but a few, notably Chartres Cathedral and Bourges Cathedral, still have many of their original windows.

Rose windows

Rose windows were a prominent feature of many Gothic churches and cathedrals. The rose was a symbol of the Virgin Mary, and they were particularly used in churches dedicated to her.
The French Gothic cathedrals of Chartres, Notre Dame de Paris, Reims, and Laon have them in the west façade and in the transepts as well. Amiens Cathedral, Strasbourg Cathedral and Westminster Abbey also have them in the transepts. The designs of their tracery became increasingly complex, and gave their names to two periods: the Rayonnant and the Flamboyant. Two of the most famous Rayonnant rose windows were constructed in the transepts of Notre-Dame in the 13th century.

High Gothic architectural elements, 1180–1230

Flying buttresses developed.
Higher vaults were possible because of the flying buttresses.
Larger clerestory windows because of the flying buttresses.
Clerestory windows had geometric tracery.
Rose windows became larger, with geometric tracery.
The west front of Notre-Dame set a formula adopted by other cathedrals.
Transept ends had ornate portals like the west front.

Rayonnant Gothic architectural elements, 1230–1350

Cathedrals increasingly tall in relation to width, facilitated by the development of complex systems of buttressing.
Quadripartite vaults over a single bay.
Vaults in France maintained simple forms, but elsewhere the patterns of ribs became more elaborate.
Emphasis on the appearance of height internally.
Abandonment of the fourth stage, either the deep triforium gallery or the shallow tribune gallery, in the internal elevation.
Columns of classical proportion disappear in favour of increasingly tall columns surrounded by clusters of shafts.
Complex shafted piers.
Large windows divided by mullions into several lights (vertical panels), with geometric tracery in the arch.
Large rose windows in geometric or radiating designs.

Flamboyant Gothic architectural elements, 1350–1550

The design of tracery no longer dependent on circular shapes, but developed S curves and flame-like shapes.
Complex vaults with Flamboyant shapes in the ribs, particularly in Spain and Central Europe, but rare in France.
Many rose windows built with Flamboyant tracery, particularly in France.
Large windows of several lights with Flamboyant tracery in the arch.
The Flamboyant arch, drafted from four centres, used for smaller openings, e.g. doorways and niches (a geometric sketch of this four-centre construction follows this list).
Mouldings of Flamboyant shape often used as non-structural decoration over openings, topped by a floral finial (poupée).
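The four-centre construction mentioned in the list above can be described geometrically: two small haunch arcs of radius r spring from the jambs, and two larger crown arcs of radius R meet in a point at the apex, with each crown arc tangent to a haunch arc where they join. The sketch below solves that tangency condition numerically for the left half of the arch (the right half mirrors it). It is only an illustrative geometric exercise: the proportions assumed for the haunch radius and for the drop of the crown centres below the springing line are arbitrary choices, not a documented medieval drafting rule.

```python
from math import hypot

def four_centred_arch(span, rise, r=None, drop=None):
    """Find the centres and radii of a depressed four-centred arch.

    Haunch arcs (radius r) have centres on the springing line; crown
    arcs (radius R) have centres `drop` below it and pass through the
    apex.  Assumes a depressed arch (rise well under span/2), so the
    tangency equation has a root.  r and drop are assumed proportions.
    """
    r = r if r is not None else span / 5
    drop = drop if drop is not None else span / 4
    apex = (span / 2, rise)

    def mismatch(d):
        # For a crown centre at (d, -drop): internal tangency with the
        # haunch arc requires R = (distance between centres) + r, and
        # the crown arc must also pass through the apex.  mismatch()
        # is the gap between the two requirements; it shrinks as d grows.
        R = hypot(d - r, drop) + r
        return hypot(apex[0] - d, apex[1] + drop) - R

    lo, hi = r, span / 2
    for _ in range(80):  # bisection on the monotonic mismatch
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mismatch(mid) > 0 else (lo, mid)
    d = (lo + hi) / 2
    R = hypot(d - r, drop) + r
    return {"haunch_centre": (r, 0.0), "haunch_radius": r,
            "crown_centre": (d, -drop), "crown_radius": R}

# A depressed doorway arch: 3 units wide, 1 unit of rise.
print(four_centred_arch(3.0, 1.0))
```

With the default proportions this yields a crown radius a little under twice the rise; making r smaller or the drop larger flattens the shoulders, which is the visual signature of the late Gothic depressed arch.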
Palaces

The Gothic style was used in royal and papal residences as well as in churches. Prominent examples include the Palais de la Cité, the medieval Louvre, and the Château de Vincennes in Paris, residences of the French kings; the Doge's Palace in Venice; and the Palace of the Kings of Navarre in Olite (1269–1512). Another is the Palais des Papes (Palace of the Popes), the former Papal residence in Avignon. It was constructed between 1252 and 1364, during the Avignon Papacy. Given the complicated political situation, it combined the functions of a church, a seat of government and a fortress.

The Palais de la Cité in Paris, close to Notre-Dame de Paris and begun in 1119, was the principal residence of the French kings until 1417. Most of the Palais de la Cité is gone, but two of the original towers along the Seine, the vaulted ceilings of the Hall of the Men-at-Arms (1302) (now part of the Conciergerie), and the original chapel, Sainte-Chapelle, can still be seen.

The Louvre Palace was originally built by Philippe II of France beginning in 1190 to house the King's archives and treasures, and was given machicolations and other features of a Gothic fortress. However, it was soon made obsolete by the development of artillery, and in the 15th century it was remodelled into a comfortable residential palace. While the outer walls retained their original military appearance, the castle itself, with a profusion of spires, towers, pinnacles, arches and gables, became a visible symbol of royalty and aristocracy. The style was copied in chateaux and other aristocratic residences across France and other parts of Europe.

Civic architecture

In the 15th century, during the late Gothic or Flamboyant period, elements of Gothic decoration began to appear in the town halls of northern France, Flanders and the Netherlands. The Rouen Courthouse in Normandy is representative of Flamboyant Gothic in France. The Hôtel de Ville of Compiègne has an imposing Gothic bell tower, featuring a spire surrounded by smaller towers, and its windows are decorated with ornate accolades, or ornamental arches. Similarly flamboyant town halls were found in Arras, Douai, and Saint-Quentin, Aisne, and, in modern Belgium, in Brussels, Ghent, Bruges, Audenarde, Mons and Leuven.

Gothic civil architecture in Spain includes the Silk Exchange in Valencia (1482–1548), a major marketplace, which has a main hall with twisting columns beneath its vaulted ceiling.

University Gothic

The Gothic style was adopted in the late 13th to 15th centuries in early English university buildings, with inspiration coming from monasteries and manor houses. The oldest existing example in England is probably the Mob Quad of Merton College at Oxford University, constructed between 1288 and 1378. The style was further refined by William of Wykeham, Chancellor of England and founder of New College, Oxford, in 1379. His architect, William Wynford, designed the New College quadrangle in the 1380s, which combined a hall, chapel, library, and residences for Fellows and undergraduates. A similar kind of academic cloister was created at Queens' College, Cambridge, in the 1440s, likely designed by Reginald Ely.

The design of the colleges was influenced not only by abbeys, but also by the design of English manor houses of the 14th and 15th centuries, such as Haddon Hall in Derbyshire. They were composed of rectangular courtyards with covered walkways which separated the wings. Some colleges, like Balliol College, Oxford, borrowed a military style from Gothic castles, with battlements and crenelated walls.

King's College Chapel, Cambridge is one of the finest examples of the late Gothic style. It was built by King Henry VI, who was displeased by the excessive decoration of earlier styles. He wrote in 1447 that he wanted his chapel "to proceed in large form, clean and substantial, setting apart superfluity of too great curious works of entail and busy moulding." The chapel, built between 1508 and 1515, has glass walls from floor to ceiling, rising to spreading fan vaults designed by John Wastell. The glass walls are supported by large external buttresses concealed at the base by side chapels.

Other European examples include the Collegio di Spagna in the University of Bologna, built during the 14th and 15th centuries; the Collegium Carolinum of the Charles University in Prague in the Czech Republic; the Escuelas mayores of the University of Salamanca in Spain; and the Collegium Maius of the Jagiellonian University in Kraków, Poland.
Military architecture

In the 13th century, the design of the castle evolved in response to contact with the more sophisticated fortifications of the Byzantine Empire and the Islamic world during the Crusades. These new fortifications were more geometric, with a central high tower called a keep (donjon), which could be defended even if the curtain walls of the castle were breached. The donjon of the Château de Vincennes, begun by Philip VI of France, was a good example. Even though it stood within the moat and walls of the fortress, it had its own separate drawbridge leading to an upper floor.

Towers, usually round, were placed at the corners and along the walls in the Philippienne castle, close enough together to support each other. The walls had two levels of walkways on the inside, a crenellated parapet with merlons, and projecting machicolations from which missiles could be dropped on besiegers. The upper walls also had protected protruding balconies, échauguettes and bretèches, from which soldiers could see what was happening at the corners or on the ground below. In addition, the towers and walls were pierced with arrowslits, which sometimes took the form of crosses to enable a wider field of fire for archers and crossbowmen. Castles were surrounded by a deep moat, spanned by a single drawbridge. The entrance was also protected by an iron grill which could be opened and closed. The walls at the bottom were often sloping, and protected with earthen barriers. One good surviving example is the Château de Dourdan, near Nemours.

After the end of the Hundred Years War (1337–1453), with improvements in artillery, the castles lost most of their military importance. They remained as symbols of the rank of their noble occupants; the narrow openings in the walls were often widened into the windows of bedchambers and ceremonial halls. The tower of the Château de Vincennes became a part-time royal residence until the Palace of Versailles was completed.

Synagogues

Although Christianity played a dominant role in Gothic sacred architecture, Jewish communities were present in many European cities during the Middle Ages, and they also built their houses of prayer in the Gothic style. Most Gothic synagogues did not survive, because they were often destroyed in connection with persecution of the Jews (e.g. in Bamberg, Nürnberg, Regensburg, Vienna). One of the best preserved examples of a Gothic synagogue is the Old New Synagogue in Prague, which was completed around 1270 and never rebuilt.

Influences

Romanesque and Norman influence

Romanesque architecture and Norman architecture had a major influence upon Gothic architecture. The plan of the Gothic cathedral was based upon the plan of the ancient Roman basilica, which was adopted by Romanesque architecture. The Latin cross form, with a nave and transept, choir, ambulatory, and radiating chapels, came from the Romanesque model. The grand arcades of columns separating the central vessel of the nave from the collateral aisles, the triforium over the grand arcades, and the windows high on the walls allowing light into the nave were all also adapted from the Romanesque model. The portal with a tympanum filled with sculpture was another characteristic Romanesque feature, as was the use of the buttress to support the walls from the outside. Gothic architects improved on these features by adding the flying buttress, with high arches connecting the buttresses to the upper walls.
In the interior, Romanesque architecture used the barrel vault with a round arch to cover the nave, and a groin vault where two barrel vaults met at right angles. These vaults were the immediate ancestors of the Gothic rib vault. One of the first uses of the Gothic rib vault to cover a nave was in the Romanesque Durham Cathedral (1093–1104).

Norman architecture, similar to the Romanesque style, also influenced the Gothic style. Early examples are found in Lessay Abbey in Normandy, which also featured early rib vaults in the nave similar to the Gothic vaults. Cefalù Cathedral (1131–1267) in Sicily, built when Sicily was under Norman rule, is another interesting example. It featured pointed arches and large Gothic rib vaults combined with ornamental mosaic decoration.

Romanesque architecture had become a pan-European style and manner of construction, affecting buildings in countries as far apart as Ireland and Croatia, and Sweden and Sicily. The same wide geographic area was then affected by the development of Gothic architecture, but the acceptance of the Gothic style and methods of construction differed from place to place, as did the expressions of Gothic taste. The proximity of some regions meant that modern country borders did not define divisions of style. Many different factors, such as geographical and geological conditions and economic, social, or political situations, caused regional differences in the great abbey churches and cathedrals of the Romanesque period that would often become even more apparent in the Gothic. For example, studies of population statistics reveal disparities such as the multitude of churches, abbeys, and cathedrals in northern France, while in more urbanised regions construction activity of a similar scale was reserved for a few important cities. One such example comes from Roberto López, who observed that the French city of Amiens was able to fund its architectural projects whereas Cologne could not, because of the economic inequality of the two. This wealth, concentrated in rich monasteries and noble families, would eventually spread to certain Italian, Catalan, and Hanseatic bankers. The situation would be amended when the economic hardships of the 13th century were no longer felt, allowing Normandy, Tuscany, Flanders, and the southern Rhineland to enter into competition with France.

Islamic influence

The pointed arch, one of the defining attributes of Gothic, was earlier featured in Islamic architecture, though it did not have the same function there. Precursors of the pointed arch appeared in Byzantine and Sassanian architecture: in Byzantine architecture it is evidenced in early church building in Syria and in occasional secular structures, like the Karamagara Bridge; in Sassanian architecture it was employed in palace and sacred construction. These pre-Islamic arches were decorative rather than structural in function. As an architectonic principle, the pointed arch was first clearly established in Islamic architecture; in that role it was entirely alien to the pre-Islamic world.

Use of the pointed arch seems to have taken off dramatically in Islamic architecture. It begins to appear throughout the Islamic world in close succession after its adoption in the late Umayyad or early Abbasid period. Some examples are the Al-Ukhaidir Palace (775 AD), the Abbasid reconstruction of the Al-Aqsa mosque in 780 AD, the Ramlah Cisterns (789 AD), the Great Mosque of Samarra (851 AD), and the Mosque of Ibn Tulun (879 AD) in Cairo.
It also appears in one of the early reconstructions of the Great Mosque of Kairouan in Tunisia, and in the Mosque–Cathedral of Córdoba in 987 AD. The pointed arch had already been used in Syria, but in the mosque of Ibn Tulun we have one of the earliest examples of its use on an extensive scale, some centuries before it was exploited in the West by the Gothic architects.

A kind of rib vault was also used in Islamic architecture, for example in the ceiling of the Mosque–Cathedral of Córdoba. In Córdoba, the dome was supported by pendentives, which connected the dome to the arches below, and the pendentives were decorated with ribs. Unlike the Gothic rib vault, the Islamic ribs were purely decorative; they did not extend outside of the vault, and they were not part of the structure supporting the roof. This kind of rib vault may have spread to Western Europe via Islamic Spain and Sicily; the early rib vaults in Spain were used to support cupolas, and were decorative. Pendentives, such as those supporting the dome of the Mosque–Cathedral of Córdoba, were also frequently used in Romanesque and Byzantine architecture, as in the dome of Hagia Sophia in Istanbul.

The military and cultural contacts with the medieval Islamic world, including the Norman conquest of Islamic Sicily in 1090, the Crusades (beginning 1096), and the Islamic presence in Spain, may have influenced Medieval Europe's adoption of the pointed arch. The Gothic rib vault, like other features such as the flying buttress, also had antecedents in Romanesque architecture, for example at Durham Cathedral, constructed between 1093 and 1096. In those parts of the Western Mediterranean subject to Islamic control or influence, rich regional variants arose, fusing Romanesque and later Gothic traditions with Islamic decorative forms, as for example in Monreale and Cefalù Cathedrals, the Alcázar of Seville, and Teruel Cathedral.

Armenian influence

A number of scholars have cited the Armenian Cathedral of Ani, completed in 1001 or 1010, as a possible influence on the Gothic, especially due to its use of pointed arches and clustered piers. However, other scholars, such as Sirarpie Der Nersessian, rejected this notion, arguing that the pointed arches of Ani did not serve the same function of supporting the vault. Lucy Der Manuelian contends that some Armenians (historically documented as being in Western Europe in the Middle Ages) could have brought the knowledge and techniques employed at Ani to the West.

Notable examples

Austria
St. Stephen's Cathedral, Vienna

Belarus
Mir Castle Complex
Muravanka Church
Church of St. Barys and St. Hlieb, Navahradak
Church of St. Michael, Synkavichy
Church of the Holy Trinity, Iškaldź

Belgium
Brussels Town Hall
Brussels Cathedral
Belfry of Bruges
Belfry of Ghent
Tournai Cathedral
Antwerp Cathedral
Leuven Town Hall
Mechelen Cathedral

Croatia
Zagreb Cathedral

Czech Republic
Basilica of St. Ludmila
Cathedral of St. Peter and Paul (Brno)
Charles Bridge
Karlštejn Castle
Prague Cathedral
Old Town Hall (Prague)
St. Barbara's Church in Kutná Hora
Vladislav Hall

France
Albi Cathedral
Amiens Cathedral
Blois-Vienne Church
Chartres Cathedral
Fontevraud Abbey
Notre-Dame de Paris
Palais des papes
Reims Cathedral
Rouen Cathedral
Saint Denis Basilica
Sainte-Chapelle
Strasbourg Cathedral

Germany
Ulm Minster
Cologne Cathedral
Maulbronn Monastery
Regensburg Cathedral
Freiburg Minster
Bremen Town Hall
Frauenkirche

Hungary
Matthias Church

Ireland
Christ Church Cathedral
Saint Patrick's Cathedral

Italy
Fossanova Abbey
Santa Maria Arabona Abbey
Casamari Abbey
Basilica di Sant'Andrea (Vercelli)
Milan Cathedral
Orvieto Cathedral
Florence Cathedral
Church of Santa Croce (Florence)
Siena Cathedral
Lucera Cathedral
Naples Cathedral
Church of San Francesco d'Assisi (Palermo)
Church of Santa Maria dello Spasimo (Palermo)
Church of Santa Maria della Catena (Palermo)
Church of San Lorenzo Maggiore (Naples)
Church of Santa Maria Donna Regina Vecchia (Naples)
Church of Santa Chiara (Naples)
Doge's Palace
Palace of the Popes (Viterbo)
Palazzo Chiaramonte
Palazzo Abatellis
Palazzo Corvaja
Palazzo Pubblico
Palazzo Vecchio
Giotto's Campanile
White Tower (Brixen)
Castello Maniace
Castello Ursino
Castel Nuovo
Castel del Monte (Apulia)

Lithuania
Kaunas Castle
Trakai Peninsula Castle
Trakai Island Castle
Medininkai Castle
Vilnius Upper Castle
Saint Nicholas Church
Vytautas the Great Church
Kaunas Cathedral Basilica
Church of St. Anne
House of Perkūnas

Netherlands
St. John's Cathedral ('s-Hertogenbosch)
Ridderzaal, The Hague
Grote or Sint-Jacobskerk (The Hague)
Middelburg Town Hall, Middelburg
St. Martin's Cathedral, Utrecht
Nieuwe Kerk (Amsterdam)
Nieuwe Kerk (Delft)
Cathedral of St Bavo, Haarlem
Grote Kerk, Haarlem
City Hall (Haarlem)
Grote Kerk (Breda)
St. Christopher's Cathedral, Roermond
Dinghuis, Maastricht
Oude Kerk (Delft)
Grote Kerk, Dordrecht
Hooglandse Kerk, Leiden
Grote of Sint-Laurenskerk (Rotterdam)
St Eusebius' Church, Arnhem

Norway
Nidaros Cathedral
Haakon's Hall, Bergenhus

Poland
Wrocław Town Hall
Gdańsk Town Hall
Copernicus House in Toruń
Frombork Cathedral
Gniezno Cathedral
Wawel Cathedral
Pelplin Abbey
Toruń Cathedral
Wrocław Cathedral
Gniew Castle
Kwidzyn Castle
Lidzbark Castle
Malbork Castle
St. Mary's Basilica, Kraków
Basilica of St. James and St. Agnes, Nysa
Collegiate Basilica of the Birth of the Blessed Virgin Mary, Wiślica
St. Mary's Church, Gdańsk
St. Catherine's Church, Gdańsk
St. Mary's Church, Stargard
Basilica of the Holy Trinity, Kraków
Corpus Christi Basilica
St Elizabeth's Church, Wrocław
St Dorothea Church, Wrocław
Collegiate Church of the Holy Cross and St. Bartholomew, Wrocław
Church of St Mary on the Sand
St. John the Evangelist's Church, Paczków
Saints Peter and Paul Basilica, Strzegom
Kraków Barbican
Collegium Maius, Kraków
St. Florian's Gate

Portugal
Jeronimos Monastery
Monastery of Batalha
Monastery of Alcobaça
Evora Cathedral
Carmo Convent
Guarda Cathedral
Lisbon Cathedral
Oporto Cathedral
Silves Cathedral
Cathedral of Funchal
Convent of Christ
Castle of Leiria
Sabugal Castle
Castle of Estremoz
Castle of Bragança
Castle of Santa Maria da Feira
Belém Tower
Monastery of Jesus of Setúbal
Convent of Nossa Senhora da Conceição de Beja
Graça Church
Santa Maria dos Olivais Church
Leça do Balio Monastery
Saint John of Alporão Church
Monastery of Santa Clara-a-Velha
Monastery of São Francisco

Romania
Black Church
Corvin Castle
Saschiz fortified church
Sebeș Lutheran church
Sibiu Lutheran Cathedral
St. Michael's Church, Cluj-Napoca

Spain
Palace of the Kings of Navarre of Olite
Palau de la Generalitat
Llotja de la Seda
León Cathedral
Burgos Cathedral
Toledo Cathedral
Cathedral of Avila
Palace of the Borgias
Oviedo Cathedral
Valencia Cathedral
Seville Cathedral, the largest Gothic church
Palma Cathedral

Sweden
Linköping Cathedral
Uppsala Cathedral
Visby Cathedral

Switzerland
Basel Minster

Slovakia
St Elisabeth Cathedral
St Martin's Cathedral, Bratislava

United Kingdom
Bath Abbey
Beverley Minster
Bristol Cathedral
Canterbury Cathedral
Christ Church, Oxford
Ely Cathedral
Glasgow Cathedral
King's College Chapel, Cambridge
Lichfield Cathedral
Lincoln Cathedral
Peterborough Cathedral
Salisbury Cathedral
St George's Chapel, Windsor Castle
Wells Cathedral
Westminster Abbey
Winchester Cathedral
York Minster

See also

Architectural history
Architecture of cathedrals and great churches
Carpenter Gothic
Collegiate Gothic in North America
Gothicmed
Gothic cathedrals and churches
List of Gothic architecture
Mudéjar
Tented roof
Abbey de Sainte-Marie-au-Bois

Further reading

Fletcher, Banister; Cruickshank, Dan. Sir Banister Fletcher's A History of Architecture. Architectural Press, 20th edition, 1996 (first published 1896). Cf. Part Two, Chapter 14.
Cram, Ralph Adams (1909). "Gothic Architecture". The Catholic Encyclopedia. Vol. 6. New York: Robert Appleton Company.
Glaser, Stephanie. "The Gothic Cathedral and Medievalism", in: Falling into Medievalism, ed. Anne Lair and Richard Utz. Special issue of UNIversitas: The University of Northern Iowa Journal of Research, Scholarship, and Creative Activity, 2.1 (2006). (On the Gothic revival of the 19th century and the depictions of Gothic cathedrals in the arts.)
Rudolph, Conrad, ed. A Companion to Medieval Art: Romanesque and Gothic in Northern Europe, 2nd ed. (2016).
Tonazzi, Pascal (2007). Florilège de Notre-Dame de Paris (anthologie). Editions Arléa, Paris.
Rivière, Rémi; Lavoye, Agnès (2007). La Tour Jean sans Peur. Association des Amis de la tour Jean sans Peur.

External links

Mapping Gothic France, a project by Columbia University and Vassar College with a database of images, 360° panoramas, texts, charts and historical maps
Gothic Architecture, Encyclopædia Britannica
Gutenberg.org, from Project Gutenberg
Archive.org, from Internet Archive
Parker, J. H. (1881), A B C of Gothic Architecture. Oxford: Parker & Co.

Architectural history
Architectural styles
European architecture
Architecture in England
Architecture in Italy
Medieval French architecture
Catholic architecture
12th-century architecture
13th-century architecture
14th-century architecture
15th-century architecture
16th-century architecture
Gothic architecture
[ "Engineering" ]
20,089
[ "Architectural history", "Architecture" ]
54,048
https://en.wikipedia.org/wiki/Steppe
In physical geography, a steppe () is an ecoregion characterized by grassland plains without closed forests except near rivers and lakes. Steppe biomes may include:

the montane grasslands and shrublands biome
the tropical and subtropical grasslands, savannas, and shrublands biome
the temperate grasslands, savannas, and shrublands biome

A steppe is usually covered with grass and shrubs, depending on the season and latitude. The term steppe climate denotes a semi-arid climate, which is encountered in regions too dry to support a forest, but not dry enough to be a desert. Steppes are usually characterized by a semi-arid or continental climate, with extremes of heat in summer and of cold in winter. Besides this major seasonal difference, fluctuations between day and night are also significant: in both the highlands of Mongolia and northern Nevada, hot daytime temperatures can be followed by sub-freezing readings at night. Mid-latitude steppes receive low annual precipitation and feature hot summers and cold winters. In addition to the precipitation level, its combination with potential evapotranspiration defines a steppe climate.

Classification

Steppe can be classified by climate:

Temperate steppe: the true steppe, found in continental climates; it can be further subdivided, as in the Rocky Mountains steppes
Subtropical steppe: a similar association of plants occurring in the driest areas with a Mediterranean climate; it usually has a short wet period

It can also be classified by vegetation type, e.g. shrub-steppe and alpine-steppe.

Cold steppe

The world's largest steppe region, often referred to as "the Great Steppe", is found in Eastern Europe and Central Asia and neighbouring countries, stretching from Ukraine in the west through Russia, Kazakhstan, Turkmenistan and Uzbekistan to the Altai, Kopet Dag and Tian Shan ranges in China. The Eurasian Steppe is speculated by David W. Anthony to have had a role in the spread of the horse, the wheel and Indo-European languages. In the Eurasian steppe, soils often consist of chernozem.

The inner parts of Anatolia in Turkey, Central Anatolia and East Anatolia in particular and also some parts of Southeast Anatolia, as well as much of Armenia and Iran, are largely dominated by cold steppe.

The Pannonian Plain is another steppe region in Central Europe, centered in Hungary but also including portions of Slovakia, Poland, Ukraine, Romania, Serbia, Croatia, Slovenia, and Austria.

Another large steppe area (prairie) is located in the central United States, western Canada and the northern part of Mexico. The shortgrass prairie steppe is the westernmost part of the Great Plains region. The Columbia Plateau in southern British Columbia, Oregon, Idaho, and Washington state is an example of a steppe region in North America outside of the Great Plains.

In South America, cold steppe can be found in Patagonia and much of the high elevation regions east of the southern Andes. Relatively small steppe areas can be found in the interior of the South Island of New Zealand.

In Australia, a moderately sized temperate steppe region exists in the northern and northwest regions of Victoria, extending to the southern and mid regions of New South Wales. This area borders the semi-arid and arid Australian Outback, which is found farther inland on the continent.
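In climatology, the boundary between steppe, desert, and non-arid climates is conventionally drawn with the Köppen–Geiger dryness threshold, which compares annual precipitation against a temperature-based proxy for evaporative demand, exactly the precipitation-versus-evapotranspiration balance described above. The sketch below implements that standard scheme for illustration; the threshold constants and the 18 °C hot/cold split are taken from the usual Köppen–Geiger convention rather than from this article, and real classifications use monthly station data.

```python
def koppen_arid_class(annual_precip_mm, mean_annual_temp_c, summer_precip_fraction):
    """Return 'BWh'/'BWk' (desert), 'BSh'/'BSk' (steppe), or None (not arid)."""
    # The dryness threshold rises with temperature, and with the share of
    # rain falling in summer (when more of it is lost to evaporation).
    if summer_precip_fraction >= 0.70:
        threshold = 20 * mean_annual_temp_c + 280
    elif summer_precip_fraction >= 0.30:
        threshold = 20 * mean_annual_temp_c + 140
    else:
        threshold = 20 * mean_annual_temp_c
    if annual_precip_mm >= threshold:
        return None  # humid enough for forest or other non-arid biomes
    group = "BW" if annual_precip_mm < threshold / 2 else "BS"
    return group + ("h" if mean_annual_temp_c >= 18 else "k")

# A continental station: 300 mm of precipitation a year, mostly in summer,
# with a mean annual temperature of 8.5 degrees C.
print(koppen_arid_class(300, 8.5, 0.75))  # -> 'BSk', a cold steppe climate
```

The BSk result is the cold, semi-arid class typical of the Eurasian Great Steppe and the shortgrass prairie, while BSh marks the hot subtropical steppes discussed in the next section.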
Subtropical steppe

In Europe, some Mediterranean areas have a steppe-like vegetation, such as central Sicily in Italy, southern Portugal, parts of Greece in the southern Athens area, and central-eastern Spain, especially the southeastern coast (around Murcia), as well as places cut off from adequate moisture due to rain shadow effects, such as Zaragoza. In northern Africa, the Mediterranean area also hosts the same steppe-like vegetation, such as the Algerian-Moroccan Hautes Plaines and, by extension, the North Saharan steppe and woodlands.

In Asia, a subtropical steppe can be found in the semi-arid lands that fringe the Thar Desert of the Indian subcontinent, in much of the Deccan Plateau in the rain shadow of the Western Ghats, and in the Badia of the Levant. In Australia, subtropical steppe can be found in a belt surrounding the most severe deserts of the continent and around the Musgrave Ranges. In North America this environment is typical of transition areas between zones with a Mediterranean climate and true deserts, such as Reno, Nevada, the inner part of California, and much of western Texas and adjacent areas in Mexico.

See also

Grassland
Plain
Prairie
Tugay
Tundra

Grasslands
Montane grasslands and shrublands
Temperate grasslands, savannas, and shrublands
Ecoregions
Plains
Prairies
Steppe
[ "Biology" ]
966
[ "Grasslands", "Ecosystems" ]
54,050
https://en.wikipedia.org/wiki/Typographic%20unit
Typographic units are the units of measurement used in typography or typesetting. Traditional typometry units are different from familiar metric units because they were established in the early days of printing. Though most printing is digital now, the old terms and units have persisted. Even though these units are all very small, across a line of print they add up quickly. Confusion, such as resetting text originally in type of one unit in type of another, will result in words moving from one line to the next, producing all sorts of typesetting errors (viz. rivers, widows and orphans, disrupted tables, and misplaced captions). Before the popularization of desktop publishing, type measurements were done with a tool called a typometer.

Development

In Europe, the Didot point system was created by François-Ambroise Didot (1730–1804) in c. 1783. Didot's system was based on Pierre Simon Fournier's (1712–1768), but Didot modified Fournier's by adjusting the base unit precisely to a French Royal inch (pouce), as Fournier's unit was based on a less common foot. (Fournier's printed scale of his point system, from Manuel Typographique, Barbou, Paris 1764, enlarged.)

However, the basic idea of the point system – to generate different type sizes by multiplying a single minimum unit calculated by dividing a base measurement unit such as one French Royal inch – was not Didot's invention, but Fournier's. In Fournier's system, an approximate French Royal inch (pouce) is divided by 12 to calculate 1 ligne, which is then divided by 6 to get 1 point. Didot just made the base unit (one French Royal inch) identical to the standard value defined by the government. In Didot's point system:

1 point = 1⁄6 ligne = 1⁄72 French Royal inch = 15 625⁄41 559 mm ≈ 0.375 971 510 4 mm, in practice mostly 0.376 mm (i.e. +0.0076%).

Both in Didot's and Fournier's systems, some point sizes have traditional names such as Cicero (before the introduction of point systems, type sizes were called by names such as Cicero, Pica, Ruby, Great Primer, etc.):

1 cicero = 12 Didot points = 1⁄6 French Royal inch = 62 500⁄13 853 mm ≈ 4.511 658 124 6 mm, in practice mostly 4.512 mm (i.e. +0.0076%).

The Didot point system has been widely used in European countries. An abbreviation for it that these countries use is "dd", employing an old method for indicating plurals; hence "12 dd" means twelve Didot points.

In Britain and the United States, many proposals for type size standardization had been made by the end of the 19th century (such as Bruce Typefoundry's mathematical system, which was based on a precise geometric progression). However, no nationwide standard was created until the American Point System was agreed in 1886. The American Point System was proposed by Nelson C. Hawks of Marder, Luse & Company in Chicago in the 1870s, and his point system used the same method of size division as Fournier's; viz. dividing 1 inch by 6 to get 1 pica, and dividing that again by 12 to get 1 point. However, the American Point System finally standardized in 1886 differs from Hawks' original idea in that 1 pica is not precisely equal to 1⁄6 inch (neither the Imperial inch nor the US inch), as the United States Type Founders' Association defined the standard pica to be the Johnson Pica, which had been adopted and used by the Mackellar, Smiths and Jordan type foundry (MS&J) of Philadelphia. As MS&J was very influential in those days, many other type foundries were using the Johnson Pica. Also, MS&J defined that 83 picas are equal to 35 centimeters.
The choice of a metric prototype was made because, at the time, the Imperial and US inches differed slightly in size, and neither country could legally specify a unit of the other. The Johnson Pica was named after Lawrence Johnson, who had succeeded Binny & Ronaldson in 1833. Binny & Ronaldson was one of the oldest type foundries in the United States, established in Philadelphia in 1796. Binny & Ronaldson had bought the type founding equipment of Benjamin Franklin's (1706–1790) type foundry, established in 1786 and run by his grandson Benjamin Franklin Bache (1769–1798). The equipment is thought to be that which Benjamin Franklin purchased from Pierre Simon Fournier when he visited France for diplomatic purposes (1776–85).

The official standard approved by the Fifteenth Meeting of the Type Founders Association of the United States in 1886 was this Johnson pica, equal to exactly 0.166 inch. Therefore, the two other – very close – definitions, 1200⁄7227 inch and 350⁄83 mm, are both unofficial. Monotype wedges used in England and America were based on a pica of 0.1660 inch, but on the European continent all available wedges were based on the "old pica" of 0.1667 inch. These wedges were marked with an extra E behind the numbers of the wedge and the set. These differences can also be found in the tables of the manuals. In the American point system:

1 Johnson pica = exactly 0.166 inch (versus 1⁄6 inch ≈ 0.1667 inch for the DTP pica) = 4.2164 mm.
1 point = 1⁄12 pica = 0.013 83 inch = 0.351 36 mm.

The American point system has been used in the US, Britain, Japan, and many other countries. Today, digital printing and display devices and page layout software use a unit that is different from these traditional typographic units. On many digital printing systems (desktop publishing systems in particular), the following equations are applicable (with exceptions, most notably the popular TeX typesetting system and its derivatives):

1 pica = 1⁄6 inch (British/American inch of today) = 4.233 mm.
1 point = 1⁄12 pica = 1⁄72 inch = 127⁄360 mm = 0.3527 mm.

Digital displays and printing led to the use of an additional unit:

1 twip = 1⁄20 point = 1⁄1440 inch = 127⁄7200 mm = 0.017 638 mm.

Fournier's original method of division is thus restored in today's digital typography. Comparing a piece of type in didots for Continental European countries – 12 dd, for example – to a piece of type for an English-speaking country – 12 pt – shows that the main body of a character is actually about the same size. The difference is that the languages of the former often need extra space atop the capital letters for accent marks (e.g. Ñ, Â, Ö, É), but English rarely needs this.

Metric units

The traditional typographic units are based either on non-metric units, or on odd multiples (such as 35⁄83) of a metric unit. There are no specifically metric units for this particular purpose, although there is a DIN standard sometimes used in German publishing, which measures type sizes in multiples of 0.25 mm, and proponents of the metrication of typography generally recommend the use of the millimetre for typographical measurements, rather than the development of new specifically typographical metric units. The Japanese already do this for their own characters (using the kyu, which is q in romanized Japanese and is also 0.25 mm), and have metric-sized type for European languages as well. One advantage of the q is that it reintroduces the proportional integer division of 3 mm (12 q) by 6 and 4.
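The relationships above reduce every unit to an exact or near-exact number of millimetres, so conversions are just ratios. The sketch below collects the values quoted in this article, using exact rationals where the definitions are exact; the dictionary keys are illustrative names chosen for this example, not standard abbreviations.

```python
from fractions import Fraction as F

INCH = F(254, 10)  # 25.4 mm, the modern British/American inch

# Millimetres per unit, from the definitions given above.
MM_PER_UNIT = {
    "didot_point":  F(15625, 41559),           # traditional Didot point, ~0.3760 mm
    "metric_didot": F(3, 8),                   # 1973 EU redefinition, 0.375 mm
    "johnson_pica": F(166, 1000) * INCH,       # exactly 0.166 in = 4.2164 mm
    "us_point":     F(166, 1000) * INCH / 12,  # 1/12 Johnson pica, ~0.35136 mm
    "dtp_pica":     INCH / 6,                  # 4.2333... mm
    "dtp_point":    INCH / 72,                 # 127/360 mm, ~0.3527 mm
    "twip":         INCH / 1440,               # 1/20 of a DTP point
    "q":            F(1, 4),                   # Japanese kyu, 0.25 mm
    "mm":           F(1),
}

def convert(value, from_unit, to_unit):
    """Convert a length between typographic units via millimetres."""
    return value * MM_PER_UNIT[from_unit] / MM_PER_UNIT[to_unit]

# One cicero (12 Didot points) expressed in DTP points:
print(float(convert(12, "didot_point", "dtp_point")))   # ~12.79
# The Johnson pica's deficit against the DTP pica, in twips:
print(float(convert(1, "dtp_pica", "twip") - convert(1, "johnson_pica", "twip")))
```

Because a Didot point is about 6.6% larger than a DTP point, 12 dd comes out near 12.8 pt, which is why resetting Continental text in Anglo-American sizes shifts line breaks.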
During the age of the French Revolution or Napoleonic Empire, the French established a typographic unit of 0.4 mm, but except for the government's print shops, this did not catch on. In 1973, the didot was restandardized in the EU as 0.375 mm (= 3⁄8 mm). Care must be taken because the name of the unit is often left unmodified: the Germans, however, use the terms Fournier-Punkt and Didot-Punkt for the earlier ones, and Typografischer Punkt for this metric one. The TeX typesetting system uses the abbreviation dd for the earlier definition, and nd for the metric new didot.

External links

Typographic Unit Converter
Metric Typographic Units
Typographic Measurement Units

Units of length
Typographic unit
[ "Mathematics" ]
2,383
[ "Quantity", "Units of measurement", "Units of length" ]
54,058
https://en.wikipedia.org/wiki/Hypanthium
In angiosperms, a hypanthium or floral cup is a structure where the basal portions of the calyx, the corolla, and the stamens form a cup-shaped tube. It is sometimes called a floral tube, a term that is also used for the corolla tube and the calyx tube. It often contains the nectaries of the plant. It is present in many plant families, although it varies in structural dimensions and appearance. This variation between the hypanthia of particular species is useful for identification: some are obconic in shape, as in toyon, whereas others are saucer-shaped, as in Mitella caulescens.

Its presence is diagnostic of many families, including the Rosaceae, Grossulariaceae, and Fabaceae. In some cases, it can be so deep, with such a narrow top, that the flower can appear to have an inferior ovary, with the ovary below the other attached floral parts. The hypanthium is known by different common names in differing species: in the eucalypts, it is referred to as the gum nut; in roses, it is called the hip.

Variations in plant species

In myrtles, the hypanthium can either surround the ovary loosely or tightly; in some cases, it can be fused to the walls of the ovary. It can vary in length. The rim around the outside of the hypanthium bears the calyx lobes or free sepals, the petals, and the stamens, which are attached at one or two points.

The flowers of the family Rosaceae, or the rose family, always have some type of hypanthium, or at least a floral cup, from which the sepals, petals and stamens all arise, and which is lined with nectar-producing tissue known as nectaries. The nectar is a sugary substance that attracts birds and bees to the flower, which then take the pollen from the lining of the hypanthium and transfer it to the next flower they visit, usually on a neighbouring plant. The stamens borne on the hypanthium are the pollen-producing reproductive organs of the flower.

The hypanthium helps in many ways with the reproduction and cross-pollination pathways of most plants. It provides weather protection and a medium to sustain shed pollen, increasing the probability of fertility and cross-pollination. The retained pollen can then attach to pollinators such as birds, bees, moths, beetles, bats, butterflies and other animals; wind can also act as an agent of fertilisation. The hypanthium is also an adaptive feature for structural support: it helps the stem fuse with the flower, strengthening the bond and the overall stability and integrity.

External links

Hypanthium images on MorphBank, a biological image database

Plant morphology
Hypanthium
[ "Biology" ]
601
[ "Plant morphology", "Plants" ]
54,061
https://en.wikipedia.org/wiki/Amphitheatre
An amphitheatre (U.S. English: amphitheater) is an open-air venue used for entertainment, performances, and sports. The term derives from the ancient Greek amphitheatron, from amphi, meaning "on both sides" or "around", and theatron, meaning "place for viewing".

Ancient Greek theatres were typically built on hillsides and semi-circular in design. The first amphitheatre may have been built at Pompeii around 70 BC. Ancient Roman amphitheatres were oval or circular in plan, with seating tiers that surrounded the central performance area, like a modern open-air stadium. In contrast, both ancient Greek and ancient Roman theatres were built in a semicircle, with tiered seating rising on one side of the performance area. Modern English parlance uses "amphitheatre" for any structure with sloping seating, including theatre-style stages with spectator seating on only one side, theatres in the round, and stadia. They can be indoor or outdoor.

Roman amphitheatres

About 230 Roman amphitheatres have been found across the area of the Roman Empire. Their typical shape, functions and name distinguish them from Roman theatres, which are more or less semicircular in shape; from the circuses (similar to hippodromes), whose much longer circuits were designed mainly for horse or chariot racing events; and from the smaller stadia, which were primarily designed for athletics and footraces. Roman amphitheatres were circular or oval in plan, with a central arena surrounded by perimeter seating tiers. The seating tiers were pierced by entrance-ways controlling access to the arena floor, and isolating it from the audience.

Temporary wooden structures functioning as amphitheatres would have been erected for the funeral games held in honour of deceased Roman magnates by their heirs, featuring fights to the death by gladiators, usually armed prisoners of war, at the funeral pyre or tomb of the deceased. These games are described in Roman histories as gifts, entertainments or duties to honour deceased individuals, Rome's gods and the Roman community. Some Roman writers interpret the earliest attempts to provide permanent amphitheatres and seating for the lower classes as populist political graft, rightly blocked by the Senate as morally objectionable; too-frequent, excessively "luxurious" games would corrode traditional Roman morals. The provision of permanent seating was thought a particularly objectionable luxury.

The earliest permanent, stone and timber Roman amphitheatre with perimeter seating was built in Rome in 29 BCE. Most were built under Imperial rule, from the Augustan period (27 BCE–14 CE) onwards. Imperial amphitheatres were built throughout the Roman Empire, especially in provincial capitals and major colonies, as an essential aspect of Romanitas. There was no standard size; the largest could accommodate 40,000–60,000 spectators. The most elaborate featured multi-storeyed, arcaded façades and were decorated with marble, stucco and statuary. The best-known and largest Roman amphitheatre is the Colosseum in Rome, also known as the Flavian Amphitheatre, after the Flavian dynasty who had it built.

After the ending of gladiatorial games in the 5th century and of staged animal hunts in the 6th, most amphitheatres fell into disrepair. Their materials were mined or recycled. Some were razed, and others were converted into fortifications. A few continued as convenient open meeting places; in some of these, churches were sited.
Modern amphitheatres In modern English usage, an amphitheatre may be a circular, semicircular or curved performance space, particularly one located outdoors. Contemporary amphitheatres often include standing structures, called bandshells, sometimes curved or bowl-shaped, both behind the stage and behind the audience, creating an area that echoes or amplifies sound, making the amphitheatre ideal for musical or theatrical performances. Small-scale amphitheatres can host local outdoor community performances. Notable modern amphitheatres include the Shoreline Amphitheatre, the Hollywood Bowl and the Aula Magna at Stockholm University. The term "amphitheatre" is also used for some indoor venues, such as the now-demolished Gibson Amphitheatre and the Chicago International Amphitheatre. In some other languages, such as German, an amphitheatre can only be a circular performance space; a venue where the audience does not surround the stage cannot, by the word's definition, be called an amphitheatre. Natural amphitheatres A natural amphitheatre is a performance space located in a spot where a steep mountain or a particular rock formation naturally amplifies or echoes sound, making it ideal for musical and theatrical performances. The term can also describe naturally occurring formations that would be ideal for this purpose, even if no theatre has been constructed there. Notable natural amphitheatres include the Drakensberg Amphitheatre in South Africa, Slane Castle in Ireland, the Supernatural Amphitheatre in Australia, and the Red Rocks and the Gorge Amphitheatres in the western United States. There is evidence that the Anasazi people used natural amphitheatres for the public performance of music in Pre-Columbian times, including a large constructed performance space in Chaco Canyon, New Mexico. See also Odeon (building) Colosseum Ancient theatres Theatre of ancient Greece List of ancient Greek theatres Arena Thingplatz List of Roman amphitheatres List of contemporary amphitheatres List of indoor arenas Notes References Buildings and structures by type
Amphitheatre
[ "Engineering" ]
1,171
[ "Buildings and structures by type", "Architecture" ]
54,099
https://en.wikipedia.org/wiki/Pantothenic%20acid
Pantothenic acid (vitamin B5) is a B vitamin and an essential nutrient. All animals need pantothenic acid in order to synthesize coenzyme A (CoA), which is essential for cellular energy production and for the synthesis and degradation of proteins, carbohydrates, and fats. Pantothenic acid is the combination of pantoic acid and β-alanine. Its name comes from the Greek pantothen, meaning "from everywhere", because pantothenic acid, at least in small amounts, is in almost all foods. Deficiency of pantothenic acid is very rare in humans. In dietary supplements and animal feed, the form commonly used is calcium pantothenate, because chemically it is more stable, and hence makes for longer product shelf-life, than sodium pantothenate and free pantothenic acid. Definition Pantothenic acid is a water-soluble vitamin, one of the B vitamins. It is synthesized from the amino acid β-alanine and pantoic acid (see biosynthesis and structure of coenzyme A figures). Unlike vitamin E or vitamin K, which occur in several chemically related forms known as vitamers, pantothenic acid is only one chemical compound. It is a starting compound in the synthesis of coenzyme A (CoA), a cofactor for many enzyme processes. Use in biosynthesis of coenzyme A Pantothenic acid is a precursor to CoA via a five-step process. The biosynthesis requires pantothenic acid, cysteine, and four equivalents of ATP (see figure):
1. Pantothenic acid is phosphorylated to 4′-phosphopantothenate by the enzyme pantothenate kinase. This is the committed step in CoA biosynthesis and requires ATP.
2. A cysteine is added to 4′-phosphopantothenate by the enzyme phosphopantothenoylcysteine synthetase to form 4′-phospho-N-pantothenoylcysteine (PPC). This step is coupled with ATP hydrolysis.
3. PPC is decarboxylated to 4′-phosphopantetheine by phosphopantothenoylcysteine decarboxylase.
4. 4′-Phosphopantetheine is adenylated (or more properly, AMPylated) to form dephospho-CoA by the enzyme phosphopantetheine adenylyl transferase.
5. Finally, dephospho-CoA is phosphorylated to coenzyme A by the enzyme dephosphocoenzyme A kinase. This final step also requires ATP.
This pathway is suppressed by end-product inhibition, meaning that CoA is a competitive inhibitor of pantothenate kinase, the enzyme responsible for the first step. Coenzyme A is necessary in the reaction mechanism of the citric acid cycle. This process is the body's primary catabolic pathway and is essential in breaking down the building blocks of the cell, such as carbohydrates, amino acids and lipids, for fuel. CoA is important in energy metabolism for pyruvate to enter the tricarboxylic acid cycle (TCA cycle) as acetyl-CoA, and for α-ketoglutarate to be transformed to succinyl-CoA in the cycle. CoA is also required for acylation and acetylation, which, for example, are involved in signal transduction, and various enzyme functions. In addition to functioning as CoA, this compound can act as an acyl group carrier to form acetyl-CoA and other related compounds; this is a way to transport carbon atoms within the cell. CoA is also required in the formation of acyl carrier protein (ACP), which is required for fatty acid synthesis. Its synthesis also connects with other vitamins such as thiamin and folic acid. Dietary recommendations The US Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for B vitamins in 1998. At that time, there was not sufficient information to establish EARs and RDAs for pantothenic acid. 
In instances such as this, the Board sets Adequate Intakes (AIs), with the understanding that at some later date, AIs may be replaced by more exact information. The current AI for teens and adults ages 14 and up is 5 mg/day. This was based in part on the observation that for a typical diet, urinary excretion was approximately 2.6 mg/day, and that bioavailability of food-bound pantothenic acid was roughly 50%. AI for pregnancy is 6 mg/day. AI for lactation is 7 mg/day. For infants up to 12 months, the AI is 1.8 mg/day. For children ages 1–13 years, the AI increases with age from 2 to 4 mg/day. Collectively, the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). While for many nutrients the US Department of Agriculture uses food composition data combined with food consumption survey results to estimate average consumption, its surveys and reports do not include pantothenic acid in the analyses. Less formal estimates of adult daily intakes report about 4 to 7 mg/day. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the US. For women and men over age 11, the Adequate Intake (AI) is set at 5 mg/day. AI for pregnancy is 5 mg/day, for lactation 7 mg/day. For children ages 1–10 years, the AI is 4 mg/day. These AIs are similar to the US AIs. Safety As for safety, the IOM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of pantothenic acid, there is no UL, as there are no human data for adverse effects from high doses. The EFSA also reviewed the safety question and reached the same conclusion as in the United States – that there was not sufficient evidence to set a UL for pantothenic acid. Labeling requirements For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For pantothenic acid labeling purposes, 100% of the Daily Value was 10 mg, but as of May 2016 it was revised to 5 mg to bring it into agreement with the AI. Compliance with the updated labeling regulations was required by January 2020 for manufacturers with US$10 million or more in annual food sales, and by January 2021 for manufacturers with lower-volume food sales. A table of the old and new adult daily values is provided at Reference Daily Intake. Sources Dietary Food sources of pantothenic acid include animal-sourced foods such as dairy foods and eggs. Potatoes, tomato products, oat cereals, sunflower seeds and avocados are good plant sources, as are mushrooms. Whole grains are another source of the vitamin, but milling to make white rice or white flour removes much of the pantothenic acid, as it is found in the outer layers of whole grains. In animal feeds, the most important sources are alfalfa, cereal, fish meal, peanut meal, molasses, rice bran, wheat bran, and yeasts. Supplements Dietary supplements of pantothenic acid commonly use pantothenol (or panthenol), a shelf-stable analog, which is converted to pantothenic acid once consumed. Calcium pantothenate – a salt – may be used in manufacturing because it is more resistant than pantothenic acid to factors that deteriorate stability, such as acid, alkali or heat. 
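As a worked check of the Adequate Intake derivation described under Dietary recommendations above, the short Python sketch below reproduces the arithmetic. Treating the AI as urinary losses divided by bioavailability is an inference from the text, not a formula the source states explicitly.

```python
# Reproduces the AI reasoning quoted above: if ~2.6 mg/day is lost in urine
# on a typical diet and only ~50% of food-bound pantothenic acid is absorbed,
# intake must roughly double the loss figure. The division below is an
# assumed reading of the committee's logic, shown for illustration only.

urinary_excretion_mg_per_day = 2.6   # typical daily loss, from the text
bioavailability = 0.50               # fraction absorbed from food, from the text

required_intake = urinary_excretion_mg_per_day / bioavailability
print(f"Estimated requirement: {required_intake:.1f} mg/day")  # 5.2 mg/day
print("Published adult AI: 5 mg/day")
```

The result, 5.2 mg/day, is consistent with the published adult AI of 5 mg/day.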
Dietary supplement products may contain up to 1,000 mg of pantothenic acid (200 times the Adequate Intake level for adults), without evidence that such large amounts provide any benefit. According to WebMD, pantothenic acid supplements have a long list of claimed uses, but there is insufficient scientific evidence to support any of them. As a dietary supplement, pantothenic acid is not the same as pantethine, which is composed of two pantothenic acid molecules linked by a disulfide bridge. Sold as a high-dose supplement (600 mg), pantethine may be effective for lowering blood levels of LDL cholesterol – a risk factor for cardiovascular diseases – but its long-term effects are unknown, so use should be supervised by a physician. Dietary supplementation with pantothenic acid does not have the same cholesterol-lowering effect as pantethine. Fortification According to the Global Fortification Data Exchange, pantothenic acid deficiency is so rare that no countries require that foods be fortified. Absorption, metabolism and excretion When found in foods, most pantothenic acid is in the form of CoA or bound to acyl carrier protein (ACP). For the intestinal cells to absorb this vitamin, it must be converted into free pantothenic acid. Within the lumen of the intestine, CoA and ACP are hydrolyzed into 4'-phosphopantetheine. The 4'-phosphopantetheine is then dephosphorylated into pantetheine. Pantetheinase, an intestinal enzyme, then hydrolyzes pantetheine into free pantothenic acid. Free pantothenic acid is absorbed into intestinal cells via a saturable, sodium-dependent active transport system. At high levels of intake, when this mechanism is saturated, some pantothenic acid may also be absorbed via passive diffusion. Overall, when intake increases 10-fold, the fraction absorbed decreases to 10%. Pantothenic acid is excreted in urine. This occurs after its release from CoA. Urinary amounts are on the order of 2.6 mg/day, but decreased to negligible amounts when subjects in multi-week experimental studies were fed diets devoid of the vitamin. Deficiency Pantothenic acid deficiency in humans is very rare and has not been thoroughly studied. In the few cases where deficiency has been seen (prisoners of war during World War II, victims of starvation, or limited volunteer trials), nearly all symptoms were reversed with orally administered pantothenic acid. Symptoms of deficiency are similar to other vitamin B deficiencies. There is impaired energy production, due to low CoA levels, which could cause symptoms of irritability, fatigue, and apathy. Acetylcholine synthesis is also impaired; therefore, neurological symptoms can also appear in deficiency; they include sensation of numbness in hands and feet, paresthesia and muscle cramps. Additional symptoms could include restlessness, malaise, sleep disturbances, nausea, vomiting and abdominal cramps. In animals, symptoms include disorders of the nervous, gastrointestinal, and immune systems, reduced growth rate, decreased food intake, skin lesions and changes in hair coat, and alterations in lipid and carbohydrate metabolism. In rodents, there can be loss of hair color, which led to marketing of pantothenic acid as a dietary supplement that could prevent or treat graying of hair in humans (despite the lack of any human trial evidence). Pantothenic acid status can be assessed by measuring either whole blood concentration or 24-hour urinary excretion. 
In humans, whole blood values less than 1 μmol/L are considered low, as is urinary excretion of less than 4.56 μmol/day (about 1 mg/day). Animal nutrition Calcium pantothenate and dexpanthenol (D-panthenol) are European Food Safety Authority (EFSA) approved additives to animal feed. Supplementation is on the order of 8–20 mg/kg for pigs, 10–15 mg/kg for poultry, 30–50 mg/kg for fish and 8–14 mg/kg feed for pets. These are recommended concentrations, designed to be higher than what are thought to be requirements. There is some evidence that feed supplementation increases pantothenic acid concentration in tissues, i.e., meat consumed by humans, and in eggs, but this raises no concerns for consumer safety. No dietary requirement for pantothenic acid has been established in ruminant species. Synthesis of pantothenic acid by ruminal microorganisms appears to supply 20 to 30 times more than dietary amounts. Net microbial synthesis of pantothenic acid in the rumen of steer calves has been estimated to be 2.2 mg/kg of digestible organic matter consumed per day. Supplementation of pantothenic acid at 5 to 10 times theoretical requirements did not improve growth performance of feedlot cattle. Synthesis Biosynthesis Bacteria synthesize pantothenic acid from the amino acid aspartate and a precursor of the amino acid valine. Aspartate is converted to β-alanine. The amino group of valine is replaced by a keto-moiety to yield α-ketoisovalerate, which, in turn, forms α-ketopantoate following transfer of a methyl group, then D-pantoate (also known as pantoic acid) following reduction. β-alanine and pantoic acid are then condensed to form pantothenic acid (see figure). Industrial synthesis The industrial synthesis of pantothenic acid starts with the aldol condensation of isobutyraldehyde and formaldehyde. The resulting hydroxypivaldehyde is converted to its cyanohydrin derivative, which is cyclised to give racemic pantolactone. This sequence of reactions was first published in 1904. Synthesis of the vitamin is completed by resolution of the lactone using quinine, for example, followed by treatment with the calcium or sodium salt of β-alanine. History The term vitamin is derived from the word vitamine, which was coined in 1912 by Polish biochemist Casimir Funk, who isolated a complex of water-soluble micronutrients essential to life, all of which he presumed to be amines. When this presumption was later determined not to be true, the "e" was dropped from the name, hence "vitamin". Vitamin nomenclature was alphabetical, with Elmer McCollum calling the fat-soluble factor A and the water-soluble factor B. Over time, eight chemically distinct, water-soluble B vitamins were isolated and numbered, with pantothenic acid as vitamin B5. The essential nature of pantothenic acid was discovered by Roger J. Williams in 1933 by showing it was required for the growth of yeast. Three years later, Elvehjem and Jukes demonstrated that it was a growth and anti-dermatitis factor in chickens. Williams dubbed the compound "pantothenic acid", deriving the name from the Greek word pantothen, which translates as "from everywhere". His reason was that he found it to be present in almost every food he tested. Williams went on to determine the chemical structure in 1940. In 1953, Fritz Lipmann shared the Nobel Prize in Physiology or Medicine "for his discovery of co-enzyme A and its importance for intermediary metabolism", work he had published in 1946. References Carboxamides B vitamins Primary alcohols Carboxylic acids Secondary alcohols
Pantothenic acid
[ "Chemistry" ]
3,287
[ "Carboxylic acids", "Functional groups" ]
54,114
https://en.wikipedia.org/wiki/Vitamin%20A
Vitamin A is a fat-soluble vitamin that is an essential nutrient. The term "vitamin A" encompasses a group of chemically related organic compounds that includes retinol, retinyl esters, and several provitamin (precursor) carotenoids, most notably β-carotene (beta-carotene). Vitamin A has multiple functions: it supports growth during embryo development, maintains the immune system, and is required for healthy vision. For vision specifically, it combines with the protein opsin to form rhodopsin, the light-absorbing molecule necessary for both low-light (scotopic) vision and color vision. Vitamin A occurs as two principal forms in foods: A) retinoids, found in animal-sourced foods, either as retinol or bound to a fatty acid to become a retinyl ester, and B) the carotenoids α-carotene (alpha-carotene), β-carotene, γ-carotene (gamma-carotene), and the xanthophyll beta-cryptoxanthin (all of which contain β-ionone rings) that function as provitamin A in herbivore and omnivore animals, which possess the enzyme that cleaves and converts provitamin carotenoids to retinol. Some carnivore species lack this enzyme. The other carotenoids do not have retinoid activity. Dietary retinol is absorbed from the digestive tract via passive diffusion. Unlike retinol, β-carotene is taken up by enterocytes by the membrane transporter protein scavenger receptor B1 (SCARB1), which is upregulated in times of vitamin A deficiency (VAD). Retinol is stored in lipid droplets in the liver. A high capacity for long-term storage of retinol means that well-nourished humans can go months on a vitamin A-deficient diet while maintaining blood levels in the normal range. Only when the liver stores are nearly depleted will signs and symptoms of deficiency show. Retinol is reversibly converted to retinal, then irreversibly to retinoic acid, which activates hundreds of genes. Vitamin A deficiency is common in developing countries, especially in Sub-Saharan Africa and Southeast Asia. Deficiency can occur at any age but is most common in pre-school age children and pregnant women, the latter due to a need to transfer retinol to the fetus. Vitamin A deficiency is estimated to affect approximately one-third of children under the age of five around the world, resulting in hundreds of thousands of cases of blindness and deaths from childhood diseases because of immune system failure. Reversible night blindness is an early indicator of low vitamin A status. Plasma retinol is used as a biomarker to confirm vitamin A deficiency. Breast milk retinol can indicate a deficiency in nursing mothers. Neither of these measures indicates the status of liver reserves. The European Union and various countries have set recommendations for dietary intake, and upper limits for safe intake. Vitamin A toxicity, also referred to as hypervitaminosis A, occurs when too much vitamin A accumulates in the body. Symptoms may include nervous system effects, liver abnormalities, fatigue, muscle weakness, bone and skin changes, and others. The adverse effects of both acute and chronic toxicity are reversed after consumption of high-dose supplements is stopped. Definition Vitamin A is a fat-soluble vitamin, a category that also includes vitamins D, E and K. The vitamin encompasses several chemically related, naturally occurring compounds or metabolites, i.e., vitamers, that all contain a β-ionone ring. The primary dietary form is retinol, which may have a fatty acid molecule attached, creating a retinyl ester, when stored in the liver. 
Retinol, the transport and storage form of vitamin A, is interconvertible with retinal: it is oxidized to retinal by retinol dehydrogenases and regenerated from retinal by retinaldehyde reductases:
retinol + NAD+ → retinal + NADH + H+ (retinol dehydrogenase)
retinal + NADH + H+ → retinol + NAD+ (retinaldehyde reductase)
Retinal (also known as retinaldehyde) can be irreversibly converted to all-trans-retinoic acid by the action of retinal dehydrogenase:
retinal + NAD+ + H2O → retinoic acid + NADH + H+
Retinoic acid is actively transported into the cell nucleus by CRABP2, where it regulates thousands of genes by binding directly to gene targets via retinoic acid receptors. In addition to retinol, retinal and retinoic acid, there are plant-, fungi- or bacteria-sourced carotenoids which can be metabolized to retinol, and are thus vitamin A vitamers. There are also what are referred to as 2nd, 3rd and 4th generation retinoids, which are not considered vitamin A vitamers because they cannot be converted to retinol, retinal or all-trans-retinoic acid. Some are prescription drugs, oral or topical, for various indications. Examples are etretinate, acitretin, adapalene, bexarotene, tazarotene and trifarotene. Absorption, metabolism and excretion Retinyl esters from animal-sourced foods (or synthesized for dietary supplements for humans and domesticated animals) are acted upon by retinyl ester hydrolases in the lumen of the small intestine to release free retinol. Retinol enters enterocytes by passive diffusion. Absorption efficiency is in the range of 70 to 90%. Humans are at risk for acute or chronic vitamin A toxicity because there are no mechanisms to suppress absorption or excrete the excess in urine. Within the cell, retinol is bound to retinol-binding protein 2 (RBP2). It is then enzymatically re-esterified by the action of lecithin retinol acyltransferase and incorporated into chylomicrons that are secreted into the lymphatic system. Unlike retinol, β-carotene is taken up by enterocytes by the membrane transporter protein scavenger receptor B1 (SCARB1). The protein is upregulated in times of vitamin A deficiency. If vitamin A status is in the normal range, SCARB1 is downregulated, reducing absorption. Also downregulated is the enzyme beta-carotene 15,15'-dioxygenase (formerly known as beta-carotene 15,15'-monooxygenase), coded for by the BCMO1 gene, responsible for symmetrically cleaving β-carotene into retinal. Absorbed β-carotene is either incorporated as such into chylomicrons or first converted to retinal and then retinol, bound to RBP2. After a meal, roughly two-thirds of the chylomicrons are taken up by the liver, with the remainder delivered to peripheral tissues. Peripheral tissues also can convert chylomicron β-carotene to retinol. The capacity to store retinol in the liver means that well-nourished humans can go months on a vitamin A deficient diet without manifesting signs and symptoms of deficiency. Two liver cell types are responsible for storage and release: hepatocytes and hepatic stellate cells (HSCs). Hepatocytes take up the lipid-rich chylomicrons, bind retinol to retinol-binding protein 4 (RBP4), and transfer the retinol-RBP4 to HSCs for storage in lipid droplets as retinyl esters. Mobilization reverses the process: retinyl ester hydrolase releases free retinol, which is transferred to hepatocytes, bound to RBP4, and put into blood circulation. Except shortly after a meal, or when consumption of large amounts exceeds liver storage capacity, more than 95% of retinol in circulation is bound to RBP4. 
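The reversible and irreversible conversions just described can be summarized as a tiny directed graph. The following Python sketch is purely illustrative; the enzyme names are taken from the text, while the graph encoding itself is an added assumption.

```python
# Minimal sketch of the retinol <-> retinal -> retinoic acid network
# described above. Each entry is (substrate, product, enzyme, reversible).
REACTIONS = [
    ("retinol", "retinal", "retinol dehydrogenase", True),
    ("retinal", "retinoic acid", "retinal dehydrogenase", False),
]

def reachable(start: str) -> set:
    """Return every metabolite reachable from `start`, honoring reversibility."""
    edges = []
    for substrate, product, _enzyme, reversible in REACTIONS:
        edges.append((substrate, product))
        if reversible:
            edges.append((product, substrate))
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for a, b in edges:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
    return seen

print(reachable("retinol"))        # retinol, retinal and retinoic acid
print(reachable("retinoic acid"))  # only retinoic acid: the last step is one-way
```

Starting from retinoic acid, nothing else is reachable, mirroring the text's point that the retinal-to-retinoic-acid step is irreversible.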
Carnivores Strict carnivores manage vitamin A differently from omnivores and herbivores. Carnivores are more tolerant of high intakes of retinol because those species have the ability to excrete retinol and retinyl esters in urine. Carnivores also have the ability to store more in the liver, due to a higher ratio of liver HSCs to hepatocytes compared to omnivores and herbivores. For humans, liver content can range from 20 to 30 μg/gram wet weight. Notoriously, polar bear liver is acutely toxic to humans because its content has been reported in the range of 2,215 to 10,400 μg/g wet weight. As noted, in humans, retinol circulates bound to RBP4. Carnivores maintain retinol-RBP4 within a tight range while also having retinyl esters in circulation. Bound retinol is delivered to cells while the esters are excreted in the urine. In general, carnivore species are poor converters of ionone-containing carotenoids, and pure carnivores such as the felids (cats) lack the cleaving enzyme entirely. They must have retinol or retinyl esters in their diet. Herbivores Herbivores consume ionone-containing carotenoids and convert those to retinal. Some species, including cattle and horses, have measurable amounts of β-carotene circulating in the blood and stored in body fat, creating yellow fat cells. Most species have white fat and no β-carotene in circulation. Activation and excretion In the liver and peripheral tissues of humans, retinol is reversibly converted to retinal by the action of alcohol dehydrogenases, which are also responsible for the conversion of ethanol to acetaldehyde. Retinal is irreversibly oxidized to retinoic acid (RA) by the action of aldehyde dehydrogenases. RA regulates the activation or deactivation of genes. The oxidative degradation of RA is induced by RA – its presence triggers its removal, making for a short-acting gene transcription signal. This deactivation is mediated by a cytochrome P450 (CYP) enzyme system, specifically the enzymes CYP26A1, CYP26B1 and CYP26C1. CYP26A1 is the predominant form in the human liver; all other human adult tissues contain higher levels of CYP26B1. CYP26C1 is expressed mainly during embryonic development. All three convert retinoic acid into 4-oxo-RA, 4-OH-RA and 18-OH-RA. Glucuronic acid forms water-soluble glucuronide conjugates with the oxidized metabolites, which are then excreted in urine and feces. Metabolic functions Other than for vision, the metabolic functions of vitamin A are mediated by all-trans-retinoic acid (RA). The formation of RA from retinal is irreversible. To prevent accumulation, RA is oxidized and eliminated fairly quickly, i.e., it has a short half-life. Three cytochromes catalyze the oxidation of retinoic acid. The genes for Cyp26A1, Cyp26B1 and Cyp26C1 are induced by high levels of RA, providing a self-regulating feedback loop. Vision and eye health Vitamin A status involves eye health via two separate functions. Retinal is an essential factor in rod cells and cone cells in the retina responding to light exposure by sending nerve signals to the brain. An early sign of vitamin A deficiency is night blindness. Vitamin A in the form of retinoic acid is essential to normal epithelial cell functions. Severe vitamin A deficiency, common in infants and young children in southeast Asia, causes xerophthalmia, characterized by dryness of the conjunctival epithelium and cornea. Untreated, xerophthalmia progresses to corneal ulceration and blindness. Vision The role of vitamin A in the visual cycle is specifically related to the retinal compound. 
Retinol is converted by the enzyme RPE65 within the retinal pigment epithelium into 11-cis-retinal. Within the eye, 11-cis-retinal is bound to the protein opsin to form rhodopsin in rod cells and iodopsin in cone cells. As light enters the eye, the 11-cis-retinal is isomerized to the all-trans form. The all-trans-retinal dissociates from the opsin in a series of steps called photo-bleaching. This isomerization induces a nerve signal along the optic nerve to the visual center of the brain. After separating from opsin, the all-trans-retinal is recycled and converted back to the 11-cis-retinal form by a series of enzymatic reactions, which then completes the cycle by binding to opsin to reform rhodopsin in the retina. In addition, some of the all-trans-retinal may be converted to the all-trans-retinol form and then transported with an interphotoreceptor retinol-binding protein to the retinal pigmented epithelial cells. Further esterification into all-trans-retinyl esters allows for storage of all-trans-retinol within the pigment epithelial cells, to be reused when needed. For this reason, a deficiency in vitamin A inhibits the reformation of rhodopsin and leads to one of the first symptoms, night blindness. Night blindness Night blindness caused by vitamin A deficiency is a reversible difficulty of the eyes in adjusting to dim light. It is common in young children who have a diet inadequate in retinol and β-carotene. A process called dark adaptation typically causes an increase in photopigment amounts in response to low levels of illumination. This increases light sensitivity by up to 100,000 times compared to normal daylight conditions. Significant improvement in night vision takes place within ten minutes, but the process can take up to two hours to reach maximal effect. People expecting to work in a dark environment have traditionally worn red-tinted goggles or stayed in a red-light environment so as not to reverse the adaptation, because red light does not deplete rhodopsin the way yellow or green light does. Xerophthalmia and childhood blindness Xerophthalmia, caused by severe vitamin A deficiency, is characterized by pathologic dryness of the conjunctival epithelium and cornea. The conjunctiva becomes dry, thick, and wrinkled. Indicative is the appearance of Bitot's spots, which are clumps of keratin debris that build up inside the conjunctiva. If untreated, xerophthalmia can lead to dry eye syndrome, corneal ulceration and ultimately to blindness as a result of corneal and retinal damage. Although xerophthalmia is an eye-related issue, its prevention (and reversal) depends on retinoic acid synthesized from retinal, rather than on the 11-cis-retinal-to-rhodopsin cycle. Throughout southeast Asia, estimates are that more than half of children under the age of six years have subclinical vitamin A deficiency and night blindness, with progression to xerophthalmia being the leading cause of preventable childhood blindness. Estimates are that each year there are 350,000 cases of childhood blindness due to vitamin A deficiency. The causes are vitamin A deficiency during pregnancy, followed by low transfer of vitamin A during lactation and infant/child diets low in vitamin A or β-carotene. The prevalence of pre-school age children who are blind due to vitamin A deficiency is lower than expected from the incidence of new cases only because childhood vitamin A deficiency significantly increases all-cause mortality. 
According to a 2017 Cochrane review, vitamin A deficiency, using serum retinol less than 0.70 μmol/L as a criterion, is a major public health problem affecting an estimated 190 million children under five years of age in low- and middle-income countries, primarily in Sub-Saharan Africa and Southeast Asia. In lieu of, or in combination with, food fortification programs, many countries have implemented public health programs in which children are periodically given very large oral doses of synthetic vitamin A, usually retinyl palmitate, as a means of preventing and treating vitamin A deficiency. Doses were 50,000 to 100,000 IU (international units) for children aged 6 to 11 months and 100,000 to 200,000 IU for children aged 12 months to five years, the latter typically every four to six months. In addition to a 24% reduction in all-cause mortality, eye-related results were reported: the prevalence of Bitot's spots at follow-up was reduced by 58%, night blindness by 68%, and xerophthalmia by 69%. Gene regulation RA regulates gene transcription by binding to nuclear receptors known as retinoic acid receptors (RARs; RARα, RARβ, RARγ), which are bound to DNA as heterodimers with retinoid "X" receptors (RXRs; RXRα, RXRβ, RXRγ). RARs and RXRs must dimerize before they can bind to the DNA. Expression of more than 500 genes is responsive to retinoic acid. RAR-RXR heterodimers recognize retinoic acid response elements on DNA. Upon binding retinoic acid, the receptors undergo a conformational change that causes co-repressors to dissociate from the receptors. Coactivators can then bind to the receptor complex, which may help to loosen the chromatin structure from the histones or may interact with the transcriptional machinery. This response upregulates or downregulates the expression of target genes, including the genes that encode for the receptors themselves. To deactivate retinoic acid receptor signaling, three cytochromes (Cyp26A1, Cyp26B1, Cyp26C1) catalyze the oxidation of RA. The genes for these proteins are induced by high concentrations of RA, thus providing a regulatory feedback mechanism. Embryology In vertebrates and invertebrate chordates, RA has a pivotal role during development. Altering levels of endogenous RA signaling during early embryology, either too low or too high, leads to birth defects, including congenital vascular and cardiovascular defects. Of note, fetal alcohol spectrum disorder encompasses congenital anomalies, including craniofacial, auditory, and ocular defects, neurobehavioral anomalies and mental disabilities, caused by maternal consumption of alcohol during pregnancy. It is proposed that in the embryo there is competition between acetaldehyde, an ethanol metabolite, and retinaldehyde (retinal) for aldehyde dehydrogenase activity, resulting in a retinoic acid deficiency; the congenital birth defects are thus attributed to the loss of RA-activated gene expression. In support of this theory, ethanol-induced developmental defects can be ameliorated by increasing the levels of retinol or retinal. As for the risks of too much RA during embryogenesis, the prescription drugs tretinoin (all-trans-retinoic acid) and isotretinoin (13-cis-retinoic acid), used orally or topically for acne treatment, are labeled with boxed warnings for pregnant women or women who may become pregnant, as they are known human teratogens. Immune functions Vitamin A deficiency has been linked to compromised resistance to infectious diseases. 
In countries where early childhood vitamin A deficiency is common, vitamin A supplementation public health programs initiated in the 1980s were shown to reduce the incidence of diarrhea and measles, and all-cause mortality. Vitamin A deficiency also increases the risk of immune system over-reaction, leading to chronic inflammation in the intestinal system, stronger allergic reactions and autoimmune diseases. Lymphocytes and monocytes are types of white blood cells of the immune system. Lymphocytes include natural killer cells, which function in innate immunity, T cells for adaptive cellular immunity and B cells for antibody-driven adaptive humoral immunity. Monocytes differentiate into macrophages and dendritic cells. Some lymphocytes migrate to the thymus, where they differentiate into several types of T cells, in some instances referred to as "killer" or "helper" T cells, and differentiate further after leaving the thymus. Each subtype has functions driven by the types of cytokines secreted and the organs to which the cells preferentially migrate, also described as trafficking or homing. Retinoic acid (RA) triggers receptors in bone marrow, resulting in generation of new white blood cells. RA regulates the proliferation and differentiation of white blood cells, the directed movement of T cells to the intestinal system, and the up- and down-regulation of lymphocyte function. If RA is adequate, T helper cell subtype Th1 is suppressed and subtypes Th2, Th17 and iTreg (for regulatory) are induced. Dendritic cells located in intestinal tissue have enzymes that convert retinal to all-trans-retinoic acid, to be taken up by retinoic acid receptors on lymphocytes. The process triggers gene expression that leads to T cell types Th2, Th17 and iTreg moving to and taking up residence in mesenteric lymph nodes and Peyer's patches, respectively outside and on the inner wall of the small intestine. The net effect is a down-regulation of immune activity, seen as tolerance of food allergens, and tolerance of resident bacteria and other organisms in the microbiome of the large intestine. In a vitamin A-deficient state, innate immunity is compromised and pro-inflammatory Th1 cells predominate. Skin Deficiencies in vitamin A have been linked to an increased susceptibility to skin infection and inflammation. Vitamin A appears to modulate the innate immune response and maintains homeostasis of epithelial tissues and mucosa through its metabolite, retinoic acid (RA). As part of the innate immune system, toll-like receptors in skin cells respond to pathogens and cell damage by inducing a pro-inflammatory immune response, which includes increased RA production. The epithelium of the skin encounters bacteria, fungi and viruses. Keratinocytes of the epidermal layer of the skin produce and secrete antimicrobial peptides (AMPs). Production of the AMPs resistin and cathelicidin is promoted by RA. Units of measurement As some carotenoids can be converted into vitamin A, attempts have been made to determine how much of them in the diet is equivalent to a particular amount of retinol, so that comparisons can be made of the benefit of different foods. The situation can be confusing because the accepted equivalences have changed over time. For many years, a system of equivalencies was used in which an international unit (IU) was equal to 0.3 μg of retinol (~1 nmol), 0.6 μg of β-carotene, or 1.2 μg of other provitamin-A carotenoids. 
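The historical IU equivalences above lend themselves to a small conversion helper. This is an illustrative sketch of the legacy system only; the retinol equivalent and retinol activity equivalent units described next superseded it, and the mapping and function names are choices made for the example.

```python
# Legacy international unit (IU) equivalences quoted above:
# 1 IU = 0.3 ug retinol = 0.6 ug beta-carotene = 1.2 ug other provitamin-A
# carotenoids. Purely illustrative of the pre-2001 system.
UG_PER_IU = {
    "retinol": 0.3,
    "beta-carotene": 0.6,
    "other provitamin-A carotenoid": 1.2,
}

def iu_to_micrograms(iu: float, form: str) -> float:
    """Convert an IU amount of the given vitamer form to micrograms."""
    return iu * UG_PER_IU[form]

# A legacy 5,000 IU retinol label claim equals 1,500 ug of retinol:
print(iu_to_micrograms(5000, "retinol"))  # 1500.0
```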
This relationship was alternatively expressed by the retinol equivalent (RE): one RE corresponded to 1 μg retinol, to 2 μg β-carotene dissolved in oil, to 6 μg β-carotene in foods, and to 12 μg of either α-carotene, γ-carotene, or β-cryptoxanthin in food. Newer research has shown that the absorption of provitamin-A carotenoids is only half as much as previously thought. As a result, in 2001 the US Institute of Medicine recommended a new unit, the retinol activity equivalent (RAE). Each μg RAE corresponds to 1 μg retinol, 2 μg of β-carotene in oil, 12 μg of "dietary" β-carotene, or 24 μg of the three other dietary provitamin-A carotenoids. Animal models have shown that at the enterocyte cell wall, β-carotene is taken up by the membrane transporter protein scavenger receptor class B, type 1 (SCARB1). Absorbed β-carotene is converted to retinal and then retinol. The first step of the conversion process consists of one molecule of β-carotene being cleaved by the enzyme β-carotene-15,15'-monooxygenase, which in humans and other mammalian species is encoded by the BCMO1 gene, into two molecules of retinal. When plasma retinol is in the normal range, gene expression for SCARB1 and BCMO1 is suppressed, creating a feedback loop that suppresses β-carotene absorption and conversion. Absorption suppression is not complete, as the receptor CD36 is not downregulated. Dietary recommendations The US National Academy of Medicine updated Dietary Reference Intakes (DRIs) in 2001 for vitamin A, which included Recommended Dietary Allowances (RDAs). For infants up to 12 months, there was not sufficient information to establish an RDA, so Adequate Intake (AI) is shown instead. As for safety, tolerable upper intake levels (ULs) were also established. For ULs, carotenoids are not added when calculating total vitamin A intake for safety assessments. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men of ages 15 and older, the PRIs are set respectively at 650 and 750 μg RE/day. PRI for pregnancy is 700 μg RE/day, for lactation 1,300 μg RE/day. For children of ages 1–14 years, the PRIs increase with age from 250 to 600 μg RE/day. These PRIs are similar to the US RDAs. The EFSA reviewed the same safety question as the United States, and set ULs at 800 for ages 1–3, 1,100 for ages 4–6, 1,500 for ages 7–10, 2,000 for ages 11–14, 2,600 for ages 15–17 and 3,000 μg/day for ages 18 and older for preformed vitamin A, i.e., not including dietary contributions from carotenoids. Safety Vitamin A toxicity (hypervitaminosis A) occurs when too much vitamin A accumulates in the body. It comes from consumption of preformed vitamin A but not of carotenoids, as conversion of the latter to retinol is suppressed by the presence of adequate retinol. Retinol safety There are historical reports of acute hypervitaminosis from Arctic explorers consuming bearded seal or polar bear liver, both very rich sources of stored retinol, and there are also case reports of acute hypervitaminosis from consuming fish liver, but otherwise there is no risk from consuming too much via commonly consumed foods. Only consumption of retinol-containing dietary supplements can result in acute or chronic toxicity. Acute toxicity occurs after a single dose, or short-term doses, of greater than 150,000 μg. 
Symptoms include blurred vision, nausea, vomiting, dizziness and headache within 8 to 24 hours. For infants ages 0–6 months given an oral dose to prevent development of vitamin A deficiency, a bulging skull fontanel was evident after 24 hours, usually resolving by 72 hours. Chronic toxicity may occur with long-term consumption of vitamin A at doses of 25,000–33,000 IU/day for several months. Excessive consumption of alcohol can lead to chronic toxicity at lower intakes. Symptoms may include nervous system effects, liver abnormalities, fatigue, muscle weakness, bone and skin changes and others. The adverse effects of both acute and chronic toxicity are reversed after consumption is stopped. In 2001, for the purpose of determining ULs for adults, the US Institute of Medicine considered three primary adverse effects and settled on two: teratogenicity, i.e., causing birth defects, and liver abnormalities. Reduced bone mineral density was considered, but dismissed because the human evidence was contradictory. During pregnancy, especially during the first trimester, consumption of retinol in amounts exceeding 4,500 μg/day increased the risk of birth defects, while amounts below that did not, establishing a "no-observed-adverse-effect level" (NOAEL). Given the quality of the clinical trial evidence, the NOAEL was divided by an uncertainty factor of 1.5 to set the UL for women of reproductive age at 3,000 μg/day of preformed vitamin A. For all other adults, liver abnormalities were detected at intakes above 14,000 μg/day. Given the weak quality of the clinical evidence, an uncertainty factor of 5 was used, and with rounding, the UL was set at 3,000 μg/day. For children, ULs were extrapolated from the adult value, adjusted for relative body weight. For infants, several case studies reported adverse effects that include bulging fontanels, increased intracranial pressure, loss of appetite, hyperirritability and skin peeling after chronic ingestion on the order of 6,000 or more μg/day. Given the small database, an uncertainty factor of 10 divided into the "lowest-observed-adverse-effect level" (LOAEL) led to a UL of 600 μg/day. β-carotene safety No adverse effects other than carotenemia have been reported for consumption of β-carotene rich foods. Supplementation with β-carotene does not cause hypervitaminosis A. Two large clinical trials (ATBC and CARET) were conducted in tobacco smokers to see if years of β-carotene supplementation at 20 or 30 mg/day in oil-filled capsules would reduce the risk of lung cancer. These trials were implemented because observational studies had reported a lower incidence of lung cancer in tobacco smokers who had diets higher in β-carotene. Unexpectedly, high-dose β-carotene or retinol supplementation resulted in a higher incidence of lung cancer and higher total mortality, in part from cardiac deaths. Taking this and other evidence into consideration, the U.S. Institute of Medicine decided not to set a Tolerable Upper Intake Level (UL) for β-carotene. The European Food Safety Authority, acting for the European Union, also decided not to set a UL for β-carotene. Carotenosis Carotenoderma, also referred to as carotenemia, is a benign and reversible medical condition in which an excess of dietary carotenoids results in orange discoloration of the outermost skin layer. It is associated with a high blood β-carotene value. This can occur after a month or two of consumption of β-carotene rich foods, such as carrots, carrot juice, tangerine juice, mangos, or, in Africa, red palm oil. 
β-carotene dietary supplements can have the same effect. The discoloration extends to the palms of the hands and soles of the feet, but not to the whites of the eyes, which helps distinguish the condition from jaundice. Consumption of greater than 30 mg/day for a prolonged period has been confirmed as leading to carotenemia. U.S. labeling For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For vitamin A labeling purposes, 100% of the Daily Value was set at 5,000 IU, but it was revised to 900 μg RAE on 27 May 2016. A table of the old and new adult daily values is provided at Reference Daily Intake. Sources Vitamin A is found in many foods. Vitamin A in food exists either as preformed retinol, an active form of vitamin A found in animal liver, dairy and egg products, and some fortified foods, or as provitamin A carotenoids, plant pigments that are converted to vitamin A after consumption of carotenoid-rich plant foods, typically in red, orange, or yellow colors. Carotenoid pigments may be masked by chlorophylls in dark green leaf vegetables, such as spinach. The relatively low bioavailability of plant-food carotenoids results partly from their binding to proteins; chopping, homogenizing or cooking disrupts these plant proteins, increasing provitamin A carotenoid bioavailability. Vegetarian and vegan diets can provide sufficient vitamin A in the form of provitamin A carotenoids if the diet contains carrots, carrot juice, sweet potatoes, green leafy vegetables such as spinach and kale, and other carotenoid-rich foods. In the U.S., the average daily intake of β-carotene is in the range 2–7 mg. Some manufactured foods and dietary supplements are sources of vitamin A or β-carotene. Fortification Some countries require or recommend fortification of foods. As of January 2022, 37 countries, mostly in Sub-Saharan Africa, require food fortification of cooking oil, rice, wheat flour or maize (corn) flour with vitamin A, usually as retinyl palmitate or retinyl acetate. Examples include Pakistan (oil, 11.7 mg/kg) and Nigeria (oil, 6 mg/kg; wheat and maize flour, 2 mg/kg). An additional 12 countries, mostly in southeast Asia, have a voluntary fortification program. For example, the government of India recommends 7.95 mg/kg in oil and 0.626 mg/kg for wheat flour and rice. However, compliance in countries with voluntary fortification is lower than in countries with mandatory fortification. No countries in Europe or North America fortify foods with vitamin A. Other means of fortifying foods via genetic engineering have been explored. Research on rice began in 1982. The first field trials of golden rice cultivars were conducted in 2004. The result was "Golden Rice", a variety of Oryza sativa rice produced through genetic engineering to biosynthesize β-carotene, a precursor of retinol, in the edible parts of rice. By May 2018, regulatory agencies in the United States, Canada, Australia and New Zealand had concluded that Golden Rice met food safety standards. In July 2021, the Philippines became the first country to officially issue a biosafety permit for commercially propagating Golden Rice. However, in April 2023, the Supreme Court of the Philippines issued a Writ of Kalikasan ordering the Department of Agriculture to stop the commercial distribution of genetically modified rice in the country. Vitamin A supplementation (VAS) Delivery of oral high-dose supplements remains the principal strategy for minimizing deficiency. 
As of 2017, more than 80 countries worldwide were implementing universal VAS programs targeted to children 6–59 months of age through semi-annual national campaigns. Doses in these programs are a single dose of 50,000 or 100,000 IU for children aged 6 to 11 months and 100,000 to 200,000 IU for children aged 12 months to five years, every four to six months. Deficiency Primary causes Vitamin A deficiency is common in developing countries, especially in Sub-Saharan Africa and Southeast Asia. Deficiency can occur at any age, but is most common in pre-school-age children and pregnant women, the latter due to a need to transfer retinol to the fetus. The causes are low intake of retinol-containing, animal-sourced foods and low intake of carotene-containing, plant-sourced foods. Vitamin A deficiency is estimated to affect approximately one-third of children under the age of five around the world, possibly leading to the deaths of 670,000 children under five annually. Between 250,000 and 500,000 children in developing countries become blind each year owing to vitamin A deficiency. Vitamin A deficiency is "the leading cause of preventable childhood blindness", according to UNICEF. It also increases the risk of death from common childhood conditions, such as diarrhea. UNICEF regards addressing vitamin A deficiency as critical to reducing child mortality, the fourth of the United Nations' Millennium Development Goals. During diagnosis, night blindness and dry eyes are signs of vitamin A deficiency that can be recognized without requiring biochemical tests. Plasma retinol is used to confirm vitamin A status. A plasma concentration of about 2.0 μmol/L is normal; less than 0.70 μmol/L (equivalent to 20 μg/dL) indicates moderate vitamin A deficiency, and less than 0.35 μmol/L (10 μg/dL) indicates severe vitamin A deficiency. Breast milk retinol of less than 8 μg/gram milk fat is considered insufficient. One weakness of these measures is that they are not good indicators of liver vitamin A stores, held as retinyl esters in hepatic stellate cells. The amount of vitamin A leaving the liver, bound to retinol binding protein (RBP), is under tight control as long as there are sufficient liver reserves. Only when liver content of vitamin A drops below approximately 20 μg/gram will concentration in the blood decline. Secondary causes There are causes for deficiency other than low dietary intake of vitamin A as retinol or carotenes. Adequate dietary protein and caloric energy are needed for a normal rate of synthesis of RBP, without which retinol cannot be mobilized to leave the liver. Systemic infections can cause transient decreases in RBP synthesis even if protein-calorie malnutrition is absent. Chronic alcohol consumption reduces liver vitamin A storage. Non-alcoholic fatty liver disease (NAFLD), characterized by the accumulation of fat in the liver, is the hepatic manifestation of metabolic syndrome. Liver damage from NAFLD reduces liver storage capacity for retinol and reduces the ability to mobilize liver stores to maintain normal circulating concentration. Vitamin A appears to be involved in the pathogenesis of anemia by diverse biological mechanisms, such as the enhancement of growth and differentiation of erythrocyte progenitor cells, potentiation of immunity to infection, and mobilization of iron stores from tissues. Animal requirements All vertebrate and chordate species require vitamin A, either as dietary carotenoids or as preformed retinol from consuming other animals. 
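Returning briefly to the plasma retinol cutoffs quoted under Deficiency above, they can be wrapped in a small helper. The thresholds come from the text; the molar mass of retinol (about 286.45 g/mol) is an added assumption, used only to reproduce the roughly 20 μg/dL equivalence the text gives for 0.70 μmol/L.

```python
# Hypothetical helper around the diagnostic cutoffs quoted above:
# ~2.0 umol/L normal, <0.70 moderate deficiency, <0.35 severe deficiency.
RETINOL_MOLAR_MASS = 286.45  # g/mol; an assumption, not stated in the text

def to_ug_per_dl(umol_per_l: float) -> float:
    """Convert plasma retinol from umol/L to ug/dL."""
    # umol/L * ug/umol gives ug/L; divide by 10 for ug/dL.
    return umol_per_l * RETINOL_MOLAR_MASS / 10.0

def classify_plasma_retinol(umol_per_l: float) -> str:
    if umol_per_l < 0.35:
        return "severe deficiency"
    if umol_per_l < 0.70:
        return "moderate deficiency"
    return "adequate"

print(round(to_ug_per_dl(0.70), 1))  # ~20.1, matching the ~20 ug/dL in the text
print(classify_plasma_retinol(0.5))  # moderate deficiency
```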
Deficiencies have been reported in laboratory-raised and pet dogs, cats, birds, reptiles and amphibians, and also in commercially raised chickens and turkeys. Herbivore species such as horses, cattle and sheep can get sufficient β-carotene from green pasture to be healthy, but the content in pasture grass dried by drought, and in long-stored hay, can be too low, leading to vitamin A deficiency. Omnivore and carnivore species, especially those toward the top of the food chain, can accrue large amounts of retinyl esters in their livers, or else excrete retinyl esters in urine as a means of dealing with surplus. Before the era of synthetic retinol, cod liver oil, high in vitamins A and D, was a commonly consumed dietary supplement. Invertebrates cannot synthesize carotenoids or retinol, and thus must acquire these essential nutrients from consumption of algae, plants or animals. Medical uses In 2022, vitamin A was the 346th most commonly prescribed medication in the United States, with more than 50,000 prescriptions. Preventing and treating vitamin A deficiency Recognition of the prevalence and consequences of vitamin A deficiency has led to governments and non-government organizations promoting vitamin A fortification of foods and creating programs that administer large bolus-size oral doses of vitamin A to young children every four to six months. In 2008, the World Health Organization estimated that vitamin A supplementation over a decade in 40 countries averted 1.25 million deaths due to vitamin A deficiency. A Cochrane review reported that vitamin A supplementation is associated with a clinically meaningful reduction in morbidity and mortality in children six months to five years of age. All-cause mortality was reduced by 14%, and incidence of diarrhea by 12%. However, a Cochrane review by the same group concluded there was insufficient evidence to recommend blanket vitamin A supplementation for infants one to six months of age, as it did not reduce infant mortality or morbidity. Acne Topical retinoic acid and retinol The retinoic acids tretinoin (all-trans-retinoic acid) and isotretinoin (13-cis-retinoic acid) are prescription topical medications used to treat moderate to severe cystic acne and acne not responsive to other treatments. These are usually applied as a skin cream to the face after cleansing to remove make-up and skin oils. Tretinoin and isotretinoin act by binding to two nuclear receptor families within keratinocytes: the retinoic acid receptors (RAR) and the retinoid X receptors (RXR). These events contribute to the normalization of follicular keratinization and decreased cohesiveness of keratinocytes, resulting in reduced follicular occlusion and microcomedone formation. The retinoid-receptor complex competes for coactivator proteins of AP-1, a key transcription factor involved in inflammation. Retinoic acid products also reduce sebum secretion, a nutrient source for bacteria, from facial pores. These drugs, when applied topically, are US-designated Pregnancy Category C (animal reproduction studies have shown an adverse effect on the fetus), and should not be used by pregnant women or women who are anticipating becoming pregnant. Many countries have established physician- and patient-education pregnancy prevention policies. Trifarotene is a prescription retinoid for the topical treatment of acne vulgaris. It functions as a retinoic acid receptor (RAR)-γ agonist. 
Non-prescription topical products that make health claims for reducing facial acne, combating dark spots and reducing the wrinkles and lines associated with aging often contain retinyl palmitate. The hypothesis is that this is absorbed and de-esterified to free retinol, then converted to retinaldehyde and further metabolized to all-trans-retinoic acid, whereupon it has the same effects as prescription products, with fewer side effects. There is some ex vivo evidence with human skin that esterified retinol is absorbed and then converted to retinol. In addition to esterified retinol, some of these products contain hydroxypinacolone retinoate, identified as esterified 9-cis-retinoic acid. Oral isotretinoin Oral isotretinoin (a retinoic acid isomer) is recommended for treating treatment-resistant acne, acne that can lead to scarring, and acne that is associated with psychosocial distress. It is approved by the FDA for treating severe acne vulgaris that is resistant to other treatment options. Isotretinoin is a known teratogen, with an estimated 20–35% risk of physical birth defects in infants exposed to isotretinoin in utero, including numerous congenital defects such as craniofacial defects, cardiovascular and neurological malformations and thymic disorders. The rate of neurocognitive impairment in the absence of any physical defects has been estimated at 30–60%. For these reasons, physician- and patient-education programs were initiated, recommending that for women of child-bearing age, contraception be initiated a month before starting oral (or topical) isotretinoin and continued for a month after treatment ends. In the US, isotretinoin was released to the market in 1982 as a revolutionary treatment for severe and refractory acne vulgaris. It was shown that a dose of 0.5–1.0 mg/kg body weight/day is enough to produce a reduction in sebum excretion by 90% within a month or two, but the recommended treatment duration is 4 to 6 months. The mechanism by which orally consumed retinoic acid (RA), as all-trans-tretinoin or 13-cis-isotretinoin, improves facial skin health is thought to be by switching on genes that differentiate keratinocytes (immature skin cells) into mature epidermal cells. RA reduces the size and secretion of the sebaceous glands, and by doing so reduces bacterial numbers in both the ducts and skin surface. It reduces inflammation via inhibition of chemotactic responses of monocytes and neutrophils. 
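The dosing figures above (0.5–1.0 mg per kg of body weight per day, over roughly 4 to 6 months) imply simple arithmetic for an individual course. The sketch below is illustrative only, with a made-up 60 kg body weight and a 30-day month approximation, and is not dosing guidance.

```python
# Illustrative arithmetic based on the oral isotretinoin figures in the text:
# 0.5-1.0 mg/kg/day for about 4-6 months. The patient weight and the 30-day
# month approximation are invented example values.
body_weight_kg = 60
dose_mg_per_kg = (0.5, 1.0)
course_days = (4 * 30, 6 * 30)

daily_low = dose_mg_per_kg[0] * body_weight_kg   # 30 mg/day
daily_high = dose_mg_per_kg[1] * body_weight_kg  # 60 mg/day
print(f"Daily dose: {daily_low:.0f}-{daily_high:.0f} mg")
print(f"Course total: {daily_low * course_days[0]:.0f}-"
      f"{daily_high * course_days[1]:.0f} mg")   # 3600-10800 mg
```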
For these reasons, numerous studies have examined the potential role of vitamin A supplementation in improving the immune response or helping the body fight off infection. The evidence supporting vitamin A supplementation to prevent upper respiratory tract infections in children under the age of 7 years is weak: the available low-quality clinical trials do not show vitamin A to be effective or beneficial. More research is needed to consider different doses, the ages and populations of people who may potentially benefit, and the length of treatment. Synthesis Biosynthesis Carotenoid synthesis takes place in plants, certain fungi, and bacteria. Structurally, carotenes are tetraterpenes, meaning that they are synthesized biochemically from four 10-carbon terpene units, which in turn are formed from eight 5-carbon isoprene units. Intermediate steps are the creation of a 40-carbon phytoene molecule, conversion to lycopene via desaturation, and then creation of ionone rings at both ends of the molecule. β-carotene has a β-ionone ring at both ends, meaning that the molecule can be divided symmetrically to yield two retinol molecules. α-Carotene has a β-ionone ring at one end and an ε-ionone ring at the other, so it has half the retinol conversion capacity. In most animal species, retinol is synthesized from the breakdown of the plant-formed provitamin, β-carotene. First, the enzyme beta-carotene 15,15'-dioxygenase (BCO-1) cleaves β-carotene at the central double bond, creating an epoxide. This epoxide is then attacked by water, creating two hydroxyl groups in the center of the structure. Cleavage occurs when these alcohols are oxidized to the aldehydes using NAD+. The resultant retinal is then quickly reduced to retinol by the enzyme retinol dehydrogenase. Omnivore species such as dogs, wolves, coyotes and foxes are in general low producers of BCO-1. The enzyme is lacking in felids (cats), meaning that their vitamin A requirements are met from the retinyl ester content of prey animals. Industrial synthesis β-carotene can be extracted from the fungus Blakeslea trispora, the marine alga Dunaliella salina, or genetically modified yeast Saccharomyces cerevisiae, the last starting with xylose as a substrate. Chemical synthesis uses either a method developed by BASF or a Grignard reaction utilized by Hoffmann-La Roche. The world market for synthetic retinol is primarily for animal feed, leaving approximately 13% for a combination of food, prescription medication and dietary supplement use. Industrial methods for the production of retinol rely on chemical synthesis. The first industrialized synthesis of retinol was achieved by the company Hoffmann-La Roche in 1947. In the following decades, eight other companies developed their own processes. β-ionone, synthesized from acetone, is the essential starting point for all industrial syntheses. Each process involves elongating the unsaturated carbon chain. Pure retinol is extremely sensitive to oxidation and is prepared and transported at low temperatures in oxygen-free atmospheres. When prepared as a dietary supplement or food additive, retinol is stabilized as the ester derivatives retinyl acetate or retinyl palmitate. Prior to 1999, three companies, Roche, BASF and Rhône-Poulenc, controlled 96% of global vitamin A sales. 
In 2001, the European Commission imposed total fines of 855.22 million euros on these and five other companies for their participation in eight distinct market-sharing and price-fixing cartels that dated back to 1989. Roche sold its vitamin division to DSM in 2003. DSM and BASF have the major share of industrial production. A biosynthetic alternative uses the genetically engineered yeast Saccharomyces cerevisiae to synthesize retinal and retinol, with xylose as the starting substrate. This was accomplished by having the yeast first synthesize β-carotene and then express the cleaving enzyme β-carotene 15,15'-dioxygenase to yield retinal. Research Brain Pre-clinical animal research in mice also found retinoic acid, the bioactive metabolite of vitamin A, to have an effect on brain areas responsible for memory and learning. Cancer Meta-analyses of intervention and observational trials for various types of cancer report mixed results. Supplementation with β-carotene did not appear to decrease the risk of cancer overall, nor of specific cancers, including pancreatic, colorectal, prostate and breast cancer, melanoma, or skin cancer generally. High-dose β-carotene supplementation unexpectedly resulted in a higher incidence of lung cancer and of total mortality in people who were cigarette smokers. For dietary retinol, no effects of high dietary intake were observed on breast cancer survival or on the risk of liver, bladder or colorectal cancer, although the last review did report a lower risk with higher β-carotene consumption. In contrast, an inverse association was reported between retinol intake and relative risk of esophageal cancer, gastric cancer, ovarian cancer, pancreatic cancer, lung cancer, melanoma, and cervical cancer. For lung cancer, an inverse association was also seen for β-carotene intake, separate from the retinol results. When high dietary intake was compared to low dietary intake, the decreases in relative risk were in the range of 15 to 20%. For gastric cancer, a meta-analysis of prevention trials reported a 29% decrease in relative risk from retinol supplementation at 1500 μg/day. Fetal alcohol spectrum disorder Fetal alcohol spectrum disorder (FASD), formerly referred to as fetal alcohol syndrome, presents as craniofacial malformations, neurobehavioral disorders and mental disabilities, all attributed to exposure of the human embryo to alcohol during fetal development. The risk of FASD depends on the amount consumed, the frequency of consumption, and the points in pregnancy at which the alcohol is consumed. Ethanol is a known teratogen, i.e., it causes birth defects. Ethanol is metabolized by alcohol dehydrogenase enzymes into acetaldehyde. The subsequent oxidation of acetaldehyde into acetate is performed by aldehyde dehydrogenase enzymes. Given that retinoic acid (RA) regulates numerous embryonic and differentiation processes, one of the proposed mechanisms for the teratogenic effects of ethanol is competition for the enzymes required for the biosynthesis of RA from vitamin A. Animal research demonstrates that in the embryo, the competition takes place between acetaldehyde and retinaldehyde for aldehyde dehydrogenase activity. In this model, acetaldehyde inhibits the production of retinoic acid by retinaldehyde dehydrogenase. Ethanol-induced developmental defects can be ameliorated by increasing the levels of retinol, retinaldehyde, or retinaldehyde dehydrogenase. 
Thus, animal research supports the reduction of retinoic acid activity as an etiological trigger in the induction of FASD. Malaria Malaria and vitamin A deficiency are both common among young children in sub-Saharan Africa. Vitamin A supplementation to children in regions where vitamin A deficiency is common has repeatedly been shown to reduce overall mortality rates, especially from measles and diarrhea. For malaria, clinical trial results are mixed: some showed that vitamin A treatment did not reduce the incidence of probable malarial fever, while others found no effect on incidence but a reduction in slide-confirmed parasite density and in the number of fever episodes. The question was raised as to whether malaria causes vitamin A deficiency, or vitamin A deficiency contributes to the severity of malaria, or both. Researchers proposed several mechanisms by which malaria (and other infections) could contribute to vitamin A deficiency, including a fever-induced reduction in synthesis of retinol-binding protein (RBP), which is responsible for transporting retinol from liver to plasma and tissues, but reported finding no evidence for a transient depression or restoration of plasma RBP or retinol after a malarial infection was eliminated. In history In 1912, Frederick Gowland Hopkins demonstrated that unknown accessory factors found in milk, other than carbohydrates, proteins, and fats, were necessary for growth in rats. Hopkins received a Nobel Prize for this discovery in 1929. By 1913, one of these substances was independently discovered by Elmer McCollum and Marguerite Davis at the University of Wisconsin–Madison, and Lafayette Mendel and Thomas Burr Osborne at Yale University. McCollum and Davis ultimately received credit because they submitted their paper three weeks before Mendel and Osborne. Both papers appeared in the same issue of the Journal of Biological Chemistry in 1913. The "accessory factors" were termed "fat soluble" in 1918, and later "vitamin A" in 1920. In 1919, Harry Steenbock (University of Wisconsin–Madison) proposed a relationship between yellow plant pigments (β-carotene) and vitamin A. In 1931, Swiss chemist Paul Karrer described the chemical structure of vitamin A. Retinoic acid and retinol were first synthesized in 1946 and 1947 by two Dutch chemists, David Adriaan van Dorp and Jozef Ferdinand Arens. During World War II, German bombers would attack at night to evade British defenses. In order to keep the 1939 invention of a new on-board Airborne Intercept Radar system secret from Germany, the British Ministry of Information told newspapers that the nighttime defensive success of Royal Air Force pilots was due to a high dietary intake of carrots rich in β-carotene, an unproven claim that nevertheless convinced many people. In 1967, George Wald shared the Nobel Prize in Physiology or Medicine for his work on chemical visual processes in the eye. Wald had demonstrated in 1935 that photoreceptor cells in the eye contain rhodopsin, a photopigment composed of the protein opsin and the chromophore 11-cis-retinal. When struck by light, 11-cis-retinal undergoes photoisomerization to all-trans-retinal and, via a signal transduction cascade, sends a nerve signal to the brain. The all-trans-retinal is reduced to all-trans-retinol and travels back to the retinal pigment epithelium to be recycled to 11-cis-retinal and reconjugated to opsin. Wald's work was the culmination of nearly 60 years of research. 
In 1877, Franz Christian Boll identified a light-sensitive pigment in the outer segments of rod cells of the retina that faded/bleached when exposed to light, but was restored after light exposure ceased. He suggested that this substance, by a photochemical process, conveyed the impression of light to the brain. The research was taken up by Wilhelm Kühne, who named the pigment rhodopsin, also known as "visual purple." Kühne confirmed that rhodopsin is extremely sensitive to light, and thus enables vision in low-light conditions, and that it is this chemical decomposition that stimulates nerve impulses to the brain. Research stalled until the identification of "fat-soluble vitamin A", a dietary substance found in milkfat but not lard, which would reverse night blindness and xerophthalmia. In 1925, Fridericia and Holm demonstrated that vitamin A-deficient rats were unable to regenerate rhodopsin after being moved from a light to a dark room. References External links WHO publications on Vitamin A Deficiency Biomolecules Unsaturated compounds
Vitamin A
[ "Chemistry", "Biology" ]
12,089
[ "Vitamin A", "Natural products", "Biochemistry", "Organic compounds", "Biomolecules", "Unsaturated compounds", "Structural biology", "Molecular biology" ]
54,118
https://en.wikipedia.org/wiki/Biotin
Biotin (also known as vitamin B7 or vitamin H) is one of the B vitamins. It is involved in a wide range of metabolic processes, both in humans and in other organisms, primarily related to the utilization of fats, carbohydrates, and amino acids. The name biotin, borrowed from German, derives from the Ancient Greek word for 'life' and the suffix "-in" (a suffix used in chemistry usually to indicate 'forming'). Biotin appears as a white, needle-like crystalline solid. Chemical description Biotin is classified as a heterocyclic compound, with a sulfur-containing tetrahydrothiophene ring fused to a ureido group. A C5-carboxylic acid side chain is appended to the former ring. The ureido ring, containing the −N−CO−N− group, serves as the carbon dioxide carrier in carboxylation reactions. Biotin is a coenzyme for five carboxylase enzymes, which are involved in the catabolism of amino acids and fatty acids, synthesis of fatty acids, and gluconeogenesis. Biotinylation of histone proteins in nuclear chromatin plays a role in chromatin stability and gene expression. Dietary recommendations The US National Academy of Medicine updated Dietary Reference Intakes for many vitamins in 1998. At that time there was insufficient information to establish an estimated average requirement or recommended dietary allowance, terms that exist for most vitamins. In instances such as this, the academy sets adequate intakes (AIs) with the understanding that at some later date, when the physiological effects of biotin are better understood, AIs will be replaced by more exact information. The biotin AI for adults, both males and females, is 30 μg/day (consistent with the US daily value described below). Australia and New Zealand set AIs similar to the US. The European Food Safety Authority (EFSA) also identifies AIs, setting values at 40 μg/day for adults, pregnancy at 40 μg/day, and breastfeeding at 45 μg/day. For children ages 1–17 years, the AIs increase with age from 20 to 35 μg/day. Safety The US National Academy of Medicine estimates upper limits for vitamins and minerals when evidence for a true limit is sufficient. For biotin, however, there is no upper limit because the adverse effects of high biotin intake have not been determined. The EFSA also reviewed safety and reached the same conclusion as in the United States. Labeling regulations For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of daily value. For biotin labeling purposes, 100% of the daily value was 300 μg/day, but as of May 27, 2016, it was revised to 30 μg/day to agree with the adequate intake. Compliance with the updated labeling regulations was required by January 1, 2020, for manufacturers with US$10 million or more in annual food sales, and by January 1, 2021, for manufacturers with lower volume food sales. A table of the old and new adult daily values is provided at Reference Daily Intake. Sources Biotin is stable at room temperature and is not destroyed by cooking. The dietary biotin intake in Western populations has been estimated to be in the range of 35 to 70 μg/day. Nursing infants ingest about 6 μg/day. Biotin is available in dietary supplements, individually or as an ingredient in multivitamins. According to the Global Fortification Data Exchange, biotin deficiency is so rare that no countries require that foods be fortified. Physiology Biotin is a water-soluble B vitamin. Consumption of large amounts as a dietary supplement results in absorption, followed by excretion into urine as unchanged biotin. 
Consumption of biotin as part of a normal diet results in urinary excretion of biotin and biotin metabolites. Absorption Biotin in food is bound to proteins. Digestive enzymes reduce the proteins to biotin-bound peptides. The intestinal enzyme biotinidase, found in pancreatic secretions and in the brush border membranes of all three parts of the small intestine, frees biotin, which is then absorbed from the small intestine. When biotin is consumed as a dietary supplement, absorption is nonsaturable, meaning that even very high amounts are absorbed effectively. Transport across the jejunum is faster than across the ileum. The large intestine microbiota synthesizes amounts of biotin estimated to be similar to the amount taken in the diet, and a significant portion of this biotin exists in the free (protein-unbound) form and, thus, is available for absorption. How much is absorbed in humans is unknown, although a review did report that human colon epithelial cells in vitro demonstrated an ability to take up biotin. Once absorbed, the sodium-dependent multivitamin transporter (SMVT) mediates biotin uptake into the liver. SMVT also binds pantothenic acid, so high intakes of either of these vitamins can interfere with the transport of the other. Metabolism and excretion Biotin catabolism occurs via two pathways. In one, the valeric acid sidechain is cleaved, resulting in bisnorbiotin. In the other, the sulfur is oxidized, resulting in biotin sulfoxide. Urine content is proportionally about half biotin, plus bisnorbiotin, biotin sulfoxide, and small amounts of other metabolites. Factors that affect biotin requirements Chronic alcohol use is associated with a significant reduction in plasma biotin. Intestinal biotin uptake also appears to be sensitive to the effect of the anti-epilepsy drugs carbamazepine and primidone. Relatively low levels of biotin have also been reported in the urine or plasma of patients who have had a partial gastrectomy or have other causes of achlorhydria, as well as burn patients, elderly individuals, and athletes. Pregnancy and lactation may be associated with an increased demand for biotin. In pregnancy, this may be due to a possible acceleration of biotin catabolism, whereas in lactation the reason for the higher demand has yet to be elucidated. Recent studies have shown that marginal biotin deficiency can be present in human gestation, as evidenced by increased urinary excretion of 3-hydroxyisovaleric acid, decreased urinary excretion of biotin and bisnorbiotin, and decreased plasma concentration of biotin. Biosynthesis Biotin, synthesized in plants, is essential to plant growth and development. Bacteria also synthesize biotin, and it is thought that bacteria resident in the large intestine may synthesize biotin that is absorbed and utilized by the host organism. Biosynthesis starts from two precursors, alanine and pimeloyl-CoA. These form 7-keto-8-aminopelargonic acid (KAPA). KAPA is transported from plant peroxisomes to mitochondria, where it is converted to 7,8-diaminopelargonic acid (DAPA) with the help of the enzyme BioA. The enzyme dethiobiotin synthetase (BioD) catalyzes the formation of the ureido ring via a DAPA carbamate activated with ATP, creating dethiobiotin. The last step, conversion of dethiobiotin into biotin, is catalyzed by biotin synthase (BioB), a radical SAM enzyme; the sulfur is donated by an unusual [2Fe-2S] ferredoxin. Depending on the species of bacteria, biotin can be synthesized via multiple pathways. 
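The sequence of intermediates just described can be expressed compactly in code. The following Python sketch encodes only what is stated above (the enzyme for the initial condensation is not named in the text, so it is left unspecified); it illustrates the pathway's ordering, not its chemistry.

```python
# Biotin biosynthesis as an ordered list of (product, enzyme) steps,
# following the description above. The enzyme catalyzing the first
# condensation is not named in the text, so it is recorded as None.
BIOTIN_PATHWAY = [
    ("7-keto-8-aminopelargonic acid (KAPA)", None),
    ("7,8-diaminopelargonic acid (DAPA)", "BioA"),
    ("dethiobiotin", "BioD (dethiobiotin synthetase)"),
    ("biotin", "BioB (biotin synthase, a radical SAM enzyme)"),
]

def trace(precursors="alanine + pimeloyl-CoA"):
    """Print each transformation from the two precursors to biotin."""
    current = precursors
    for product, enzyme in BIOTIN_PATHWAY:
        label = f"  [{enzyme}]" if enzyme else ""
        print(f"{current} -> {product}{label}")
        current = product

trace()
```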
Cofactor biochemistry The enzyme holocarboxylase synthetase covalently attaches biotin to five human carboxylase enzymes: Acetyl-CoA carboxylase alpha (ACC1) Acetyl-CoA carboxylase beta (ACC2) Pyruvate carboxylase (PC) Methylcrotonyl-CoA carboxylase (MCC) Propionyl-CoA carboxylase (PCC) For the first two, biotin serves as a cofactor responsible for the transfer of bicarbonate to acetyl-CoA, converting it to malonyl-CoA for fatty acid synthesis. PC participates in gluconeogenesis. MCC catalyzes a step in leucine metabolism. PCC catalyzes a step in the metabolism of propionyl-CoA. Metabolic degradation of the biotinylated carboxylases leads to the formation of biocytin. This compound is further degraded by biotinidase to release biotin, which is then reutilized by holocarboxylase synthetase. Biotinylation of histone proteins in nuclear chromatin is a posttranslational modification that plays a role in chromatin stability and gene expression. Deficiency Primary biotin deficiency, meaning deficiency due to too little biotin in the diet, is rare because biotin is contained in many foods. Subclinical deficiency can cause mild symptoms, such as hair thinning, brittle fingernails, or skin rash, typically on the face. Aside from inadequate dietary intake (rare), biotin deficiency can be caused by a genetic disorder that affects biotin metabolism. The most common among these is biotinidase deficiency. Low activity of this enzyme causes a failure to recycle biotin from biocytin. Rarer are carboxylase and biotin transporter deficiencies. Neonatal screening for biotinidase deficiency started in the United States in 1984, with many countries now also testing for this genetic disorder at birth. Treatment is a lifelong dietary supplement with biotin. If biotinidase deficiency goes untreated, it can be fatal. Diagnosis Low serum and urine biotin are not sensitive indicators of inadequate biotin intake. However, serum testing can be useful to confirm consumption of biotin-containing dietary supplements, and to establish whether a period of refraining from supplement use is long enough to eliminate the potential for interference with drug tests. Indirect measures depend on the biotin requirement of the carboxylases. 3-Methylcrotonyl-CoA is an intermediate in the catabolism of the amino acid leucine. Without biotin, the pathway diverts to 3-hydroxyisovaleric acid. Urinary excretion of this compound is an early and sensitive indicator of biotin deficiency. Deficiency as a result of metabolic disorders Biotinidase deficiency is a deficiency of the enzyme that recycles biotin, due to an inherited genetic mutation. Biotinidase catalyzes the cleavage of biotin from biocytin and biotinyl-peptides (the proteolytic degradation products of each holocarboxylase) and thereby recycles biotin. It is also important in freeing biotin from dietary protein-bound biotin. Neonatal screening for biotinidase deficiency started in the United States in 1984 and, as of 2017, was reported as required in more than 30 countries. Profound biotinidase deficiency, defined as less than 10% of normal serum enzyme activity (normal activity has been reported as 7.1 nmol/min/mL), has an incidence of 1 in 40,000 to 1 in 60,000, with rates as high as 1 in 10,000 in countries with a high incidence of consanguineous marriages (second cousin or closer). Partial biotinidase deficiency is defined as 10% to 30% of normal serum activity. Incidence data stem from government-mandated newborn screening. 
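The two activity thresholds just defined lend themselves to a direct encoding. A minimal Python sketch, assuming the reported normal activity of 7.1 nmol/min/mL as the reference value (the example inputs are hypothetical):

```python
# Classify serum biotinidase activity using the categories defined above:
# profound deficiency below 10% of normal, partial deficiency 10-30%.
NORMAL_ACTIVITY = 7.1  # nmol/min/mL, reported normal serum activity

def classify_biotinidase(activity):
    """Return the deficiency category for a serum activity measurement."""
    percent = 100.0 * activity / NORMAL_ACTIVITY
    if percent < 10:
        return "profound biotinidase deficiency"
    if percent <= 30:
        return "partial biotinidase deficiency"
    return "not classified as deficient"

print(classify_biotinidase(0.5))  # ~7% of normal -> profound
print(classify_biotinidase(1.5))  # ~21% of normal -> partial
```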
For profound deficiency, treatment is oral dosing with 5 to 20 mg of biotin per day. Seizures are reported as resolving in hours to days, with other symptoms resolving within weeks. Treatment of partial biotinidase deficiency is also recommended, even though some untreated people never manifest symptoms. Lifelong treatment with supplemental biotin is recommended for both profound and partial biotinidase deficiency. Inherited metabolic disorders characterized by deficient activities of biotin-dependent carboxylases are termed multiple carboxylase deficiency. These include deficiency of the enzyme holocarboxylase synthetase. Holocarboxylase synthetase deficiency prevents the body's cells from using biotin effectively and thus interferes with multiple carboxylase reactions. There can also be a genetic defect affecting the sodium-dependent multivitamin transporter protein. Biochemical and clinical manifestations of any of these metabolic disorders can include ketolactic acidosis, organic aciduria, hyperammonemia, rash, hypotonia, seizures, developmental delay, alopecia and coma. Use in biotechnology Chemically modified versions of biotin are widely used throughout the biotechnology industry to isolate proteins and non-protein compounds for biochemical assays. Because egg-derived avidin binds strongly to biotin with a dissociation constant Kd ≈ 10^−15 M, biotinylated compounds of interest can be isolated from a sample by exploiting this highly stable interaction. First, the chemically modified biotin reagents are bound to the targeted compounds in a solution via a process called biotinylation. The choice of chemical modification determines which specific protein the biotin reagent binds to. Second, the sample is incubated with avidin bound to beads, then rinsed, removing all unbound proteins while leaving only the biotinylated protein bound to avidin. Last, the biotinylated protein can be eluted from the beads with excess free biotin. The process can also utilize bacteria-derived streptavidin bound to beads, but because it has a higher dissociation constant than avidin, very harsh conditions are needed to elute the biotinylated protein from the beads, which often will denature the protein of interest. Interference with medical laboratory results When people ingest high levels of biotin from dietary supplements, a consequence can be clinically significant interference with diagnostic blood tests that use biotin-streptavidin technology. This methodology is commonly used to measure levels of hormones such as thyroid hormones, and other analytes such as 25-hydroxyvitamin D. Biotin interference can produce both falsely normal and falsely abnormal results. In the US, biotin as a non-prescription dietary supplement is sold in amounts of 1 to 10 mg per serving, with claims for supporting hair and nail health, and as 300 mg per day as a possibly effective treatment for multiple sclerosis (see § Research). Consumption of 5 mg/day or more causes elevated plasma concentrations that interfere with biotin-streptavidin immunoassays in an unpredictable manner. Healthcare professionals are advised to instruct patients to stop taking biotin supplements for 48 h, or even up to weeks, before the test, depending on the specific test, dose, and frequency of biotin intake. Guidance has been proposed for laboratory staff on detecting and managing biotin interference. History In 1916, W. G. Bateman observed that a diet high in raw egg whites caused toxic symptoms in dogs, cats, rabbits, and humans. 
By 1927, scientists such as Margarete Boas and Helen Parsons had performed experiments demonstrating the symptoms associated with "egg-white injury." They had found that rats fed large amounts of egg whites as their only protein source exhibited neurological dysfunction, hair loss, dermatitis, and eventually, death. In 1936, Fritz Kögl and Benno Tönnis documented isolating a yeast growth factor in a journal article whose title translates as "Representation of crystallized biotin from egg yolk". The name biotin derives from the Greek word for 'to live' and the suffix "-in" (a general chemical suffix used in organic chemistry). Other research groups, working independently, had isolated the same compound under different names. Hungarian scientist Paul Gyorgy began investigating the factor responsible for egg-white injury in 1933, and in 1939 was successful in identifying what he called "vitamin H" (the H represents Haar und Haut, German for 'hair and skin'). Further chemical characterization of vitamin H revealed that it was water-soluble and present in high amounts in the liver. After experiments performed with yeast and Rhizobium trifolii, West and Wilson isolated a compound they called co-enzyme R. By 1940, it was recognized that all three compounds were identical and they were collectively given the name biotin. Gyorgy continued his work on biotin and in 1941 published a paper demonstrating that egg-white injury was caused by the binding of biotin by avidin. Unlike for many vitamins, there is insufficient information to establish a recommended dietary allowance, so dietary guidelines identify an "adequate intake" based on best available science, with the understanding that at some later date this will be replaced by more exact information. Using E. coli, a biosynthesis pathway was proposed by Rolfe and Eisenberg in 1968. The initial step was described as a condensation of pimeloyl-CoA and alanine to form 7-oxo-8-aminopelargonic acid. From there, they described a three-step process, the last being the introduction of a sulfur atom to form the tetrahydrothiophene ring. Research Multiple sclerosis High-dose biotin (300 mg/day, 10,000 times the adequate intake) has been used in clinical trials for treatment of multiple sclerosis, a demyelinating autoimmune disease. The hypothesis is that biotin may promote remyelination of the myelin sheath of nerve cells, slowing or even reversing neurodegeneration. The proposed mechanisms are that biotin activates acetyl-CoA carboxylase, a key rate-limiting enzyme in the synthesis of myelin, and that it reduces axonal hypoxia through enhanced energy production. Clinical trial results are mixed; a 2019 review concluded that a further investigation of the association between multiple sclerosis symptoms and biotin should be undertaken, whereas two 2020 reviews of a larger number of clinical trials reported no consistent evidence for benefits, and some evidence for increased disease activity and higher risk of relapse. Hair, nails, skin In the United States, biotin is promoted as a dietary supplement for strengthening hair and fingernails, though scientific data supporting these outcomes in humans are very weak. A review of the fingernail literature reported improvement in brittle nails based on two pre-1990 clinical trials that had administered an oral dietary supplement of 2.5 mg/day for several months without a placebo control comparison group. There is no more recent clinical trial literature. 
A review of biotin as a treatment for hair loss identified case studies of infants and young children with genetically caused biotin deficiency whose hair growth improved after supplementation, but went on to report that "there have been no randomized, controlled trials to prove the efficacy of supplementation with biotin in normal, healthy individuals." Biotin is also incorporated into topical hair and skin products with similar claims. The Dietary Supplement Health and Education Act of 1994 states that the US Food and Drug Administration must allow on the product label what are described as "Structure:Function" (S:F) health claims that ingredient(s) are essential for health. For example: Biotin helps maintain healthy skin, hair, and nails. If an S:F claim is made, the label must include the disclaimer "This statement has not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease." Animals In cattle, biotin is necessary for hoof health. Lameness due to hoof problems is common, with herd prevalence estimated at 10 to 35%. Consequences of lameness include reduced food consumption, lower milk production, and increased veterinary treatment costs. Supplementing the daily diet with biotin at 20 mg/day for 4–6 months reduces the risk of lameness. A review of controlled trials reported that supplementation at 20 mg/day also increased milk yield by 4.8%. The discussion speculated that this could be an indirect consequence of improved hoof health or a direct effect on milk production. For horses, conditions such as chronic laminitis, cracked hooves, or dry, brittle feet incapable of holding shoes are a common problem, and biotin is a popular nutritional supplement. There are recommendations that horses need 15 to 25 mg/day. Studies report that biotin improves the growth of new hoof horn rather than improving the status of existing hoof horn, so months of supplementation are needed for the hoof wall to be completely replaced. See also Biotin deficiency Biotin sulfoxide Biotinidase deficiency Biotinylation Multiple carboxylase deficiency NeutrAvidin Photobiotin References External links B vitamins Cofactors Ureas Carboxylic acids Thiolanes
Biotin
[ "Chemistry" ]
4,354
[ "Organic compounds", "Carboxylic acids", "Functional groups", "Ureas" ]
54,125
https://en.wikipedia.org/wiki/Breccia
Breccia is a rock composed of large angular broken fragments of minerals or rocks cemented together by a fine-grained matrix. The word has its origins in the Italian language, in which it means "rubble". A breccia may have a variety of different origins, as indicated by the named types including sedimentary breccia, fault or tectonic breccia, igneous breccia, impact breccia, and hydrothermal breccia. A megabreccia is a breccia composed of very large rock fragments, sometimes kilometers across, which can be formed by landslides, impact events, or caldera collapse. Types Breccia is composed of coarse rock fragments held together by cement or a fine-grained matrix. Like conglomerate, breccia contains at least 30 percent gravel-sized particles (particles over 2 mm in size), but it is distinguished from conglomerate because the rock fragments have sharp edges that have not been worn down. These indicate that the gravel was deposited very close to its source area, since otherwise the edges would have been rounded during transport. Most of the rounding of rock fragments takes place within the first few kilometers of transport, though complete rounding of pebbles of very hard rock may take a much greater distance of river transport. A megabreccia is a breccia containing very large rock fragments, from at least a meter in size to greater than 400 meters. In some cases, the clasts are so large that the brecciated nature of the rock is not obvious. Megabreccias can be formed by landslides, impact events, or caldera collapse. Breccias are further classified by their mechanism of formation. Sedimentary Sedimentary breccia is breccia formed by sedimentary processes. For example, scree deposited at the base of a cliff may become cemented to form a talus breccia without ever experiencing transport that might round the rock fragments. Thick sequences of sedimentary (colluvial) breccia are generally formed next to fault scarps in grabens. Sedimentary breccia may be formed by submarine debris flows. Turbidites occur as fine-grained peripheral deposits to sedimentary breccia flows. In a karst terrain, a collapse breccia may form due to collapse of rock into a sinkhole or in cave development. Collapse breccias also form by dissolution of underlying evaporite beds. Fault Fault or tectonic breccia results from the grinding action of two fault blocks as they slide past each other. Subsequent cementation of these broken fragments may occur by means of the introduction of mineral matter in groundwater. Igneous Igneous clastic rocks can be divided into two classes: Broken, fragmental rocks associated with volcanic eruptions, both of the lava and pyroclastic type; Broken, fragmental rocks produced by intrusive processes, usually associated with plutons or porphyry stocks. Volcanic Volcanic pyroclastic rocks are formed by explosive eruption of lava and any rocks which are entrained within the eruptive column. This may include rocks plucked off the wall of the magma conduit, or physically picked up by the ensuing pyroclastic surge. Lavas, especially rhyolite and dacite flows, tend to form clastic volcanic rocks by a process known as autobrecciation. This occurs when the thick, nearly solid lava breaks up into blocks and these blocks are then reincorporated into the lava flow and mixed in with the remaining liquid magma. The resulting breccia is uniform in rock type and chemical composition. Caldera collapse leads to the formation of megabreccias, which are sometimes mistaken for outcrops of the caldera floor. 
These are instead blocks of precaldera rock, often coming from the unstable, oversteepened rim of the caldera. They are distinguished from mesobreccias, whose clasts are less than a meter in size and which form layers in the caldera floor. Some clasts of caldera megabreccias can be over a kilometer in length. Within the volcanic conduits of explosive volcanoes, the volcanic breccia environment merges into the intrusive breccia environment. There the upwelling lava tends to solidify during quiescent intervals, only to be shattered by ensuing eruptions. This produces an alloclastic volcanic breccia. Intrusive Clastic rocks are also commonly found in shallow subvolcanic intrusions such as porphyry stocks, granites and kimberlite pipes, where they are transitional with volcanic breccias. Intrusive rocks can become brecciated in appearance by multiple stages of intrusion, especially if fresh magma is intruded into partly consolidated or solidified magma. This may be seen in many granite intrusions where later aplite veins form a late-stage stockwork through earlier phases of the granite mass. When particularly intense, the rock may appear as a chaotic breccia. Clastic rocks in mafic and ultramafic intrusions have been found and form via several processes: consumption and melt-mingling with wall rocks, where the wall rocks are softened and gradually invaded by the hotter ultramafic intrusion (producing taxitic texture); accumulation of rocks which fall through the magma chamber from the roof, forming chaotic remnants; autobrecciation of partly consolidated cumulate by fresh magma injections; accumulation of xenoliths within a feeder conduit or vent conduit, forming a diatreme breccia pipe. Impact Impact breccias are thought to be diagnostic of an impact event, such as an asteroid or comet striking the Earth, and are normally found at impact craters. Impact breccia, a type of impactite, forms during the process of impact cratering when large meteorites or comets impact the Earth or other rocky planets or asteroids. Breccia of this type may be present on or beneath the floor of the crater, in the rim, or in the ejecta expelled beyond the crater. Impact breccia may be identified by its occurrence in or around a known impact crater, and/or an association with other products of impact cratering such as shatter cones, impact glass, shocked minerals, and chemical and isotopic evidence of contamination with extraterrestrial material (e.g., iridium and osmium anomalies). An example of an impact breccia is the Neugrund breccia, which was formed in the Neugrund impact. Hydrothermal Hydrothermal breccias usually form at shallow crustal levels (<1 km), between 150 and 350 °C, when seismic or volcanic activity causes a void to open along a fault deep underground. The void draws in hot water, and as pressure in the cavity drops, the water violently boils. In addition, the sudden opening of a cavity causes rock at the sides of the fault to destabilise and implode inwards, and the broken rock gets caught up in a churning mixture of rock, steam and boiling water. Rock fragments collide with each other and with the sides of the void, and the angular fragments become more rounded. Volatile gases are lost to the steam phase as boiling continues, in particular carbon dioxide. As a result, the chemistry of the fluids changes and ore minerals rapidly precipitate. Breccia-hosted ore deposits are quite common. 
The morphology of breccias associated with ore deposits varies from tabular sheeted veins and clastic dikes associated with overpressured sedimentary strata, to large-scale intrusive diatreme breccias (breccia pipes), or even some synsedimentary diatremes formed solely by the overpressure of pore fluid within sedimentary basins. Hydrothermal breccias are usually formed by hydrofracturing of rocks by highly pressured hydrothermal fluids. They are typical of the epithermal ore environment and are intimately associated with intrusive-related ore deposits such as skarns, greisens and porphyry-related mineralisation. Epithermal deposits are mined for copper, silver and gold. In the mesothermal regime, at much greater depths, fluids under lithostatic pressure can be released during seismic activity associated with mountain building. The pressurised fluids ascend towards shallower crustal levels that are under lower hydrostatic pressure. On their journey, high-pressure fluids crack rock by hydrofracturing, forming an angular in situ breccia. Rounding of rock fragments is less common in the mesothermal regime, as the formational event is brief. If boiling occurs, methane and hydrogen sulfide may be lost to the steam phase, and ore may precipitate. Mesothermal deposits are often mined for gold. Ornamental uses For thousands of years, the striking visual appearance of breccias has made them a popular sculptural and architectural material. Breccia was used for column bases in the Minoan palace of Knossos on Crete in about 1800 BC. Breccia was used on a limited scale by the ancient Egyptians; one of the best-known examples is the statue of the goddess Tawaret in the British Museum. Breccia was regarded by the Romans as an especially precious stone and was often used in high-profile public buildings. Many types of marble are brecciated, such as Breccia Oniciata. See also References Further reading
Breccia
[ "Materials_science" ]
1,969
[ "Breccias", "Fracture mechanics" ]
54,137
https://en.wikipedia.org/wiki/Methane%20clathrate
Methane clathrate (CH4·5.75H2O) or (4CH4·23H2O), also called methane hydrate, hydromethane, methane ice, fire ice, natural gas hydrate, or gas hydrate, is a solid clathrate compound (more specifically, a clathrate hydrate) in which a large amount of methane is trapped within a crystal structure of water, forming a solid similar to ice. Originally thought to occur only in the outer regions of the Solar System, where temperatures are low and water ice is common, significant deposits of methane clathrate have been found under sediments on the ocean floors of the Earth (around 1,100 m below sea level). Methane hydrate is formed when hydrogen-bonded water and methane gas come into contact at high pressures and low temperatures in oceans. Methane clathrates are common constituents of the shallow marine geosphere and they occur in deep sedimentary structures and form outcrops on the ocean floor. Methane hydrates are believed to form by the precipitation or crystallisation of methane migrating from depth along geological faults. Precipitation occurs when the methane comes in contact with water within the sea bed under suitable conditions of temperature and pressure. In 2008, research on Antarctic Vostok Station and EPICA Dome C ice cores revealed that methane clathrates were also present in deep Antarctic ice cores and record a history of atmospheric methane concentrations, dating to 800,000 years ago. The ice-core methane clathrate record is a primary source of data for global warming research, along with oxygen and carbon dioxide. Methane clathrates were once considered a potential source of abrupt climate change, following the clathrate gun hypothesis. In this scenario, heating causes catastrophic melting and breakdown of primarily undersea hydrates, leading to a massive release of methane and accelerating warming. Current research shows that hydrates react very slowly to warming, and that it is very difficult for methane to reach the atmosphere after dissociation. Some active seeps instead act as a minor carbon sink, because the majority of the methane dissolves underwater and encourages methanotroph communities, and the area around the seep also becomes more suitable for phytoplankton. As a result, methane hydrates are no longer considered one of the tipping points in the climate system, and according to the IPCC Sixth Assessment Report, no "detectable" impact on global temperatures will occur in this century through this mechanism. Over several millennia, a more substantial response may still be seen. General Methane hydrates were discovered in Russia in the 1960s, and studies on extracting gas from them emerged at the beginning of the 21st century. Structure and composition The nominal methane clathrate hydrate composition is (CH4)4(H2O)23, or 1 mole of methane for every 5.75 moles of water, corresponding to 13.4% methane by mass, although the actual composition is dependent on how many methane molecules fit into the various cage structures of the water lattice. The observed density is around 0.9 g/cm3, which means that methane hydrate will float to the surface of the sea or of a lake unless it is bound in place by being formed in or anchored to sediment. One litre of fully saturated methane clathrate solid would therefore contain about 120 grams of methane (or around 169 litres of methane gas at 0 °C and 1 atm), and one cubic metre of methane clathrate releases about 160 cubic metres of gas. 
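These figures follow directly from the nominal composition; the short Python sketch below reproduces them, assuming ideal-gas behaviour for methane at 0 °C and 1 atm (22.414 L/mol) and the quoted hydrate density of 0.9 g/cm3.

```python
# Consistency check of the composition figures quoted above for
# methane clathrate, CH4·5.75H2O.
M_CH4, M_H2O = 16.043, 18.015      # molar masses, g/mol
HYDRATION_NUMBER = 5.75            # mol H2O per mol CH4
DENSITY = 0.9                      # g/cm3, i.e. 900 g per litre
MOLAR_VOLUME = 22.414              # L/mol, ideal gas at 0 degC and 1 atm

mass_fraction = M_CH4 / (M_CH4 + HYDRATION_NUMBER * M_H2O)
print(f"methane by mass: {mass_fraction:.1%}")                   # ~13.4%

grams_per_litre = DENSITY * 1000 * mass_fraction
print(f"methane per litre of hydrate: {grams_per_litre:.0f} g")  # ~120 g

litres_of_gas = grams_per_litre / M_CH4 * MOLAR_VOLUME
print(f"gas released per litre of hydrate: {litres_of_gas:.0f} L")  # ~169 L
```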
Methane forms a "structure-I" hydrate with two dodecahedral (12 vertices, thus 12 water molecules) and six tetradecahedral (14 water molecules) water cages per unit cell. (Because of sharing of water molecules between cages, there are only 46 water molecules per unit cell.) This compares with a hydration number of 20 for methane in aqueous solution. A methane clathrate MAS NMR spectrum recorded at 275 K and 3.1 MPa shows a peak for each cage type and a separate peak for gas phase methane. In 2003, a clay-methane hydrate intercalate was synthesized in which a methane hydrate complex was introduced at the interlayer of a sodium-rich montmorillonite clay. The upper temperature stability of this phase is similar to that of structure-I hydrate. Natural deposits Methane clathrates are restricted to the shallow lithosphere (i.e. < 2,000 m depth). Furthermore, necessary conditions are found only in either continental sedimentary rocks in polar regions where average surface temperatures are less than 0 °C; or in oceanic sediment at water depths greater than 300 m where the bottom water temperature is around 2 °C. In addition, deep fresh water lakes may host gas hydrates as well, e.g. the fresh water Lake Baikal, Siberia. Continental deposits have been located in Siberia and Alaska in sandstone and siltstone beds at less than 800 m depth. Oceanic deposits seem to be widespread in the continental shelf (see Fig.) and can occur within the sediments at depth or close to the sediment-water interface. They may cap even larger deposits of gaseous methane. Oceanic Methane hydrate can occur in various forms like massive, dispersed within pore spaces, nodules, veins/fractures/faults, and layered horizons. Generally, it is found unstable at standard pressure and temperature conditions, and 1 m3 of methane hydrate upon dissociation yields about 164 m3 of methane and 0.87 m3 of freshwater. There are two distinct types of oceanic deposits. The most common is dominated (> 99%) by methane contained in a structure I clathrate and generally found at depth in the sediment. Here, the methane is isotopically light (δ13C < −60‰), which indicates that it is derived from the microbial reduction of CO2. The clathrates in these deep deposits are thought to have formed in situ from the microbially produced methane since the δ13C values of clathrate and surrounding dissolved methane are similar. However, it is also thought that freshwater used in the pressurization of oil and gas wells in permafrost and along the continental shelves worldwide combines with natural methane to form clathrate at depth and pressure since methane hydrates are more stable in freshwater than in saltwater. Local variations may be widespread since the act of forming hydrate, which extracts pure water from saline formation waters, can often lead to local and potentially significant increases in formation water salinity. Hydrates normally exclude the salt in the pore fluid from which it forms. Thus, they exhibit high electric resistivity like ice, and sediments containing hydrates have higher resistivity than sediments without gas hydrates (Judge [67]). These deposits are located within a mid-depth zone around 300–500 m thick in the sediments (the gas hydrate stability zone, or GHSZ) where they coexist with methane dissolved in the fresh, not salt, pore-waters. Above this zone methane is only present in its dissolved form at concentrations that decrease towards the sediment surface. Below it, methane is gaseous. 
At Blake Ridge on the Atlantic continental rise, the GHSZ started at 190 m depth and continued to 450 m, where it reached equilibrium with the gaseous phase. Measurements indicated that methane occupied 0–9% by volume in the GHSZ, and ~12% in the gaseous zone. In the less common second type, found near the sediment surface, some samples have a higher proportion of longer-chain hydrocarbons (< 99% methane) contained in a structure II clathrate. Carbon from this type of clathrate is isotopically heavier (δ13C is −29 to −57‰) and is thought to have migrated upwards from deep sediments, where methane was formed by thermal decomposition of organic matter. Examples of this type of deposit have been found in the Gulf of Mexico and the Caspian Sea. Some deposits have characteristics intermediate between the microbially and thermally sourced types and are considered to have formed from a mixture of the two. The methane in gas hydrates is dominantly generated by microbial consortia degrading organic matter in low-oxygen environments, with the methane itself produced by methanogenic archaea. Organic matter in the uppermost few centimeters of sediments is first attacked by aerobic bacteria, generating CO2, which escapes from the sediments into the water column. Below this region of aerobic activity, anaerobic processes take over, including, successively with depth, the microbial reduction of nitrite/nitrate, then of metal oxides, and then of sulfates to sulfides. Finally, methanogenesis becomes a dominant pathway for organic carbon remineralization. If the sedimentation rate is low (about 1 cm/yr), the organic carbon content is low (about 1%), and oxygen is abundant, aerobic bacteria can use up all the organic matter in the sediments faster than oxygen is depleted, so lower-energy electron acceptors are not used. But where sedimentation rates and the organic carbon content are high, which is typically the case on continental shelves and beneath western boundary current upwelling zones, the pore water in the sediments becomes anoxic at depths of only a few centimeters or less. In such organic-rich marine sediments, sulfate becomes the most important terminal electron acceptor due to its high concentration in seawater. However, it too is depleted by a depth of centimeters to meters. Below this, methane is produced. This production of methane is a rather complicated process, requiring a highly reducing environment (Eh −350 to −450 mV) and a pH between 6 and 8, as well as complex syntrophic consortia of different varieties of archaea and bacteria. However, it is only the archaea that actually emit methane. In some regions (e.g., Gulf of Mexico, Joetsu Basin) methane in clathrates may be at least partially derived from thermal degradation of organic matter (e.g. petroleum generation), with oil even forming an exotic component within the hydrate itself that can be recovered when the hydrate is dissociated. The methane in clathrates typically has a biogenic isotopic signature and highly variable δ13C (−40 to −100‰), with an approximate average of about −65‰. Below the zone of solid clathrates, large volumes of methane may form bubbles of free gas in the sediments. The presence of clathrates at a given site can often be determined by observation of a "bottom simulating reflector" (BSR), which is a seismic reflection at the sediment to clathrate stability zone interface, caused by the unequal densities of normal sediments and those laced with clathrates. 
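Why such a contrast produces a reflection can be illustrated with the standard normal-incidence reflection-coefficient formula, R = (Z2 − Z1)/(Z2 + Z1), where the acoustic impedance Z is density times seismic velocity. The Python sketch below uses purely illustrative numbers, not measurements from any real BSR.

```python
# Normal-incidence seismic reflection coefficient from acoustic impedances.
# All densities (kg/m3) and P-wave velocities (m/s) below are illustrative.
def reflection_coefficient(rho1, v1, rho2, v2):
    """R = (Z2 - Z1) / (Z2 + Z1), with Z = density * velocity."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Hypothetical hydrate-cemented sediment over free-gas-bearing sediment:
r = reflection_coefficient(1900, 1800, 1700, 1400)
print(f"R = {r:.2f}")  # negative value: a polarity-reversed reflection
```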
Gas hydrate pingos have been discovered in the Arctic Ocean's Barents Sea. Methane is bubbling from these dome-like structures, with some of these gas flares extending close to the sea surface. Reservoir size The size of the oceanic methane clathrate reservoir is poorly known, and estimates of its size have decreased by roughly an order of magnitude per decade since it was first recognized, during the 1960s and 1970s, that clathrates could exist in the oceans. The highest estimates were based on the assumption that fully dense clathrates could litter the entire floor of the deep ocean. Improvements in our understanding of clathrate chemistry and sedimentology have revealed that hydrates form in only a narrow range of depths (continental shelves), at only some locations in the range of depths where they could occur (10–30% of the gas hydrate stability zone), and typically are found at low concentrations (0.9–1.5% by volume) at the sites where they do occur. Recent estimates constrained by direct sampling suggest the global inventory occupies between 1×10^15 and 5×10^15 m3. This estimate, corresponding to 500–2500 gigatonnes of carbon (Gt C), is smaller than the 5000 Gt C estimated for all other geo-organic fuel reserves but substantially larger than the ~230 Gt C estimated for other natural gas sources; a rough unit-conversion check of these figures is sketched below. The permafrost reservoir has been estimated at about 400 Gt C in the Arctic, but no estimates have been made of possible Antarctic reservoirs. These are large amounts. In comparison, the total carbon in the atmosphere is around 800 gigatons (see Carbon: Occurrence). These modern estimates are notably smaller than the 10,000 to 11,000 Gt C (2×10^16 m3) proposed by previous researchers as a reason to consider clathrates to be a geo-organic fuel resource (MacDonald 1990, Kvenvolden 1998). Lower abundances of clathrates do not rule out their economic potential, but a lower total volume and an apparently low concentration at most sites do suggest that only a limited percentage of clathrate deposits may provide an economically viable resource. Continental Methane clathrates in continental rocks are trapped in beds of sandstone or siltstone at depths of less than 800 m. Sampling indicates they are formed from a mix of thermally and microbially derived gas from which the heavier hydrocarbons were later selectively removed. These occur in Alaska, Siberia, and Northern Canada. In 2008, Canadian and Japanese researchers extracted a constant stream of natural gas from a test project at the Mallik gas hydrate site in the Mackenzie River delta. This was the second such drilling at Mallik: the first took place in 2002 and used heat to release methane. In the 2008 experiment, researchers were able to extract gas by lowering the pressure, without heating, requiring significantly less energy. The Mallik gas hydrate field was first discovered by Imperial Oil in 1971–1972. Commercial use Economic deposits of hydrate are termed natural gas hydrate (NGH), and 1 m3 of hydrate stores 164 m3 of methane and 0.8 m3 of water. Most NGH is found beneath the seafloor (95%), where it exists in thermodynamic equilibrium. The sedimentary methane hydrate reservoir probably contains 2–10 times the currently known reserves of conventional natural gas. This represents a potentially important future source of hydrocarbon fuel. However, in the majority of sites deposits are thought to be too dispersed for economic extraction. 
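The correspondence between the gas volumes and gigatonnes of carbon quoted above can be checked with a short calculation, assuming the volumes refer to methane gas at 0 °C and 1 atm (ideal gas, 22.414 L/mol):

```python
# Convert a volume of methane gas at STP into gigatonnes of carbon.
MOLAR_VOLUME = 0.022414   # m3/mol, ideal gas at 0 degC and 1 atm
M_CARBON = 12.011         # g/mol

def gigatonnes_carbon(volume_m3):
    moles_ch4 = volume_m3 / MOLAR_VOLUME   # one carbon atom per CH4
    return moles_ch4 * M_CARBON / 1e15     # grams -> gigatonnes

for v in (1e15, 5e15):
    print(f"{v:.0e} m3 of methane ~ {gigatonnes_carbon(v):.0f} Gt C")
# prints ~536 and ~2679 Gt C, consistent with the 500-2500 Gt C range
```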
Other problems facing commercial exploitation are the detection of viable reserves and the development of technology for extracting methane gas from the hydrate deposits. In August 2006, China announced plans to spend 800 million yuan (US$100 million) over the next 10 years to study natural gas hydrates. A potentially economic reserve in the Gulf of Mexico may contain a large volume of gas. Bjørn Kvamme and Arne Graue at the Institute for Physics and Technology at the University of Bergen have developed a method for injecting CO2 into hydrates and reversing the process, thereby extracting CH4 by direct exchange. The University of Bergen's method is being field tested by ConocoPhillips and the state-owned Japan Oil, Gas and Metals National Corporation (JOGMEC), and is partially funded by the U.S. Department of Energy. The project reached the injection phase and was analyzing the resulting data by March 12, 2012. On March 12, 2013, JOGMEC researchers announced that they had successfully extracted natural gas from frozen methane hydrate. In order to extract the gas, specialized equipment was used to drill into and depressurize the hydrate deposits, causing the methane to separate from the ice. The gas was then collected and piped to the surface, where it was ignited to prove its presence. According to an industry spokesperson, "It [was] the world's first offshore experiment producing gas from methane hydrate". Previously, gas had been extracted from onshore deposits, but never from offshore deposits, which are much more common. The hydrate field from which the gas was extracted is located in the Nankai Trough off central Japan, under the sea. A spokesperson for JOGMEC remarked "Japan could finally have an energy source to call its own". Marine geologist Mikio Satoh remarked "Now we know that extraction is possible. The next step is to see how far Japan can get costs down to make the technology economically viable." Japan estimates that there are at least 1.1 trillion cubic meters of methane trapped in the Nankai Trough, enough to meet the country's needs for more than ten years. Both Japan and China announced in May 2017 a breakthrough for mining methane clathrates, when they extracted methane from hydrates in the South China Sea. China described the result as a breakthrough; Praveen Linga from the Department of Chemical and Biomolecular Engineering at the National University of Singapore agreed: "Compared with the results we have seen from Japanese research, the Chinese scientists have managed to extract much more gas in their efforts". Industry consensus is that commercial-scale production remains years away. Environmental concerns Experts caution that environmental impacts are still being investigated and that methane, a greenhouse gas with around 86 times as much global warming potential over a 20-year period (GWP20) as carbon dioxide, could potentially escape into the atmosphere if something goes wrong. Furthermore, while cleaner than coal, burning natural gas also creates carbon dioxide emissions. Hydrates in natural gas processing Routine operations Methane clathrates (hydrates) are also commonly formed during natural gas production operations, when liquid water is condensed in the presence of methane at high pressure. It is known that larger hydrocarbon molecules like ethane and propane can also form hydrates, although longer molecules (butanes, pentanes) cannot fit into the water cage structure and tend to destabilise the formation of hydrates. 
Once formed, hydrates can block pipelines and processing equipment. They are then generally removed by reducing the pressure, heating them, or dissolving them by chemical means (methanol is commonly used). Care must be taken to ensure that the removal of the hydrates is carefully controlled, because of the potential for the hydrate to undergo a phase transition from the solid hydrate to release water and gaseous methane at a high rate when the pressure is reduced. The rapid release of methane gas in a closed system can result in a rapid increase in pressure. It is generally preferable to prevent hydrates from forming or blocking equipment. This is commonly achieved by removing water, or by the addition of ethylene glycol (MEG) or methanol, which act to depress the temperature at which hydrates will form. In recent years, other forms of hydrate inhibitor have been developed, such as kinetic hydrate inhibitors (which increase the sub-cooling that hydrates require to form, at the expense of an increased hydrate formation rate) and anti-agglomerants, which do not prevent hydrates from forming but do prevent them sticking together and blocking equipment. Effect of hydrate phase transition during deep water drilling When drilling in oil- and gas-bearing formations submerged in deep water, the reservoir gas may flow into the well bore and form gas hydrates owing to the low temperatures and high pressures found during deep water drilling. The gas hydrates may then flow upward with drilling mud or other discharged fluids. When the hydrates rise, the pressure in the annulus decreases and the hydrates dissociate into gas and water. The rapid gas expansion ejects fluid from the well, reducing the pressure further, which leads to more hydrate dissociation and further fluid ejection. The resulting violent expulsion of fluid from the annulus is one potential cause of, or contributor to, a "kick". (Kicks, which can cause blowouts, typically do not involve hydrates: see Blowout: formation kick). Measures which reduce the risk of hydrate formation include: High flow-rates, which limit the time for hydrate formation in a volume of fluid, thereby reducing the kick potential. Careful measuring of line flow to detect incipient hydrate plugging. Additional care in measuring when gas production rates are low and the possibility of hydrate formation is higher than at relatively high gas flow rates. Monitoring of well casing after it is "shut in" (isolated) may indicate hydrate formation. Following "shut in", the pressure rises while gas diffuses through the reservoir to the bore hole; the rate of pressure rise exhibits a reduced rate of increase while hydrates are forming. Additions of energy (e.g., the energy released by setting cement used in well completion) can raise the temperature and convert hydrates to gas, producing a "kick". Blowout recovery At sufficient depths, methane complexes directly with water to form methane hydrates, as was observed during the Deepwater Horizon oil spill in 2010. BP engineers developed and deployed a subsea oil recovery system over oil spilling from a deepwater oil well far below sea level to capture escaping oil. This involved placing a dome over the largest of the well leaks and piping it to a storage vessel on the surface. This option had the potential to collect some 85% of the leaking oil but was previously untested at such depths.
BP deployed the system on May 7–8, but it failed due to buildup of methane clathrate inside the dome; with its low density of approximately 0.9 g/cm3, the methane hydrates accumulated in the dome, adding buoyancy and obstructing flow. Methane clathrates and climate change Natural gas hydrates for gas storage and transportation Since methane clathrates are stable at a higher temperature than liquefied natural gas (LNG) (−20 vs −162 °C), there is some interest in converting natural gas into clathrates (Solidified Natural Gas or SNG) rather than liquefying it when transporting it by seagoing vessels. A significant advantage would be that the production of natural gas hydrate (NGH) from natural gas at the terminal would require a smaller refrigeration plant and less energy than LNG would. Offsetting this, for 100 tonnes of methane transported, 750 tonnes of methane hydrate would have to be transported; since this would require a ship of 7.5 times greater displacement, or require more ships, it is unlikely to prove economically feasible. Recently, methane hydrate has received considerable interest for large-scale stationary storage applications due to the very mild storage conditions achievable with the inclusion of tetrahydrofuran (THF) as a co-guest. With the inclusion of tetrahydrofuran, though there is a slight reduction in the gas storage capacity, the hydrates have been demonstrated in a recent study to be stable for several months at −2 °C and atmospheric pressure. A recent study has demonstrated that SNG can be formed directly with seawater instead of pure water in combination with THF. See also Future energy development Long-term effects of global warming The Swarm (Schätzing novel) Unconventional (oil & gas) reservoir Notes References External links Are there deposits of methane under the sea? Will global warming release the methane to the atmosphere? (2007) Methane seeps from Arctic sea bed (BBC) Bubbles of warming, beneath the ice (LA Times 2009) online calculator: hydrate formation conditions with different EOSs Research Centre for Arctic Gas Hydrate, Environment and Climate (CAGE) Center for Hydrate Research USGS Geological Research Activities with U.S. Minerals Management Service - Methane Gas Hydrates Carbon Neutral Methane Energy Production from Hydrate Deposits (Columbia University) Video USGS Gas Hydrates Lab (2012) Ancient Methane Explosions Created Ocean Craters (2017) Clathrate hydrates Hydrocarbons Methane Unconventional gas Natural gas
Methane clathrate
[ "Chemistry" ]
4,811
[ "Hydrocarbons", "Methane", "Hydrates", "Organic compounds", "Clathrates", "Clathrate hydrates", "Greenhouse gases" ]
54,140
https://en.wikipedia.org/wiki/Clathrate%20hydrate
Clathrate hydrates, or gas hydrates, clathrates, or hydrates, are crystalline water-based solids physically resembling ice, in which small non-polar molecules (typically gases) or polar molecules with large hydrophobic moieties are trapped inside "cages" of hydrogen bonded, frozen water molecules. In other words, clathrate hydrates are clathrate compounds in which the host molecule is water and the guest molecule is typically a gas or liquid. Without the support of the trapped molecules, the lattice structure of hydrate clathrates would collapse into conventional ice crystal structure or liquid water. Most low-molecular-weight gases, including O2, H2, N2, CO2, CH4, H2S, Ar, Kr, and Xe, as well as some higher hydrocarbons and freons, will form hydrates at suitable temperatures and pressures. Clathrate hydrates are not officially chemical compounds, as the enclathrated guest molecules are never bonded to the lattice. The formation and decomposition of clathrate hydrates are first order phase transitions, not chemical reactions. Their detailed formation and decomposition mechanisms on a molecular level are still not well understood. Clathrate hydrates were first documented in 1810 by Sir Humphry Davy, who found that water was a primary component of what was earlier thought to be solidified chlorine. Clathrates have been found to occur naturally in large quantities. Around 6.4 trillion (6.4×10^12) tonnes of methane is trapped in deposits of methane clathrate on the deep ocean floor. Such deposits can be found on the Norwegian continental shelf in the northern headwall flank of the Storegga Slide. Clathrates can also exist as permafrost, as at the Mallik gas hydrate site in the Mackenzie Delta of the northwestern Canadian Arctic. These natural gas hydrates are seen as a potentially vast energy resource and several countries have dedicated national programs to develop this energy resource. Clathrate hydrate has also been of great interest as a technology enabler for many applications, such as seawater desalination, gas storage, carbon dioxide capture and storage, and cooling media for data centres and district cooling. Hydrocarbon clathrates cause problems for the petroleum industry, because they can form inside gas pipelines, often resulting in obstructions. Deep sea deposition of carbon dioxide clathrate has been proposed as a method to remove this greenhouse gas from the atmosphere and control climate change. Clathrates are suspected to occur in large quantities on some outer planets, moons and trans-Neptunian objects, binding gas at fairly high temperatures. History and etymology Clathrate hydrates were discovered in 1810 by Humphry Davy. Clathrates were studied by P. Pfeiffer in 1927, and in 1930 E. Hertel defined "molecular compounds" as substances decomposed into individual components following the mass action law in solution or gas state. Clathrate hydrates were discovered to form blockages in gas pipelines in 1934 by Hammerschmidt, which led to an increase in research aimed at avoiding hydrate formation. In 1945, H. M. Powell analyzed the crystal structure of these compounds and named them clathrates. Gas production through methane hydrates has since been realized and has been tested for energy production in Japan and China. The word clathrate is derived from the Latin clathratus, meaning 'with bars, latticed'. Structure Gas hydrates usually form two crystallographic cubic structures: structure (Type) I (named sI) and structure (Type) II (named sII), of space groups Pm3n and Fd3m respectively.
A third, hexagonal structure of space group P6/mmm may also be observed (Type H). The unit cell of Type I consists of 46 water molecules, forming two types of cages – small and large. The unit cell contains two small cages and six large ones. The small cage has the shape of a pentagonal dodecahedron (5^12) (which is not a regular dodecahedron) and the large one that of a tetradecahedron, specifically a hexagonal truncated trapezohedron (5^12 6^2). Together, they form a version of the Weaire–Phelan structure. Typical guests forming Type I hydrates are CO2 in carbon dioxide clathrate and CH4 in methane clathrate. The unit cell of Type II consists of 136 water molecules, again forming two types of cages – small and large. In this case there are sixteen small cages and eight large ones in the unit cell. The small cage again has the shape of a pentagonal dodecahedron (5^12), but the large one is a hexadecahedron (5^12 6^4). Type II hydrates are formed by gases like O2 and N2. The unit cell of Type H consists of 34 water molecules, forming three types of cages – two small ones of different types, and one "huge". In this case, the unit cell consists of three small cages of type 5^12, two small ones of type 4^3 5^6 6^3 and one huge of type 5^12 6^8. The formation of Type H requires the cooperation of two guest gases (large and small) to be stable. It is the large cavity that allows structure H hydrates to fit in large molecules (e.g. butane, other hydrocarbons), given the presence of other smaller help gases to fill and support the remaining cavities. Structure H hydrates were suggested to exist in the Gulf of Mexico, where thermogenically produced supplies of heavy hydrocarbons are common. The molar fraction of water in most clathrate hydrates is about 85%. Clathrate hydrates are derived from organic hydrogen-bonded frameworks. These frameworks are prepared from molecules that "self-associate" by multiple hydrogen-bonding interactions. Small molecules or gases (i.e. methane, carbon dioxide, hydrogen) can be encaged as a guest in hydrates. The ideal guest/host ratio for clathrate hydrates ranges from 0.8 to 0.9. The guest interaction with the host is limited to van der Waals forces. Certain exceptions exist in semiclathrates, where guests incorporate into the host structure via hydrogen bonding with the host structure. Hydrates often form with partial guest filling, and collapse in the absence of guests occupying the water cages. Like ice, clathrate hydrates are stable at low temperatures and high pressure, and possess similar properties, such as electrical resistivity. Clathrate hydrates are naturally occurring and can be found in permafrost and oceanic sediments. Hydrates can also be synthesized through seed crystallization or using amorphous precursors for nucleation. Clathrates have been explored for many applications including: gas storage, gas production, gas separation, desalination, thermoelectrics, photovoltaics, and batteries. Hydrates on Earth Natural gas hydrates On Earth, gas hydrates can be found naturally on the seabed, in ocean sediments, in deep lake sediments (e.g. Lake Baikal), as well as in permafrost regions. The amount of methane potentially trapped in natural methane hydrate deposits may be significant (10^15 to 10^17 cubic metres), which makes them of major interest as a potential energy resource.
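The cage counts just given fix the ideal guest-to-water ratios of each structure. A minimal sketch deriving the ideal hydration numbers, assuming full single occupancy of every cage (real hydrates are non-stoichiometric and never reach this limit):

```python
# Ideal hydration numbers from the unit-cell figures above, assuming every
# cage holds exactly one guest (an idealization; occupancies are partial in
# practice, and sH in particular needs two different guest species).

structures = {
    #       waters/cell, cages/cell
    "sI":  (46,  2 + 6),     # 2 x 5^12 small, 6 x 5^12 6^2 large
    "sII": (136, 16 + 8),    # 16 x 5^12 small, 8 x 5^12 6^4 large
    "sH":  (34,  3 + 2 + 1), # 3 x 5^12, 2 x 4^3 5^6 6^3, 1 x 5^12 6^8
}

for name, (waters, cages) in structures.items():
    print(f"{name}: {waters} H2O / {cages} cages -> "
          f"hydration number {waters / cages:.2f} at full occupancy")
```

For sI this gives 46/8 = 5.75, i.e. the familiar ideal composition CH4·5.75H2O for fully occupied methane hydrate.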
Catastrophic release of methane from the decomposition of such deposits may lead to global climate change, referred to as the "clathrate gun hypothesis", because CH4 is a more potent greenhouse gas than CO2 (see Atmospheric methane). The fast decomposition of such deposits is considered a geohazard, due to its potential to trigger landslides, earthquakes and tsunamis. However, natural gas hydrates do not contain only methane but also other hydrocarbon gases, as well as H2S and CO2. Air hydrates are frequently observed in polar ice samples. Pingos are common structures in permafrost regions. Similar structures are found in deep water related to methane vents. Significantly, gas hydrates can even be formed in the absence of a liquid phase. In that situation, water is dissolved in the gas or in the liquid hydrocarbon phase. In 2017, both Japan and China announced that attempts at large-scale resource extraction of methane hydrates from under the seafloor were successful. However, commercial-scale production remains years away. The 2020 Research Fronts report identified gas hydrate accumulation and mining technology as one of the top 10 research fronts in the geosciences. Gas hydrates in pipelines Thermodynamic conditions favouring hydrate formation are often found in pipelines. This is highly undesirable, because the clathrate crystals might agglomerate and plug the line, causing flow assurance failure, and damage valves and instrumentation. The results can range from flow reduction to equipment damage. Hydrate formation, prevention and mitigation philosophy Hydrates have a strong tendency to agglomerate and to adhere to the pipe wall and thereby plug the pipeline. Once formed, they can be decomposed by increasing the temperature and/or decreasing the pressure. Even under these conditions, the clathrate dissociation is a slow process. Therefore, preventing hydrate formation appears to be the key to the problem. A hydrate prevention philosophy could typically be based on three levels of security, listed in order of priority: Avoid operational conditions that might cause formation of hydrates by depressing the hydrate formation temperature using glycol dehydration; Temporarily change operating conditions in order to avoid hydrate formation; Prevent formation of hydrates by addition of chemicals that (a) shift the hydrate equilibrium conditions towards lower temperatures and higher pressures or (b) increase hydrate formation time (inhibitors). The actual philosophy would depend on operational circumstances such as pressure, temperature, and type of flow (gas, liquid, presence of water, etc.). Hydrate inhibitors When operating within a set of parameters where hydrates could be formed, there are still ways to avoid their formation. Altering the gas composition by adding chemicals can lower the hydrate formation temperature and/or delay their formation. Two options generally exist: Thermodynamic inhibitors Kinetic inhibitors and anti-agglomerants The most common thermodynamic inhibitors are methanol, monoethylene glycol (MEG), and diethylene glycol (DEG), commonly referred to as glycol. All may be recovered and recirculated, but the economics of methanol recovery is not favourable in most cases. MEG is preferred over DEG for applications where the temperature is expected to be −10 °C or lower, due to the high viscosity of DEG at low temperatures. Triethylene glycol (TEG) has too low a vapour pressure to be suited as an inhibitor injected into a gas stream.
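For a rough sense of how such thermodynamic inhibitors are dosed, the classic Hammerschmidt correlation estimates the hydrate-formation temperature depression from the inhibitor concentration in the aqueous phase. The sketch below uses the textbook constant K_H = 2335 and standard molar masses, which are not figures from this article; real designs rely on rigorous thermodynamic packages rather than this correlation:

```python
# Hammerschmidt correlation for hydrate depression by a thermodynamic
# inhibitor. K_H = 2335 is the classic textbook constant (assumed here, not
# stated in this article); wt_percent is the inhibitor mass fraction in the
# aqueous phase and molar_mass is in g/mol.

def hammerschmidt_depression(wt_percent: float, molar_mass: float,
                             k_h: float = 2335.0) -> float:
    """Estimated hydrate-formation temperature depression, in degrees C."""
    return k_h * wt_percent / (molar_mass * (100.0 - wt_percent))

# Example: compare 25 wt% methanol against 25 wt% MEG.
for name, m in [("methanol", 32.04), ("MEG", 62.07)]:
    print(f"{name}: dT = {hammerschmidt_depression(25.0, m):.1f} C")
```

The lower molar mass of methanol gives it roughly twice the depression of MEG at equal mass fraction (about 24 C versus 13 C here), which is one reason methanol remains popular despite its poor recovery economics noted above.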
More methanol is lost in the gas phase when compared to MEG or DEG. The use of kinetic inhibitors and anti-agglomerants in actual field operations is a new and evolving technology. It requires extensive tests and optimisation to the actual system. While kinetic inhibitors work by slowing down the kinetics of the nucleation, anti-agglomerants do not stop the nucleation, but stop the agglomeration (sticking together) of gas hydrate crystals. These two kinds of inhibitors are also known as low dosage hydrate inhibitors, because they require much smaller concentrations than the conventional thermodynamic inhibitors. Kinetic inhibitors, which do not require a water and hydrocarbon mixture to be effective, are usually polymers or copolymers, while anti-agglomerants (which do require such a mixture) are polymeric or zwitterionic surfactants – usually bearing ammonium and COOH groups – that are attracted to both hydrates and hydrocarbons. Empty clathrate hydrates Empty clathrate hydrates are thermodynamically unstable (guest molecules are of paramount importance to stabilize these structures) with respect to ice, and as such their study using experimental techniques is greatly limited to very specific formation conditions; however, their mechanical stability renders theoretical and computer simulation methods the ideal choice to address their thermodynamic properties. Starting from very cold samples (110–145 K), Falenty et al. degassed Ne–sII clathrates for several hours using vacuum pumping to obtain a so-called ice XVI, while employing neutron diffraction to observe that (i) the empty sII hydrate structure decomposes above a characteristic temperature and, furthermore, (ii) the empty hydrate shows negative thermal expansion at low temperatures, and it is mechanically more stable and has a larger lattice constant at low temperatures than the Ne-filled analogue. The existence of such a porous ice had been theoretically predicted before. From a theoretical perspective, empty hydrates can be probed using Molecular Dynamics or Monte Carlo techniques. Conde et al. used empty hydrates and a fully atomic description of the solid lattice to estimate the phase diagram of H2O at negative pressures and low temperatures, and to obtain the differences in chemical potentials between ice Ih and the empty hydrates, central to the van der Waals–Platteeuw theory. Jacobson et al. performed simulations using a monoatomic (coarse-grained) model developed for H2O that is capable of capturing the tetrahedral symmetry of hydrates. Their calculations revealed that, under 1 atm pressure, sI and sII empty hydrates are metastable with respect to the ice phases up to their respective melting temperatures. Matsui et al. employed molecular dynamics to perform a thorough and systematic study of several ice polymorphs, namely space fullerene ices, zeolitic ices, and aeroices, and interpreted their relative stability in terms of geometrical considerations. The thermodynamics of metastable empty sI clathrate hydrates have been probed over broad temperature and pressure ranges by Cruz et al. using large-scale simulations and compared with experimental data at 100 kPa. The whole p–V–T surface obtained was fitted by the universal form of the Parsafar and Mason equation of state with an accuracy of 99.7–99.9%. Framework deformation caused by applied temperature followed a parabolic law, and there is a critical temperature above which the isobaric thermal expansion becomes negative, ranging from 194.7 K at 100 kPa to 166.2 K at 500 MPa.
Response to the applied (p, T) field was analyzed in terms of angle and distance descriptors of a classical tetrahedral structure and observed to occur essentially by means of angular alteration for (p, T) > (200 MPa, 200 K). The length of the hydrogen bonds responsible for framework integrity was insensitive to the thermodynamic conditions, maintaining an essentially constant average value. CO2 hydrate A clathrate hydrate that encages CO2 as the guest molecule is termed CO2 hydrate. The term CO2 hydrate is commonly used these days because of its relevance in anthropogenic CO2 capture and sequestration. Carbon dioxide hydrate, a nonstoichiometric compound, is composed of hydrogen-bonded water molecules arranged in ice-like frameworks whose cavities are occupied by guest molecules of appropriate size. In structure I, the CO2 hydrate crystallizes as one of two cubic hydrates, composed of 46 H2O molecules (or D2O) and eight CO2 molecules occupying both large cavities (tetrakaidecahedral) and small cavities (pentagonal dodecahedral). Researchers believe that oceans and permafrost have immense potential to capture anthropogenic CO2 in the form of CO2 hydrates. The utilization of additives to shift the CO2 hydrate equilibrium curve in the phase diagram towards higher temperatures and lower pressures is still under investigation, with the aim of making extensive large-scale storage of CO2 viable at shallower subsea depths. See also Clathrate Star formation and evolution Clathrate gun hypothesis References Further reading External links Gas hydrates, from Leibniz Institute of Marine Sciences, Kiel (IFM-GEOMAR) The SUGAR Project (Submarine Gas Hydrate Reservoirs), from Leibniz Institute of Marine Sciences, Kiel (IFM-GEOMAR) Gas hydrates in video – background knowledge about gas hydrates, their prevention and removal (by a manufacturer of hydrate autoclaves) Ice Gases Industrial gases Natural gas
Clathrate hydrate
[ "Physics", "Chemistry" ]
3,351
[ "Physical phenomena", "Phase transitions", "Matter", "Phases of matter", "Critical phenomena", "Hydrates", "Industrial gases", "Clathrates", "Clathrate hydrates", "Chemical process engineering", "Statistical mechanics", "Gases" ]
54,147
https://en.wikipedia.org/wiki/Bremsstrahlung
In particle physics, bremsstrahlung (from the German bremsen, 'to brake', and Strahlung, 'radiation') is electromagnetic radiation produced by the deceleration of a charged particle when deflected by another charged particle, typically an electron by an atomic nucleus. The moving particle loses kinetic energy, which is converted into radiation (i.e., photons), thus satisfying the law of conservation of energy. The term is also used to refer to the process of producing the radiation. Bremsstrahlung has a continuous spectrum, which becomes more intense and whose peak intensity shifts toward higher frequencies as the change of the energy of the decelerated particles increases. Broadly speaking, bremsstrahlung or braking radiation is any radiation produced due to the acceleration (positive or negative) of a charged particle, which includes synchrotron radiation (i.e., photon emission by a relativistic particle), cyclotron radiation (i.e., photon emission by a non-relativistic particle), and the emission of electrons and positrons during beta decay. However, the term is frequently used in the more narrow sense of radiation from electrons (from whatever source) slowing in matter. Bremsstrahlung emitted from plasma is sometimes referred to as free–free radiation. This refers to the fact that the radiation in this case is created by electrons that are free (i.e., not in an atomic or molecular bound state) before, and remain free after, the emission of a photon. In the same parlance, bound–bound radiation refers to discrete spectral lines (an electron "jumps" between two bound states), while free–bound radiation refers to the radiative combination process, in which a free electron recombines with an ion. This article uses SI units, along with the scaled single-particle charge $q$. Classical description If quantum effects are negligible, an accelerating charged particle radiates power as described by the Larmor formula and its relativistic generalization. Total radiated power The total radiated power is $P = \frac{q^2 \gamma^6}{6\pi\varepsilon_0 c}\left(\dot{\vec{\beta}}^2 - \left(\vec{\beta}\times\dot{\vec{\beta}}\right)^2\right),$ where $\vec{\beta} = \vec{v}/c$ (the velocity of the particle divided by the speed of light), $\gamma$ is the Lorentz factor, $\varepsilon_0$ is the vacuum permittivity, $\dot{\vec{\beta}}$ signifies a time derivative of $\vec{\beta}$, and $q$ is the charge of the particle. In the case where velocity is parallel to acceleration (i.e., linear motion), the expression reduces to $P_{a \parallel v} = \frac{q^2 a^2 \gamma^6}{6\pi\varepsilon_0 c^3},$ where $a \equiv \dot{v}$ is the acceleration. For the case of acceleration perpendicular to the velocity ($\vec{\beta}\cdot\dot{\vec{\beta}} = 0$), for example in synchrotrons, the total power is $P_{a \perp v} = \frac{q^2 a^2 \gamma^4}{6\pi\varepsilon_0 c^3}.$ Power radiated in the two limiting cases is proportional to $\gamma^4$ ($a \perp v$) or $\gamma^6$ ($a \parallel v$). Since $E = \gamma m c^2$, we see that for particles with the same energy the total radiated power goes as $m^{-4}$ or $m^{-6}$, which accounts for why electrons lose energy to bremsstrahlung radiation much more rapidly than heavier charged particles (e.g., muons, protons, alpha particles). This is the reason a TeV energy electron-positron collider (such as the proposed International Linear Collider) cannot use a circular tunnel (requiring constant acceleration), while a proton-proton collider (such as the Large Hadron Collider) can utilize a circular tunnel. The electrons lose energy due to bremsstrahlung at a rate about $(m_p/m_e)^4 \approx 10^{13}$ times higher than protons do. Angular distribution The most general formula for radiated power as a function of angle is: $\frac{dP}{d\Omega} = \frac{q^2}{16\pi^2\varepsilon_0 c}\,\frac{\left|\hat{n}\times\left((\hat{n}-\vec{\beta})\times\dot{\vec{\beta}}\right)\right|^2}{(1-\hat{n}\cdot\vec{\beta})^5},$ where $\hat{n}$ is a unit vector pointing from the particle towards the observer, and $d\Omega$ is an infinitesimal solid angle. In the case where velocity is parallel to acceleration (for example, linear motion), this simplifies to $\frac{dP}{d\Omega} = \frac{q^2 a^2}{16\pi^2\varepsilon_0 c^3}\,\frac{\sin^2\theta}{(1-\beta\cos\theta)^5},$ where $\theta$ is the angle between the acceleration and the direction of observation $\hat{n}$.
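To make the mass scaling concrete, here is a one-line numerical check. It is a sketch using the standard proton-to-electron mass ratio; the ~10^13 figure is simply what the m^-4 scaling above implies for equal-energy particles on a circular machine:

```python
# Electron-vs-proton radiated power at fixed energy: for perpendicular
# acceleration (circular machines) the power scales as 1/m^4, so the ratio
# for equal-energy particles is (m_p/m_e)^4.

MP_OVER_ME = 1836.15267   # CODATA proton-to-electron mass ratio
print(f"(m_p/m_e)^4 = {MP_OVER_ME**4:.3e}")   # ~1.14e13
```

This reproduces the roughly 10^13 factor quoted above, which is why circular tunnels are viable for the LHC's protons but not for TeV electrons.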
The "vacuum case" of the interaction of one electron, one ion, and one photon, using the pure Coulomb potential, has an exact solution that was probably first published by Arnold Sommerfeld in 1931. This analytical solution involves complicated mathematics, and several numerical calculations have been published, such as by Karzas and Latter. Other approximate formulas have been presented, such as in recent work by Weinberg and Pradler and Semmelrock. This section gives a quantum-mechanical analog of the prior section, but with some simplifications to illustrate the important physics. We give a non-relativistic treatment of the special case of an electron of mass , charge , and initial speed decelerating in the Coulomb field of a gas of heavy ions of charge and number density . The emitted radiation is a photon of frequency and energy . We wish to find the emissivity which is the power emitted per (solid angle in photon velocity space * photon frequency), summed over both transverse photon polarizations. We express it as an approximate classical result times the free−free emission Gaunt factor gff accounting for quantum and other corrections: if , that is, the electron does not have enough kinetic energy to emit the photon. A general, quantum-mechanical formula for exists but is very complicated, and usually is found by numerical calculations. We present some approximate results with the following additional assumptions: Vacuum interaction: we neglect any effects of the background medium, such as plasma screening effects. This is reasonable for photon frequency much greater than the plasma frequency with the plasma electron density. Note that light waves are evanescent for and a significantly different approach would be needed. Soft photons: , that is, the photon energy is much less than the initial electron kinetic energy. With these assumptions, two unitless parameters characterize the process: , which measures the strength of the electron-ion Coulomb interaction, and , which measures the photon "softness" and we assume is always small (the choice of the factor 2 is for later convenience). In the limit , the quantum-mechanical Born approximation gives: In the opposite limit , the full quantum-mechanical result reduces to the purely classical result where is the Euler–Mascheroni constant. Note that which is a purely classical expression without the Planck constant . A semi-classical, heuristic way to understand the Gaunt factor is to write it as where and are maximum and minimum "impact parameters" for the electron-ion collision, in the presence of the photon electric field. With our assumptions, : for larger impact parameters, the sinusoidal oscillation of the photon field provides "phase mixing" that strongly reduces the interaction. is the larger of the quantum-mechanical de Broglie wavelength and the classical distance of closest approach where the electron-ion Coulomb potential energy is comparable to the electron's initial kinetic energy. The above approximations generally apply as long as the argument of the logarithm is large, and break down when it is less than unity. Namely, these forms for the Gaunt factor become negative, which is unphysical. A rough approximation to the full calculations, with the appropriate Born and classical limits, is Thermal bremsstrahlung in a medium: emission and absorption This section discusses bremsstrahlung emission and the inverse absorption process (called inverse bremsstrahlung) in a macroscopic medium. 
Thermal bremsstrahlung in a medium: emission and absorption This section discusses bremsstrahlung emission and the inverse absorption process (called inverse bremsstrahlung) in a macroscopic medium. We start with the equation of radiative transfer, which applies to general processes and not just bremsstrahlung: $\frac{1}{c}\partial_t I_\nu + \hat{n}\cdot\nabla I_\nu = j_\nu - \kappa_\nu I_\nu.$ Here $I_\nu$ is the radiation spectral intensity, or power per (area × solid angle × photon frequency), summed over both polarizations; $j_\nu$ is the emissivity, analogous to the quantity defined above; and $\kappa_\nu$ is the absorptivity. $j_\nu$ and $\kappa_\nu$ are properties of the matter, not the radiation, and account for all the particles in the medium – not just a pair of one electron and one ion as in the prior section. If $I_\nu$ is uniform in space and time, then the left-hand side of the transfer equation is zero, and we find $I_\nu = j_\nu/\kappa_\nu$. If the matter and radiation are also in thermal equilibrium at some temperature, then $I_\nu$ must be the blackbody spectrum: $B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_B T}-1}.$ Since $j_\nu$ and $\kappa_\nu$ are independent of $I_\nu$, this means that $j_\nu/\kappa_\nu$ must be the blackbody spectrum whenever the matter is in equilibrium at some temperature – regardless of the state of the radiation. This allows us to immediately know both $j_\nu$ and $\kappa_\nu$ once one is known – for matter in equilibrium. In plasma: approximate classical results NOTE: this section currently gives formulas that apply in the Rayleigh–Jeans limit $\hbar\omega \ll k_B T_e$, and does not use a quantized (Planck) treatment of radiation. Thus a usual factor like $e^{-\hbar\omega/k_B T_e}$ does not appear. The appearance of $\hbar$ below is due to the quantum-mechanical treatment of collisions. In a plasma, the free electrons continually collide with the ions, producing bremsstrahlung. A complete analysis requires accounting for both binary Coulomb collisions as well as collective (dielectric) behavior. A detailed treatment is given by Bekefi, while a simplified one is given by Ichimaru. In this section we follow Bekefi's dielectric treatment, with collisions included approximately via a cutoff wavenumber $k_{\max}$. Consider a uniform plasma, with thermal electrons distributed according to the Maxwell–Boltzmann distribution with the temperature $T_e$. Following Bekefi, the power spectral density (power per angular frequency interval per volume, integrated over the whole $4\pi$ sr of solid angle, and in both polarizations) of the bremsstrahlung radiated can be written in closed form in terms of the electron plasma frequency $\omega_{\rm pe}$, the photon frequency $\omega$, the number densities of electrons and ions, and other physical constants. A bracketed factor in that expression is the index of refraction of a light wave in a plasma, and shows that emission is greatly suppressed for $\omega < \omega_{\rm pe}$ (this is the cutoff condition for a light wave in a plasma; in this case the light wave is evanescent). The formula thus only applies for $\omega > \omega_{\rm pe}$, and it should be summed over ion species in a multi-species plasma. The special function $E_1$ is defined in the exponential integral article, and enters through a unitless quantity that involves $k_{\max}$. $k_{\max}$ is a maximum or cutoff wavenumber, arising due to binary collisions, and can vary with ion species. Roughly, $k_{\max} = 1/\lambda_B$ when $k_B T_e$ exceeds the Hartree energy of about 27.2 eV (typical in plasmas that are not too cold), where $\lambda_B$ is the electron thermal de Broglie wavelength. Otherwise, $k_{\max} \propto 1/l_C$, where $l_C$ is the classical Coulomb distance of closest approach. The spectral formula is approximate, in that it neglects enhanced emission occurring for $\omega$ slightly above $\omega_{\rm pe}$. In the limit of small argument, $E_1$ can be approximated by its logarithmic expansion, which involves the Euler–Mascheroni constant $\gamma \approx 0.577$. The leading, logarithmic term is frequently used, and resembles the Coulomb logarithm that occurs in other collisional plasma calculations. Beyond that limit the log term is negative, and the approximation is clearly inadequate. Bekefi gives corrected expressions for the logarithmic term that match detailed binary-collision calculations. The total emission power density, integrated over all frequencies, can likewise be expressed in closed form; it is always positive. For $k_{\max} = 1/\lambda_B$, it takes a particularly simple form. Note the appearance of $\hbar$, due to the quantum nature of $\lambda_B$. In practical units, a commonly used version of this formula is $P_{\rm Br} \approx 1.69\times10^{-32}\, n_e n_i Z^2 \sqrt{T_e}\ \ {\rm W/cm^3},$ with densities in ${\rm cm^{-3}}$ and $T_e$ in eV. This formula is 1.59 times the one given above, with the difference due to details of binary collisions. Such ambiguity is often expressed by introducing a Gaunt factor $g_B$ as an explicit multiplicative correction, with everything expressed in CGS units.
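As a numerical illustration of the practical formula just quoted (a sketch; the 1.69×10^-32 coefficient is the commonly used practical value, which absorbs a particular Gaunt-factor choice, so treat the output as order-of-magnitude):

```python
# Thermal bremsstrahlung power density from the practical formula
# P_Br ~ 1.69e-32 * n_e * n_i * Z^2 * sqrt(T_e[eV]) W/cm^3.

import math

def brems_power_density(n_e_cm3: float, n_i_cm3: float, t_e_ev: float,
                        z: int = 1) -> float:
    """Bremsstrahlung power density in W/cm^3 for a single ion species."""
    return 1.69e-32 * n_e_cm3 * n_i_cm3 * z**2 * math.sqrt(t_e_ev)

# Example: a fusion-grade hydrogenic plasma, n_e = n_i = 1e14 cm^-3, T_e = 10 keV.
print(f"P_Br ~ {brems_power_density(1e14, 1e14, 1e4):.3f} W/cm^3")   # ~0.017
```

Note the Z^2 dependence: even small amounts of high-Z impurity raise the radiated losses sharply, which is why bremsstrahlung losses are a central concern in fusion plasmas, as discussed next.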
Relativistic corrections For very high temperatures there are relativistic corrections to this formula, that is, additional terms of the order of $k_B T_e/(m_e c^2)$. Bremsstrahlung cooling If the plasma is optically thin, the bremsstrahlung radiation leaves the plasma, carrying part of the internal plasma energy. This effect is known as bremsstrahlung cooling. It is a type of radiative cooling. The energy carried away by bremsstrahlung is called bremsstrahlung losses and represents a type of radiative losses. One generally uses the term bremsstrahlung losses in the context when the plasma cooling is undesired, as e.g. in fusion plasmas. Polarizational bremsstrahlung Polarizational bremsstrahlung (sometimes referred to as "atomic bremsstrahlung") is the radiation emitted by the target's atomic electrons as the target atom is polarized by the Coulomb field of the incident charged particle. Polarizational bremsstrahlung contributions to the total bremsstrahlung spectrum have been observed in experiments involving relatively massive incident particles, resonance processes, and free atoms. However, there is still some debate as to whether or not there are significant polarizational bremsstrahlung contributions in experiments involving fast electrons incident on solid targets. It is worth noting that the term "polarizational" is not meant to imply that the emitted bremsstrahlung is polarized. Also, the angular distribution of polarizational bremsstrahlung is theoretically quite different from that of ordinary bremsstrahlung. Sources X-ray tube In an X-ray tube, electrons are accelerated in a vacuum by an electric field towards a piece of material called the "target". X-rays are emitted as the electrons hit the target. Already in the early 20th century, physicists found out that X-rays consist of two components, one independent of the target material and another with characteristics of fluorescence. Now we say that the output spectrum consists of a continuous spectrum of X-rays with additional sharp peaks at certain energies. The former is due to bremsstrahlung, while the latter are characteristic X-rays associated with the atoms in the target. For this reason, bremsstrahlung in this context is also called continuous X-rays. The German term itself was introduced in 1909 by Arnold Sommerfeld in order to explain the nature of the first variety of X-rays. The shape of this continuum spectrum is approximately described by Kramers' law. The formula for Kramers' law is usually given as the distribution of intensity (photon count) against the wavelength $\lambda$ of the emitted radiation: $I(\lambda)\,d\lambda = K\left(\frac{\lambda}{\lambda_{\min}} - 1\right)\frac{1}{\lambda^2}\,d\lambda.$ The constant $K$ is proportional to the atomic number of the target element, and $\lambda_{\min}$ is the minimum wavelength given by the Duane–Hunt law. The spectrum has a sharp cutoff at $\lambda_{\min}$, which is due to the limited energy of the incoming electrons. For example, if an electron in the tube is accelerated through 60 kV, then it will acquire a kinetic energy of 60 keV, and when it strikes the target it can create X-rays with energy of at most 60 keV, by conservation of energy. (This upper limit corresponds to the electron coming to a stop by emitting just one X-ray photon. Usually the electron emits many photons, and each has an energy less than 60 keV.) A photon with energy of at most 60 keV has a wavelength of at least about 21 pm, so the continuous X-ray spectrum has exactly that cutoff. More generally, the formula for the low-wavelength cutoff, the Duane–Hunt law, is: $\lambda_{\min} = \frac{hc}{eV},$ where $h$ is the Planck constant, $c$ is the speed of light, $V$ is the voltage that the electrons are accelerated through, and $e$ is the elementary charge; numerically, $\lambda_{\min} \approx 1239.8/V\ {\rm pm}$ with $V$ in kilovolts.
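The Duane–Hunt cutoff just given is easy to evaluate. A minimal sketch using hc/e = 1239.84 eV·nm (a standard constant, not a value from this article):

```python
# Duane-Hunt low-wavelength cutoff: lambda_min = hc/(eV).
# Using hc/e = 1239.84 eV*nm, an accelerating voltage of V kilovolts gives
# lambda_min in picometres as 1239.84/V.

def duane_hunt_lambda_min_pm(kilovolts: float) -> float:
    """Shortest bremsstrahlung wavelength (pm) from a tube at given voltage."""
    return 1239.84 / kilovolts   # pm

print(f"60 kV tube: lambda_min = {duane_hunt_lambda_min_pm(60.0):.1f} pm")
```

For the 60 kV tube in the example this returns 20.7 pm, matching the "at least about 21 pm" cutoff stated above.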
Beta decay Beta particle-emitting substances sometimes exhibit a weak radiation with continuous spectrum that is due to bremsstrahlung (see the "outer bremsstrahlung" below). In this context, bremsstrahlung is a type of "secondary radiation", in that it is produced as a result of stopping (or slowing) the primary radiation (beta particles). It is very similar to X-rays produced by bombarding metal targets with electrons in X-ray generators (as above) except that it is produced by high-speed electrons from beta radiation. Inner and outer bremsstrahlung The "inner" bremsstrahlung (also known as "internal bremsstrahlung") arises from the creation of the electron and its loss of energy (due to the strong electric field in the region of the nucleus undergoing decay) as it leaves the nucleus. Such radiation is a feature of beta decay in nuclei, but it is occasionally (less commonly) seen in the beta decay of free neutrons to protons, where it is created as the beta electron leaves the proton. In electron and positron emission by beta decay, the photon's energy comes from the electron-nucleon pair, with the spectrum of the bremsstrahlung decreasing continuously with increasing energy of the beta particle. In electron capture, the energy comes at the expense of the neutrino, and the spectrum is greatest at about one third of the normal neutrino energy, decreasing to zero electromagnetic energy at normal neutrino energy. Note that in the case of electron capture, bremsstrahlung is emitted even though no charged particle is emitted. Instead, the bremsstrahlung radiation may be thought of as being created as the captured electron is accelerated toward being absorbed. Such radiation may be at frequencies that are the same as soft gamma radiation, but it exhibits none of the sharp spectral lines of gamma decay, and thus is not technically gamma radiation. The internal process is to be contrasted with the "outer" bremsstrahlung due to the impingement on the nucleus of electrons coming from the outside (i.e., emitted by another nucleus), as discussed above. Radiation safety In some cases, such as the decay of 32P, the bremsstrahlung produced by shielding the beta radiation with the normally used dense materials (e.g. lead) is itself dangerous; in such cases, shielding must be accomplished with low-density materials, such as Plexiglas (Lucite), plastic, wood, or water; as the atomic number is lower for these materials, the intensity of bremsstrahlung is significantly reduced, but a larger thickness of shielding is required to stop the electrons (beta radiation). In astrophysics The dominant luminous component in a cluster of galaxies is the 10^7 to 10^8 kelvin intracluster medium. The emission from the intracluster medium is characterized by thermal bremsstrahlung.
This radiation is in the energy range of X-rays and can be easily observed with space-based telescopes such as the Chandra X-ray Observatory, XMM-Newton, ROSAT, ASCA, EXOSAT, Suzaku, RHESSI, and future missions like IXO and Astro-H. Bremsstrahlung is also the dominant emission mechanism for H II regions at radio wavelengths. In electric discharges In electric discharges, for example as laboratory discharges between two electrodes or as lightning discharges between cloud and ground or within clouds, electrons produce bremsstrahlung photons while scattering off air molecules. These photons become manifest in terrestrial gamma-ray flashes and are the source for beams of electrons, positrons, neutrons and protons. The appearance of bremsstrahlung photons also influences the propagation and morphology of discharges in nitrogen–oxygen mixtures with low percentages of oxygen. Quantum mechanical description The complete quantum mechanical description was first performed by Bethe and Heitler. They assumed plane waves for electrons which scatter at the nucleus of an atom, and derived a cross section which relates the complete geometry of that process to the frequency of the emitted photon. The quadruply differential cross section, which shows a quantum mechanical symmetry to pair production, is a lengthy expression in which $Z$ is the atomic number, $\alpha$ the fine-structure constant, $\hbar$ the reduced Planck constant and $c$ the speed of light. The kinetic energy of the electron in the initial and final state is connected to its total energy $E_{i,f}$ or its momenta $p_{i,f}$ via $E_{i,f} = E_{{\rm kin},i,f} + m_e c^2 = \sqrt{m_e^2 c^4 + p_{i,f}^2 c^2},$ where $m_e$ is the mass of an electron. Conservation of energy gives $E_f = E_i - \hbar\omega,$ where $\hbar\omega$ is the photon energy. The directions of the emitted photon and the scattered electron are given by angles defined relative to the incident electron, with $\mathbf{k}$ the momentum of the photon. The absolute value of the virtual photon between the nucleus and the electron is the momentum transfer $\mathbf{q} = \mathbf{p}_i - \mathbf{p}_f - \mathbf{k}$. The range of validity is given by the Born approximation, which requires the Coulomb interaction to be weak compared with the kinetic energy; this relation has to be fulfilled for the velocity of the electron in the initial and final state. For practical applications (e.g. in Monte Carlo codes) it can be interesting to focus on the relation between the frequency of the emitted photon and the angle between this photon and the incident electron. Köhn and Ebert integrated the quadruply differential cross section by Bethe and Heitler over the remaining angular variables and obtained a doubly differential cross section in the photon energy and emission angle. However, a much simpler expression for the same integral can be found in (Eq. 2BN) and in (Eq. 4.1). An analysis of the doubly differential cross section above shows that electrons whose kinetic energy is larger than the rest energy (511 keV) emit photons in the forward direction, while electrons with a small energy emit photons isotropically. Electron–electron bremsstrahlung One mechanism, considered important for small atomic numbers, is the scattering of a free electron at the shell electrons of an atom or molecule. Since electron–electron bremsstrahlung scales with the number of shell electrons, $Z$, while the usual electron-nucleus bremsstrahlung scales as $Z^2$, electron–electron bremsstrahlung is negligible for metals. For air, however, it plays an important role in the production of terrestrial gamma-ray flashes.
See also Beamstrahlung Cyclotron radiation Wiggler (synchrotron) Free-electron laser History of X-rays Landau–Pomeranchuk–Migdal effect Nuclear fusion: bremsstrahlung losses Radiation length characterising energy loss by bremsstrahlung by high energy electrons in matter Synchrotron light source References Further reading External links Index of Early Bremsstrahlung Articles Atomic physics Plasma phenomena Scattering Quantum electrodynamics
Bremsstrahlung
[ "Physics", "Chemistry", "Materials_science" ]
4,428
[ "Physical phenomena", "Nuclear physics", "Plasma physics", "Plasma phenomena", "Quantum mechanics", "Scattering", "Condensed matter physics", "Atomic physics", "Particle physics", "Atomic", " molecular", " and optical physics" ]
54,176
https://en.wikipedia.org/wiki/Human%20body
The human body is the entire structure of a human being. It is composed of many different types of cells that together create tissues and subsequently organs and then organ systems. The external human body consists of a head, hair, neck, torso (which includes the thorax and abdomen), genitals, arms, hands, legs, and feet. The internal human body includes organs, teeth, bones, muscle, tendons, ligaments, blood vessels and blood, lymphatic vessels and lymph. The study of the human body includes anatomy, physiology, histology and embryology. The body varies anatomically in known ways. Physiology focuses on the systems and organs of the human body and their functions. Many systems and mechanisms interact in order to maintain homeostasis, with safe levels of substances such as sugar, iron, and oxygen in the blood. The body is studied by health professionals, physiologists, anatomists, and artists to assist them in their work. Composition The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body. The adult male body is about 60% water, a total body water content of some 42 litres. This is made up of extracellular fluid, including blood plasma and interstitial fluid, together with a larger volume of fluid inside cells. The content, acidity and composition of the water inside and outside cells is carefully maintained. The main electrolytes in body water outside cells are sodium and chloride, whereas within cells it is potassium and other phosphates. Cells The body contains trillions of cells, the fundamental unit of life. At maturity, there are roughly 30 trillion cells, and 38 trillion bacteria in the body, an estimate arrived at by totaling the cell numbers of all the organs and cell types of the body. The skin of the body is also host to billions of commensal organisms as well as immune cells. Not all parts of the body are made from cells. Cells sit in an extracellular matrix that consists of proteins such as collagen, surrounded by extracellular fluids. Genome Cells in the body function because of DNA. DNA sits within the nucleus of a cell. Here, parts of DNA are copied and sent to the body of the cell via RNA. The RNA is then used to create proteins, which form the basis for cells, their activity, and their products. Proteins dictate cell function and gene expression; a cell is able to self-regulate by the amount of protein it produces. However, not all cells have DNA; some cells, such as mature red blood cells, lose their nucleus during maturation. Tissues The body consists of many different types of tissue, defined as cells that act with a specialised function. The study of tissues is called histology and is often done with a microscope. The body consists of four main types of tissues. These are lining cells (epithelia), connective tissue, nerve tissue and muscle tissue. Cells Cells that line surfaces exposed to the outside world or gastrointestinal tract (epithelia) or internal cavities (endothelium) come in numerous shapes and forms – from single layers of flat cells, to cells with small beating hair-like cilia in the lungs, to column-like cells that line the stomach. Endothelial cells are cells that line internal cavities including blood vessels and glands. Lining cells regulate what can and cannot pass through them, protect internal structures, and function as sensory surfaces.
Organs Organs, structured collections of cells with a specific function, mostly sit within the body, with the exception of skin. Examples include the heart, lungs and liver. Many organs reside within cavities within the body. These cavities include the abdomen (which contains the stomach, for example) and pleura, which contains the lungs. Heart The heart is an organ located in the thoracic cavity between the lungs and slightly to the left. It is surrounded by the pericardium, which holds it in place in the mediastinum and serves to protect it from blunt trauma, infection and help lubricate the movement of the heart via pericardial fluid. The heart works by pumping blood around the body allowing oxygen, nutrients, waste, hormones and white blood cells to be transported. The heart is composed of two atria and two ventricles. The primary purpose of the atria is to allow uninterrupted venous blood flow to the heart during ventricular systole. This allows enough blood to get into the ventricles during atrial systole. Consequently, the atria allows a cardiac output roughly 75% greater than would be possible without them. The purpose of the ventricles is to pump blood to the lungs through the right ventricle and to the rest of the body through the left ventricle. The heart has an electrical conduction system to control the contraction and relaxation of the muscles. It starts in the sinoatrial node traveling through the atria causing them to pump blood into the ventricles. It then travels to the atrioventricular node, which makes the signal slow down slightly allowing the ventricles to fill with blood before pumping it out and starting the cycle over again. Coronary artery disease is the leading cause of death worldwide, making up 16% of all deaths. It is caused by the buildup of plaque in the coronary arteries supplying the heart, eventually the arteries may become so narrow that not enough blood is able to reach the myocardium, a condition known as myocardial infarction or heart attack, this can cause heart failure or cardiac arrest and eventually death. Risk factors for coronary artery disease include obesity, smoking, high cholesterol, high blood pressure, lack of exercise and diabetes. Cancer can affect the heart, though it is exceedingly rare and has usually metastasized from another part of the body such as the lungs or breasts. This is because the heart cells quickly stop dividing and all growth occurs through size increase rather than cell division. Gallbladder The gallbladder is a hollow pear-shaped organ located posterior to the inferior middle part of the right lobe of the liver. It is variable in shape and size. It stores bile before it is released into the small intestine via the common bile duct to help with digestion of fats. It receives bile from the liver via the cystic duct, which connects to the common hepatic duct to form the common bile duct. The gallbladder gets its blood supply from the cystic artery, which in most people, emerges from the right hepatic artery. Gallstones is a common disease in which one or more stones form in the gallbladder or biliary tract. Most people are asymptomatic but if a stone blocks the biliary tract, it causes a gallbladder attack, symptoms may include sudden pain in the upper right abdomen or center of the abdomen. Nausea and vomiting may also occur. Typical treatment is removal of the gallbladder through a procedure called a cholecystectomy. 
Having gallstones is a risk factor for gallbladder cancer, which although quite uncommon, is rapidly fatal if not diagnosed early. Systems Circulatory system The circulatory system consists of the heart and blood vessels (arteries, veins and capillaries). The heart propels the circulation of the blood, which serves as a "transportation system" to transfer oxygen, fuel, nutrients, waste products, immune cells and signaling molecules (i.e. hormones) from one part of the body to another. Paths of blood circulation within the human body can be divided into two circuits: the pulmonary circuit, which pumps blood to the lungs to receive oxygen and leave carbon dioxide, and the systemic circuit, which carries blood from the heart off to the rest of the body. The blood consists of fluid that carries cells in the circulation, including some that move from tissue to blood vessels and back, as well as the spleen and bone marrow. Digestive system The digestive system consists of the mouth including the tongue and teeth, esophagus, stomach, (gastrointestinal tract, small and large intestines, and rectum), as well as the liver, pancreas, gallbladder, and salivary glands. It converts food into small, nutritional, non-toxic molecules for distribution and absorption into the body. These molecules take the form of proteins (which are broken down into amino acids), fats, vitamins and minerals (the last of which are mainly ionic rather than molecular). After being swallowed, food moves through the gastrointestinal tract by means of peristalsis: the systematic expansion and contraction of muscles to push food from one area to the next. Digestion begins in the mouth, which chews food into smaller pieces for easier digestion. Then it is swallowed, and moves through the esophagus to the stomach. In the stomach, food is mixed with gastric acids to allow the extraction of nutrients. What is left is called chyme; this then moves into the small intestine, which absorbs the nutrients and water from the chyme. What remains passes on to the large intestine, where it is dried to form feces; these are then stored in the rectum until they are expelled through the anus. Endocrine system The endocrine system consists of the principal endocrine glands: the pituitary, thyroid, adrenals, pancreas, parathyroids, and gonads, but nearly all organs and tissues produce specific endocrine hormones as well. The endocrine hormones serve as signals from one body system to another regarding an enormous array of conditions, resulting in variety of changes of function. Immune system The immune system consists of the white blood cells, the thymus, lymph nodes and lymph channels, which are also part of the lymphatic system. The immune system provides a mechanism for the body to distinguish its own cells and tissues from outside cells and substances and to neutralize or destroy the latter by using specialized proteins such as antibodies, cytokines, and toll-like receptors, among many others. Integumentary system The integumentary system consists of the covering of the body (the skin), including hair and nails as well as other functionally important structures such as the sweat glands and sebaceous glands. The skin provides containment, structure, and protection for other organs, and serves as a major sensory interface with the outside world. Lymphatic system The lymphatic system extracts, transports and metabolizes lymph, the fluid found in between cells. 
The lymphatic system is similar to the circulatory system in terms of both its structure and its most basic function, to carry a body fluid. Musculoskeletal system The musculoskeletal system consists of the human skeleton (which includes bones, ligaments, tendons, joints and cartilage) and attached muscles. It gives the body basic structure and the ability for movement. In addition to their structural role, the larger bones in the body contain bone marrow, the site of production of blood cells. Also, all bones are major storage sites for calcium and phosphate. This system can be split up into the muscular system and the skeletal system. Nervous system The nervous system consists of the body's neurons and glial cells, which together form the nerves, ganglia and gray matter, which in turn form the brain and related structures. The brain is the organ of thought, emotion, memory, and sensory processing; it serves many aspects of communication and controls various systems and functions. The special senses consist of vision, hearing, taste, and smell. The eyes, ears, tongue, and nose gather information about the body's environment. From a structural perspective, the nervous system is typically subdivided into two component parts: the central nervous system (CNS), composed of the brain and the spinal cord; and the peripheral nervous system (PNS), composed of the nerves and ganglia outside the brain and spinal cord. The CNS is mostly responsible for organizing motion, processing sensory information, thought, memory, cognition and other such functions. It remains a matter of some debate whether the CNS directly gives rise to consciousness. The peripheral nervous system (PNS) is mostly responsible for gathering information with sensory neurons and directing body movements with motor neurons. From a functional perspective, the nervous system is again typically divided into two component parts: the somatic nervous system (SNS) and the autonomic nervous system (ANS). The SNS is involved in voluntary functions like speaking and sensory processes. The ANS is involved in involuntary processes, such as digestion and regulating blood pressure. The nervous system is subject to many different diseases. In epilepsy, abnormal electrical activity in the brain can cause seizures. In multiple sclerosis, the immune system attacks the nerve linings, damaging the nerves' ability to transmit signals. Amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease, is a motor neuron disease which gradually reduces movement in patients. There are also many other diseases of the nervous system. Reproductive system The purpose of the reproductive system is to reproduce and nurture the growth of offspring. The functions include the production of germ cells and hormones. The sex organs of the male reproductive system and the female reproductive system develops and mature at puberty. These systems include the internal and external genitalia. Female puberty generally occurs between the ages of 9 and 13 and is characterized by ovulation and menstruation; the growth of secondary sex characteristics, such as growth of pubic and underarm hair, breast, uterine and vaginal growth, widening hips and increased height and weight, also occur during puberty. Male puberty sees the further development of the penis and testicles. The female inner sex organs are the two ovaries, their fallopian tubes, the uterus, and the cervix. 
At birth there are about 70,000 immature egg cells that degenerate until at puberty there are around 40,000. No more egg cells are produced. Hormones stimulate the beginning of menstruation, and the ongoing menstrual cycles. The female external sex organs are the vulva (labia, clitoris, and vestibule). The male external genitalia include the penis and the scrotum, which contains the testicles. The testicle is the gonad, the sex gland that produces the sperm cells. Unlike the egg cells in the female, sperm cells are produced throughout life. Other internal sex organs are the epididymides, vasa deferentia, and some accessory glands.

Diseases that affect the reproductive system include polycystic ovary syndrome, a number of disorders of the testicles including testicular torsion, and a number of sexually transmitted infections including syphilis, HIV, chlamydia, HPV and genital warts. Cancer can affect most parts of the reproductive system including the penis, testicles, prostate, ovaries, cervix, vagina, fallopian tubes, uterus and vulva.

Respiratory system
The respiratory system consists of the nose, nasopharynx, trachea, and lungs. It brings oxygen from the air and excretes carbon dioxide and water back into the air. First, air is pulled through the trachea into the lungs by the diaphragm pushing down, which lowers the pressure within the chest cavity. Air is briefly stored inside small sacs known as alveoli (sing.: alveolus) before being expelled from the lungs when the diaphragm relaxes. Each alveolus is surrounded by capillaries carrying deoxygenated blood, which absorbs oxygen out of the air and into the bloodstream.

For the respiratory system to function properly, there need to be as few impediments as possible to the movement of air within the lungs. Inflammation of the lungs and excess mucus are common sources of breathing difficulties. In asthma, the respiratory system is persistently inflamed, causing wheezing or shortness of breath. Pneumonia occurs through infection of the alveoli, and may be caused by tuberculosis. Emphysema, commonly a result of smoking, is caused by damage to connections between the alveoli.

Urinary system
The urinary system consists of the two kidneys, two ureters, bladder, and urethra. It removes waste materials from the blood through urine, which carries a variety of waste molecules and excess ions and water out of the body. First, the kidneys filter the blood through their respective nephrons, removing waste products such as urea and creatinine, maintaining the proper balance of electrolytes, and turning the waste products into urine by combining them with water from the blood. The kidneys filter about 150 quarts (roughly 140 liters) of blood daily, but most of it is returned to the blood stream, with only 1–2 quarts (1–2 liters) ending up as urine. The urine is brought by the ureters from the kidneys down to the bladder. The smooth muscle lining the ureter walls continuously tightens and relaxes through a process called peristalsis to force urine away from the kidneys and down into the bladder. Small amounts of urine are released into the bladder every 10–15 seconds.

The bladder is a hollow, balloon-shaped organ located in the pelvis. It stores urine until the brain signals it to relax the urinary sphincter and release the urine into the urethra, starting urination. A normal bladder can comfortably hold up to 16 ounces (half a liter) for 3–5 hours.
Numerous diseases affect the urinary system. Kidney stones are formed when materials in the urine concentrate enough to form a solid mass. Urinary tract infections are infections of the urinary tract that can cause pain when urinating and frequent urination, and can even lead to death if left untreated. Renal failure occurs when the kidneys fail to adequately filter waste from the blood and can lead to death if not treated with dialysis or kidney transplantation. Cancer can affect the bladder, kidneys, urethra and ureters, with the latter two being far rarer.

Anatomy
Human anatomy is the study of the shape and form of the human body. The human body has four limbs (two arms and two legs), a head and a neck, which connect to the torso. The body's shape is determined by a strong skeleton made of bone and cartilage, surrounded by fat (adipose tissue), muscle, connective tissue, organs, and other structures. The spine at the back of the skeleton contains the flexible vertebral column, which surrounds the spinal cord, a collection of nerve fibres connecting the brain to the rest of the body. Nerves connect the spinal cord and brain to the rest of the body. All major bones, muscles, and nerves in the body are named, with the exception of anatomical variations such as sesamoid bones and accessory muscles.

Blood vessels carry blood throughout the body, which moves because of the beating of the heart. Venules and veins collect blood low in oxygen from tissues throughout the body. These collect in progressively larger veins until they reach the body's two largest veins, the superior and inferior vena cava, which drain blood into the right side of the heart. From here, the blood is pumped into the lungs where it receives oxygen and drains back into the left side of the heart. From here, it is pumped into the body's largest artery, the aorta, and then progressively smaller arteries and arterioles until it reaches tissue. Here, blood passes from small arteries into capillaries, then into small veins, and the process begins again. Blood carries oxygen, waste products, and hormones from one place in the body to another. Blood is filtered at the kidneys and liver.

The body consists of a number of body cavities, separated areas which house different organ systems. The brain and central nervous system reside in an area protected from the rest of the body by the blood–brain barrier. The lungs sit in the pleural cavity. The intestines, liver, and spleen sit in the abdominal cavity. Height, weight, shape and other body proportions vary individually and with age and sex. Body shape is influenced by the distribution of bones, muscle and fat tissue.

Physiology
Human physiology is the study of how the human body functions. This includes the mechanical, physical, bioelectrical, and biochemical functions of humans in good health, from organs to the cells of which they are composed. The human body consists of many interacting systems of organs. These interact to maintain homeostasis, keeping the body in a stable state with safe levels of substances such as sugar and oxygen in the blood. Each system contributes to homeostasis, of itself, other systems, and the entire body. Some combined systems are referred to by joint names. For example, the nervous system and the endocrine system operate together as the neuroendocrine system. The nervous system receives information from the body, and transmits this to the brain via nerve impulses and neurotransmitters.
At the same time, the endocrine system releases hormones, such as those that help regulate blood pressure and blood volume. Together, these systems regulate the internal environment of the body, maintaining blood flow, posture, energy supply, temperature, and acid balance (pH).

Development
Development of the human body is the process of growth to maturity. The process begins with fertilisation, where an egg released from the ovary of a female is penetrated by sperm. The egg then lodges in the uterus, where an embryo and later fetus develop until birth. Growth and development occur after birth, and include both physical and psychological development, influenced by genetic, hormonal, environmental and other factors. Development and growth continue throughout life, through childhood, adolescence, and through adulthood to old age, and are referred to as the process of aging.

Society and culture

Professional study
Health professionals learn about the human body from illustrations, models, and demonstrations. Medical and dental students in addition gain practical experience, for example by dissection of cadavers. Human anatomy, physiology, and biochemistry are basic medical sciences, generally taught to medical students in their first year at medical school.

Depiction
In Western societies, the contexts for depictions of the human body include information, art and pornography. Information includes both science and education, such as anatomical drawings. Any ambiguous image not easily fitting into one of these categories may be misinterpreted, leading to disputes. The most contentious disputes are between fine art and erotic images, which define the legal distinction of which images are permitted or prohibited.

History of anatomy
In Ancient Greece, the Hippocratic Corpus described the anatomy of the skeleton and muscles. The 2nd-century physician Galen of Pergamum compiled classical knowledge of anatomy into a text that was used throughout the Middle Ages. In the Renaissance, Andreas Vesalius (1514–1564) pioneered the modern study of human anatomy by dissection, writing the influential book De humani corporis fabrica. Anatomy advanced further with the invention of the microscope and the study of the cellular structure of tissues and organs. Modern anatomy uses techniques such as magnetic resonance imaging, computed tomography, fluoroscopy and ultrasound imaging to study the body in unprecedented detail.

History of physiology
The study of human physiology began with Hippocrates in Ancient Greece, around 420 BCE, and with Aristotle (384–322 BCE), who applied critical thinking and an emphasis on the relationship between structure and function. Galen (129 – c. 216 CE) was the first to use experiments to probe the body's functions. The term physiology was introduced by the French physician Jean Fernel (1497–1558). In the 17th century, William Harvey (1578–1657) described the circulatory system, pioneering the combination of close observation with careful experiment. In the 19th century, physiological knowledge began to accumulate at a rapid rate with the cell theory of Matthias Schleiden and Theodor Schwann in 1838, that organisms are made up of cells. Claude Bernard (1813–1878) created the concept of the milieu intérieur (internal environment), which Walter Cannon (1871–1945) later said was regulated to a steady state in homeostasis. In the 20th century, the physiologists Knut Schmidt-Nielsen and George Bartholomew extended their studies to comparative physiology and ecophysiology.
Most recently, evolutionary physiology has become a distinct subdiscipline.

See also
Organ system
Outline of human anatomy
The Birth of the Clinic: An Archaeology of Medical Perception

Human body lists
List of skeletal muscles of the human body
List of organs of the human body
List of distinct cell types in the adult human body
List of human microbiota

References

Books

External links
The Book of Humans (from the late 18th and early 19th centuries) (archived 26 January 2014)
Inner Body (archived 10 December 1997)
Anatomia 1522–1867: Anatomical Plates from the Thomas Fisher Rare Book Library
Human body
[ "Physics" ]
5,210
[ "Human body", "Physical objects", "Matter" ]
54,180
https://en.wikipedia.org/wiki/LocalTalk
LocalTalk is a particular implementation of the physical layer of the AppleTalk networking system from Apple Computer. LocalTalk specifies a system of shielded twisted-pair cabling, plugged into self-terminating transceivers, running at a rate of 230.4 kbit/s. CSMA/CA was implemented as a random multiple access method.

Networking was envisioned for the Macintosh during its planning, so the Mac was given expensive RS-422-capable serial ports, first on a nine-pin D-connector, then on a mini-DIN-8 connector. The ports were driven by the Zilog SCC, which could serve as either a standard UART or handle the much more complicated HDLC protocol, a packet-oriented protocol that incorporated addressing, bit-stuffing, and packet checksumming in hardware. Coupled with the RS-422 electrical connections, this provided a reasonably high-speed data connection.

The 230.4 kbit/s bit rate is the highest in the series of standard serial bit rates (110, 150, 300, 600, 1200, 2400, 4800, 9600, 14400, 19200, 28800, 38400, 57600, 115200, 230400) derived from the 3.6864 MHz clock after the customary divide-by-16. This clock frequency, 3.6864 MHz, was chosen (in part) to support the common asynchronous baud rates up to 38.4 kbit/s using the SCC's internal baud-rate generator. When the SCC's internal PLL was used to lock to the clock embedded in the LocalTalk serial data stream (using its FM0 encoding method), a divide-by-16 setting on the PLL yielded the fastest rate available, namely 230.4 kbit/s.

Originally released as "AppleTalk Personal Network", LocalTalk used shielded twisted-pair cable with three-pin mini-DIN connectors. Cables were daisy-chained from transceiver to transceiver. Each transceiver had two three-pin mini-DIN ports, and a "pigtail" cable to connect to the Mac's DE-9 serial connector. Later, when the Mac Plus introduced the eight-pin mini-DIN serial connector, transceivers were updated as well.

A variation of LocalTalk called PhoneNET was introduced by Farallon Computing. It used standard unshielded side-by-side telephone wire, with six-position modular connectors (the same as the popular RJ11 telephone connectors) connected to a PhoneNET transceiver, instead of the expensive shielded twisted-pair cable. In addition to being lower cost, PhoneNET-wired networks were more reliable because the connections were harder to accidentally disconnect. In addition, because it used the "outer" pair of the modular connector, it could travel on many pre-existing phone cables and jacks where just the inner pair was in use for RJ11 telephone service. PhoneNET was also able to use an office's existing phone wire, allowing entire floors of computers to be easily networked. Farallon introduced a 12-port hub, which made constructing star-topology networks of up to 48 devices as easy as adding jacks at the workstations and some jumpers in the phone closet. These factors led to PhoneNET largely supplanting LocalTalk wiring in low-cost networking.

The useful life of PhoneNET was extended with the introduction of LocalTalk switching technology by Tribe Computer Works. Introduced in 1990, the Tribe LocalSwitch was a 16-port packet switch designed to speed up overloaded PhoneNET networks.

The widespread availability of Ethernet-based networking in the early 1990s led to the swift disappearance of both LocalTalk and PhoneNET. They remained in use for some time in low-cost applications and applications where Ethernet was not used.
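The clock arithmetic above is easy to reproduce. Here is a minimal sketch in Python (illustrative, not from the article; the divisor list is an assumption chosen to land on the common rates in the series):

# The 3.6864 MHz SCC clock and the divide-by-16 are from the text above;
# the list of asynchronous-rate divisors is an illustrative assumption.
CLOCK_HZ = 3_686_400

localtalk_bps = CLOCK_HZ // 16
print(localtalk_bps)  # 230400, i.e. 230.4 kbit/s

# The common asynchronous rates are integer subdivisions of that maximum:
for divisor in (2, 4, 6, 8, 12, 16, 24, 48, 96, 192):
    print(localtalk_bps // divisor)
# 115200, 57600, 38400, 28800, 19200, 14400, 9600, 4800, 2400, 1200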
Macintosh Quadra and early models of Power Macintosh supported both 10BASE2 and 10BASE-T via the Apple Attachment Unit Interface (AAUI), and all other Ethernet media via an AAUI–AUI adapter, while still supporting LocalTalk-based networking. For older Macintosh computers that did not have built-in Ethernet, a high-speed SCSI-to-Ethernet adapter was available, and was particularly popular on PowerBooks. This enabled all but the earliest Macintosh models to access a high-speed Ethernet network.

With the release of the iMac in 1998, the traditional Mac serial port, and with it the ability to use both LocalTalk and PhoneNET, disappeared from new models of Macintosh. LocalTalk-to-Ethernet bridges were introduced to allow legacy devices (especially printers) to function on newer networks. For very old Macintosh computers, LocalTalk remains the only option.

Design legacy
The LocalTalk connector had the distinction of being the first to use Apple's unified AppleTalk Connector Family design, created by Brad Bissell of Frog Design using Rick Meadows' Apple Icon Family designs. LocalTalk connectors were first released in January 1985, initially to connect the LaserWriter printer with the Macintosh family of computers as an integral part of the newly announced Macintosh Office. However, well past the move to Ethernet, the connector's design continued to be used on all of Apple's peripherals and cable connectors, as well as influencing the connectors used throughout the industry as a whole.

References

See also
AppleTalk
Econet
List of device bandwidths

Apple Inc. hardware
Computer network technology
Network protocols
Networking hardware
Physical layer protocols
Link protocols
Legacy hardware
LocalTalk
[ "Engineering" ]
1,131
[ "Computer networks engineering", "Networking hardware" ]
54,201
https://en.wikipedia.org/wiki/Sodium%20dodecyl%20sulfate
Sodium dodecyl sulfate (SDS) or sodium lauryl sulfate (SLS), sometimes written sodium laurilsulfate, is an organic compound with the formula CH3(CH2)11OSO3Na. It is an anionic surfactant used in many cleaning and hygiene products. This compound is the sodium salt of a 12-carbon organosulfate. Its hydrocarbon tail combined with a polar "headgroup" give the compound amphiphilic properties that make it useful as a detergent. SDS is also a component of mixtures produced from inexpensive coconut and palm oils. SDS is a common component of many domestic cleaning, personal hygiene and cosmetic, pharmaceutical, and food products, as well as of industrial and commercial cleaning and product formulations.

Physicochemical properties
The critical micelle concentration (CMC) in water at 25 °C is 8.2 mM, and the aggregation number at this concentration is usually considered to be about 62. The micelle ionization fraction (α) is around 0.3 (or 30%).

Applications

Cleaning and hygiene
SDS is mainly used in laundry detergents and in many other cleaning applications. It is a highly effective surfactant and is used in any task requiring the removal of oily stains and residues. For example, it is found in higher concentrations in industrial products including engine degreasers, floor cleaners, and car exterior cleaners. It is a component in hand soap, toothpastes, shampoos, shaving creams, and bubble bath formulations, for its ability to create a foam (lather), for its surfactant properties, and in part for its thickening effect.

Food additive
Sodium dodecyl sulfate, appearing as its synonym sodium lauryl sulfate (SLS), is considered a generally recognized as safe (GRAS) ingredient for food use according to the USFDA (21 CFR 172.822). It is used as an emulsifying agent and whipping aid. As an emulsifier in or with egg whites, the United States Code of Federal Regulations requires that it must not exceed 1,000 parts per million (0.1%) in egg white solids or 125 parts per million (0.0125%) in frozen or liquid egg whites, and as a whipping agent for the preparation of marshmallows it must not exceed 0.5% of the weight of gelatine. SLS is reported to temporarily diminish perception of sweetness.

Laboratory applications
SDS is used in cleaning procedures, and is commonly used as a component for lysing cells during RNA extraction or DNA extraction, inhibiting the activity of nucleases (enzymes that can degrade DNA) and thereby protecting the integrity of the isolated genetic material, and for denaturing proteins in preparation for electrophoresis in the SDS-PAGE technique. In the case of SDS-PAGE, the compound works by disrupting non-covalent bonds in the proteins, and so denaturing them, i.e. causing the protein molecules to lose their native conformations and shapes. By binding to proteins at a ratio of one SDS molecule per two amino acid residues, the negatively charged detergent provides all proteins with a similar net negative charge and therefore a similar charge-to-mass ratio. In this way, the difference in mobility of the polypeptide chains in the gel can be attributed solely to their length, as opposed to both their native charge and shape. This separation based on the size of the polypeptide chain simplifies the analysis of protein molecules.
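As a rough illustration of how the CMC and aggregation number quoted above are used together, here is a back-of-the-envelope sketch in Python (the 10 mM total concentration is an arbitrary example, not a value from the article):

# Estimate the micelle concentration in an SDS solution above the CMC.
cmc_mM = 8.2        # critical micelle concentration at 25 degrees C (from the text)
n_agg = 62          # SDS molecules per micelle (from the text)
total_mM = 10.0     # hypothetical total SDS concentration

# To a first approximation, surfactant in excess of the CMC forms micelles:
micellized_mM = max(total_mM - cmc_mM, 0.0)
micelle_uM = 1000 * micellized_mM / n_agg
print(f"about {micelle_uM:.0f} micromolar micelles")  # about 29 micromolar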
Pharmaceutical applications
Sodium lauryl sulfate is widely used in the pharmaceutical field as an ionic solubilizer and emulsifier suitable for applications in liquid dispersions, solutions, emulsions and microemulsions, tablets, foams and semi-solids such as creams, lotions and gels. Additionally, SLS aids in tablet wettability, as well as lubrication during manufacturing. Brand names of pharma-grade SLS include Kolliphor SLS and Kolliphor SLS Fine.

Miscellaneous applications
SLS is used in an improved technique for preparing brain tissues for study by optical microscopy. The technique, which has been branded as CLARITY, was the work of Karl Deisseroth and coworkers at Stanford University, and involves infusion of the organ with an acrylamide solution to bind the macromolecules of the organ (proteins, nucleic acids, etc.), followed by thermal polymerization to form a "brain–hydrogel" (a mesh interspersed throughout the tissue to fix the macromolecules and other structures in space), and then by lipid removal using SDS to eliminate light scattering with minimal protein loss, rendering the tissue quasi-transparent.

Along with sodium dodecylbenzene sulfonate and Triton X-100, aqueous solutions of SDS are popular for dispersing or suspending nanotubes, such as carbon nanotubes.

Other uses
SLS has been proposed as a potentially effective topical microbicide, for intravaginal use, to inhibit and possibly prevent infection by various enveloped and non-enveloped viruses such as the herpes simplex viruses, HIV, and the Semliki Forest virus. Liquid membranes formed from SDS in water have been demonstrated to work as unusual particle separators. The device acts as a reverse filter, allowing large particles to pass while capturing smaller particles.

Production
Dodecyl alcohol is sulfated using sulfur trioxide; the reaction proceeds by initial formation of the pyrosulfate, with the overall conversion CH3(CH2)11OH + SO3 → CH3(CH2)11OSO3H. Several million tons are produced annually. SDS can also be synthesized by treating lauryl alcohol with chlorosulfuric acid. The resulting half ester of sulfuric acid is then neutralized with alkali. Lauryl alcohol can be used in pure form or as a mixture of fatty alcohols. When produced from these sources, "SDS" products are a mixture of various sodium alkyl sulfates, with SDS being the main component. For instance, when produced from coconut oil, SDS is one component, along with amphiphiles of other chain lengths, of the mixture known as sodium coco sulfate (SCS). SDS is available commercially in powder, pellet, and other forms (each differing in rates of dissolution), as well as in aqueous solutions of varying concentrations.

Safety
SDS is not carcinogenic in low concentrations according to some studies. Like all detergents, sodium lauryl sulfate removes oils from the skin, and can cause skin and eye irritation. It has been shown to irritate the skin of the face with prolonged and constant exposure (more than an hour) in young adults. SDS may worsen skin problems in individuals with chronic skin hypersensitivity, with some people being affected more than others.

Oral concerns
SDS is a common ingredient in toothpastes due to its low cost, its lack of impact on taste, and its desirable action as a foaming agent.

VSCs
SDS may reduce the amount of bad breath-causing volatile sulfur compounds (VSCs) in the mouth.
A series of small crossover studies (25–34 patients) has supported the efficacy of SLS in the reduction of VSCs and its related positive impact on breath malodor, although these studies have been generally noted to reflect technical challenges in the control of study design variables.

Dry mouth
Primary sources from the group of Irma Rantanen at the University of Turku, Finland, claim that SLS-containing pastes cause more dry mouth (xerostomia) than their proposed alternative. However, a 2011 Cochrane review of these studies, and of the more general area, concludes that there "is no strong evidence... that any topical therapy is effective for relieving the symptom of dry mouth."

Mouth ulceration
A safety concern has been raised on the basis of several studies regarding the effect of toothpaste SDS on aphthous ulcers, commonly referred to as mouth ulcers, canker sores, or white sores. According to the NHS, SLS is a cause for concern for mouth ulcers. As Lippert noted, as of 2013, "very few... marketed toothpastes contain a surfactant other than SLS [SDS]," and leading manufacturers continue to formulate their products with SDS.

See also
Sodium tetradecyl sulfate, another anionic surfactant in common use
Mouth ulcer

References

External links
Josh Clark, "Why does orange juice taste bad after you brush your teeth?", published on Aug 24, 2018
Science (journal)

Organic sodium salts
Cleaning product components
Anionic surfactants
Laxatives
Excipients
Reagents for biochemistry
Sulfate esters
Dodecyl compounds
Sodium dodecyl sulfate
[ "Chemistry", "Technology", "Biology" ]
1,826
[ "Biochemistry methods", "Salts", "Organic sodium salts", "Biochemistry", "Reagents for biochemistry", "Components", "Cleaning product components" ]
54,205
https://en.wikipedia.org/wiki/Diaper
A diaper (NAmE) or a nappy (BrE, AuE, IrE) is a type of underwear that allows the wearer to urinate or defecate without using a toilet, by absorbing or containing waste products to prevent soiling of outer clothing or the external environment. When diapers become wet or soiled, they require changing, generally by a second person such as a parent or caregiver. Failure to change a diaper on a sufficiently regular basis can result in skin problems around the area covered by the diaper.

Diapers are made of cloth or synthetic disposable materials. Cloth diapers are composed of layers of fabric such as cotton, hemp, bamboo, microfiber, or even plastic fibers such as PLA or PU, and can be washed and reused multiple times. Disposable diapers contain absorbent chemicals and are thrown away after use.

Diapers are primarily worn by infants, toddlers who are not yet toilet trained, and by children who experience bedwetting. They are also used by adults under certain circumstances or with various conditions, such as incontinence. Adult users can include those of advanced age, patients bed-bound in a hospital, individuals with certain types of physical or mental disability, and people working in extreme conditions, such as astronauts. It is not uncommon for people to wear diapers under dry suits.

History

Etymology
The Middle English word diaper originally referred to a type of cloth rather than the use thereof; "diaper" was the term for a pattern of repeated, rhombic shapes, and later came to describe white cotton or linen fabric with this pattern. According to the Oxford Dictionary, it is a piece of soft cloth or other thick material that is folded around a baby's bottom and between its legs to absorb and hold its body waste. The first cloth diapers consisted of a specific type of soft tissue sheet, cut into geometric shapes. The pattern visible in linen and other types of woven fabric was called "diaper". This meaning of the word has been in use since the 1590s in England. By the 19th century, baby diapers were being sewn from linen, giving us the modern-day reading of the word "diaper". This usage stuck in the United States and Canada following the British colonization of North America, but in the United Kingdom the word "nappy" took its place. Most sources believe nappy is a diminutive form of the word napkin, which itself was originally a diminutive.

Development
In the 19th century, the modern diaper began to take shape, and mothers in many parts of the world used cotton material, held in place with a fastening, eventually the safety pin. Cloth diapers in the United States were first mass-produced in 1887 by Maria Allen. In the UK, diapers were made out of terry towelling, often with an inner lining made out of soft muslin. Here is an extract from 'The Modern Home Doctor', written by physicians in the UK in 1935:

Nice old, soft bits of good Turkish towelling, properly washed, will make the softest of diaper coverings, inside which specially absorbent napkins (diapers), see below at 1A, soft, light, and easily washed, are contained. These should rarely be soiled once regular habits have been inculcated, especially during the night period in which it is most important to prevent habit formation.

1A - (squares of butter muslin or Harrington's packed rolls of "mutton cloth" in packets, sold for polishing motor-cars, would do equally well and are very cheap and soft)

Wool pants, or, once they became available, rubber pants, were sometimes used over the cloth diaper to prevent leakage.
Doctors believed that rubber pants were harmful because they thought the rubber acted as a poultice and damaged the skin of infants. The constant problem to be overcome was diaper rash, and the infection thereof. The concern was that lack of air circulation would worsen this condition. While lack of air circulation is a factor, it was later found that poor hygiene involving inefficiently washed diapers and infrequent changes of diapers, along with allowing the baby to lie for prolonged periods of time with fecal matter in contact with the skin, were the two main causes of these problems.

In the 20th century, the disposable diaper was conceived. In the 1930s, Robinsons of Chesterfield had what were labeled "Destroyable Babies Napkins" listed in their catalogue for the wholesale market. In 1944, Hugo Drangel of the Swedish paper company Pauliström suggested a conceptual design which would entail the placing of sheets of paper tissue (cellulose wadding) inside the cloth diaper and rubber pants. However, cellulose wadding was rough against the skin and crumbled into balls when exposed to moisture.

In 1946, Marion Donovan used a shower curtain from her bathroom to create the "Boater", a diaper cover made from army surplus nylon parachute cloth. First sold in 1949 at Saks Fifth Avenue's flagship store in New York City, the design was patented in 1951 by Donovan, who subsequently sold the rights to the waterproof diaper for $1 million. Donovan also designed a paper disposable diaper, but was unsuccessful in marketing it.

In 1947, Scottish housewife Valerie Hunter Gordon started developing and making Paddi, a two-part system consisting of a disposable pad (made of cellulose wadding covered with cotton wool) worn inside an adjustable plastic garment with press-studs/snaps. Initially, she used old parachutes for the garment. She applied for the patent in April 1948, and it was granted for the UK in October 1949. Initially, the big manufacturers were unable to see the commercial possibilities of disposable diapers. In 1948, Gordon made over 400 Paddis herself using her sewing machine at the kitchen table. Her husband had unsuccessfully approached several companies for help until he had a chance meeting with Sir Robert Robinson at a business dinner. In November 1949, Valerie Gordon signed a contract with Robinsons of Chesterfield, who then went into full production. In 1950, Boots UK agreed to sell Paddi in all their branches. In 1951, the Paddi patent was granted for the US and worldwide. Shortly after that, Playtex and several other large international companies tried unsuccessfully to buy out Paddi from Robinsons. Paddi was very successful for many years until the advent of "all-in-one" diapers.

In Sweden, Hugo Drangel's daughter Lil Karhola Wettergren elaborated her father's original idea in 1956 by adding a garment (again making a two-part system like Paddi). However, she met the same problem: purchasing managers declared they would never allow their wives to "put paper on their children."

After the Second World War, mothers increasingly wanted freedom from washing diapers so that they could work and travel, causing an increasing demand for disposable diapers. During the 1950s, companies such as Johnson and Johnson, Kendall, Parke-Davis, Playtex, and Molnlycke entered the disposable diaper market, and in 1956, Procter & Gamble began researching disposable diapers.
Victor Mills, along with his project group including William Dehaas (both of whom worked for the company), invented what would be trademarked "Pampers". Although Pampers were conceptualized in 1959, the diapers themselves were not launched into the market until 1961. Pampers now accounts for more than $10 billion in annual revenue at Procter & Gamble.

As Audrey Quinn recounts of the 1980s "Diaper Wars": "Over the next few decades, the disposable diaper industry boomed and the competition between Procter & Gamble's Pampers and Kimberly Clark's Huggies resulted in lower prices and drastic changes to diaper design."

Several improvements were made, such as the use of double gussets to improve diaper fit and containment. As stated in Procter & Gamble's initial 1973 patent for the use of double gussets in a diaper, "The double gusset folded areas tend to readily conform to the thigh portions of the leg of the infant. This allows quick and easy fitting and provides a snug and comfortable diaper fit that will neither bind nor wad on the infant...as a result of this snugger fit obtained because of this fold configuration, the diaper is less likely to leak or, in other words, its containment characteristics are greatly enhanced." Further developments in diaper design were made, such as the introduction of refastenable tapes, the "hourglass shape" to reduce bulk at the crotch area, and the 1984 introduction of super-absorbent material from polymers known as sodium polyacrylate, originally developed in 1966.

Types

Disposable
The first waterproof diaper cover was invented in 1946 by Marion Donovan, a professional-turned-housewife who wanted to ensure her children's clothing and bedding remained dry while they slept. She also invented the first paper diapers, but executives did not invest in this idea, and it was consequently scrapped for over ten years until Procter & Gamble used Donovan's design ideas to create Pampers. Another disposable diaper design was created by Valerie Hunter Gordon and patented in 1948.

Since their introduction, product innovations have included the use of superabsorbent polymers, resealable tapes, and elasticized waistbands. Disposable diapers are now much thinner and much more absorbent. The product range has more recently been extended into the children's toilet-training phase with the introduction of training pants and pant diapers, which are pulled on like ordinary undergarments.

Modern disposable baby diapers and incontinence products have a layered construction, which allows the transfer and distribution of urine to an absorbent core structure where it is locked in. Basic layers are an outer shell of breathable polyethylene film or a nonwoven-and-film composite which prevents wetness and soil transfer; an inner absorbent layer of a mixture of air-laid paper and superabsorbent polymers for wetness; and a layer nearest the skin of nonwoven material, with a distribution layer directly beneath which transfers wetness to the absorbent layer. Other common features of disposable diapers include one or more pairs of either adhesive or mechanical fastening tapes to keep the diaper securely fastened. Some diapers have tapes which are refastenable, to allow adjusting of fit or reapplication after inspection. Elasticized fabric single and double gussets around the leg and waist areas aid in fitting and in containing urine or stool which has not been absorbed.
Baby diapers now have wetness indicators, which consist of a moisture-sensitive ink printed on the front of the diaper, as either a fading design or a color-changing line, to alert the carer or user that the diaper is wet. A disposable diaper may also include an inner fabric designed to hold moisture against the skin for a brief period before absorption, to alert a toilet-training or bedwetting user that they have urinated. Most materials in the diaper are held together with the use of a hot-melt adhesive, which is applied in spray form or in multiple lines; an elastic hot melt is also used to help with pad integrity when the diaper is wet. Some disposable diapers include fragrance, lotions or essential oils in order to help mask the smell of a soiled diaper, or to protect the skin.

Care of disposable diapers is minimal, and primarily consists of keeping them in a dry place before use, with proper disposal in a garbage receptacle upon soiling. Stool is supposed to be deposited in the toilet, but is generally put in the garbage with the rest of the diaper.

Buying the right size of disposable diaper can be a little difficult for first-time parents, since different brands tend to have different sizing standards. Baby diaper sizes are generally based on the child's weight (kg or lbs) and not determined by age as in clothing or shoes. Common disposable baby diaper brands in the US include Huggies, Pampers, and Luvs.

Cloth diaper
Cloth diapers are reusable and can be made from natural fibers, synthetic materials, or a combination of both. They are often made from industrial cotton, which may be bleached white or left the fiber's natural color. Other natural fiber cloth materials include wool, bamboo, and unbleached hemp. Man-made materials such as an internal absorbent layer of microfiber toweling or an external waterproof layer of polyurethane laminate (PUL) may be used. Polyester fleece and faux suedecloth are often used inside cloth diapers as a "stay-dry" wicking liner because of the non-absorbent properties of those synthetic fibers.

Traditionally, cloth diapers consisted of a folded square or rectangle of cloth, fastened with safety pins. Today, most cloth diapers are fastened with hook-and-loop tape (velcro) or snaps. Modern cloth diapers come in a host of shapes, including preformed cloth diapers, all-in-one diapers with waterproof exteriors, fitted diapers with covers, and pocket or "stuffable" diapers, which consist of a water-resistant outer shell sewn with an opening for insertion of absorbent material inserts. Many design features of modern cloth diapers have followed directly from innovations initially developed in disposable diapers, such as the use of the hourglass shape, materials to separate moisture from skin, and the use of double gussets, or an inner elastic band, for better fit and containment of waste material. Several cloth diaper brands use variations of the double gusset from Procter & Gamble's original 1973 Pampers patent.

Compostable diapers
Compostable diapers can be made from a range of different plant-based materials. Dyper makes their compostable diapers from bamboo fibers.

Usage

Children
Babies may have their diapers changed five or more times a day. Parents and other primary childcare givers often carry spare diapers and necessities for diaper changing in a specialized diaper bag. Diapering may serve as a good bonding experience for parent and child.
Children who wear diapers may experience skin irritation, commonly referred to as diaper rash, due to continual contact with fecal matter, as feces contains urease, which catalyzes the conversion of the urea in urine to ammonia, which can irritate the skin and cause painful redness.

The age at which children should cease regularly wearing diapers and toilet training should begin is a subject of debate. Proponents of baby-led potty training and elimination communication argue that potty training can begin at birth with multiple benefits, with diapers only used as a backup. Keeping children in diapers beyond infancy can be controversial, with family psychologist John Rosemond claiming it is a "slap to the intelligence of a human being that one would allow baby to continue soiling and wetting himself past age two." Pediatrician T. Berry Brazelton, however, believes that toilet training is the child's choice, and has encouraged this view in various commercials for Pampers Size 6, a diaper for older children. Brazelton warns that enforced toilet training can cause serious long-term problems, and that it is the child's decision when to stop wearing diapers, not the parents'.

Children typically achieve daytime continence and stop wearing diapers during the day between the ages of two and four, depending on culture, diaper type, parental habits, and the child's personality. However, it is becoming increasingly common for children five to eleven years old to still wear diapers during the day, due to the child's opposition to toilet training, neglect, or unconventional parenting techniques. Other children may use diapers past toileting age due to disability, developmental disorders, or other medical reasons. This can pose a number of problems if the child is sent to school wearing diapers, including teasing from classmates and health issues resulting from soiled diapers. There has been recent pushback from teachers concerning a trend of more children in diapers: if a child soils themselves or their diaper, the teacher has to stop the lesson to focus on one child, which is distracting and takes away from the learning environment.

Most children continue to wear diapers at night for a period of time following daytime continence. Older children may have problems with bladder control (primarily at night) and may wear diapers while sleeping to control bedwetting. Approximately 16% of children in the U.S. over the age of 5 wet the bed, 5% of children over 10 wet the bed, and 2% of children over 15 wet the bed. Some companies have diaper products specifically designed for bedwetting, typically featuring higher leak guards and a pull-on style similar to training pants. If bedwetting becomes a concern, the current recommendation is to consider forgoing the use of a diaper at night, as diapers may prevent the child from wanting to get out of bed, although they are not a primary cause of bedwetting. This is particularly the case for children over the age of 8.

Training pants
Manufacturers have designed "training pants" which bridge the gap between baby diapers and normal underwear during the toilet training process. These are similar to infant diapers in construction, but can be pulled on like normal underwear. Training pants are also available for children who experience enuresis.

Adults
Although most commonly worn by and associated with babies and children, diapers are also worn by adults for a variety of reasons.
In the medical community, they are usually referred to as "adult absorbent briefs" rather than diapers, which are associated with children and may have a negative connotation. The usage of adult diapers can be a source of embarrassment, and products are often marketed under euphemisms such as incontinence pads. The most common adult users of diapers are those with medical conditions which cause them to experience urinary incontinence (like bedwetting) or fecal incontinence, those who are bedridden or otherwise limited in their mobility, or those with other emotional, physical, or mental needs. It is important that the user select the proper type, size, and absorbency level for their needs, as every diaper design is different.

Scuba divers wear diapers under their dry suits during long exposures. The Maximum Absorbency Garment is an adult-sized diaper with extra absorption material that NASA astronauts wear during liftoff, landing, and extra-vehicular activity (EVA). The NASA Maximum Absorbency Garment, however, is designed to retain only 2 liters, while the "Little Rawrs" diaper line from the commercial brand Tykables is ISO-rated to retain up to 7.5 liters of urine.

Animals
Diapers and diaperlike products are sometimes used on pets, laboratory animals, or working animals. This is often due to the animal not being housebroken, or for older, sick, or injured pets who have become incontinent. In some cases, these are simply baby diapers with holes cut for the tails to fit through. In other cases, they are diaperlike waste collection devices. The diapers used on primates, canines, etc. are much like the diapers used by humans. The diapers used on equines are intended to catch excretions, as opposed to absorbing them.

In 2002, the Vienna city council proposed that horses be made to wear diapers to prevent them from defecating in the street. This caused controversy amongst animal rights groups, who claimed that wearing diapers would be uncomfortable for the animals. The campaigners protested by lining the streets wearing diapers themselves, which spelled out the message "Stop pooh bags". In the Kenyan town of Limuru, donkeys were also diapered at the council's behest. A similar scheme in Blackpool ordered that horses be fitted with rubber and plastic diapers to stop them littering the promenade with dung. The council consulted the RSPCA to ensure that the diapers were not harmful to the horses' welfare.

Other animals that are sometimes diapered include female dogs when ovulating and thus bleeding, as well as monkeys, apes, and chickens. Diapers are often seen on trained animals who appear on TV shows, in movies, or for live entertainment or educational appearances.

Cost of disposable diapers
More than US$9 billion is spent on disposable diapers in North America each year. As of 2018, name-brand, mid-range disposable diapers in the U.S., such as Huggies and Pampers, were sold at an average cost of approximately $0.20–0.30 each, and their manufacturers earned about two cents in profit from each diaper sold. Premium brands had eco-friendly features, and sold for approximately twice that price. Generic disposable diapers cost less per diaper, at an average price of $0.15 each, and the typical manufacturer's profit was about one cent per diaper. However, the low-cost diapers needed to be changed more frequently, so the total cost savings was limited, as the lower cost per diaper was offset by the need to buy more diapers.
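As a rough worked example of what these per-diaper prices add up to over a diapering period, here is a sketch in Python (the 6,000-change total is an assumed figure, broadly in line with the "several thousand diapers" estimate in the next section):

# Back-of-the-envelope total diapering cost from the per-diaper prices above.
changes = 6000                    # assumed diapers used before potty training
prices = {"name brand": 0.25,     # midpoint of the $0.20-0.30 range
          "generic": 0.15}

for label, price in prices.items():
    print(f"{label}: ${changes * price:,.0f}")
# name brand: $1,500   generic: $900 (before the extra changes generics may need)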
In Latin America, some manufacturers sold disposable diapers at a price of approximately US$0.10 each.

Environmental impact of cloth versus disposable diapers
An average child will go through several thousand diapers in their life. Since disposable diapers are discarded after a single use, usage of disposable diapers increases the burden on landfill sites, and increased environmental awareness has led to a growth in campaigns for parents to use reusable alternatives such as cloth or hybrid diapers. An estimated 27.4 billion disposable diapers are used each year in the US, resulting in a possible 3.4 million tons of used diapers added to landfills each year. A discarded disposable diaper takes approximately 450 years to decompose.

The environmental impact of cloth as compared to disposable diapers has been studied several times. In one cradle-to-grave study sponsored by the National Association of Diaper Services (NADS) and conducted by Carl Lehrburger and colleagues, results stated that disposable diapers produce seven times more solid waste when discarded and three times more waste in the manufacturing process. In addition, effluents from the plastic, pulp, and paper industries are believed to be far more hazardous than those from the cotton-growing and -manufacturing processes. Single-use diapers consume less water than reusables laundered at home, but more than those sent to a commercial diaper service. Washing cloth diapers at home uses 50 to 70 gallons (approx. 189 to 264 litres) of water every three days, which is roughly equivalent to flushing the toilet 15 times a day, unless the user has a high-efficiency washing machine. An average diaper service puts its diapers through an average of 13 water changes, but uses less water and energy per diaper than one laundry load at home.

In October 2008, "An updated lifecycle assessment study for disposable and reusable nappies" by the UK Environment Agency and Department for Environment, Food and Rural Affairs stated that reusable diapers can cause significantly less (up to 40 per cent) or significantly more damage to the environment than disposable ones, depending mostly on how parents wash and dry them. The "baseline scenario" showed that the difference in greenhouse-gas emissions was insignificant (in fact, disposables even scored slightly better). However, much better results (emission cuts of up to 40 per cent) could be achieved by using reusable diapers more rationally. "The report shows that, in contrast to the use of disposable nappies, it is consumers' behaviour after purchase that determines most of the impacts from reusable nappies. Cloth nappy users can reduce their environmental impacts by:
Line drying outside whenever possible
Tumble drying as little as possible
When replacing appliances, choosing more energy efficient appliances (A+ rated machines [according to the EU environmental rating] are preferred)
Not washing above 60 °C
Washing fuller loads
Using baby-led potty training techniques to reduce number of soiled nappies
Reusing nappies on other children."

There are variations in the care of cloth diapers that can account for different measures of environmental impact. For example, using a cloth diaper laundering service involves additional pollution from the vehicle that picks up and drops off deliveries. Yet such a service uses less water per diaper in the laundering process.
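The home-laundering comparison above can be sanity-checked with a little arithmetic. A sketch in Python (the 1.6 gallons per flush for a modern low-flow toilet is an outside assumption brought in only for the check):

# Sanity check: is 50-70 gallons per three days really ~15 toilet flushes a day?
GALLONS_PER_FLUSH = 1.6            # assumed modern low-flow toilet

for gallons_per_3_days in (50, 70):
    per_day = gallons_per_3_days / 3
    flushes = per_day / GALLONS_PER_FLUSH
    print(f"{gallons_per_3_days} gal / 3 days = {per_day:.1f} gal/day = {flushes:.0f} flushes/day")
# The 70-gallon case works out to about 15 flushes a day, matching the text.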
Some people who launder cloth diapers at home wash each load twice, considering the first wash a "prewash", thus doubling the energy and water usage from laundering. Cloth diapers are most commonly made of cotton. "Conventional cotton is one of the most chemically-dependent crops, sucking up 10% of all agricultural chemicals and 25% of insecticides on 3% of our arable land; that's more than any other crop per unit." This effect can be mitigated by using organic cotton or other materials, such as bamboo and hemp.

Another aspect to consider when choosing between disposable diapers and cloth diapers is cost. It is estimated that an average baby will use from $1,500 to $2,000 or more in disposable diapers before being potty-trained. In contrast, cloth diapers, while initially more expensive than disposables, cost about $100 to $300 for a basic set if bought new, although costs can rise with more expensive versions. The cost of washing and drying diapers must also be considered. The basic set, if one-sized, can last from birth to potty-training. Another factor in the impact of reusable cloth diapers is the ability to re-use the diapers for subsequent children or to sell them on. These factors can alleviate the environmental and financial impact from the manufacture, sale and use of brand-new reusable diapers.

See also
Adult diaper
Changing table
Diaper bag
Infant clothing
Swim diaper
Diaper Genie
Baby-led potty training
Diaper fetishism
Marion Donovan
Training pants

References

Babycare
Children's clothing
Infancy
Infants' clothing
Undergarments
Disposable products
Clothing controversies
Environmental controversies
Hygiene
Diaper
[ "Biology" ]
5,516
[ "Diapers", "Excretion" ]
54,217
https://en.wikipedia.org/wiki/Pigeonhole%20principle
In mathematics, the pigeonhole principle states that if n items are put into m containers, with n > m, then at least one container must contain more than one item. For example, of three gloves, at least two must be right-handed or at least two must be left-handed, because there are three objects but only two categories of handedness to put them into. This seemingly obvious statement, a type of counting argument, can be used to demonstrate possibly unexpected results. For example, given that the population of London is more than one unit greater than the maximum number of hairs that can be on a human's head, the principle requires that there must be at least two people in London who have the same number of hairs on their heads.

Although the pigeonhole principle appears as early as 1624 in a book attributed to Jean Leurechon, it is commonly called Dirichlet's box principle or Dirichlet's drawer principle after an 1834 treatment of the principle by Peter Gustav Lejeune Dirichlet under the name Schubfachprinzip ("drawer principle" or "shelf principle").

The principle has several generalizations and can be stated in various ways. In a more quantified version: for natural numbers k and m, if n = km + 1 objects are distributed among m sets, the pigeonhole principle asserts that at least one of the sets will contain at least k + 1 objects. For arbitrary n and m, this generalizes to k + 1 = ⌊(n − 1)/m⌋ + 1 = ⌈n/m⌉, where ⌊x⌋ and ⌈x⌉ denote the floor and ceiling functions, respectively.

Though the principle's most straightforward application is to finite sets (such as pigeons and boxes), it is also used with infinite sets that cannot be put into one-to-one correspondence. To do so requires the formal statement of the pigeonhole principle: "there does not exist an injective function whose codomain is smaller than its domain". Advanced mathematical proofs like Siegel's lemma build upon this more general concept.

Etymology
Dirichlet published his works in both French and German, using either the German Schubfach or the French tiroir. The strict original meaning of these terms corresponds to the English drawer, that is, an open-topped box that can be slid in and out of the cabinet that contains it. (Dirichlet wrote about distributing pearls among drawers.) These terms morphed to pigeonhole in the sense of a small open space in a desk, cabinet, or wall for keeping letters or papers, metaphorically rooted in structures that house pigeons.

Because furniture with pigeonholes is commonly used for storing or sorting things into many categories (such as letters in a post office or room keys in a hotel), the translation pigeonhole may be a better rendering of Dirichlet's original "drawer". That understanding of the term pigeonhole, referring to some furniture features, is fading—especially among those who do not speak English natively but as a lingua franca in the scientific world—in favor of the more pictorial interpretation, literally involving pigeons and holes. The suggestive (though not misleading) interpretation of "pigeonhole" as "dovecote" has lately found its way back to a German back-translation of the "pigeonhole principle" as the "Taubenschlagprinzip". Besides the original terms Schubfachprinzip in German and principe des tiroirs in French, other literal translations are still in use in Arabic, Bulgarian, Chinese, Danish, Dutch, Hungarian, Italian, Japanese, Persian, Polish, Portuguese, Swedish, Turkish, and Vietnamese.

Examples

Sock picking
Suppose a drawer contains a mixture of black socks and blue socks, each of which can be worn on either foot.
You pull a number of socks from the drawer without looking. What is the minimum number of pulled socks required to guarantee a pair of the same color? By the pigeonhole principle (m = 2, using one pigeonhole per color), the answer is three (n = 3 items). Either you have three of one color, or you have two of one color and one of the other.

Hand shaking
If n people can shake hands with one another (where n > 1), the pigeonhole principle shows that there is always a pair of people who will shake hands with the same number of people. In this application of the principle, the "hole" to which a person is assigned is the number of hands that person shakes. Since each person shakes hands with some number of people from 0 to n − 1, there are n possible holes. On the other hand, either the "0" hole, the "n − 1" hole, or both must be empty, for it is impossible (if n > 1) for some person to shake hands with everybody else while some person shakes hands with nobody. This leaves n people to be placed into at most n − 1 non-empty holes, so the principle applies.

This hand-shaking example is equivalent to the statement that in any graph with more than one vertex, there is at least one pair of vertices that share the same degree. This can be seen by associating each person with a vertex and each edge with a handshake.

Hair counting
One can demonstrate there must be at least two people in London with the same number of hairs on their heads as follows. Since a typical human head has an average of around 150,000 hairs, it is reasonable to assume (as an upper bound) that no one has more than 1,000,000 hairs on their head (m = 1 million holes). There are more than 1,000,000 people in London (n is bigger than 1 million items). Assigning a pigeonhole to each number of hairs on a person's head, and assigning people to pigeonholes according to the number of hairs on their heads, there must be at least two people assigned to the same pigeonhole by the 1,000,001st assignment (because they have the same number of hairs on their heads; or, n > m). Assuming London has 9.002 million people, it follows that at least ten Londoners have the same number of hairs, as having nine Londoners in each of the 1 million pigeonholes accounts for only 9 million people.

For the average case (m = 150,000) with the constraint of fewest overlaps, there will be at most one person assigned to every pigeonhole, and the 150,001st person will be assigned to the same pigeonhole as someone else. In the absence of this constraint, there may be empty pigeonholes because the "collision" happens before the 150,001st person. The principle just proves the existence of an overlap; it says nothing about the number of overlaps (which falls under the subject of probability distribution).

There is a passing, satirical, allusion in English to this version of the principle in A History of the Athenian Society, prefixed to A Supplement to the Athenian Oracle: Being a Collection of the Remaining Questions and Answers in the Old Athenian Mercuries (printed for Andrew Bell, London, 1710). It seems that the question whether there were any two persons in the World that have an equal number of hairs on their head? had been raised in The Athenian Mercury before 1704.

Perhaps the first written reference to the pigeonhole principle appears in a short sentence from the French Jesuit Jean Leurechon's 1622 work Selectæ Propositiones: "It is necessary that two men have the same number of hairs, écus, or other things, as each other."
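Returning to the hand-shaking example, the equivalent graph statement is easy to spot-check computationally. A minimal sketch in Python (illustrative, not from the article; random graphs stand in for "any graph"):

import random
from collections import Counter

# In any simple graph with more than one vertex, two vertices share a degree.
def has_repeated_degree(n, p=0.5):
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:   # add edge {i, j}: a "handshake"
                degree[i] += 1
                degree[j] += 1
    return max(Counter(degree).values()) >= 2

# Degrees lie in {0, ..., n-1}, but 0 and n-1 cannot both occur, so the
# n vertices fall into at most n-1 usable holes, forcing a repeat.
assert all(has_repeated_degree(n) for n in range(2, 40) for _ in range(10))
print("every trial produced two vertices of equal degree")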
Hair counting One can demonstrate there must be at least two people in London with the same number of hairs on their heads as follows. Since a typical human head has an average of around 150,000 hairs, it is reasonable to assume (as an upper bound) that no one has more than 1,000,000 hairs on their head (m = 1 million holes). There are more than 1,000,000 people in London (n is bigger than 1 million items). Assigning a pigeonhole to each number of hairs on a person's head, and assigning people to pigeonholes according to the number of hairs on their heads, there must be at least two people assigned to the same pigeonhole by the 1,000,001st assignment (because they have the same number of hairs on their heads; or, n > m). Assuming London has 9.002 million people, it follows that at least ten Londoners have the same number of hairs, as having nine Londoners in each of the 1 million pigeonholes accounts for only 9 million people. For the average case (m = 150,000) with the constraint of fewest overlaps, there will be at most one person assigned to every pigeonhole and the 150,001st person assigned to the same pigeonhole as someone else. In the absence of this constraint, there may be empty pigeonholes because the "collision" happens before the 150,001st person. The principle just proves the existence of an overlap; it says nothing about the number of overlaps (which falls under the subject of probability distribution). There is a passing, satirical, allusion in English to this version of the principle in A History of the Athenian Society, prefixed to A Supplement to the Athenian Oracle: Being a Collection of the Remaining Questions and Answers in the Old Athenian Mercuries (printed for Andrew Bell, London, 1710). It seems that the question whether there were any two persons in the World that have an equal number of hairs on their head? had been raised in The Athenian Mercury before 1704. Perhaps the first written reference to the pigeonhole principle appears in a short sentence from the French Jesuit Jean Leurechon's 1622 work Selectæ Propositiones: "It is necessary that two men have the same number of hairs, écus, or other things, as each other." The full principle was spelled out two years later, with additional examples, in another book that has often been attributed to Leurechon, but might be by one of his students. The birthday problem The birthday problem asks, for a set of n randomly chosen people, what is the probability that some pair of them will have the same birthday? The problem itself is mainly concerned with counterintuitive probabilities, but we can also tell by the pigeonhole principle that among 367 people, there is at least one pair of people who share the same birthday with 100% probability, as there are only 366 possible birthdays to choose from. Team tournament Imagine seven people who want to play in a tournament of teams (n = 7 items), with a limitation of only four teams (m = 4 holes) to choose from. The pigeonhole principle tells us that they cannot all play for different teams; there must be at least one team featuring at least two of the seven players. Subset sum Any subset of size six from the set S = {1, 2, 3, ..., 9} must contain two elements whose sum is 10. The pigeonholes will be labeled by the two-element subsets {1, 9}, {2, 8}, {3, 7}, {4, 6} and the singleton {5}, five pigeonholes in all. When the six "pigeons" (elements of the size-six subset) are placed into these pigeonholes, each pigeon going into the pigeonhole that has it contained in its label, at least one of the pigeonholes labeled with a two-element subset will have two pigeons in it. Hashing Hashing in computer science is the process of mapping an arbitrarily large set of data to fixed-size values. This has applications in caching whereby large data sets can be stored by a reference to their representative values (their "hash codes") in a "hash table" for fast recall. Typically, the number of unique objects in a data set is larger than the number of available unique hash codes, and the pigeonhole principle holds in this case that hashing those objects is no guarantee of uniqueness: if all the objects in the data set were hashed, some objects must necessarily share the same hash code.
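To make the forced-collision argument concrete (an editorial sketch; the tiny 8-bit hash below is a deliberately weak stand-in, not any real library's hash function), feeding 257 distinct inputs into a hash with only 256 possible codes must produce a collision:

def toy_hash(data: bytes) -> int:
    # A deliberately tiny hash with only 256 possible codes (8 bits).
    return sum(data) % 256

# 257 distinct two-byte inputs, 256 possible codes: a collision is forced.
seen = {}
for i in range(257):
    item = bytes([i % 256, i // 256])
    code = toy_hash(item)
    if code in seen:
        print(f"collision: {seen[code]!r} and {item!r} share code {code}")
        break
    seen[code] = item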
Uses and applications The principle can be used to prove that any lossless compression algorithm, provided it makes some inputs smaller (as "compression" suggests), will also make some other inputs larger. Otherwise, the set of all input sequences up to a given length L could be mapped to the (much) smaller set of all sequences of length less than L without collisions (because the compression is lossless), a possibility that the pigeonhole principle excludes. A notable problem in mathematical analysis is, for a fixed irrational number a, to show that the set {[na] : n is an integer}, where [x] denotes the fractional part of x, is dense in [0, 1]. One finds that it is not easy to explicitly find integers n, m such that |na − m| < e, where e is a small positive number and a is some arbitrary irrational number. But if one takes M such that 1/M < e, by the pigeonhole principle there must be n1, n2 ∈ {1, 2, ..., M + 1} such that n1a and n2a are in the same integer subdivision of size 1/M (there are only M such subdivisions between consecutive integers). In particular, one can find n1, n2 such that n1a ∈ (p + k/M, p + (k + 1)/M) and n2a ∈ (q + k/M, q + (k + 1)/M) for some integers p, q and some k in {0, 1, ..., M − 1}. One can then easily verify that (n2 − n1)a ∈ (q − p − 1/M, q − p + 1/M). This implies that [na] < 1/M < e, where n = n2 − n1 or n = n1 − n2. This shows that 0 is a limit point of {[na]}. One can then use this fact to prove the case for p in (0, 1]: find n such that [na] < 1/M < e; then if p ∈ (0, 1/M], the proof is complete. Otherwise p ∈ (j/M, (j + 1)/M], and by setting k = sup{r ∈ N : r[na] < j/M}, one obtains |[(k + 1)na] − p| < 1/M < e. Variants occur in a number of proofs. In the proof of the pumping lemma for regular languages, a version that mixes finite and infinite sets is used: If infinitely many objects are placed into finitely many boxes, then two objects share a box. In Fisk's solution to the Art gallery problem a sort of converse is used: If n objects are placed into k boxes, then there is a box containing at most ⌊n/k⌋ objects. Alternative formulations The following are alternative formulations of the pigeonhole principle.
1. If n objects are distributed over m places, and if n > m, then some place receives at least two objects.
2. (equivalent formulation of 1) If n objects are distributed over n places in such a way that no place receives more than one object, then each place receives exactly one object.
3. (generalization of 1) If A and B are sets, and the cardinality of A is greater than the cardinality of B, then there is no injective function from A to B.
4. If n objects are distributed over m places, and if n < m, then some place receives no object.
5. (equivalent formulation of 4) If n objects are distributed over n places in such a way that no place receives no object, then each place receives exactly one object.
6. (generalization of 4) If A and B are sets, and the cardinality of A is less than the cardinality of B, then there is no surjective function from A to B.
Strong form Let q1, q2, ..., qn be positive integers. If q1 + q2 + ⋯ + qn − n + 1 objects are distributed into n boxes, then either the first box contains at least q1 objects, or the second box contains at least q2 objects, ..., or the nth box contains at least qn objects. The simple form is obtained from this by taking q1 = q2 = ⋯ = qn = 2, which gives n + 1 objects. Taking q1 = q2 = ⋯ = qn = r gives the more quantified version of the principle, namely: Let n and r be positive integers. If n(r − 1) + 1 objects are distributed into n boxes, then at least one of the boxes contains r or more of the objects. This can also be stated as: if k discrete objects are to be allocated to n containers, then at least one container must hold at least ⌈k/n⌉ objects, where ⌈x⌉ is the ceiling function, denoting the smallest integer larger than or equal to x. Similarly, at least one container must hold no more than ⌊k/n⌋ objects, where ⌊x⌋ is the floor function, denoting the largest integer smaller than or equal to x.
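The strong form can likewise be spot-checked exhaustively for small quotas (an editorial sketch, not from the original article): distributing q1 + ⋯ + qn − n + 1 objects must push some box i up to its quota qi.

from itertools import product

def strong_form_holds(quotas):
    # The number of objects that the strong form says forces a quota.
    num_objects = sum(quotas) - len(quotas) + 1
    boxes = range(len(quotas))
    for assignment in product(boxes, repeat=num_objects):
        counts = [assignment.count(b) for b in boxes]
        if not any(c >= q for c, q in zip(counts, quotas)):
            return False  # would contradict the strong form
    return True

assert strong_form_holds([2, 3])     # 4 objects force box 1 to 2 or box 2 to 3
assert strong_form_holds([2, 2, 2])  # simple form: n + 1 objects, some box gets 2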
Generalizations of the pigeonhole principle A probabilistic generalization of the pigeonhole principle states that if n pigeons are randomly put into m pigeonholes with uniform probability 1/m, then at least one pigeonhole will hold more than one pigeon with probability 1 − (m)_n / m^n, where (m)_n is the falling factorial m(m − 1)(m − 2)⋯(m − n + 1). For n = 0 and for n = 1 (and m > 0), that probability is zero; in other words, if there is just one pigeon, there cannot be a conflict. For n > m (more pigeons than pigeonholes) it is one, in which case it coincides with the ordinary pigeonhole principle. But even if the number of pigeons does not exceed the number of pigeonholes (n ≤ m), due to the random nature of the assignment of pigeons to pigeonholes there is often a substantial chance that clashes will occur. For example, if 2 pigeons are randomly assigned to 4 pigeonholes, there is a 25% chance that at least one pigeonhole will hold more than one pigeon; for 5 pigeons and 10 holes, that probability is 69.76%; and for 10 pigeons and 20 holes it is about 93.45%. If the number of holes stays fixed, there is always a greater probability of a pair when you add more pigeons. This problem is treated at much greater length in the birthday paradox. A further probabilistic generalization is that when a real-valued random variable X has a finite mean E(X), then the probability is nonzero that X is greater than or equal to E(X), and similarly the probability is nonzero that X is less than or equal to E(X). To see that this implies the standard pigeonhole principle, take any fixed arrangement of n pigeons into m holes and let X be the number of pigeons in a hole chosen uniformly at random. The mean of X is n/m, so if there are more pigeons than holes the mean is greater than one. Therefore, X is sometimes at least 2.
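The percentages quoted above follow directly from the falling-factorial formula; this short editorial sketch (not part of the original article) reproduces them:

def collision_probability(n, m):
    # P(some hole holds more than one pigeon) = 1 - (m)_n / m**n,
    # where (m)_n = m * (m-1) * ... * (m-n+1) is the falling factorial.
    falling = 1
    for i in range(n):
        falling *= m - i
    return 1 - falling / m**n

print(collision_probability(2, 4))    # 0.25
print(collision_probability(5, 10))   # 0.6976
print(collision_probability(10, 20))  # ~0.9345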
Infinite sets The pigeonhole principle can be extended to infinite sets by phrasing it in terms of cardinal numbers: if the cardinality of set A is greater than the cardinality of set B, then there is no injection from A to B. However, in this form the principle is tautological, since the meaning of the statement that the cardinality of set A is greater than the cardinality of set B is exactly that there is no injective map from A to B. However, adding at least one element to a finite set is sufficient to ensure that the cardinality increases. Another way to phrase the pigeonhole principle for finite sets is similar to the principle that finite sets are Dedekind finite: Let A and B be finite sets. If there is a surjection from A to B that is not injective, then no surjection from A to B is injective. In fact no function of any kind from A to B is injective. This is not true for infinite sets: Consider the function on the natural numbers that sends 1 and 2 to 1, 3 and 4 to 2, 5 and 6 to 3, and so on. There is a similar principle for infinite sets: If uncountably many pigeons are stuffed into countably many pigeonholes, there will exist at least one pigeonhole having uncountably many pigeons stuffed into it. This principle is not a generalization of the pigeonhole principle for finite sets however: It is in general false for finite sets. In technical terms it says that if A and B are finite sets such that any surjective function from A to B is not injective, then there exists an element b of B such that there exists a bijection between the preimage of b and A. This is a quite different statement, and is absurd for large finite cardinalities. Quantum mechanics Yakir Aharonov et al. presented arguments that quantum mechanics may violate the pigeonhole principle, and proposed interferometric experiments to test the pigeonhole principle in quantum mechanics. Later research has called this conclusion into question. In a January 2015 arXiv preprint, researchers Alastair Rae and Ted Forgan at the University of Birmingham performed a theoretical wave function analysis, employing the standard pigeonhole principle, on the flight of electrons at various energies through an interferometer. If the electrons had no interaction strength at all, they would each produce a single, perfectly circular peak. At high interaction strength, each electron produces four distinct peaks, for a total of 12 peaks on the detector; these peaks are the result of the four possible interactions each electron could experience (alone, together with the first other particle only, together with the second other particle only, or all three together). If the interaction strength was fairly low, as would be the case in many real experiments, the deviation from a zero-interaction pattern would be nearly indiscernible, much smaller than the lattice spacing of atoms in solids, such as the detectors used for observing these patterns. This would make it very difficult or impossible to distinguish a weak-but-nonzero interaction strength from no interaction whatsoever, and thus give an illusion of three electrons that did not interact despite all three passing through two paths. See also Axiom of choice Blichfeldt's theorem Combinatorial principles Combinatorial proof Dedekind-infinite set Dirichlet's approximation theorem Hilbert's paradox of the Grand Hotel Multinomial theorem Pochhammer symbol Ramsey's theorem Notes References External links "The strange case of The Pigeon-hole Principle"; Edsger Dijkstra investigates interpretations and reformulations of the principle. "The Pigeon Hole Principle"; Elementary examples of the principle in use by Larry Cusick. "Pigeonhole Principle from Interactive Mathematics Miscellany and Puzzles"; basic Pigeonhole Principle analysis and examples by Alexander Bogomolny. "16 fun applications of the pigeonhole principle"; Interesting facts derived by the principle. Combinatorics Theorems in discrete mathematics Mathematical principles Ramsey theory
Pigeonhole principle
[ "Mathematics" ]
3,833
[ "Mathematical principles", "Discrete mathematics", "Mathematical theorems", "Theorems in discrete mathematics", "Combinatorics", "Mathematical problems", "Ramsey theory" ]
54,223
https://en.wikipedia.org/wiki/Creator%20deity
A creator deity or creator god is a deity responsible for the creation of the Earth, world, and universe in human religion and mythology. In monotheism, the single God is often also the creator. A number of monolatristic traditions separate a secondary creator from a primary transcendent being, identified as a primary creator. Monotheism Atenism Atenism was initiated by Pharaoh Akhenaten and Queen Nefertiti around 1330 BCE, during the New Kingdom period in ancient Egyptian history. They built an entirely new capital city (Akhetaten) for themselves and worshippers of their sole creator god in a wilderness. Akhenaten's father had worshipped Aten alongside the other gods of their polytheistic religion, and Aten had long been revered as one god among the many gods and goddesses in Egypt. Atenism was countermanded by the later pharaoh Tutankhamun, as chronicled in the artifact known as the Restoration Stela. Despite different views, Atenism is considered by some scholars to be one of the frontiers of monotheism in human history. Abrahamic religions Judaism The Genesis creation narrative is the creation myth of both Judaism and Christianity. The narrative is made up of two stories, roughly equivalent to the first two chapters of the Book of Genesis. In the first, Elohim (the Hebrew generic word for God) creates the heavens and the Earth, the animals, and mankind in six days, then rests on, blesses and sanctifies the seventh (i.e. the Biblical Sabbath). In the second story, God, now referred to by the personal name Yahweh, creates Adam, the first man, from dust and places him in the Garden of Eden, where he is given dominion over the animals. Eve, the first woman, is created from Adam and as his companion. It expounds themes parallel to those in Mesopotamian mythology, emphasizing the Israelite people's belief in one God. The first major comprehensive draft of the Pentateuch (the series of five books which begins with Genesis and ends with Deuteronomy) was composed in the late 7th or the 6th century BCE (the Jahwist source) and was later expanded by other authors (the Priestly source) into a work very similar to Genesis as known today. The two sources can be identified in the creation narrative: Priestly and Jahwistic. The combined narrative is a critique of the Mesopotamian theology of creation: Genesis affirms monotheism and denies polytheism. Robert Alter described the combined narrative as "compelling in its archetypal character, its adaptation of myth to monotheistic ends". Christianity The Abrahamic creation narrative is made up of two stories, roughly equivalent to the first two chapters of the Book of Genesis. The first account (1:1 through 2:3) employs a repetitious structure of divine fiat and fulfillment, then the statement "And there was evening and there was morning, the [xth] day," for each of the six days of creation. In each of the first three days there is an act of division: day one divides the darkness from light, day two the "waters above" from the "waters below", and day three the sea from the land. In each of the next three days these divisions are populated: day four populates the darkness and light with sun, moon, and stars; day five populates seas and skies with fish and fowl; and finally, land-based creatures and mankind populate the land. The first (the Priestly story) was concerned with the cosmic plan of creation, while the second (the Yahwist story) focuses on man as cultivator of his environment and as a moral agent.
The second account, in contrast to the regimented seven-day scheme of Genesis 1, uses a simple flowing narrative style that proceeds from God's forming the first man through the Garden of Eden to the creation of the first woman and the institution of marriage. In contrast to the omnipotent God of Genesis 1 creating a god-like humanity, the God of Genesis 2 can fail as well as succeed. The humanity he creates is not god-like, but is punished for acts which would lead to their becoming god-like (Genesis 3:1-24), and the order and method of creation itself differs. "Together, this combination of parallel character and contrasting profile point to the different origin of materials in Genesis 1:1 and Gen 2:4, however elegantly they have now been combined." An early conflation of Greek philosophy with the narratives in the Hebrew Bible came from Philo of Alexandria (d. 50 CE), writing in the context of Hellenistic Judaism. Philo equated the Hebrew creator-deity Yahweh with Aristotle's unmoved mover (First Cause) in an attempt to prove that the Jews had held monotheistic views even before the Greeks. A similar theoretical proposition was demonstrated by Thomas Aquinas, who linked Aristotelian philosophy with the Christian faith, followed by the statement that God is the First Being, the First Mover, and is Pure Act. The deuterocanonical 2 Maccabees has two relevant passages. Chapter 7 recounts the mother of a Jewish proto-martyr telling her son: "I beseech thee, my son, look upon heaven and earth, and all that is in them: and consider that God made them out of nothing, and mankind also"; chapter 1 records a solemn prayer hymned by Jonathan, Nehemiah, and the priests of Israel while making sacrifices in honour of God: "O Lord, Lord God, Creator of all things, who art fearefull, and strong, and righteous, and mercifull, and the onely, and gracious king". The Prologue to the Gospel of John begins with: "In the beginning was the Word, and the Word was with God, and the Word was God. 2 The same was in the beginning with God. 3 All things were made by him, and without him was not any thing made that was made." Christianity has affirmed the creation by God since its early time in the Apostles' Creed ("I believe in God, the Father almighty, creator of heaven and earth.", 1st century CE), which is symmetrical to the Nicene Creed (4th century CE). Nowadays, theologians debate whether the Bible itself teaches that this creation by God is a creation ex nihilo. Traditional interpreters argue on grammatical and syntactical grounds that this is the meaning of Genesis 1:1, which is commonly rendered: "In the beginning God created the heavens and the earth." However, other interpreters understand creation ex nihilo as a 2nd-century theological development. According to this view, church fathers opposed notions appearing in pre-Christian creation myths and in Gnosticism—notions of creation by a demiurge out of a primordial state of matter (known in religious studies as chaos after the Greek term used by Hesiod in his Theogony). Jewish thinkers took up the idea, which became important to Judaism. Islam According to Islam, the creator deity, God, known as Allah, is the all-powerful and all-knowing Creator, Sustainer, Ordainer, and Judge of the universe. Creation is seen as an act of divine choice and mercy, one with a grand purpose: "And We did not create the heaven and earth and that between them in play."
Rather, the purpose of humanity is to be tested: "Who has created death and life, that He may test you which of you is best in deed. And He is the All-Mighty, the Oft-Forgiving;" Those who pass the test are rewarded with Paradise: "Verily for the Righteous there will be a fulfilment of (the heart's) desires;" According to the Islamic teachings, God exists above the heavens and the creation itself. The Quran mentions, "He it is Who created for you all that is on earth. Then He Istawa (rose over) towards the heaven and made them seven heavens and He is the All-Knower of everything." At the same time, God is unlike anything in creation: "There is nothing like unto Him, and He is the Hearing, the Seeing." and nobody can perceive God in totality: "Vision perceives Him not, but He perceives [all] vision; and He is the Subtle, the Acquainted." God in Islam is not only majestic and sovereign, but also a personal God: "And indeed We have created man, and We know what his ownself whispers to him. And We are nearer to him than his jugular vein (by Our Knowledge)." Allah commands the believers to constantly remember Him ("O you who have believed, remember Allah with much remembrance") and to invoke Him alone ("And whoever invokes besides Allah another deity for which he has no proof—then his account is only with his Lord. Indeed, the disbelievers will not succeed."). Islam teaches that God as referenced in the Qur'an is the only god and the same God worshipped by members of other Abrahamic religions such as Christianity and Judaism. Sikhism One of the biggest responsibilities in the faith of Sikhism is to worship God as "The Creator", termed Waheguru, who is shapeless, timeless, and sightless, i.e., Nirankar, Akal, and Alakh Niranjan. The religion holds only the belief in "One God for All", or Ik Onkar. Baháʼí Faith In the Baháʼí Faith God is the imperishable, uncreated being who is the source of all existence. He is described as "a personal God, unknowable, inaccessible, the source of all Revelation, eternal, omniscient, omnipresent and almighty". Although transcendent and inaccessible directly, his image is reflected in his creation. The purpose of creation is for the created to have the capacity to know and love its creator. Mandaeism In Mandaeism, Hayyi Rabbi (literally, "The Great Life"), or "The Great Living God", is the supreme God from which all things emanate. He is also known as "The First Life", since during the creation of the material world, Yushamin emanated from Hayyi Rabbi as the "Second Life". "The principles of the Mandaean doctrine: the belief of the only one great God, Hayyi Rabbi, to whom all absolute properties belong; He created all the worlds, formed the soul through his power, and placed it by means of angels into the human body. So He created Adam and Eve, the first man and woman." Mandaeans recognize God to be the eternal, creator of all, the one and only in domination who has no partner. Monolatrism Monolatristic traditions would separate a secondary creator from the primary transcendent being, identified as a primary creator. According to Gaudiya Vaishnavas, Brahma is the secondary creator and not the supreme. Vishnu is the primary creator. According to Vaishnava belief Vishnu creates the basic universal shell and provides all the raw materials and also places the living entities within the material world, fulfilling their own independent will.
Brahma works with the materials provided by Vishnu to actually create what are believed to be planets in Puranic terminology, and he supervises their population. Monism Monism is the philosophy that asserts oneness as its fundamental premise, and it contradicts the dualism-based theistic premise that there is a creator God that is eternal and separate from the rest of existence. There are two types of monism, namely spiritual monism, which holds that all spiritual reality is one, and material monism, which holds that everything including all material reality is one and the same thing. Non-creationism Buddhism Buddhism denies a creator deity and posits that mundane deities such as Mahabrahma are misperceived to be a creator. Jainism Jainism does not support belief in a creator deity. According to Jain doctrine, the universe and its constituents—soul, matter, space, time, and principles of motion—have always existed (a static universe similar to that of Epicureanism and the steady state cosmological model). All the constituents and actions are governed by universal natural laws. It is not possible to create matter out of nothing and hence the sum total of matter in the universe remains the same (similar to the law of conservation of mass). Similarly, the soul of each living being is unique and uncreated and has existed since beginningless time. The Jain theory of causation holds that a cause and its effect are always identical in nature and therefore a conscious and immaterial entity like God cannot create a material entity like the universe. Furthermore, according to the Jain concept of divinity, any soul who destroys its karmas and desires achieves liberation. A soul who destroys all its passions and desires has no desire to interfere in the working of the universe. Moral rewards and sufferings are not the work of a divine being, but a result of an innate moral order in the cosmos; a self-regulating mechanism whereby the individual reaps the fruits of his own actions through the workings of the karmas. Through the ages, Jain philosophers have adamantly rejected and opposed the concept of a creator and omnipotent God, and this has resulted in Jainism being labeled as nāstika darsana or atheist philosophy by the rival religious philosophies. The theme of non-creationism and absence of omnipotent God and divine grace runs strongly in all the philosophical dimensions of Jainism, including its cosmology, karma, moksa and its moral code of conduct. Jainism asserts a religious and virtuous life is possible without the idea of a creator god. Polytheism In polytheistic creation, the world often comes into being organically, e.g. sprouting from a primal seed, sexually, by miraculous birth (sometimes by parthenogenesis), by hieros gamos, violently, by the slaying of a primeval monster, or artificially, by a divine demiurge or "craftsman". Sometimes, a god is involved, wittingly or unwittingly, in bringing about creation.
Examples include:
Sub-Saharan African contexts: Mbombo of Bakuba mythology, who vomited out the world upon feeling a stomachache; Unkulunkulu in Zulu mythology.
American contexts: Nanabozho (Great Rabbit), an Ojibwe deity, a shape-shifter and a cocreator of the world; Ōmeteōtl in Aztec mythology; Chiminigagua (and/or Bague) in Muisca mythology; Earth Maker in the cosmology of the O'odham peoples; Viracocha in Inca mythology; a trickster deity in the form of a Raven in Inuit mythology.
Near Eastern contexts: in Egyptian mythology, Atum in the Ennead, whose semen becomes the primal component of the universe, Ptah creating the universe by the Word, and Neith, who wove all of the universe and existence into being on her loom; El in Canaanite religion; Marduk killing Tiamat in the Babylonian Enūma Eliš.
Asian contexts: Atingkok Maru Sidaba in Manipuri mythology, the creator of the universe; Tengri in Mongolian mythology, king of the skies; Eskeri in Tungusic mythology; Kotan-kar-kamuy in Ainu mythology, who built the world on the back of a trout; Izanagi and Izanami in Japanese mythology, who churned the ocean with a spear, creating the islands of Japan; Pangu in Chinese mythology, the one who separated heaven and earth and became geographic features such as mountains and rivers; Ngọc Hoàng, the god who created the world in Vietnamese mythology.
European contexts: the sons of Borr slaying the primeval giant Ymir in Norse mythology; Rod in Slavic mythology; Radien-attje (Radien Father) in Sámi mythology.
Oceanic contexts: Makemake, creator of humanity, the god of fertility and the chief god of the "tangata manu" or "bird-man" cult of Rapa Nui mythology; Rangi, the Sky Father, and Papa, the Earth Mother, in Māori mythology.
Platonic demiurge Plato, in his dialogue Timaeus, describes a creation myth involving a being called the demiurge (Greek dēmiourgós, "craftsman"). Neoplatonism and Gnosticism continued and developed this concept. In Neoplatonism, the demiurge represents the second cause or dyad, after the monad. In Gnostic dualism, the demiurge is an imperfect spirit and possibly an evil being, transcended by divine Fullness (Pleroma). Unlike the Abrahamic God, Plato's demiurge is unable to create ex nihilo. Hinduism Hinduism is a diverse system of thought with beliefs spanning monotheism, polytheism, panentheism, pantheism, pandeism, monism, and atheism among others; and its concept of creator deity is complex and depends upon each individual and the tradition and philosophy followed. Hinduism is sometimes referred to as henotheistic (i.e., involving devotion to a single god while accepting the existence of others), but any such term is an overgeneralization. The Nasadiya Sukta (Creation Hymn) of the Rigveda is one of the earliest texts which "demonstrates a sense of metaphysical speculation" about what created the universe, the concept of god(s) and The One, and whether even The One knows how the universe came into being. The Rig Veda praises various deities, none superior nor inferior, in a henotheistic manner. The hymns repeatedly refer to One Truth and Reality. The "One Truth" of Vedic literature, in modern era scholarship, has been interpreted as monotheism, monism, as well as deified Hidden Principles behind the great happenings and processes of nature. The post-Vedic texts of Hinduism offer multiple theories of cosmogony, many involving Brahma.
These include Sarga (primary creation of universe) and Visarga (secondary creation), ideas related to the Indian thought that there are two levels of reality, one primary that is unchanging (metaphysical) and the other secondary that is always changing (empirical), and that all observed reality of the latter is in an endless repeating cycle of existence, that the cosmos and life we experience is continually created, evolved, dissolved and then re-created. The primary creator is extensively discussed in Vedic cosmogonies, with Brahman or Purusha or Devi among the terms used for the primary creator, while the Vedic and post-Vedic texts name different gods and goddesses as secondary creators (often Brahma in post-Vedic texts), and in some cases a different god or goddess is the secondary creator at the start of each cosmic cycle (kalpa, aeon). Brahma is a "secondary creator" as described in the Mahabharata and Puranas, and among the most studied and described. Born from a lotus emerging from the navel of Vishnu, Brahma creates all the forms in the universe, but not the primordial universe itself. In contrast, the Shiva-focused Puranas describe Brahma and Vishnu to have been created by Ardhanarishvara, that is half Shiva and half Parvati; or alternatively, Brahma was born from Rudra, or Vishnu, Shiva and Brahma creating each other cyclically in different aeons (kalpa). Thus in most Puranic texts, Brahma's creative activity depends on the presence and power of a higher god. In other versions of creation, the creator deity is the one who is equivalent to the Brahman, the metaphysical reality in Hinduism. In Vaishnavism, Vishnu creates Brahma and orders him to order the rest of the universe. In Shaivism, Shiva may be treated as the creator. In Shaktism, the Great Goddess creates the Trimurti. Other Kongo religion The Bakongo people traditionally believe in Nzambi Mpungu, the Creator God, whom the Portuguese compared to the Christian God during colonization. They also believe in his female counterpart, Nzambici, as well as the ancestors (bakulu) and guardian spirits such as Lemba, the basimbi, bakisi and bakita. Oral tradition accounts that in the beginning, there was only a circular void (mbûngi) with no life. Nzambi Mpungu summoned a spark of fire (Kalûnga) that grew until it filled the mbûngi. When it grew too large, Kalûnga became a great force of energy and unleashed heated elements across space, forming the universe with the sun, stars, planets, etc. Because of this, Kalûnga is seen as the origin of life and a force of motion. The Bakongo believe that life requires constant change and perpetual motion. Nzambi Mpungu is also referred to as Kalûnga, the God of change. Similarities between the Bakongo belief of Kalûnga and the Big Bang Theory have been studied. Nzambi is also said to have created two worlds. As Kalûnga filled mbûngi, it created an invisible line that divided the circle in half. The top half represents the physical world (Ku Nseke or nsi a bamôyo), while the bottom half represents the spiritual world of the ancestors (Ku Mpèmba). The Kalûnga line separates these two worlds, and all living things exist on one side or the other. After creation, the line and the mbûngi circle became a river, carrying people between the worlds at birth and death. Then the process repeats and a person is reborn.
A simbi (pl. bisimbi) is a water spirit that is believed to inhabit bodies of water and rocks, having the ability to guide bakulu, or the ancestors, along the Kalûnga line to the spiritual world after death. They are also present during the baptisms of African American Christians, according to Hoodoo tradition. Chinese traditional cosmology Pangu can be interpreted as another creator deity. In the beginning there was nothing in the universe except a formless chaos. However, this chaos began to coalesce into a cosmic egg for eighteen thousand years. Within it, the perfectly opposed principles of yin and yang became balanced and Pangu emerged (or woke up) from the egg. Pangu is usually depicted as a primitive, hairy giant with horns on his head and clad in furs. Pangu set about the task of creating the world: he separated Yin from Yang with a swing of his giant axe, creating the Earth (murky Yin) and the Sky (clear Yang). To keep them separated, Pangu stood between them and pushed up the Sky. This task took eighteen thousand years; with each day the sky grew ten feet higher, the Earth ten feet wider, and Pangu ten feet taller. In some versions of the story, Pangu is aided in this task by the four most prominent beasts, namely the Turtle, the Qilin, the Phoenix, and the Dragon. After eighteen thousand years had elapsed, Pangu was laid to rest. His breath became the wind; his voice the thunder; his left eye the sun and his right eye the moon; his body became the mountains and extremes of the world; his blood formed rivers; his muscles the fertile lands; his facial hair the stars and Milky Way; his fur the bushes and forests; his bones the valuable minerals; his bone marrow sacred diamonds; his sweat fell as rain; and the fleas on his fur, carried by the wind, became human beings all over the world. The first writer to record the myth of Pangu was Xu Zheng during the Three Kingdoms period. Shangdi is another creator deity, possibly prior to Pangu, sharing concepts similar to those of the Abrahamic faiths. Kazakh According to Kazakh folk tales, Jasagnan is the creator of the world. See also Notes References Bibliography External links Creation myths
Creator deity
[ "Astronomy" ]
4,945
[ "Cosmogony", "Creation myths" ]
54,229
https://en.wikipedia.org/wiki/Cocoa%20bean
The cocoa bean, also known as cocoa () or cacao (), is the dried and fully fermented seed of Theobroma cacao, the cacao tree, from which cocoa solids (a mixture of nonfat substances) and cocoa butter (the fat) can be extracted. Cacao trees are native to the Amazon rainforest. They are the basis of chocolate and Mesoamerican foods including tejate, an indigenous Mexican drink. The cacao tree was first domesticated at least 5,300 years ago by the Mayo-Chinchipe culture in South America before it was introduced in Mesoamerica. Cacao was consumed by pre-Hispanic cultures in spiritual ceremonies, and its beans were a common currency in Mesoamerica. The cacao tree grows in a limited geographical zone; today, West Africa produces nearly 81% of the world's crop. The three main varieties of cocoa plants are Forastero, Criollo, and Trinitario, with Forastero being the most widely used. In 2020, global cocoa bean production reached 5.8 million tonnes, with Ivory Coast leading at 38% of the total, followed by Ghana and Indonesia. Cocoa beans, cocoa butter, and cocoa powder are traded on futures markets, with London focusing on West African cocoa and New York on Southeast Asian cocoa. Various international and national initiatives aim to support sustainable cocoa production, including the Swiss Platform for Sustainable Cocoa (SWISSCO), the German Initiative on Sustainable Cocoa (GISCO), and Belgium's Beyond Chocolate. At least 29% of global cocoa production was compliant with voluntary sustainability standards in 2016. Deforestation due to cocoa production remains a concern, especially in West Africa. Sustainable agricultural practices, such as agroforestry, can support cocoa production while conserving biodiversity. Cocoa contributes significantly to economies such as Nigeria's, and demand for cocoa products has grown at over 3% annually since 2008. Cocoa contains phytochemicals like flavanols, procyanidins, and other flavonoids, and flavanol-rich chocolate and cocoa products may have a small blood pressure lowering effect. The beans also contain theobromine and a small amount of caffeine. The tree takes five years to grow and has a typical lifespan of 100 years. Etymology Cocoa is a variant of cacao, likely due to confusion with the word coco. It is ultimately derived from kakaw(a), but whether that word originates in Nahuatl or a Mixe-Zoquean language is the subject of substantial linguistic debate. The term cocoa beans originated in the 19th century; during the 18th century they were called chocolate nuts, cocoa nuts or just cocoa. History The cacao tree is native to the Amazon rainforest. It was first domesticated at least 5,300 years ago, in equatorial South America from the Santa Ana-La Florida (SALF) site in what is present-day southeast Ecuador (Zamora-Chinchipe Province) by the Mayo-Chinchipe culture, before being introduced in Mesoamerica. More than 3,000 years ago, it was consumed by pre-Hispanic cultures along the Yucatán, including the Maya, and as far back as Olmeca civilization in spiritual ceremonies. It also grows in the foothills of the Andes in the Amazon region and the Orinoco basins of South America, such as in Colombia and Venezuela. Wild cacao still grows there. Its range may have been larger in the past; evidence of its wild range may be obscured by cultivation of the tree in these areas since long before the Spanish arrived. 
As of 2018, evidence suggests that cacao was first domesticated in equatorial South America, before being domesticated in Central America roughly 1,500 years later. Artifacts found at Santa-Ana-La Florida, in Ecuador, indicate that the Mayo-Chinchipe people were cultivating cacao as long as 5,300 years ago. Chemical analysis of residue extracted from pottery excavated at an archaeological site at Puerto Escondido, in Honduras, indicates that cocoa products were first consumed there sometime between 1500 and 1400 BC. Evidence also indicates that, long before the flavor of the cacao seed (or bean) became popular, the sweet pulp of the chocolate fruit, used in making a fermented (5.34% alcohol) beverage, first drew attention to the plant in the Americas. The cocoa bean was a common currency throughout Mesoamerica before the Spanish conquest. The bean was utilized in pre-modern Latin America to purchase small items such as tamales and rabbit dinners. A greater quantity of cocoa beans was used to purchase turkey hens and other large items. Cacao trees grow in a limited geographical zone, of about 20° to the north and south of the Equator. More than 70% of the world's cacao crop is grown in Africa, with Ivory Coast and Ghana producing approximately 58% of global production. The cacao plant was first given its botanical name by Swedish natural scientist Carl Linnaeus in his original classification of the plant kingdom, where he called it Theobroma ("food of the gods") cacao. Cocoa was an important commodity in pre-Columbian Mesoamerica. A Spanish soldier who was on Hernan Cortés' side during the conquest of the Aztec Empire tells that when Moctezuma II, emperor of the Aztecs, dined, he took no other beverage than chocolate, served in a golden goblet. Flavored with vanilla or other spices, his chocolate was whipped into a froth that dissolved in the mouth. No fewer than 60 portions each day reportedly may have been consumed by Moctezuma II, and 2,000 more by the nobles of his court. Chocolate was introduced to Europe by the Spaniards, and became a popular beverage by the mid-17th century. Venezuela became the largest producer of cocoa beans in the world. Spaniards also introduced the cacao tree into the West Indies and the Philippines. It was also introduced into the rest of Asia, South Asia and into West Africa by Europeans. In the Gold Coast, modern Ghana, cacao was introduced by a Ghanaian, Tetteh Quarshie. Varieties Cocoa beans are traditionally classified into three main varieties: Forastero, Criollo and Trinitario. Use of these terms has changed across different contexts and times, and recent genetic research has found that the categories of Forastero and Trinitario are better understood as geohistorical inventions rather than as having a botanical basis. They are still used frequently in marketing material. Criollo has traditionally been the most prized variety. Believed to have been native to South America, it was grown in Mesoamerica by the time of the Spanish conquest. After European colonization, disease and population decrease led to the Spanish and Portuguese using different cacao varieties from South America. Different from the Criollo beans, these new beans were named Forastero, which can be translated as strange or foreign. They are generally of the Amelonado type and are associated with West Africa. Trinitario refers to any hybrid between Criollo and Forastero.
Cultivation A cocoa pod (fruit) has a rough, leathery rind whose size and thickness vary with the origin and variety of pod. It is filled with sweet, mucilaginous pulp (called baba de cacao in South America) with a lemonade-like taste, enclosing 30 to 50 large seeds that are fairly soft and a pale lavender to dark brownish purple color. During harvest, the pods are opened, the seeds are kept, and the empty pods are discarded and the pulp made into juice. The seeds are placed where they can ferment. Due to heat buildup in the fermentation process, cacao beans lose most of the purplish hue and become mostly brown in color, with an adhered skin which includes the dried remains of the fruity pulp. This skin is released easily by winnowing after roasting. White seeds are found in some rare varieties, usually mixed with purples, and are considered of higher value. Harvesting Cacao trees grow in hot, rainy tropical areas within 20° of latitude from the Equator. Cocoa harvest is not restricted to one period per year and a harvest typically occurs over several months. In fact, in many countries, cocoa can be harvested at any time of the year. Pesticides are often applied to the trees to combat capsid bugs, and fungicides to fight black pod disease. Immature cocoa pods have a variety of colours, but most often are green, red, or purple, and as they mature, their colour tends towards yellow or orange, particularly in the creases. Unlike most fruiting trees, the cacao pod grows directly from the trunk or large branch of a tree rather than from the end of a branch, similar to jackfruit. This makes harvesting by hand easier as most of the pods will not be up in the higher branches. The pods on a tree do not ripen together; harvesting needs to be done periodically through the year. Harvesting occurs between three and four times weekly during the harvest season. The ripe and near-ripe pods, as judged by their colour, are harvested from the trunk and branches of the cacao tree with a curved knife on a long pole. Care must be used when cutting the stem of the pod to avoid damaging the junction of the stem with the tree, as this is where future flowers and pods will emerge. One person can harvest an estimated 650 pods per day. Harvest processing The harvested pods are opened, typically with a machete, to expose the beans. The pulp and cocoa seeds are removed and the rind is discarded. The pulp and seeds are then piled in heaps, placed in bins, or laid out on grates for several days. During this time, the seeds and pulp undergo "sweating", where the thick pulp liquefies as it ferments. The fermented pulp trickles away, leaving cocoa seeds behind to be collected. Sweating is important for the quality of the beans, which originally have a strong, bitter taste. If sweating is interrupted, the resulting cocoa may be ruined; if underdone, the cocoa seed maintains a flavor similar to raw potatoes and becomes susceptible to mildew. Some cocoa-producing countries distill alcoholic spirits using the liquefied pulp. A typical pod contains 30 to 40 beans, and about 400 dried beans are required to make 1 kg (2.2 lb) of chocolate. Cocoa pods weigh an average of 400 g (14 oz) and each one yields 35 to 40 g (1.2 to 1.4 oz) of dried beans; this yield is 9–10% of the total weight in the pod. One person can separate the beans from about 2000 pods per day.
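As a rough worked example of these figures (an editorial sketch; it only restates the approximate numbers quoted above, which vary by origin and variety):

# Approximate figures quoted above; real values vary by origin and variety.
beans_per_pod = 35             # "30 to 40 beans" per pod
dried_beans_per_kg_choc = 400  # dried beans needed per kg of chocolate

pods_per_kg = dried_beans_per_kg_choc / beans_per_pod
print(f"roughly {pods_per_kg:.1f} pods per kg of chocolate")  # ~11.4 pods

# A worker separating beans from ~2000 pods per day therefore handles
# beans for roughly 175 kg of chocolate equivalent, before any losses.
print(f"about {2000 / pods_per_kg:.0f} kg of chocolate equivalent per day")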
The wet beans are then transported to a facility so they can be fermented and dried. The farmer removes the beans from the pods, packs them into boxes or heaps them into piles, then covers them with mats or banana leaves for three to seven days. Finally, the beans are trodden and shuffled about (often using bare human feet) and sometimes, during this process, red clay mixed with water is sprinkled over the beans to obtain a finer color, polish, and protection against molds during shipment to factories in other countries. Drying in the sun is preferable to drying by artificial means, as no extraneous flavors such as smoke or oil are introduced which might otherwise taint the flavor. The beans should be dry for shipment, which is usually by sea. Beans were traditionally exported in jute bags, but over the last decade they have increasingly been shipped in "mega-bulk" parcels of several thousand tonnes at a time on ships, or standardized to 62.5 kg per bag and 200 (12.5 t) or 240 (15 t) bags per container. Shipping in bulk significantly reduces handling costs. Shipment in bags, either in a ship's hold or in containers, is still common. Throughout Mesoamerica, where they are native, cocoa beans are used for a variety of foods. The harvested and fermented beans may be ground to order at tiendas de chocolate, or chocolate mills. At these mills, the cocoa can be mixed with a variety of ingredients such as cinnamon, chili peppers, almonds, vanilla, and other spices to create drinking chocolate. The ground cocoa is also an important ingredient in tejate. Child slavery The first allegations that child slavery is used in cocoa production appeared in 1998. In late 2000, a BBC documentary reported the use of enslaved children in the production of cocoa in West Africa. Other media followed by reporting widespread child slavery and child trafficking in the production of cocoa. The cocoa industry was accused of profiting from child slavery and trafficking. The Harkin–Engel Protocol is an effort to end these practices. In 2001, it was signed and witnessed by the heads of eight major chocolate companies, US senators Tom Harkin and Herb Kohl, US Representative Eliot Engel, the ambassador of the Ivory Coast, the director of the International Programme on the Elimination of Child Labor, and others. It has, however, been criticized by some groups including the International Labor Rights Forum as an industry initiative which falls short, as the goal to eliminate the "worst forms of child labor" from cocoa production by 2005 was not reached. The deadline was extended multiple times and the goal changed to a 70% child labor reduction. Child labour was growing in some West African countries in 2008–09, when it was estimated that 819,921 children worked on cocoa farms in Ivory Coast alone; by 2013–14, the number went up to 1,303,009. During the same period in Ghana, the estimated number of children working on cocoa farms was 957,398. The 2010 documentary The Dark Side of Chocolate revealed that children smuggled from Mali to the Ivory Coast were forced to earn income for their parents, while others were sold as slaves for €230. In 2010, the US Department of Labor formed the Child Labor Cocoa Coordinating Group as a public-private partnership with the governments of Ghana and Côte d'Ivoire to address child labor practices in the cocoa industry. As of 2017, approximately 2.1 million children in Ghana and Côte d'Ivoire were involved in harvesting cocoa, carrying heavy loads, clearing forests, and being exposed to pesticides.
According to Sona Ebai, the former secretary general of the Alliance of Cocoa Producing Countries: "I think child labor cannot be just the responsibility of industry to solve. I think it's the proverbial all-hands-on-deck: government, civil society, the private sector. And there, you really need leadership." As reported in 2018, a three-year pilot program, conducted by Nestlé with 26,000 farmers mostly located in Côte d'Ivoire, observed a 51% decrease in the number of children doing hazardous jobs in cocoa farming. Lawsuits In 2021, several companies were named in a class action lawsuit filed by eight former child slaves from Mali who alleged that the companies aided and abetted their enslavement on cocoa plantations in Ivory Coast. The suit accused Barry Callebaut, Cargill, The Hershey Company, Mars, Mondelez, Nestlé, and Olam International of knowingly engaging in forced labour, and the plaintiffs sought damages for unjust enrichment, negligent supervision, and intentional infliction of emotional distress. Production In 2022, world production of cocoa beans was 5.87 million tonnes, led by Ivory Coast with 38% of the total, while the secondary producers were Ghana and Indonesia. Cocoa trading Cocoa beans are traditionally shipped and stored in burlap sacks, in which the beans are susceptible to pest attacks. Fumigation with methyl bromide was to be phased out globally by 2015. Additional cocoa protection techniques for shipping and storage include the application of pyrethroids, as well as hermetic storage in sealed bags or containers with lowered oxygen concentrations. Safe long-term storage facilitates the trading of cocoa products at commodity exchanges. Cocoa beans, cocoa butter and cocoa powder are traded on futures markets. The London market is based on West African cocoa and New York on cocoa predominantly from Southeast Asia. Cocoa is the world's smallest soft commodity market. The futures price of cocoa butter and cocoa powder is determined by multiplying the bean price by a ratio. The combined butter and powder ratio has tended to be around 3.5. If the combined ratio falls below 3.2 or so, production ceases to be economically viable and some factories cease extraction of butter and powder and trade exclusively in cocoa liquor. Cocoa futures traded on the ICE Futures US Softs exchange are sized at 10 tonnes per contract, with a minimum price fluctuation of US$1 per tonne and thus a tick value of US$10 per contract.
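To make the ratio and contract arithmetic concrete (an editorial sketch; the bean price below is a made-up placeholder, not market data):

# Illustrative numbers only; the bean price here is hypothetical.
bean_price = 2500.0   # US$ per tonne of beans (placeholder)
combined_ratio = 3.5  # typical combined butter + powder ratio

products_value = bean_price * combined_ratio
print(f"butter + powder value: ${products_value:,.0f} per tonne of beans")

# Below a combined ratio of roughly 3.2, pressing stops being economic
# and factories may trade cocoa liquor instead.
print("pressing viable:", combined_ratio >= 3.2)

# ICE cocoa futures: 10 tonnes per contract, minimum move US$1/tonne,
# so one tick is worth 1 * 10 = US$10 per contract.
contract_tonnes = 10
print(f"tick value: ${1 * contract_tonnes} per contract")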
Sustainability Multiple international and national initiatives collaborate to support sustainable cocoa production. These include the Swiss Platform for Sustainable Cocoa (SWISSCO), the German Initiative on Sustainable Cocoa (GISCO), and Beyond Chocolate, Belgium. A memorandum between these three initiatives was signed in 2020 to measure and address issues including child labor, living income, deforestation and supply chain transparency. Similar partnerships between cocoa producing and consuming countries are being developed, such as the cooperation between the International Cocoa Organization (ICCO) and the Ghanaian Cocoa Authority, who aim to increase the proportion of sustainable cocoa being imported from Ghana to Switzerland to 80% by 2025. The ICCO is engaged in projects around the world to support sustainable cocoa production and provide current information on the world cocoa market. Cocoa is one of seven commodities included in the EU Regulation on Deforestation-free products (EUDR), which aims to guarantee that the products European Union (EU) citizens consume do not contribute to deforestation or forest degradation worldwide. Voluntary sustainability standards There are numerous voluntary certifications including Fairtrade and UTZ (now part of Rainforest Alliance) for cocoa which aim to differentiate between conventional cocoa production and that which is more sustainable in terms of social, economic and environmental concerns. As of 2016, at least 29% of global cocoa production was compliant with voluntary sustainability standards. However, among the different certifications there are significant differences in their goals and approaches, and a lack of data to show and compare the results on the farm level. While certifications can lead to increased farm income, the premium price paid for certified cocoa by consumers is not always reflected proportionally in the income for farmers. In 2012 the ICCO found that farm size mattered significantly when determining the benefits of certifications: farms with an area of less than 1 ha were less likely to benefit from such programs, while those with slightly larger farms, as well as access to member co-ops and the ability to improve productivity, were most likely to benefit from certification. Certification often requires high up-front costs, which are a barrier to small farmers, and particularly, female farmers. The primary benefits of certification include improving conservation practices and reducing the use of agrochemicals, business support through cooperatives and resource sharing, and a higher price for cocoa beans which can improve the standard of living for farmers. Fair trade cocoa producer groups are established in Belize, Bolivia, Cameroon, the Congo, Costa Rica, the Dominican Republic, Ecuador, Ghana, Haiti, India, Ivory Coast, Nicaragua, Panama, Paraguay, Peru, Sierra Leone, and São Tomé and Príncipe. In 2018, the Beyond Chocolate partnership was created between multiple stakeholders in the global cocoa industry to decrease deforestation and provide a living income for cocoa farmers. Many international companies are currently participating in this agreement, and the following voluntary certification programs are also partners in the Beyond Chocolate initiative: Rainforest Alliance, Fairtrade, ISEAL, and BioForum Vlaanderen. Many major chocolate production companies around the world have started to prioritize buying fair trade cocoa by investing in fair trade cocoa production, improving fair trade cocoa supply chains and setting purchasing goals to increase the proportion of fair trade chocolate available in the global market. The Rainforest Alliance lists the following goals as part of its certification program: forest protection and sustainable land management; improving rural livelihoods to reduce poverty; and addressing human rights issues such as child labor, gender inequality and indigenous land rights. The UTZ Certified program (now part of Rainforest Alliance) included counteracting child labor and exploitation of cocoa workers, requiring a code of conduct in relation to social and environmentally friendly factors, and improvement of farming methods to increase the profits and salaries of farmers and distributors. Environmental impact The relative poverty of many cocoa farmers means that environmental consequences such as deforestation are given little significance.
For decades, cocoa farmers have encroached on virgin forest, mostly after the felling of trees by logging companies. This trend has decreased as many governments and communities are beginning to protect their remaining forested zones. However, deforestation due to cocoa production is still a major concern in parts of West Africa. In Côte d'Ivoire and Ghana, barriers to land ownership have led migrant workers and farmers without the financial resources to buy land to expand their cocoa farming illegally into protected forests. Many cocoa farmers in this region continue to prioritize expansion of their cocoa production, which often leads to deforestation. Sustainable agricultural practices, such as utilizing cover crops to prepare the soil before planting and intercropping cocoa seedlings with companion plants, can support cocoa production and benefit the farm ecosystem. Prior to planting cocoa, leguminous cover crops can improve the soil nutrients and structure, which are important in areas where cocoa is produced because high heat and rainfall can diminish soil quality. Plantains are often intercropped with cocoa to provide shade to young seedlings and improve drought resilience of the soil. If the soil lacks essential nutrients, compost or animal manure can improve soil fertility and help with water retention. The use of chemical fertilizers and pesticides by cocoa farmers is limited. When cocoa bean prices are high, farmers may invest in their crops, leading to higher yields, which in turn tend to result in lower market prices and a renewed period of lower investment. While governments and NGOs have made efforts to help cocoa farmers in Ghana and Côte d'Ivoire sustainably improve crop yields, many of the educational and financial resources provided are more readily available to male farmers than to female farmers. Access to credit is important for cocoa farmers, as it allows them to implement sustainable practices, such as agroforestry, and provides a financial buffer in case disasters like pests or weather patterns decrease crop yield. Cocoa production is likely to be affected in various ways by the expected effects of global warming. Specific concerns have been raised concerning its future as a cash crop in West Africa, the current centre of global cocoa production. If temperatures continue to rise, West Africa could simply become unfit to grow the beans. The International Center for Tropical Agriculture warned in a paper published in 2013 that Ghana and Côte d'Ivoire, the world's two top cocoa growers, will experience a decline in suitable areas for cocoa production as global temperatures rise by up to 2 °C by 2050. Climate change, coupled with pests, poor soil health, and the demand for sustainable cocoa, has led to a rapid decline in cocoa productivity, resulting in reduced income for smallholder cocoa farmers. Severe droughts have led to soil fertility decline, causing a decrease in yields, and resulting in some farmers abandoning cocoa production. Cocoa bean husks also have potential for use as bedding material on dairy farms; their use may support udder health (less bacterial growth) and result in lower ammonia levels in the bedding. Agroforestry Cocoa beans may be cultivated under shade, as done in agroforestry. Agroforestry can reduce the pressure on existing protected forests for resources, such as firewood, and conserve biodiversity.
Integrating shade trees with cocoa plants reduces the risk of soil erosion and evaporation, and protects young cocoa plants from extreme heat. Agroforests act as buffers to formally protected forests and as biodiversity island refuges in an open, human-dominated landscape. Research on shade-grown coffee, a close counterpart, has shown that greater canopy cover in plots is significantly associated with greater mammal species diversity. The amount of diversity in tree species is fairly comparable between shade-grown cocoa plots and primary forests. Economic effects Cocoa contributes significantly to Nigerian economic activity, making up a substantial share of the country's foreign exchange earnings and providing income for farmers. Farmers can grow a variety of fruit-bearing shade trees to supplement their income and help cope with volatile cocoa prices. Although cocoa has been adapted to grow under a dense rainforest canopy, agroforestry does not significantly further enhance cocoa productivity. However, while growing cocoa in full sun without incorporating shade plants can temporarily increase cocoa yields, it will eventually decrease the quality of the soil due to nutrient loss, desertification and erosion, leading to unsustainable yields and dependency on inorganic fertilizers. Agroforestry practices stabilize and improve soil quality, which can sustain cocoa production in the long term. Over time, cocoa agroforestry systems become more similar to forest, although they never fully recover the original forest community within the life cycle of a productive cocoa plantation (approximately 25 years). Thus, although cocoa agroforests cannot replace natural forests, they are a valuable tool for conserving and protecting biodiversity while maintaining high levels of productivity in agricultural landscapes. In West Africa, where about 70% of the global cocoa supply originates from smallholder farmers, recent public–private initiatives such as the Cocoa Forest Initiatives in Ghana and Côte d'Ivoire (World Cocoa Foundation, 2017) and the Green Cocoa Landscape Programme in Cameroon (IDH, 2019) aim to support the sustainable intensification and climate resilience of cocoa production, the prevention of further deforestation, and the restoration of degraded forests. They often align with national REDD+ policies and plans. Consumption People around the world consume cocoa in many different forms, using more than 3 million tons of cocoa beans yearly. Once the cocoa beans have been harvested, fermented, dried and transported, they are processed into several components. Processor grindings serve as the main metric for market analysis. Processing is the last phase in which consumption of the cocoa bean can be equitably compared to supply. After this step, all the different components are sold across industries to many manufacturers of different types of products. Global market share for processing has remained stable, even as grindings increase to meet demand. One of the largest processing countries by volume is the Netherlands, handling around 13% of global grindings. Europe and Russia as a whole handle about 38% of the processing market. Average year-on-year demand growth has been just over 3% since 2008. While Europe and North America are relatively stable markets, increasing household income in developing countries is the main reason for the stable demand growth. As demand is expected to keep growing, supply growth may slow down due to changing weather conditions in the largest cocoa production areas. 
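The quoted growth figure can be made concrete with a small compound-growth calculation. The sketch below is purely illustrative: it assumes the roughly 3 million tonnes of yearly consumption as a 2008 baseline and holds the just-over-3% growth rate constant, neither of which the sources above guarantee.

    # Hypothetical sketch: project cocoa demand under steady compound growth.
    # Baseline volume and growth rate are taken loosely from the text; treat
    # the outputs as an illustration of compounding, not as market data.
    base_year = 2008
    base_volume = 3.0          # million tonnes per year (assumed baseline)
    annual_growth = 0.03       # "just over 3%" year-on-year growth

    for year in (2013, 2018, 2023):
        volume = base_volume * (1 + annual_growth) ** (year - base_year)
        print(f"{year}: ~{volume:.2f} million tonnes")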
Chocolate production To produce one kilogram of chocolate, around 300 to 600 cocoa beans are processed. The beans are roasted, cracked, and deshelled, resulting in pieces called nibs (the cotyledons, of which beans generally contain two). Most nibs are ground, using various methods, into a thick, creamy paste, known as chocolate liquor or cocoa paste. This "liquor" is then further processed into chocolate by mixing in (more) cocoa butter and sugar (and sometimes vanilla and lecithin as an emulsifier), and then refined, conched and tempered. Alternatively, it can be separated into cocoa powder and cocoa butter using a hydraulic press or the Broma process. This process produces around 50% cocoa butter and 50% cocoa powder. Cocoa powder may have a fat content of about 12%, but this varies significantly. Cocoa butter is used in chocolate bar manufacture, other confectionery, soaps, and cosmetics. Treating cocoa with an alkali produces Dutch process cocoa, which is less acidic, darker, and more mellow in flavor than untreated cocoa. Regular (non-alkalized) cocoa is acidic, so when cocoa is treated with an alkaline ingredient, generally potassium carbonate, the pH increases. This can be done at various stages during manufacturing, including during nib treatment, liquor treatment, or press cake treatment. Another process that helps develop the flavor is roasting, which can be done on the whole bean before shelling or on the nib after shelling. The time and temperature of the roast affect the result: a "low roast" produces a more acid, aromatic flavor, while a high roast gives a more intense, bitter flavor lacking complex flavor notes. Phytochemicals and research Cocoa contains various phytochemicals, such as flavanols (including epicatechin), procyanidins, and other flavonoids. A systematic review presented moderate evidence that the use of flavanol-rich chocolate and cocoa products causes a small (2 mmHg) blood pressure lowering effect in healthy adults, mostly in the short term. The highest levels of cocoa flavanols are found in raw cocoa and, to a lesser extent, dark chocolate, since flavonoids degrade during the cooking used to make chocolate. The beans contain theobromine, and between 0.1% and 0.7% caffeine, whereas dry coffee beans are about 1.2% caffeine. Theobromine found in the cocoa solids is fat soluble. See also Carob Cash crop Catechin and epicatechin, flavonoids present in cocoa Coenraad Johannes van Houten for Dutch process Coffee bean Domingo Ghirardelli Ghana Cocoa Board International CoCoa Farmers Organization External links References Sources Chocolate Components of chocolate Edible nuts and seeds Tropical fruit Crops originating from indigenous Americans Crops originating from Ecuador Crops originating from Peru Crops originating from North America Crops originating from South America Crops originating from Pre-Columbian North America Herbal and fungal stimulants Mesoamerican diet and subsistence Oaxacan cuisine Non-timber forest products Crops originating from Mexico
Cocoa bean
[ "Technology" ]
6,209
[ "Components of chocolate", "Components" ]
54,232
https://en.wikipedia.org/wiki/Reinforced%20concrete
Reinforced concrete, also called ferroconcrete, is a composite material in which concrete's relatively low tensile strength and ductility are compensated for by the inclusion of reinforcement having higher tensile strength or ductility. The reinforcement is usually, though not necessarily, steel reinforcing bars (known as rebar) and is usually embedded passively in the concrete before the concrete sets. However, post-tensioning is also employed as a technique to reinforce the concrete. In terms of volume used annually, it is one of the most common engineering materials. In corrosion engineering terms, when designed correctly, the alkalinity of the concrete protects the steel rebar from corrosion. Description Reinforcing schemes are generally designed to resist tensile stresses in particular regions of the concrete that might cause unacceptable cracking and/or structural failure. Modern reinforced concrete can contain varied reinforcing materials made of steel, polymers or alternate composite material in conjunction with rebar or not. Reinforced concrete may also be permanently stressed (concrete in compression, reinforcement in tension), so as to improve the behavior of the final structure under working loads. In the United States, the most common methods of doing this are known as pre-tensioning and post-tensioning. For a strong, ductile and durable construction the reinforcement needs to have at least the following properties: High relative strength High toleration of tensile strain Good bond to the concrete, irrespective of pH, moisture, and similar factors Thermal compatibility, not causing unacceptable stresses (such as expansion or contraction) in response to changing temperatures. Durability in the concrete environment, irrespective of corrosion or sustained stress for example. History The French builder François Coignet was the first to use iron-reinforced concrete as a building technique. In 1853, Coignet built for himself the first iron-reinforced concrete structure, a four-story house at 72 rue Charles Michels in the suburbs of Paris. Coignet's descriptions of reinforcing concrete suggest that he did not do it as a means of adding strength to the concrete but to keep walls in monolithic construction from overturning. The 1872–73 Pippen Building in Brooklyn, although not designed by Coignet, stands as a testament to his technique. In 1854, English builder William B. Wilkinson reinforced the concrete roof and floors in the two-story house he was constructing. His positioning of the reinforcement demonstrated that, unlike his predecessors, he had knowledge of tensile stresses. Between 1869 and 1870, Henry Eton designed, and Messrs W & T Phillips of London constructed, the wrought-iron-reinforced Homersfield Bridge, with a 50-foot (15.25 m) span, over the River Waveney, between the English counties of Norfolk and Suffolk. In 1877, Thaddeus Hyatt published a report entitled An Account of Some Experiments with Portland-Cement-Concrete Combined with Iron as a Building Material, with Reference to Economy of Metal in Construction and for Security against Fire in the Making of Roofs, Floors, and Walking Surfaces, in which he reported his experiments on the behaviour of reinforced concrete. His work played a major role in the evolution of concrete construction as a proven and studied science. Without Hyatt's work, more dangerous trial-and-error methods might have been relied on for the advancement of the technology. 
Joseph Monier, a 19th-century French gardener, was a pioneer in the development of structural, prefabricated and reinforced concrete, having been dissatisfied with the existing materials available for making durable flowerpots. He was granted a patent for reinforcing concrete flowerpots by means of mixing a wire mesh and a mortar shell. In 1877, Monier was granted another patent for a more advanced technique of reinforcing concrete columns and girders, using iron rods placed in a grid pattern. Though Monier undoubtedly knew that reinforcing concrete would improve its inner cohesion, it is not clear whether he even knew how much the tensile strength of concrete was improved by the reinforcing. Before the 1870s, the use of concrete construction, though dating back to the Roman Empire, and having been reintroduced in the early 19th century, was not yet a proven scientific technology. Ernest L. Ransome, an English-born engineer, was an early innovator of reinforced concrete techniques at the end of the 19th century. Using the knowledge of reinforced concrete developed during the previous 50 years, Ransome improved nearly all the styles and techniques of the earlier inventors of reinforced concrete. Ransome's key innovation was to twist the reinforcing steel bar, thereby improving its bond with the concrete. Gaining increasing fame from his concrete-constructed buildings, Ransome was able to build two of the first reinforced concrete bridges in North America. One of his bridges still stands on Shelter Island in New York's East End. One of the first concrete buildings constructed in the United States was a private home designed by William Ward, completed in 1876. The home was particularly designed to be fireproof. G. A. Wayss was a German civil engineer and a pioneer of iron and steel concrete construction. In 1879, Wayss bought the German rights to Monier's patents and, in 1884, his firm, Wayss & Freytag, made the first commercial use of reinforced concrete. Up until the 1890s, Wayss and his firm greatly contributed to the advancement of Monier's system of reinforcing, establishing it as a well-developed scientific technology. One of the first skyscrapers made with reinforced concrete was the 16-story Ingalls Building in Cincinnati, constructed in 1904. The first reinforced concrete building in Southern California was the Laughlin Annex in downtown Los Angeles, constructed in 1905. In 1906, 16 building permits were reportedly issued for reinforced concrete buildings in the City of Los Angeles, including the Temple Auditorium and 8-story Hayward Hotel. In 1906, a partial collapse of the Bixby Hotel in Long Beach killed 10 workers during construction when shoring was removed prematurely. That event spurred a scrutiny of concrete erection practices and building inspections. The structure was constructed of reinforced concrete frames with hollow clay tile ribbed flooring and hollow clay tile infill walls. That practice was strongly questioned by experts and recommendations for "pure" concrete construction were made, using reinforced concrete for the floors and walls as well as the frames. In April 1904, Julia Morgan, an American architect and engineer, who pioneered the aesthetic use of reinforced concrete, completed her first reinforced concrete structure, El Campanil, a bell tower at Mills College, which is located across the bay from San Francisco. 
Two years later, El Campanil survived the 1906 San Francisco earthquake without any damage, which helped build her reputation and launch her prolific career. The 1906 earthquake also changed the public's initial resistance to reinforced concrete as a building material, which had been criticized for its perceived dullness. In 1908, the San Francisco Board of Supervisors changed the city's building codes to allow wider use of reinforced concrete. In 1906, the National Association of Cement Users (NACU) published Standard No. 1 and, in 1910, the Standard Building Regulations for the Use of Reinforced Concrete. Use in construction Many different types of structures and components of structures can be built using reinforced concrete elements, including slabs, walls, beams, columns, foundations, frames and more. Reinforced concrete can be classified as precast or cast-in-place concrete. Designing and implementing the most efficient floor system is key to creating optimal building structures. Small changes in the design of a floor system can have significant impact on material costs, construction schedule, ultimate strength, operating costs, occupancy levels and end use of a building. Without reinforcement, constructing modern structures with concrete material would not be possible. Reinforced concrete elements When reinforced concrete elements are used in construction, they exhibit characteristic behavior when subjected to external loads, and may be subject to tension, compression, bending, shear, and/or torsion. Behavior Materials Concrete is a mixture of coarse (stone or brick chips) and fine (generally sand and/or crushed stone) aggregates with a paste of binder material (usually Portland cement) and water. When cement is mixed with a small amount of water, it hydrates to form microscopic opaque crystal lattices encapsulating and locking the aggregate into a rigid shape. The aggregates used for making concrete should be free from harmful substances like organic impurities, silt, clay, lignite, etc. Typical concrete mixes have high resistance to compressive stresses; however, any appreciable tension (e.g., due to bending) will break the microscopic rigid lattice, resulting in cracking and separation of the concrete. For this reason, typical non-reinforced concrete must be well supported to prevent the development of tension. If a material with high strength in tension, such as steel, is placed in concrete, then the composite material, reinforced concrete, resists not only compression but also bending and other direct tensile actions. A composite section where the concrete resists compression and reinforcement "rebar" resists tension can be made into almost any shape and size for the construction industry. Key characteristics Three physical characteristics give reinforced concrete its special properties: The coefficient of thermal expansion of concrete is similar to that of steel, eliminating large internal stresses due to differences in thermal expansion or contraction. When the cement paste within the concrete hardens, it conforms to the surface details of the steel, permitting any stress to be transmitted efficiently between the different materials. Usually steel bars are roughened or corrugated to further improve the bond or cohesion between the concrete and steel. 
The alkaline chemical environment provided by the alkali reserve (KOH, NaOH) and the portlandite (calcium hydroxide) contained in the hardened cement paste causes a passivating film to form on the surface of the steel, making it much more resistant to corrosion than it would be in neutral or acidic conditions. When the cement paste is exposed to air and meteoric water and reacts with atmospheric CO2, the portlandite and the calcium silicate hydrate (CSH) of the hardened cement paste become progressively carbonated, and the high pH gradually decreases from 13.5–12.5 to 8.5, the pH of water in equilibrium with calcite (calcium carbonate); the steel is then no longer passivated. As a rule of thumb, only to give an idea of orders of magnitude, steel is protected at pH above ~11 but starts to corrode below ~10, depending on steel characteristics and local physico-chemical conditions, when concrete becomes carbonated. Carbonation of concrete along with chloride ingress are amongst the chief reasons for the failure of reinforcement bars in concrete. The relative cross-sectional area of steel required for typical reinforced concrete is usually quite small and varies from 1% for most beams and slabs to 6% for some columns. Reinforcing bars are normally round in cross-section and vary in diameter. Reinforced concrete structures sometimes have provisions such as ventilated hollow cores to control their moisture and humidity. The distribution of strength characteristics along the cross-section of vertical reinforced concrete elements is inhomogeneous, in spite of the reinforcement. Mechanism of composite action of reinforcement and concrete The reinforcement in an RC structure, such as a steel bar, has to undergo the same strain or deformation as the surrounding concrete in order to prevent discontinuity, slip or separation of the two materials under load. Maintaining composite action requires transfer of load between the concrete and steel. The direct stress is transferred from the concrete to the bar interface so as to change the tensile stress in the reinforcing bar along its length. This load transfer is achieved by means of bond (anchorage) and is idealized as a continuous stress field that develops in the vicinity of the steel-concrete interface. The reasons that the two different material components, concrete and steel, can work together are as follows: (1) Reinforcement can be well bonded to the concrete, so the two can jointly resist external loads and deform together. (2) The thermal expansion coefficients of concrete and steel are so close that thermal stress-induced damage to the bond between the two components can be prevented. (3) Concrete can protect the embedded steel from corrosion and high-temperature induced softening. Anchorage (bond) in concrete: Codes of specifications Because the actual bond stress varies along the length of a bar anchored in a zone of tension, current international codes of specifications use the concept of development length rather than bond stress. The main requirement for safety against bond failure is to provide a sufficient extension of the length of the bar beyond the point where the steel is required to develop its yield stress, and this length must be at least equal to its development length. However, if the actual available length is inadequate for full development, special anchorages must be provided, such as cogs or hooks or mechanical end plates. 
The same concept applies to lap splice length mentioned in the codes where splices (overlapping) provided between two adjacent bars in order to maintain the required continuity of stress in the splice zone. Anticorrosion measures In wet and cold climates, reinforced concrete for roads, bridges, parking structures and other structures that may be exposed to deicing salt may benefit from use of corrosion-resistant reinforcement such as uncoated, low carbon/chromium (micro composite), epoxy-coated, hot dip galvanized or stainless steel rebar. Good design and a well-chosen concrete mix will provide additional protection for many applications. Uncoated, low carbon/chromium rebar looks similar to standard carbon steel rebar due to its lack of a coating; its highly corrosion-resistant features are inherent in the steel microstructure. It can be identified by the unique ASTM specified mill marking on its smooth, dark charcoal finish. Epoxy-coated rebar can easily be identified by the light green color of its epoxy coating. Hot dip galvanized rebar may be bright or dull gray depending on length of exposure, and stainless rebar exhibits a typical white metallic sheen that is readily distinguishable from carbon steel reinforcing bar. Reference ASTM standard specifications A1035/A1035M Standard Specification for Deformed and Plain Low-carbon, Chromium, Steel Bars for Concrete Reinforcement, A767 Standard Specification for Hot Dip Galvanized Reinforcing Bars, A775 Standard Specification for Epoxy Coated Steel Reinforcing Bars and A955 Standard Specification for Deformed and Plain Stainless Bars for Concrete Reinforcement. Another, cheaper way of protecting rebars is coating them with zinc phosphate. Zinc phosphate slowly reacts with calcium cations and the hydroxyl anions present in the cement pore water and forms a stable hydroxyapatite layer. Penetrating sealants typically must be applied some time after curing. Sealants include paint, plastic foams, films and aluminum foil, felts or fabric mats sealed with tar, and layers of bentonite clay, sometimes used to seal roadbeds. Corrosion inhibitors, such as calcium nitrite [Ca(NO2)2], can also be added to the water mix before pouring concrete. Generally, 1–2 wt. % of [Ca(NO2)2] with respect to cement weight is needed to prevent corrosion of the rebars. The nitrite anion is a mild oxidizer that oxidizes the soluble and mobile ferrous ions (Fe2+) present at the surface of the corroding steel and causes them to precipitate as an insoluble ferric hydroxide (Fe(OH)3). This causes the passivation of steel at the anodic oxidation sites. Nitrite is a much more active corrosion inhibitor than nitrate, which is a less powerful oxidizer of the divalent iron. Reinforcement and terminology of beams A beam bends under bending moment, resulting in a small curvature. At the outer face (tensile face) of the curvature the concrete experiences tensile stress, while at the inner face (compressive face) it experiences compressive stress. A singly reinforced beam is one in which the concrete element is only reinforced near the tensile face and the reinforcement, called tension steel, is designed to resist the tension. A doubly reinforced beam is the section in which besides the tensile reinforcement the concrete element is also reinforced near the compressive face to help the concrete resist compression and take stresses. The latter reinforcement is called compression steel. 
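For a singly reinforced rectangular section, the ideas above reduce to a short calculation using the equivalent rectangular stress block common in ACI 318-style design: the block depth is a = As·fy / (0.85·f'c·b) and the nominal moment capacity is Mn = As·fy·(d − a/2). The sketch below is a simplified illustration with invented dimensions and material strengths; real design adds strength-reduction factors, minimum and maximum steel ratios, and many other code checks.

    # Simplified sketch of singly reinforced beam capacity (ACI-style
    # rectangular stress block). All numbers are illustrative only.
    def nominal_moment(As, fy, fc, b, d):
        """Nominal moment capacity in kN*m (inputs in mm and MPa)."""
        a = As * fy / (0.85 * fc * b)      # depth of stress block, mm
        Mn = As * fy * (d - a / 2.0)       # nominal moment, N*mm
        return Mn / 1e6                    # convert to kN*m

    # Example: As = 603 mm^2 (three 16 mm bars), fy = 420 MPa,
    # f'c = 28 MPa, b = 300 mm, d = 450 mm  ->  roughly 110 kN*m.
    print(f"M_n ~ {nominal_moment(603, 420, 28, 300, 450):.0f} kN*m")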
When the compression zone of a concrete section is inadequate to resist the compressive moment (positive moment), extra reinforcement has to be provided if the architect limits the dimensions of the section. An under-reinforced beam is one in which the tension capacity of the tensile reinforcement is smaller than the combined compression capacity of the concrete and the compression steel (under-reinforced at tensile face). When the reinforced concrete element is subject to increasing bending moment, the tension steel yields while the concrete does not reach its ultimate failure condition. As the tension steel yields and stretches, an "under-reinforced" concrete also yields in a ductile manner, exhibiting a large deformation and warning before its ultimate failure. In this case the yield stress of the steel governs the design. An over-reinforced beam is one in which the tension capacity of the tension steel is greater than the combined compression capacity of the concrete and the compression steel (over-reinforced at tensile face). The "over-reinforced concrete" beam therefore fails by crushing of the compressive-zone concrete before the tension zone steel yields, which does not provide any warning before failure, as the failure is instantaneous. A balanced-reinforced beam is one in which both the compressive and tensile zones reach yielding at the same imposed load on the beam, and the concrete will crush and the tensile steel will yield at the same time. This design criterion is, however, as risky as over-reinforced concrete, because failure is sudden as the concrete crushes at the same time as the tensile steel yields, which gives very little warning of distress in tension failure. Steel-reinforced concrete moment-carrying elements should normally be designed to be under-reinforced so that users of the structure will receive warning of impending collapse. The characteristic strength is the strength of a material where less than 5% of the specimens show lower strength. The design strength or nominal strength is the strength of a material, including a material-safety factor. The value of the safety factor generally ranges from 0.75 to 0.85 in permissible stress design. The ultimate limit state is the theoretical failure point with a certain probability. It is stated under factored loads and factored resistances. Reinforced concrete structures are normally designed according to the rules and regulations or recommendations of a code such as ACI-318, CEB, Eurocode 2 or the like. WSD, USD or LRFD methods are used in the design of RC structural members. Analysis and design of RC members can be carried out by using linear or non-linear approaches. When applying safety factors, building codes normally propose linear approaches, but non-linear approaches for some cases. For examples of non-linear numerical simulation and calculation, see the references. Prestressed concrete Prestressing concrete is a technique that greatly increases the load-bearing strength of concrete beams. The reinforcing steel in the bottom part of the beam, which will be subjected to tensile forces when in service, is placed in tension before the concrete is poured around it. Once the concrete has hardened, the tension on the reinforcing steel is released, placing a built-in compressive force on the concrete. When loads are applied, the reinforcing steel takes on more stress and the compressive force in the concrete is reduced, but does not become a tensile force. 
Since the concrete is always under compression, it is less subject to cracking and failure. Common failure modes of steel reinforced concrete Reinforced concrete can fail due to inadequate strength, leading to mechanical failure, or due to a reduction in its durability. Corrosion and freeze/thaw cycles may damage poorly designed or constructed reinforced concrete. When rebar corrodes, the oxidation products (rust) expand and tend to flake, cracking the concrete and unbonding the rebar from the concrete. Typical mechanisms leading to durability problems are discussed below. Mechanical failure Cracking of the concrete section is nearly impossible to prevent; however, the size and location of cracks can be limited and controlled by appropriate reinforcement, control joints, curing methodology and concrete mix design. Cracking can allow moisture to penetrate and corrode the reinforcement. This is a serviceability failure in limit state design. Cracking is normally the result of an inadequate quantity of rebar, or rebar spaced at too great a distance. The concrete cracks either under excess loading, or due to internal effects such as early thermal shrinkage while it cures. Ultimate failure leading to collapse can be caused by crushing of the concrete, which occurs when compressive stresses exceed its strength, by yielding or failure of the rebar when bending or shear stresses exceed the strength of the reinforcement, or by bond failure between the concrete and the rebar. Carbonation Carbonation, or neutralisation, is a chemical reaction between carbon dioxide in the air and calcium hydroxide and hydrated calcium silicate in the concrete. When a concrete structure is designed, it is usual to specify the concrete cover for the rebar (the depth of the rebar within the object). The minimum concrete cover is normally regulated by design or building codes. If the reinforcement is too close to the surface, early failure due to corrosion may occur. The concrete cover depth can be measured with a cover meter. However, carbonated concrete incurs a durability problem only when there is also sufficient moisture and oxygen to cause electropotential corrosion of the reinforcing steel. One method of testing a structure for carbonation is to drill a fresh hole in the surface and then treat the cut surface with phenolphthalein indicator solution. This solution turns pink when in contact with alkaline concrete, making it possible to see the depth of carbonation. Using an existing hole does not suffice because the exposed surface will already be carbonated. Chlorides Chlorides can promote the corrosion of embedded rebar if present in sufficiently high concentration. Chloride anions induce both localized corrosion (pitting corrosion) and generalized corrosion of steel reinforcements. For this reason, one should use only fresh raw water or potable water for mixing concrete, ensure that the coarse and fine aggregates do not contain chlorides, and avoid admixtures that might contain chlorides. It was once common for calcium chloride to be used as an admixture to promote rapid set-up of the concrete. It was also mistakenly believed that it would prevent freezing. However, this practice fell into disfavor once the deleterious effects of chlorides became known. It should be avoided whenever possible. 
The use of de-icing salts on roadways, used to lower the freezing point of water, is probably one of the primary causes of premature failure of reinforced or prestressed concrete bridge decks, roadways, and parking garages. The use of epoxy-coated reinforcing bars and the application of cathodic protection have mitigated this problem to some extent. FRP (fiber-reinforced polymer) rebars are also known to be less susceptible to chlorides. Properly designed concrete mixtures that have been allowed to cure properly are effectively impervious to the effects of de-icers. Another important source of chloride ions is sea water. Sea water contains by weight approximately 3.5% salts. These salts include sodium chloride, magnesium sulfate, calcium sulfate, and bicarbonates. In water, these salts dissociate into free ions (Na+, Mg2+, Cl−, SO42−, HCO3−) and migrate with the water into the capillaries of the concrete. Chloride ions, which make up about 50% of these ions, are particularly aggressive as a cause of corrosion of carbon steel reinforcement bars. In the 1960s and 1970s it was also relatively common for magnesite, a chloride-rich carbonate mineral, to be used as a floor-topping material. This was done principally as a levelling and sound-attenuating layer. However, it is now known that when these materials come into contact with moisture they produce a weak solution of hydrochloric acid due to the presence of chlorides in the magnesite. Over a period of time (typically decades), the solution causes corrosion of the embedded rebars. This was most commonly found in wet areas or areas repeatedly exposed to moisture. Alkali silica reaction This is a reaction of amorphous silica (chalcedony, chert, siliceous limestone) sometimes present in the aggregates with the hydroxyl ions (OH−) from the cement pore solution. Poorly crystallized silica (SiO2) dissolves and dissociates at high pH (12.5–13.5) in alkaline water. The soluble dissociated silicic acid reacts in the porewater with the calcium hydroxide (portlandite) present in the cement paste to form an expansive calcium silicate hydrate (CSH). The alkali–silica reaction (ASR) causes localised swelling responsible for tensile stress and cracking. The conditions required for alkali silica reaction are threefold: (1) aggregate containing an alkali-reactive constituent (amorphous silica), (2) sufficient availability of hydroxyl ions (OH−), and (3) sufficient moisture, above 75% relative humidity (RH) within the concrete. This phenomenon is sometimes popularly referred to as "concrete cancer". This reaction occurs independently of the presence of rebars; massive concrete structures such as dams can be affected. Conversion of high alumina cement Resistant to weak acids and especially sulfates, this cement cures quickly and has very high durability and strength. It was frequently used after World War II to make precast concrete objects. However, it can lose strength with heat or time (conversion), especially when not properly cured. After the collapse of three roofs made of prestressed concrete beams using high alumina cement, this cement was banned in the UK in 1976. Subsequent inquiries into the matter showed that the beams were improperly manufactured, but the ban remained. Sulfates Sulfates (SO4) in the soil or in groundwater, in sufficient concentration, can react with the Portland cement in concrete, causing the formation of expansive products, e.g., ettringite or thaumasite, which can lead to early failure of the structure. 
The most typical attack of this type is on concrete slabs and foundation walls at grades where the sulfate ion, via alternate wetting and drying, can increase in concentration. As the concentration increases, the attack on the Portland cement can begin. For buried structures such as pipe, this type of attack is much rarer, especially in the eastern United States. The sulfate ion concentration increases much more slowly in the soil mass and is especially dependent upon the initial amount of sulfates in the native soil. A chemical analysis of soil borings to check for the presence of sulfates should be undertaken during the design phase of any project involving concrete in contact with the native soil. If the concentrations are found to be aggressive, various protective coatings can be applied. Also, in the US, ASTM C150 Type V Portland cement can be used in the mix. This type of cement is designed to be particularly resistant to sulfate attack. Steel plate construction In steel plate construction, stringers join parallel steel plates. The plate assemblies are fabricated off site, and welded together on-site to form steel walls connected by stringers. The walls become the form into which concrete is poured. Steel plate construction speeds reinforced concrete construction by cutting out the time-consuming on-site manual steps of tying rebar and building forms. The method results in excellent strength because the steel is on the outside, where tensile forces are often greatest. Fiber-reinforced concrete Fiber reinforcement is mainly used in shotcrete, but can also be used in normal concrete. Fiber-reinforced normal concrete is mostly used for on-ground floors and pavements, but can also be considered for a wide range of construction parts (beams, pillars, foundations, etc.), either alone or with hand-tied rebars. Concrete reinforced with fibers (which are usually steel, glass, or plastic fibers) or cellulose polymer fiber is less expensive than hand-tied rebar. The shape, dimension, and length of the fiber are important. A thin and short fiber, for example short, hair-shaped glass fiber, is only effective during the first hours after pouring the concrete (its function is to reduce cracking while the concrete is stiffening), but it will not increase the concrete's tensile strength. A normal-size fiber for European shotcrete (1 mm diameter, 45 mm length, steel or plastic) will increase the concrete's tensile strength. Fiber reinforcement is most often used to supplement or partially replace primary rebar, and in some cases it can be designed to fully replace rebar. Steel is the strongest commonly available fiber, and comes in different lengths (30 to 80 mm in Europe) and shapes (end-hooks). Steel fibers can only be used on surfaces that can tolerate or avoid corrosion and rust stains. In some cases, a steel-fiber surface is faced with other materials. Glass fiber is inexpensive and corrosion-proof, but not as ductile as steel. Recently, spun basalt fiber, long available in Eastern Europe, has become available in the U.S. and Western Europe. Basalt fiber is stronger and less expensive than glass, but historically has not resisted the alkaline environment of Portland cement well enough to be used as direct reinforcement. New materials use plastic binders to isolate the basalt fiber from the cement. The premium fibers are graphite-reinforced plastic fibers, which are nearly as strong as steel, lighter in weight, and corrosion-proof. 
Some experiments have had promising early results with carbon nanotubes, but the material is still far too expensive for any building. Non-steel reinforcement There is considerable overlap between the subjects of non-steel reinforcement and fiber-reinforcement of concrete. The introduction of non-steel reinforcement of concrete is relatively recent; it takes two major forms: non-metallic rebar rods, and non-steel (usually also non-metallic) fibers incorporated into the cement matrix. For example, there is increasing interest in glass fiber reinforced concrete (GFRC) and in various applications of polymer fibers incorporated into concrete. Although currently there is not much suggestion that such materials will replace metal rebar, some of them have major advantages in specific applications, and there also are new applications in which metal rebar simply is not an option. However, the design and application of non-steel reinforcing is fraught with challenges. For one thing, concrete is a highly alkaline environment, in which many materials, including most kinds of glass, have a poor service life. Also, the behavior of such reinforcing materials differs from the behavior of metals, for instance in terms of shear strength, creep and elasticity. Fiber-reinforced plastic/polymer (FRP) and glass-reinforced plastic (GRP) consist of fibers of polymer, glass, carbon, aramid or other polymers or high-strength fibers set in a resin matrix to form a rebar rod, or grid, or fiber. These rebars are installed in much the same manner as steel rebars. The cost is higher but, suitably applied, the structures have advantages, in particular a dramatic reduction in problems related to corrosion, either by intrinsic concrete alkalinity or by external corrosive fluids that might penetrate the concrete. These structures can be significantly lighter and usually have a longer service life. The cost of these materials has dropped dramatically since their widespread adoption in the aerospace industry and by the military. In particular, FRP rods are useful for structures where the presence of steel would not be acceptable. For example, MRI machines have huge magnets, and accordingly require non-magnetic buildings. Again, toll booths that read radio tags need reinforced concrete that is transparent to radio waves. Also, where the design life of the concrete structure is more important than its initial costs, non-steel reinforcing often has its advantages where corrosion of reinforcing steel is a major cause of failure. In such situations corrosion-proof reinforcing can extend a structure's life substantially, for example in the intertidal zone. FRP rods may also be useful in situations where it is likely that the concrete structure may be compromised in future years, for example the edges of balconies when balustrades are replaced, and bathroom floors in multi-story construction where the service life of the floor structure is likely to be many times the service life of the waterproofing building membrane. Plastic reinforcement often is stronger, or at least has a better strength to weight ratio than reinforcing steels. Also, because it resists corrosion, it does not need a protective concrete cover as thick as steel reinforcement does (typically 30 to 50 mm or more). FRP-reinforced structures therefore can be lighter and last longer. Accordingly, for some applications the whole-life cost will be price-competitive with steel-reinforced concrete. 
The material properties of FRP or GRP bars differ markedly from steel, so there are differences in the design considerations. FRP or GRP bars have relatively higher tensile strength but lower stiffness, so that deflections are likely to be higher than for equivalent steel-reinforced units. Structures with internal FRP reinforcement typically have an elastic deformability comparable to the plastic deformability (ductility) of steel-reinforced structures. Failure in either case is more likely to occur by compression of the concrete than by rupture of the reinforcement. Deflection is always a major design consideration for reinforced concrete. Deflection limits are set to ensure that crack widths in steel-reinforced concrete are controlled to prevent water, air or other aggressive substances reaching the steel and causing corrosion. For FRP-reinforced concrete, aesthetics and possibly water-tightness will be the limiting criteria for crack width control. FRP rods also have relatively lower compressive strengths than steel rebar, and accordingly require different design approaches for reinforced concrete columns. One drawback to the use of FRP reinforcement is its limited fire resistance. Where fire safety is a consideration, structures employing FRP have to maintain their strength and the anchoring of the forces at temperatures to be expected in the event of fire. For purposes of fireproofing, an adequate thickness of cement concrete cover or protective cladding is necessary. The addition of 1 kg/m3 of polypropylene fibers to concrete has been shown to reduce spalling during a simulated fire. (The improvement is thought to be due to the formation of pathways out of the bulk of the concrete, allowing steam pressure to dissipate.) Another problem is the effectiveness of shear reinforcement. FRP rebar stirrups formed by bending before hardening generally perform relatively poorly in comparison to steel stirrups or to structures with straight fibers. When strained, the zone between the straight and curved regions is subject to strong bending, shear, and longitudinal stresses. Special design techniques are necessary to deal with such problems. There is growing interest in applying external reinforcement to existing structures using advanced materials such as composite (fiberglass, basalt, carbon) rebar, which can impart exceptional strength. Worldwide, there are a number of brands of composite rebar recognized by different countries, such as Aslan, DACOT, V-rod, and ComBar. The number of projects using composite rebar increases day by day around the world, in countries ranging from the USA, Russia, and South Korea to Germany. See also Anchorage in reinforced concrete Concrete cover Concrete slab Corrosion engineering Cover meter Falsework Ferrocement Formwork Henri de Miffonis Interfacial transition zone Precast concrete Reinforced concrete structures durability Reinforced solid Structural robustness Types of concrete References Further reading / External links Threlfall A., et al. Reynolds's Reinforced Concrete Designer's Handbook, 11th ed. Newby F., Early Reinforced Concrete, Ashgate Variorum, 2001. Kim, S., Surek, J. and J. Baker-Jarvis. "Electromagnetic Metrology on Concrete and Corrosion." Journal of Research of the National Institute of Standards and Technology, Vol. 116, No. 3 (May–June 2011): 655–669. Daniel R., Formwork UK, "Concrete frame structures." 
Short documentary about reinforced concrete and its challenges, 2024 (The Aesthetic City) Concrete buildings and structures Structural engineering Materials science Civil engineering
Reinforced concrete
[ "Physics", "Materials_science", "Engineering" ]
7,562
[ "Structural engineering", "Applied and interdisciplinary physics", "Materials science", "Construction", "Civil engineering", "nan" ]
54,240
https://en.wikipedia.org/wiki/Singularity%20%28mathematics%29
In mathematics, a singularity is a point at which a given mathematical object is not defined, or a point where the mathematical object ceases to be well-behaved in some particular way, such as by lacking differentiability or analyticity. For example, the reciprocal function f(x) = 1/x has a singularity at x = 0, where the value of the function is not defined, as it involves a division by zero. The absolute value function g(x) = |x| also has a singularity at x = 0, since it is not differentiable there. The algebraic curve defined by {(x, y) : y^3 − x^2 = 0} in the (x, y) coordinate system has a singularity (called a cusp) at (0, 0). For singularities in algebraic geometry, see singular point of an algebraic variety. For singularities in differential geometry, see singularity theory. Real analysis In real analysis, singularities are either discontinuities, or discontinuities of the derivative (sometimes also discontinuities of higher order derivatives). There are four kinds of discontinuities: type I, which has two subtypes, and type II, which can also be divided into two subtypes (though usually is not). To describe the way these two types of limits are being used, suppose that f(x) is a function of a real argument x, and for any value of its argument, say c, the left-handed limit, f(c−), and the right-handed limit, f(c+), are defined by: f(c−) = lim_{x → c} f(x), constrained by x < c, and f(c+) = lim_{x → c} f(x), constrained by x > c. The value f(c−) is the value that the function f(x) tends towards as the value x approaches c from below, and the value f(c+) is the value that the function f(x) tends towards as the value x approaches c from above, regardless of the actual value the function has at the point where x = c. There are some functions for which these limits do not exist at all. For example, the function g(x) = sin(1/x) does not tend towards anything as x approaches c = 0. The limits in this case are not infinite, but rather undefined: there is no value that g(x) settles in on. Borrowing from complex analysis, this is sometimes called an essential singularity. The possible cases at a given value c for the argument are as follows. A point of continuity is a value of c for which f(c−) = f(c) = f(c+), as one expects for a smooth function. All the values must be finite. If c is not a point of continuity, then a discontinuity occurs at c. A type I discontinuity occurs when both f(c−) and f(c+) exist and are finite, but at least one of the following three conditions also applies: f(c−) ≠ f(c+); f(x) is not defined for the case of x = c; or f(c) has a defined value, which, however, does not match the value of the two limits. Type I discontinuities can be further distinguished as being one of the following subtypes: A jump discontinuity occurs when f(c−) ≠ f(c+), regardless of whether f(c) is defined, and regardless of its value if it is defined. A removable discontinuity occurs when f(c−) = f(c+), also regardless of whether f(c) is defined, and regardless of its value if it is defined (but which does not match that of the two limits). A type II discontinuity occurs when either f(c−) or f(c+) does not exist (possibly both). This has two subtypes, which are usually not considered separately: An infinite discontinuity is the special case when either the left hand or right hand limit does not exist, specifically because it is infinite, and the other limit is either also infinite, or is some well defined finite number. In other words, the function has an infinite discontinuity when its graph has a vertical asymptote. An essential singularity is a term borrowed from complex analysis (see below). This is the case when either one or the other limits f(c−) or f(c+) does not exist, but not because it is an infinite discontinuity. Essential singularities approach no limit, not even if valid answers are extended to include ±∞. 
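The classification above can be explored numerically by sampling a function on each side of the suspect point. The sketch below is a heuristic only: the step sequence and tolerance are arbitrary choices, and no finite sampling can prove that a limit exists or fails to exist.

    import math

    # Heuristic sketch: estimate a one-sided limit of f at c by sampling
    # ever-closer points; return None when the samples do not settle.
    def one_sided(f, c, side, steps=8):
        vals = [f(c + side * 10.0 ** -k) for k in range(3, 3 + steps)]
        return vals[-1] if abs(vals[-1] - vals[-2]) < 1e-6 else None

    # 1/x: both one-sided samples blow up (type II, infinite discontinuity).
    # |x|: both limits agree (continuous; its singularity is in the derivative).
    # sin(1/x): the samples never settle (essential-singularity behaviour).
    for name, f in [("1/x", lambda x: 1.0 / x),
                    ("|x|", abs),
                    ("sin(1/x)", lambda x: math.sin(1.0 / x))]:
        left, right = one_sided(f, 0.0, -1), one_sided(f, 0.0, +1)
        print(f"{name}: left={left}, right={right}")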
In real analysis, a singularity or discontinuity is a property of a function alone. Any singularities that may exist in the derivative of a function are considered as belonging to the derivative, not to the original function. Coordinate singularities A coordinate singularity occurs when an apparent singularity or discontinuity occurs in one coordinate frame, which can be removed by choosing a different frame. An example of this is the apparent singularity at the 90 degree latitude in spherical coordinates. An object moving due north (for example, along the line 0 degrees longitude) on the surface of a sphere will suddenly experience an instantaneous change in longitude at the pole (in the case of the example, jumping from longitude 0 to longitude 180 degrees). This discontinuity, however, is only apparent; it is an artifact of the coordinate system chosen, which is singular at the poles. A different coordinate system would eliminate the apparent discontinuity (e.g., by replacing the latitude/longitude representation with an n-vector representation). Complex analysis In complex analysis, there are several classes of singularities. These include the isolated singularities, the nonisolated singularities, and the branch points. Isolated singularities Suppose that f is a function that is complex differentiable in the complement of a point a in an open subset U of the complex numbers C. Then: The point a is a removable singularity of f if there exists a holomorphic function g defined on all of U such that f(z) = g(z) for all z in U \ {a}. The function g is a continuous replacement for the function f. The point a is a pole or non-essential singularity of f if there exists a holomorphic function g defined on U with g(a) nonzero, and a natural number n such that f(z) = g(z) / (z − a)^n for all z in U \ {a}. The least such number n is called the order of the pole. The derivative at a non-essential singularity itself has a non-essential singularity, with n increased by 1 (except if n is 0, so that the singularity is removable). The point a is an essential singularity of f if it is neither a removable singularity nor a pole. The point a is an essential singularity if and only if the Laurent series has infinitely many powers of negative degree. Nonisolated singularities Other than isolated singularities, complex functions of one variable may exhibit other singular behaviour. These are termed nonisolated singularities, of which there are two types: Cluster points: limit points of isolated singularities. If they are all poles, despite admitting Laurent series expansions on each of them, then no such expansion is possible at its limit. Natural boundaries: any non-isolated set (e.g. a curve) on which functions cannot be analytically continued around (or outside them if they are closed curves in the Riemann sphere). Branch points Branch points are generally the result of a multi-valued function, such as √z or log(z), which are defined within a certain limited domain so that the function can be made single-valued within the domain. The cut is a line or curve excluded from the domain to introduce a technical separation between discontinuous values of the function. When the cut is genuinely required, the function will have distinctly different values on each side of the branch cut. The shape of the branch cut is a matter of choice, even though it must connect two different branch points (such as z = 0 and z = ∞ for log(z)) which are fixed in place. Finite-time singularity A finite-time singularity occurs when one input variable is time, and an output variable increases towards infinity at a finite time. 
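A standard worked example, not taken from this article itself, shows how a finite-time singularity can emerge from a perfectly smooth equation. Consider the ordinary differential equation dx/dt = x^2; separating variables gives

    \frac{dx}{dt} = x^2, \quad x(0) = x_0 > 0
    \quad\Longrightarrow\quad
    x(t) = \frac{x_0}{1 - x_0 t},

so the solution exhibits hyperbolic growth and blows up at the finite time t = 1/x_0.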
These are important in kinematics and partial differential equations – infinities do not occur physically, but the behavior near the singularity is often of interest. Mathematically, the simplest finite-time singularities are power laws for various exponents, of the form x^(−α), of which the simplest is hyperbolic growth, where the exponent is (negative) 1: x^(−1). More precisely, in order to get a singularity at positive time as time advances (so the output grows to infinity), one instead uses (t_0 − t)^(−α) (using t for time, reversing direction to −t so that time increases to infinity, and shifting the singularity forward from 0 to a fixed time t_0). An example would be the bouncing motion of an inelastic ball on a plane. If idealized motion is considered, in which the same fraction of kinetic energy is lost on each bounce, the frequency of bounces becomes infinite, as the ball comes to rest in a finite time. Other examples of finite-time singularities include the various forms of the Painlevé paradox (for example, the tendency of a piece of chalk to skip when dragged across a blackboard), and how the precession rate of a coin spun on a flat surface accelerates towards infinity before abruptly stopping (as studied using the Euler's Disk toy). Hypothetical examples include Heinz von Foerster's facetious "Doomsday's equation" (simplistic models yield infinite human population in finite time). Algebraic geometry and commutative algebra In algebraic geometry, a singularity of an algebraic variety is a point of the variety where the tangent space may not be regularly defined. The simplest example of singularities are curves that cross themselves. But there are other types of singularities, like cusps. For example, the equation y^2 − x^3 = 0 defines a curve that has a cusp at the origin x = y = 0. One could define the x-axis as a tangent at this point, but this definition can not be the same as the definition at other points. In fact, in this case, the x-axis is a "double tangent." For affine and projective varieties, the singularities are the points where the Jacobian matrix has a rank which is lower than at other points of the variety. An equivalent definition in terms of commutative algebra may be given, which extends to abstract varieties and schemes: A point is singular if the local ring at this point is not a regular local ring. See also Catastrophe theory Defined and undefined Degeneracy (mathematics) Hyperbolic growth Movable singularity Pathological (mathematics) Regular singularity Singular solution References Mathematical analysis
Singularity (mathematics)
[ "Mathematics" ]
1,987
[ "Mathematical analysis" ]
54,244
https://en.wikipedia.org/wiki/Gravitational%20singularity
A gravitational singularity, spacetime singularity, or simply singularity, is a theoretical condition in which gravity is predicted to be so intense that spacetime itself would break down catastrophically. As such, a singularity is by definition no longer part of the regular spacetime and cannot be determined by "where" or "when". Gravitational singularities exist at a junction between general relativity and quantum mechanics; therefore, the properties of the singularity cannot be described without an established theory of quantum gravity. Trying to find a complete and precise definition of singularities in the theory of general relativity, the current best theory of gravity, remains a difficult problem. A singularity in general relativity can be defined by the scalar invariant curvature becoming infinite or, better, by a geodesic being incomplete. Gravitational singularities are mainly considered in the context of general relativity, where density would become infinite at the center of a black hole without corrections from quantum mechanics, and within astrophysics and cosmology as the earliest state of the universe during the Big Bang. Physicists have not reached a consensus about what actually happens at the extreme densities predicted by singularities (including at the start of the Big Bang). General relativity predicts that any object collapsing beyond a certain point (for stars this is the Schwarzschild radius) would form a black hole, inside which a singularity (covered by an event horizon) would be formed. The Penrose–Hawking singularity theorems define a singularity to have geodesics that cannot be extended in a smooth manner. The termination of such a geodesic is considered to be the singularity. Modern theory asserts that the initial state of the universe, at the beginning of the Big Bang, was a singularity. In this case, the universe did not collapse into a black hole, because currently-known calculations and density limits for gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. Neither general relativity nor quantum mechanics can currently describe the earliest moments of the Big Bang, but in general, quantum mechanics does not permit particles to inhabit a space smaller than their Compton wavelengths. Interpretation Many theories in physics have mathematical singularities of one kind or another. Equations for these physical theories predict that some quantity becomes infinite or increases without limit. This is generally a sign of a missing piece in the theory, as in the ultraviolet catastrophe, re-normalization, and instability of a hydrogen atom predicted by the Larmor formula. In classical field theories, including special relativity but not general relativity, one can say that a solution has a singularity at a particular point in spacetime where certain physical properties become ill-defined, with spacetime serving as a background field to locate the singularity. A singularity in general relativity, on the other hand, is more complex because spacetime itself becomes ill-defined, and the singularity is no longer part of the regular spacetime manifold. In general relativity, a singularity cannot be defined by "where" or "when". Some theories, such as the theory of loop quantum gravity, suggest that singularities may not exist. 
This is also true for such classical unified field theories as the Einstein–Maxwell–Dirac equations. The idea can be stated in the form that, due to quantum gravity effects, there is a minimum distance beyond which the force of gravity no longer continues to increase as the distance between the masses becomes shorter, or alternatively that interpenetrating particle waves mask gravitational effects that would be felt at a distance. Motivated by this philosophy of loop quantum gravity, it has recently been shown that such conceptions can be realized through elementary constructions based on a refinement of the first axiom of geometry, namely the concept of a point, following Klein's prescription of accounting for the extension of the small spot that represents a point, a programmatic call that Klein described as a fusion of arithmetic and geometry. Klein's program, according to Born, was actually a mathematical route to consider 'natural uncertainty in all observations' while describing 'a physical situation' by means of 'real numbers'. Types There are different types of singularities, each with physical features relevant to the theories from which they originally emerged, such as the different shapes of the singularities, conical and curved. They have also been hypothesized to occur without event horizons, structures that delineate one spacetime section from another in which events cannot affect past the horizon; these are called naked. Conical A conical singularity occurs when there is a point where the limit of some diffeomorphism invariant quantity does not exist or is infinite, in which case spacetime is not smooth at the point of the limit itself. Thus, spacetime looks like a cone around this point, where the singularity is located at the tip of the cone. The metric can be finite everywhere the coordinate system is used. Examples of such a conical singularity are a cosmic string and a Schwarzschild black hole. Curvature Solutions to the equations of general relativity or another theory of gravity (such as supergravity) often result in encountering points where the metric blows up to infinity. However, many of these points are completely regular, and the infinities are merely a result of using an inappropriate coordinate system at this point. To test whether there is a singularity at a certain point, one must check whether at this point diffeomorphism invariant quantities (i.e. scalars) become infinite. Such quantities are the same in every coordinate system, so these infinities will not "go away" by a change of coordinates. An example is the Schwarzschild solution that describes a non-rotating, uncharged black hole. In coordinate systems convenient for working in regions far away from the black hole, a part of the metric becomes infinite at the event horizon. However, spacetime at the event horizon is regular. The regularity becomes evident when changing to another coordinate system (such as the Kruskal coordinates), where the metric is perfectly smooth. On the other hand, in the center of the black hole, where the metric becomes infinite as well, the solutions suggest a singularity exists. The existence of the singularity can be verified by noting that the Kretschmann scalar, the square of the Riemann tensor, i.e. $R_{abcd}R^{abcd}$, which is diffeomorphism invariant, is infinite.
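This distinction between a coordinate artifact and a genuine curvature singularity can be checked symbolically. The following minimal sketch (in Python with SymPy, assuming geometric units G = c = 1 and using the standard closed-form Kretschmann scalar for the Schwarzschild solution, K = 48M²/r⁶) is illustrative only, not part of the cited literature:

```python
# Contrast a coordinate singularity with a curvature singularity in
# Schwarzschild spacetime, using the known closed form of the
# Kretschmann scalar, K = R_{abcd} R^{abcd} = 48 M^2 / r^6.
import sympy as sp

r, M = sp.symbols("r M", positive=True)

g_rr = 1 / (1 - 2 * M / r)  # radial metric component in Schwarzschild coordinates
K = 48 * M**2 / r**6        # Kretschmann scalar (diffeomorphism invariant)

# At the event horizon r = 2M the metric component diverges...
print(sp.limit(g_rr, r, 2 * M, dir="+"))   # -> oo (a coordinate artifact)
# ...but the invariant curvature stays finite there:
print(K.subs(r, 2 * M))                    # -> 3/(4*M**4), perfectly regular
# At the center the invariant itself diverges, signalling a true singularity:
print(sp.limit(K, r, 0, dir="+"))          # -> oo
```

Because the Kretschmann scalar is the same in every coordinate system, its divergence at r = 0, unlike the divergence of g_rr at the horizon, cannot be removed by any change of coordinates.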
While in a non-rotating black hole the singularity occurs at a single point in the model coordinates, called a "point singularity", in a rotating black hole, also known as a Kerr black hole, the singularity occurs on a ring (a circular line), known as a "ring singularity". Such a singularity may also theoretically become a wormhole. More generally, a spacetime is considered singular if it is geodesically incomplete, meaning that there are freely-falling particles whose motion cannot be determined beyond a finite time, being after the point of reaching the singularity. For example, any observer inside the event horizon of a non-rotating black hole would fall into its center within a finite period of time. The classical version of the Big Bang cosmological model of the universe contains a causal singularity at the start of time (t=0), where all time-like geodesics have no extensions into the past. Extrapolating backward to this hypothetical time 0 results in a universe with all spatial dimensions of size zero, infinite density, infinite temperature, and infinite spacetime curvature. Naked singularity Until the early 1990s, it was widely believed that general relativity hides every singularity behind an event horizon, making naked singularities impossible. This is referred to as the cosmic censorship hypothesis. However, in 1991, physicists Stuart Shapiro and Saul Teukolsky performed computer simulations of a rotating plane of dust that indicated that general relativity might allow for "naked" singularities. What these objects would actually look like in such a model is unknown. Nor is it known whether singularities would still arise if the simplifying assumptions used to make the simulation were removed. However, it is hypothesized that light entering a singularity would similarly have its geodesics terminated, thus making the naked singularity look like a black hole. Disappearing event horizons exist in the Kerr metric, which describes a spinning black hole in a vacuum, if the angular momentum $J$ is high enough. Transforming the Kerr metric to Boyer–Lindquist coordinates, it can be shown that the coordinate (which is not the radius) of the event horizon is $r_\pm = \mu \pm \sqrt{\mu^2 - a^2}$, where $\mu = GM/c^2$ and $a = J/Mc$. In this case, "event horizons disappear" means that the solutions are complex for $r_\pm$, i.e. $\mu^2 < a^2$. However, this corresponds to a case where $J$ exceeds $GM^2/c$ (or, in Planck units, $J > M^2$); i.e. the spin exceeds what is normally viewed as the upper limit of its physically possible values. Similarly, disappearing event horizons can also be seen with the Reissner–Nordström geometry of a charged black hole if the charge $Q$ is high enough. In this metric, it can be shown that the singularities occur at $r_\pm = \mu \pm \sqrt{\mu^2 - q^2}$, where $\mu = GM/c^2$ and $q^2 = GQ^2/(4\pi\varepsilon_0 c^4)$. Of the three possible cases for the relative values of $q$ and $\mu$, the case where $q^2 > \mu^2$ causes both $r_\pm$ to be complex. This means the metric is regular for all positive values of $r$, or in other words, the singularity has no event horizon. However, this corresponds to a case where $Q$ exceeds $M\sqrt{4\pi\varepsilon_0 G}$ (or, in Planck units, $Q > M$); i.e. the charge exceeds what is normally viewed as the upper limit of its physically possible values. Also, actual astrophysical black holes are not expected to possess any appreciable charge. A black hole possessing the lowest mass $M$ consistent with its $J$ and $Q$ values and the limits noted above, i.e. one just at the point of losing its event horizon, is termed extremal.
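As a rough illustration of these horizon-existence conditions, the following sketch evaluates the formulas above numerically in SI units; the specific masses, spins, and charges fed in are illustrative assumptions, not values from the text:

```python
# Horizon existence for Kerr and Reissner-Nordstrom black holes:
# a horizon exists only while the discriminant mu^2 - a^2 (or mu^2 - q^2)
# stays non-negative.
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
eps0 = 8.854e-12    # F/m

def kerr_horizons(M, J):
    """Boyer-Lindquist horizon coordinates r+- = mu +- sqrt(mu^2 - a^2)."""
    mu = G * M / c**2
    a = J / (M * c)
    disc = mu**2 - a**2
    if disc < 0:
        return None  # complex roots: no horizon, a naked singularity
    return mu - math.sqrt(disc), mu + math.sqrt(disc)

def reissner_nordstrom_horizons(M, Q):
    """r+- = mu +- sqrt(mu^2 - q^2) with q^2 = G Q^2 / (4 pi eps0 c^4)."""
    mu = G * M / c**2
    q2 = G * Q**2 / (4 * math.pi * eps0 * c**4)
    disc = mu**2 - q2
    if disc < 0:
        return None
    return mu - math.sqrt(disc), mu + math.sqrt(disc)

M_sun = 1.989e30
# A solar-mass black hole with modest spin keeps its two horizons...
print(kerr_horizons(M_sun, J=0.5 * G * M_sun**2 / c))   # two real radii
# ...while spin beyond the extremal bound J = G M^2 / c removes them:
print(kerr_horizons(M_sun, J=1.1 * G * M_sun**2 / c))   # None
```

In this parametrization the extremal case is exactly the vanishing of the discriminant: $J = GM^2/c$ for Kerr and $Q = M\sqrt{4\pi\varepsilon_0 G}$ for Reissner–Nordström.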
Entropy Before Stephen Hawking came up with the concept of Hawking radiation, the question of black holes having entropy had been avoided. However, this concept demonstrates that black holes radiate energy, which conserves entropy and solves the incompatibility problems with the second law of thermodynamics. Entropy, however, implies heat and therefore temperature. The loss of energy also implies that black holes do not last forever, but rather evaporate or decay slowly. Black hole temperature is inversely related to mass. All known black hole candidates are so large that their temperature is far below that of the cosmic background radiation, which means they will gain energy on net by absorbing this radiation. They cannot begin to lose energy on net until the background temperature falls below their own temperature. This will occur at a cosmological redshift of more than one million, rather than the thousand or so since the background radiation formed. See also 0-dimensional singularity: magnetic monopole 1-dimensional singularity: cosmic string 2-dimensional singularity: domain wall Fuzzball (string theory) Penrose–Hawking singularity theorems White hole BKL singularity Initial singularity References Bibliography §31.2 The nonsingularity of the gravitational radius, and following sections; §34 Global Techniques, Horizons, and Singularity Theorems Further reading The Elegant Universe by Brian Greene. This book provides a layman's introduction to string theory, although some of the views expressed have already become outdated. His use of common terms and his providing of examples throughout the text help the layperson understand the basics of string theory. General relativity Lorentzian manifolds Physical paradoxes Physical phenomena Concepts in astronomy
Gravitational singularity
[ "Physics", "Astronomy" ]
2,375
[ "Concepts in astronomy", "General relativity", "Physical phenomena", "Theory of relativity" ]
54,245
https://en.wikipedia.org/wiki/Technological%20singularity
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence. The Hungarian-American mathematician John von Neumann (1903–1957) became the first known person to use the concept of a "singularity" in the technological context. Alan Turing, often regarded as the father of modern computer science, laid a crucial foundation for the contemporary discourse on the technological singularity. His pivotal 1950 paper, "Computing Machinery and Intelligence," introduces the idea of a machine's ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human. Stanislaw Ulam reported in 1958 an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". Subsequent authors have echoed this viewpoint. The concept and the term "singularity" were popularized by Vernor Vinge, first in 1983 (in an article claiming that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole") and later in his 1993 essay The Coming Technological Singularity (in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate). He wrote that he would be surprised if it occurred before 2005 or after 2030. Another significant contributor to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity Is Near, predicting singularity by 2045. Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction. The consequences of a technological singularity and its potential benefit or harm to the human race have been intensely debated. Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, and Gordon Moore. One claim is that artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies. Intelligence explosion Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.
If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would, in theory, vastly improve over human problem-solving and inventive skills. Such an AI is referred to as seed AI because, if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities. I. J. Good speculated that superhuman intelligence might bring about an intelligence explosion. One version of the intelligence explosion is one in which computing power approaches infinity in a finite amount of time. In this version, once AIs are performing the research to improve themselves, speed doubles e.g. after 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, etc., where the infinite sum of the doubling periods is 4 years. Unless prevented by physical limits of computation and time quantization, this process would achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996). Emergence of superintelligence A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world. The related concept of "speed superintelligence" describes an AI that can function like a human mind, only much faster. For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds. Such a difference in information processing speed could drive the singularity. Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that bypass human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies focus on scenarios that combine these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The book The Age of Em by Robin Hanson describes a hypothetical future scenario in which human brains are scanned and digitized, creating "uploads" or digital versions of human consciousness. In this future, the development of these uploads may precede or coincide with the emergence of superintelligent artificial intelligence.
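Returning to the finite-time version of the intelligence explosion sketched earlier in this section, the arithmetic can be made concrete in a few lines. The sketch below is an illustration of the halving-doubling-period model only, not of any specific published calculation:

```python
# If each successive self-improvement halves the time to the next doubling
# of speed (2 years, 1 year, 6 months, ...), elapsed time converges to
# 4 years while speed grows without bound.
elapsed, period = 0.0, 2.0
for step in range(1, 41):
    elapsed += period   # wait one doubling period...
    period /= 2         # ...the next doubling takes half as long
    if step in (1, 5, 10, 40):
        print(f"step {step:2d}: elapsed = {elapsed:.6f} yr, speed = 2^{step}")
# elapsed approaches the geometric-series limit 2 / (1 - 1/2) = 4 years,
# which is why this version of the model earns the name "singularity".
```

After forty doublings the elapsed time differs from 4 years by less than a second, while the modeled speed has grown by a factor of about 10¹².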
Variations Non-AI singularity Some writers use "the singularity" in a broader way to refer to any radical changes in society brought about by new technology (such as molecular nanotechnology), although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity. Predictions There have been numerous dates predicted for the attainment of singularity. In 1965, Good wrote that it was more probable than not that an ultra-intelligent machine would be built within the twentieth century. In 1988, Moravec predicted that computing capabilities for human-level AI would be available in supercomputers before 2010, assuming that the then-current rate of improvement continued. In 1993, Vinge predicted the attainment of greater-than-human intelligence between 2005 and 2030. In 1996, Yudkowsky predicted a singularity in 2021. In 2005, Kurzweil predicted human-level AI around 2029 and the singularity in 2045; he reaffirmed these predictions in 2024 in The Singularity is Nearer. In 1998, Moravec, revising his earlier prediction, predicted human-level AI by 2040, and intelligence far beyond human by 2050. Four polls of AI researchers, conducted in 2012 and 2013 by Bostrom and Müller, yielded a confidence of 50% that human-level AI would be developed by 2040–2050. Plausibility Prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, and Gordon Moore, whose law is often cited in support of the concept. Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to augment human intelligence include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading. These multiple possible paths to an intelligence explosion, all of which will presumably be pursued, make a singularity more likely. Robin Hanson expressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult. Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity. The possibility of an intelligence explosion depends on three factors. The first accelerating factor is the new intelligence enhancements made possible by each previous improvement. However, as intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement should generate at least one more improvement, on average, for movement towards singularity to continue. Finally, the laws of physics may eventually prevent further improvement. There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to the algorithms used. The former is predicted by Moore's Law and the forecasted improvements in hardware, and is comparatively similar to previous technological advances.
But Schulman and Sandberg argue that software will present more complex challenges than simply operating on hardware capable of running at human intelligence levels or beyond. A 2017 email survey of authors with publications at the 2015 NeurIPS and ICML machine learning conferences asked about the chance that "the intelligence explosion argument is broadly correct". Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely". Speed improvements Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. An analogy to Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months, followed by roughly four months, then two months, and so on towards a speed singularity. Some upper limit on speed may eventually be reached. Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity." It is difficult to directly compare silicon-based hardware with neurons, but it has been noted that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain, as well as taking up far less space. Exponential growth The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. Computer scientist and futurist Hans Moravec proposed in a 1998 book that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit. Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others. Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months. On the other hand, it has been argued that the global acceleration pattern having the 21st century singularity as its parameter should be characterized as hyperbolic rather than exponential.
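The doubling times quoted above translate directly into annual growth factors. A small arithmetic sketch (the labels simply restate the figures in the preceding paragraph):

```python
# A quantity that doubles every d months grows by a factor of 2**(12/d)
# per year; this converts the quoted doubling times to annual rates.
for label, months in [("application-specific computation per capita", 14),
                      ("general-purpose computation per capita", 18),
                      ("telecommunication capacity per capita", 34),
                      ("storage capacity per capita", 40)]:
    annual = 2 ** (12 / months)
    print(f"{label}: x{annual:.2f} per year (+{(annual - 1) * 100:.0f}%)")
# e.g. doubling every 14 months corresponds to roughly +81% per year,
# while doubling every 40 months is roughly +23% per year.
```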
Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine". He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence." Accelerating change Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change, quoted above. Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history". Kurzweil believes that the singularity will occur by approximately 2045. His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence. Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's April 2000 Wired magazine article "Why The Future Doesn't Need Us". Algorithm improvements Some intelligence technologies, like "seed AI", may also have the potential to make themselves not just faster, but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on. The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: while machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately, an AI rewriting its own source code could do so while contained in an AI box. Second, as with Vernor Vinge's conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more quickly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again. There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might self-modify, potentially causing the AI to optimise for something other than what was originally intended. Second, AIs could compete for the same scarce resources humankind uses to survive. While not actively malicious, AIs would promote the goals of their programming, not necessarily broader human goals, and thus might crowd out humans.
Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained. An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang". Criticism Some critics, like philosophers Hubert Dreyfus and John Searle, assert that computers or machines cannot achieve human intelligence. Others, like physicist Stephen Hawking, object that whether machines can achieve a true intelligence or merely something similar to intelligence is irrelevant if the net result is the same. Psychologist Steven Pinker stated in 2008: "There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems." Martin Ford postulates a "technology paradox": before the singularity could occur, most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to those types of work traditionally considered to be "routine". Theodore Modis and Jonathan Huebner argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors. Theodore Modis holds that the singularity cannot happen. He claims that the "technological singularity", and especially Kurzweil, lack scientific rigor; Kurzweil is alleged to mistake the logistic function (S-function) for an exponential function, and to see a "knee" in an exponential function where there can in fact be no such thing. In a 2021 article, Modis pointed out that no milestones (breaks in historical perspective comparable in importance to the Internet, DNA, the transistor, or nuclear energy) had been observed in the previous twenty years, while five of them would have been expected according to the exponential trend advocated by the proponents of the technological singularity.
AI researcher Jürgen Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists. Microsoft co-founder Paul Allen argued the opposite of accelerating returns, the complexity brake: the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies, a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since. The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse". Hofstadter (2006) raises the concern that Ray Kurzweil is not sufficiently scientifically rigorous, that an exponential tendency of technology is not a scientific law like one of physics, and that exponential curves have no "knees". Nonetheless, he did not rule out the singularity in principle in the distant future, and in the light of ChatGPT and other recent advancements he has revised his opinion significantly towards dramatic technological change in the near future. Jaron Lanier denies that the singularity is inevitable: "I do not think the technology is creating itself. It's not an autonomous process." Furthermore: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics." Economist Robert J. Gordon points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2007–2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good. Philosopher and cognitive scientist Daniel Dennett said in 2017: "The whole singularity stuff, that's preposterous. It distracts us from much more pressing problems", adding "AI tools that we become hyper-dependent on, that is going to happen. And one of the dangers is that we will give them more authority than they warrant." In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily. Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. Kelly (2006) argues that because the Kurzweil chart is constructed with the x-axis showing time before present, it always points to the singularity being "now", for any date on which one would construct such a chart, and shows this visually on Kurzweil's chart.
Some critics suggest religious motivations or implications of the singularity, especially Kurzweil's version of it. The buildup towards the singularity is compared with Christian end-of-time scenarios. Beam calls it "a Buck Rogers vision of the hypothetical Christian Rapture". John Gray says "the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event". David Streitfeld in The New York Times questioned whether "it might manifest first and foremost—thanks, in part, to the bottom-line obsession of today's Silicon Valley—as a tool to slash corporate America's head count." Potential impacts Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis. Uncertainty and risk The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate. It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an existential threat. Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the Future of Humanity Institute, the Machine Intelligence Research Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute. Physicist Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity. It has also been argued that there is no direct evolutionary motivation for an AI to be friendly to humans: evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments. AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources, and humans would be powerless to stop them. Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.
Analyses of human extinction scenarios have listed superintelligence as a possible cause. According to Eliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. Bill Hibbard proposes an AI design that avoids several dangers, including self-delusion, unintended instrumental actions, and corruption of the reward generator. He also discusses the social impacts of AI and the testing of AI. His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator. Next step of sociobiological evolution While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... With one in three courtships leading to marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction". The article further argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10²¹ bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10¹⁹ bytes. The digital realm stored 500 times more information than this in 2014. The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10³⁷ base pairs, equivalent to 1.325×10³⁷ bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year, it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years.
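The "about 110 years" figure can be sanity-checked with compound-growth arithmetic. A rough sketch (the starting values simply restate the estimates above):

```python
# Starting from 5e21 bytes of digital storage in 2014 and compounding at
# 30-38% per year, find when storage matches the ~1.325e37 bytes estimated
# for all DNA on Earth.
import math

digital_2014 = 5e21   # bytes (about 5 zettabytes)
dna_total = 1.325e37  # bytes (about 5.3e37 base pairs at 4 pairs per byte)

for rate in (0.30, 0.38):
    years = math.log(dna_total / digital_2014) / math.log(1 + rate)
    print(f"at {rate:.0%} annual growth: parity in about {years:.0f} years")
# -> roughly 135 years at 30% and 110 years at 38%, consistent with the
#    order of magnitude of the estimate quoted above.
```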
Implications for human society In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at the Asilomar conference center in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards. Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist. Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability. Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. Hard or soft takeoff In a hard takeoff scenario, an artificial superintelligence rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the agent's goals. In a soft takeoff scenario, the AI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AI's development. Ramez Naam argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. Intel, for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law. Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1." J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular; they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff.
Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world. Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five-minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. He refers to this scenario as a "semihard takeoff". Max More disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years." Relation to immortality and aging Eric Drexler, one of the founders of nanotechnology, theorized in 1986 the possibility of cell repair devices, including ones operating within cells and using as yet hypothetical biological machines. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom. Moravec predicted in 1988 the possibility of "uploading" the human mind into a human-like robot, achieving quasi-immortality by extreme longevity via transfer of the human mind between successive new robots as the old ones wear out; beyond that, he predicted a later exponential acceleration of the subjective experience of time, leading to a subjective sense of immortality. Kurzweil suggested in 2005 that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age. Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after engineering synthetic viruses that carry specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.
Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious." History of the concept A paper by Mahendra Prasad, published in AI Magazine, asserts that the 18th-century mathematician Marquis de Condorcet was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity. An early description of the idea was made in John W. Campbell's 1932 short story "The Last Evolution". In his 1958 obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence. In 1977, Hans Moravec wrote an article with unclear publishing status where he envisioned a development of self-improving thinking machines, a creation of "super-consciousness, the synthesis of terrestrial life, and perhaps jovian and martian life as well, constantly improving and extending itself, spreading outwards from the solar system, converting non-life into mind." The article describes the human mind uploading later covered in Moravec (1988). The machines are expected to reach human level and then improve themselves beyond that ("Most significantly of all, they [the machines] can be put to work as programmers and engineers, with the task of optimizing the software and hardware which make them what they are. The successive generations of machines produced this way will be increasingly smarter and more cost effective.") Humans will no longer be needed, and their abilities will be overtaken by the machines: "In the long run the sheer physical inability of humans to keep up with these rapidly evolving progeny of our minds will ensure that the ratio of people to machines approaches zero, and that a direct descendant of our culture, but not our genes, inherits the universe." While the word "singularity" is not used, the notion of human-level thinking machines thereafter improving themselves beyond human level is there. In this view, there is no intelligence explosion in the sense of a very rapid intelligence increase once human equivalence is reached. An updated version of the article was published in 1979 in Analog Science Fiction and Fact. In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency. In 1983, Vernor Vinge addressed Good's intelligence explosion in print in the January 1983 issue of Omni magazine. 
In this op-ed piece, Vinge seems to have been the first to use the term "singularity" (although not "technological singularity") in a way that was specifically tied to the creation of intelligent machines. In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time. In 1986, Vernor Vinge published Marooned in Realtime, a science-fiction novel in which a few remaining humans traveling forward in the future have survived an unknown extinction event that might well be a singularity. In a short afterword, the author states that an actual technological singularity would not be the end of the human species: "of course it seems very unlikely that the Singularity would be a clean vanishing of the human race. (On the other hand, such a vanishing is the timelike analog of the silence we find all across the sky.)". In 1988, Vinge used the phrase "technological singularity" (including "technological") in the short story collection Threats and Other Promises, writing in the introduction to his story "The Whirligig of Time" (p. 72): Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, and soon. When we raise our own intelligence and that of our creations, we are no longer in a world of human-sized characters. At that point we have fallen into a technological "black hole", a technological singularity. In 1988, Hans Moravec published Mind Children, in which he predicted human-level intelligence in supercomputers by 2010, self-improving intelligent machines far surpassing human intelligence later, human mind uploading into human-like robots later, intelligent machines leaving humans behind, and space colonization. He did not mention "singularity", though, and he did not speak of a rapid explosion of intelligence immediately after the human level is achieved. Nonetheless, the overall singularity tenor is there in predicting both human-level artificial intelligence and further artificial intelligence far surpassing humans later. Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era" spread widely on the internet and helped to popularize the idea. This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express. Minsky's 1994 article says robots will "inherit the Earth", possibly with the use of nanotechnology, and proposes to think of robots as human "mind children", drawing the analogy from Moravec. The rhetorical effect of that analogy is that if humans are fine to pass the world to their biological children, they should be equally fine to pass it to robots, their "mind" children. According to Minsky, 'we could design our "mind-children" to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.'
The feature of the singularity present in Minsky is the development of superhuman artificial intelligence ("million times faster"), but there is no talk of a sudden intelligence explosion, self-improving thinking machines or unpredictability beyond any specific event, and the word "singularity" is not used. Tipler's 1994 book The Physics of Immortality predicts a future where super-intelligent machines will build enormously powerful computers, people will be "emulated" in computers, life will reach every galaxy and people will achieve immortality when they reach the Omega Point. There is no talk of a Vingean "singularity" or sudden intelligence explosion, but intelligence much greater than human is there, as well as immortality. In 1996, Yudkowsky predicted a singularity by 2021. His version of singularity involves an intelligence explosion: once AIs are doing the research to improve themselves, speed doubles after 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, and after more iterations, the "singularity" is reached. This construction implies that the speed reaches infinity in finite time. In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of robotics, genetic engineering, and nanotechnology. In 2005, Kurzweil published The Singularity Is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart. From 2006 to 2012, an annual Singularity Summit conference was organized by the Machine Intelligence Research Institute, founded by Eliezer Yudkowsky. In 2007, Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability. In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges." Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year. In politics In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity. Former President of the United States Barack Obama spoke about the singularity in his interview with Wired in 2016. See also :Category:Novels about technological singularity (Ai Pin) References Citations Sources William D. Nordhaus, "Why Growth Will Fall" (a review of Robert J. Gordon, The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War, Princeton University Press, 2016, 762 pp., $39.95), The New York Review of Books, vol. LXIII, no. 13 (August 18, 2016), pp. 64, 66, 68. John R.
Searle, "What Your Computer Can't Know" (review of Luciano Floridi, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality, Oxford University Press, 2014; and Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014), The New York Review of Books, vol. LXI, no. 15 (October 9, 2014), pp. 52–55. Further reading Krüger, Oliver, Virtual Immortality. God, Evolution, and the Singularity in Post- and Transhumanism, Bielefeld: transcript, 2021. Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. Multiple tests of artificial-intelligence efficacy are needed because, "just as there is no single test of athletic prowess, there cannot be one ultimate test of intelligence." One such test, a "Construction Challenge", would test perception and physical action—"two important elements of intelligent behavior that were entirely absent from the original Turing test." Another proposal has been to give machines the same standardized tests of science and other disciplines that schoolchildren take. A so far insuperable stumbling block to artificial intelligence is an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways." A prominent example is known as the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers. External links singularity technology, britannica.com The Coming Technological Singularity: How to Survive in the Post-Human Era (on Vernor Vinge's web site, retrieved Jul 2019) Intelligence Explosion FAQ by the Machine Intelligence Research Institute Blog on bootstrapping artificial intelligence by Jacques Pitrat Why an Intelligence Explosion is Probable (Mar 2011) Why an Intelligence Explosion is Impossible (Nov 2017) How Close are We to Technological Singularity and When? The AI Revolution: Our Immortality or Extinction – Part 1 and Part 2 (Tim Urban, Wait But Why, January 22/27, 2015) Existential risk from artificial general intelligence Philosophy of artificial intelligence Science fiction themes
Technological singularity
[ "Technology" ]
9,912
[ "Existential risk from artificial general intelligence" ]
54,257
https://en.wikipedia.org/wiki/Desktop%20publishing
Desktop publishing (DTP) is the creation of documents using dedicated software on a personal ("desktop") computer. It was first used almost exclusively for print publications, but now it also assists in the creation of various forms of online content. Desktop publishing software can generate page layouts and produce text and image content comparable to the simpler forms of traditional typography and printing. This technology allows individuals, businesses, and other organizations to self-publish a wide variety of content, from menus to magazines to books, without the expense of commercial printing. Desktop publishing often requires the use of a personal computer and WYSIWYG page layout software to create documents for either large-scale publishing or small-scale local printing and distribution although non-WYSIWYG systems such as TeX and LaTeX are also used, especially in scientific publishing. Originally, desktop publishing methods provided more control over design, layout, and typography than word processing software but the latter has evolved to include most, if not all, capabilities previously available only with dedicated desktop publishing software. The same DTP skills and software used for common paper and book publishing are sometimes used to create graphics for point of sale displays, presentations, infographics, brochures, business cards, promotional items, trade show exhibits, retail package designs and outdoor signs. History Desktop publishing was first developed at Xerox PARC in the 1970s. A contradictory claim states that desktop publishing began in 1983 with a program developed by James Davise at a community newspaper in Philadelphia. The program Type Processor One ran on a PC using a graphics card for a WYSIWYG display and was offered commercially by Best Info in 1984. Desktop typesetting with only limited page makeup facilities arrived in 1978–1979 with the introduction of TeX, and was extended in 1985 with the introduction of LaTeX. The desktop publishing market took off in 1985 with the introduction in January of the Apple LaserWriter laser printer for the year-old Apple Macintosh personal computer. This momentum was kept up with the release that July of PageMaker software from Aldus, which rapidly became the standard software application for desktop publishing. With its advanced layout features, PageMaker immediately relegated word processors like Microsoft Word to the composition and editing of purely textual documents. Word did not begin to acquire desktop publishing features until a decade later, and by 2003, it was regarded only as "good" and not "great" at desktop publishing tasks. The term "desktop publishing" is attributed to Aldus founder Paul Brainerd, who sought a marketing catchphrase to describe the small size and relative affordability of this suite of products, in contrast to the expensive commercial phototypesetting equipment of the day. Before the advent of desktop publishing, the only option available to most people for producing typed documents (as opposed to handwritten documents) was a typewriter, which offered only a handful of typefaces (usually fixed-width) and one or two font sizes. Indeed, one popular desktop publishing book was titled The Mac is Not a Typewriter, and it had to actually explain how a Mac could do so much more than a typewriter. The ability to create WYSIWYG page layouts on screen and then print pages containing text and graphical elements at 300 dpi resolution was a major development for the personal computer industry. 
The ability to do all this with industry standards like PostScript also radically changed the traditional publishing industry, which at the time was accustomed to buying end-to-end turnkey solutions for digital typesetting which came with their own proprietary hardware workstations. Newspapers and other print publications began to transition to DTP-based programs from older layout systems such as Atex and other programs in the early 1980s. Desktop publishing was still in its early stage in the mid-1980s. Users of the PageMaker/LaserWriter/Macintosh 512K system endured frequent software crashes, the Mac's low-resolution 512×342 1-bit monochrome screen, the inability to control letter spacing, kerning, and other typographic features, and the discrepancies between screen display and printed output. However, it was an unheard-of combination at the time, and was received with considerable acclaim. Behind the scenes, technologies developed by Adobe Systems set the foundation for professional desktop publishing applications. The LaserWriter and LaserWriter Plus printers included scalable Adobe PostScript fonts built into their ROM memory. The LaserWriter's PostScript capability allowed publication designers to proof files on a local printer, then print the same file at DTP service bureaus using optical resolution 600+ ppi PostScript printers such as those from Linotronic. Later, the Macintosh II was released, which was considerably more suitable for desktop publishing due to its greater expandability, support for large color multi-monitor displays, and its SCSI storage interface (which allowed hard drives to be attached to the system). Macintosh-based systems continued to dominate the market into 1986, when the GEM-based Ventura Publisher was introduced for MS-DOS computers. PageMaker's pasteboard metaphor closely simulated the process of creating layouts manually, but Ventura Publisher automated the layout process through its use of tags and style sheets and automatically generated indices and other body matter. This made it particularly suitable for the creation of manuals and other long-format documents. Desktop publishing moved into the home market in 1986 with Professional Page for the Amiga, Publishing Partner (now PageStream) for the Atari ST, GST's Timeworks Publisher on the PC and Atari ST, and Calamus for the Atari TT030. Software was published even for 8-bit computers like the Apple II and Commodore 64: Home Publisher, The Newsroom, and geoPublish. During its early years, desktop publishing acquired a bad reputation as a result of untrained users who created poorly organized, unprofessional-looking "ransom note effect" layouts. (Similar criticism was leveled again against early World Wide Web publishers a decade later.) However, some desktop publishers who mastered the programs were able to achieve near professional results. Desktop publishing skills were considered of primary importance in career advancement in the 1980s, but increased accessibility to more user-friendly DTP software has made DTP a secondary skill to art direction, graphic design, multimedia development, marketing communications, and administrative careers. DTP skill levels range from what may be learned in a couple of hours (e.g., learning how to put clip art in a word processor), to what is typically required in a college education. DTP skills range from technical skills, such as prepress production and programming, to creative skills, such as communication design and graphic image development.
Apple computers remain dominant in publishing, even as the most popular software has changed from QuarkXPress – an estimated 95% market share in the 1990s – to Adobe InDesign. An Ars Technica writer said in an article: "I've heard about Windows-based publishing environments, but I've never actually seen one in my 20+ years in design and publishing". Terminology There are two types of pages in desktop publishing: digital pages and virtual paper pages to be printed on physical paper pages. All computerized documents are technically digital and are limited in size only by computer memory or computer data storage space. Virtual paper pages will ultimately be printed, and will therefore require paper parameters coinciding with standard physical paper sizes such as A4, letter paper, and legal paper. Alternatively, the virtual paper page may require a custom size for later trimming. Some desktop publishing programs allow custom sizes designated for large format printing used in posters, billboards and trade show displays. A virtual page for printing has a predesignated size of virtual printing material and can be viewed on a monitor in WYSIWYG format. Each page for printing has trim sizes (edge of paper) and a printable area if bleed printing is not possible as is the case with most desktop printers. A web page is an example of a digital page that is not constrained by virtual paper parameters. Most digital pages may be dynamically re-sized, causing either the content to scale in size with the page or the content to re-flow. Master pages are templates used to automatically copy or link elements and graphic design styles to some or all the pages of a multipage document. Linked elements can be modified without having to change each instance of an element on pages that use the same element. Master pages can also be used to apply graphic design styles to automatic page numbering. Cascading Style Sheets can provide the same global formatting functions for web pages that master pages provide for virtual paper pages. Page layout is the process by which elements are arranged on the page in an orderly, aesthetically pleasing, and precise manner. The main types of components to be laid out on a page include text, linked images (that can only be modified as an external source), and embedded images (that may be modified with the layout application software). Some embedded images are rendered in the application software, while others can be placed from an external source image file. Text may be keyed into the layout, placed, or – with database publishing applications – linked to an external source of text which allows multiple editors to develop a document at the same time. Graphic design styles such as color, transparency and filters may also be applied to layout elements. Typography styles may be applied to text automatically with style sheets. Some layout programs include style sheets for images in addition to text. Graphic styles for images may include border shapes, colors, transparency, filters, and a parameter designating the way text flows around the object (also known as "wraparound" or "runaround"). Comparisons With word processing As desktop publishing software still provides extensive features necessary for print publishing, modern word processors now have publishing capabilities beyond those of many older DTP applications, blurring the line between word processing and desktop publishing.
In the early 1980s, the graphical user interface was still in its embryonic stage and DTP software was in a class of its own when compared to the leading word processing applications of the time. Programs such as WordPerfect and WordStar were still mainly text-based and offered little in the way of page layout, other than perhaps margins and line spacing. On the other hand, word processing software was necessary for features like indexing and spell checking – features that are common in many applications today. As computers and operating systems became more powerful, versatile, and user-friendly in the 2010s, vendors sought to provide users with a single application that could meet almost all their publication needs. With other digital layout software In earlier modern-day usage, DTP usually did not include digital tools such as TeX or troff, though both can easily be used on a modern desktop system, and are standard with many Unix-like operating systems and readily available for other systems. The key difference between digital typesetting software and DTP software is that DTP software is generally interactive and "What you see [onscreen] is what you get" (WYSIWYG) in design, while other digital typesetting software, such as TeX, LaTeX and other variants, tends to operate in "batch mode", requiring the user to enter the processing program's markup language (e.g., LaTeX commands) without immediate visualization of the finished product; a minimal example of such markup appears under Software below. This kind of workflow is less user-friendly than WYSIWYG, but more suitable for conference proceedings and scholarly articles as well as corporate newsletters or other applications where consistent, automated layout is important. In the 2010s, interactive front-end components of TeX, such as TeXworks and LyX, have produced "what you see is what you mean" (WYSIWYM) hybrids of DTP and batch processing. These hybrids are focused more on the semantics than traditional DTP. Furthermore, with the advent of TeX editors the line between desktop publishing and markup-based typesetting is becoming increasingly narrow; one program that has diverged from the TeX world and developed in the direction of WYSIWYG markup-based typesetting is GNU TeXmacs. On a different note, there is a slight overlap between desktop publishing and what is known as hypermedia publishing (e.g. web design, kiosk, CD-ROM). Many graphical HTML editors such as Microsoft FrontPage and Adobe Dreamweaver use a layout engine similar to that of a DTP program. However, many web designers still prefer to write HTML without the assistance of a WYSIWYG editor, for greater control and the ability to fine-tune the appearance and functionality. Another reason that some web designers write in HTML is that WYSIWYG editors often produce excessive lines of code, leading to code bloat that can make the pages hard to troubleshoot. With web design Desktop publishing produces primarily static print or digital media, the focus of this article. Similar skills, processes, and terminology are used in web design. Digital typography is the specialization of typography for desktop publishing. Web typography addresses typography and the use of fonts on the World Wide Web. Desktop style sheets apply formatting for print, while Cascading Style Sheets (CSS) provide format control for web display. Web HTML font families map website font usage to the fonts available on the user's web browser or display device. Software A wide variety of DTP applications and websites are available and are listed separately.
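As promised above, here is a minimal, hedged illustration of the batch, markup-driven workflow that the "With other digital layout software" comparison describes. The file name and text are invented for the example; the author specifies structure in LaTeX source and only sees the typeset page after running the file through a processor such as pdflatex:

```latex
\documentclass{article}  % batch-mode typesetting: page layout is decided by the class
\begin{document}
\section{Quarterly Newsletter}  % the author marks up structure, not appearance
Body text is broken into lines and pages automatically;
the author never positions it by hand.
\end{document}
```

Running, say, pdflatex newsletter.tex produces the finished PDF in one pass; in a WYSIWYG DTP program the same result would instead be achieved interactively, by dragging text and image frames on a visual pasteboard.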
File formats The design industry standard is PDF. The older EPS format is also used and supported by most applications. See also References Sources Typography Publishing Communication design Typesetting News design
Desktop publishing
[ "Engineering" ]
2,796
[ "Design", "Communication design" ]
54,267
https://en.wikipedia.org/wiki/Floor%20and%20ceiling%20functions
In mathematics, the floor function is the function that takes as input a real number x, and gives as output the greatest integer less than or equal to x, denoted ⌊x⌋ or floor(x). Similarly, the ceiling function maps x to the least integer greater than or equal to x, denoted ⌈x⌉ or ceil(x). For example, for floor: ⌊2.4⌋ = 2, ⌊−2.4⌋ = −3, and for ceiling: ⌈2.4⌉ = 3, and ⌈−2.4⌉ = −2. The floor of x is also called the integral part, integer part, greatest integer, or entier of x, and was historically denoted [x] (among other notations). However, the same term, integer part, is also used for truncation towards zero, which differs from the floor function for negative numbers. For n an integer, ⌊n⌋ = ⌈n⌉ = n. Although ⌊x⌋ + 1 and ⌈x⌉ produce graphs that appear exactly alike, they are not the same when the value of x is an exact integer. For example, when x = 2.0001, ⌊2.0001⌋ + 1 = 3 = ⌈2.0001⌉. However, if x = 2, then ⌊2⌋ + 1 = 3, while ⌈2⌉ = 2. Notation The integral part or integer part of a number (partie entière in the original French) was first defined in 1798 by Adrien-Marie Legendre in his proof of Legendre's formula. Carl Friedrich Gauss introduced the square bracket notation [x] in his third proof of quadratic reciprocity (1808). This remained the standard in mathematics until Kenneth E. Iverson introduced, in his 1962 book A Programming Language, the names "floor" and "ceiling" and the corresponding notations ⌊x⌋ and ⌈x⌉. (Iverson used square brackets for a different purpose, the Iverson bracket notation.) Both notations are now used in mathematics, although Iverson's notation will be followed in this article. In some sources, boldface or double brackets ⟦x⟧ are used for floor, and reversed brackets ⌉x⌈ or ]x[ for ceiling. The fractional part is the sawtooth function, denoted by {x} for real x and defined by the formula {x} = x − ⌊x⌋. For all x, 0 ≤ {x} < 1. These characters are provided in Unicode: U+2308 (⌈), U+2309 (⌉), U+230A (⌊), and U+230B (⌋). In the LaTeX typesetting system, these symbols can be specified with the \lceil, \rceil, \lfloor, and \rfloor commands in math mode. LaTeX has supported UTF-8 since 2018, so the Unicode characters can now be used directly. Larger versions are obtained with the \left and \right modifiers, as in \left\lfloor x \right\rfloor. Definition and properties Given real numbers x and y, integers m and n and the set of integers ℤ, floor and ceiling may be defined by the equations ⌊x⌋ = max{m ∈ ℤ | m ≤ x} and ⌈x⌉ = min{n ∈ ℤ | n ≥ x}. Since there is exactly one integer in a half-open interval of length one, for any real number x, there are unique integers m and n satisfying the equation x − 1 < m ≤ x ≤ n < x + 1, where m = ⌊x⌋ and n = ⌈x⌉ may also be taken as the definition of floor and ceiling. Equivalences These formulas can be used to simplify expressions involving floors and ceilings: ⌊x⌋ = m if and only if m ≤ x < m + 1; ⌈x⌉ = n if and only if n − 1 < x ≤ n; ⌊x⌋ = m if and only if x − 1 < m ≤ x; and ⌈x⌉ = n if and only if x ≤ n < x + 1. In the language of order theory, the floor function is a residuated mapping, that is, part of a Galois connection: it is the upper adjoint of the function that embeds the integers into the reals. These formulas show how adding an integer n to the arguments affects the functions: ⌊x + n⌋ = ⌊x⌋ + n, ⌈x + n⌉ = ⌈x⌉ + n, and {x + n} = {x}. The above are never true if n is not an integer; however, for every x and y, the following inequalities hold: ⌊x⌋ + ⌊y⌋ ≤ ⌊x + y⌋ ≤ ⌊x⌋ + ⌊y⌋ + 1 and ⌈x⌉ + ⌈y⌉ − 1 ≤ ⌈x + y⌉ ≤ ⌈x⌉ + ⌈y⌉. Monotonicity Both floor and ceiling functions are monotonically non-decreasing functions: x₁ ≤ x₂ implies ⌊x₁⌋ ≤ ⌊x₂⌋ and ⌈x₁⌉ ≤ ⌈x₂⌉. Relations among the functions It is clear from the definitions that ⌊x⌋ ≤ ⌈x⌉, with equality if and only if x is an integer, i.e. ⌈x⌉ − ⌊x⌋ = 0 if x is an integer and 1 otherwise. In fact, for integers n, both floor and ceiling functions are the identity: ⌊n⌋ = ⌈n⌉ = n. Negating the argument switches floor and ceiling and changes the sign: ⌊x⌋ + ⌈−x⌉ = 0, and: ⌊−x⌋ = −⌈x⌉, ⌈−x⌉ = −⌊x⌋. Negating the argument complements the fractional part: {x} + {−x} = 1 if x is not an integer, and 0 if it is. The floor, ceiling, and fractional part functions are idempotent: ⌊⌊x⌋⌋ = ⌊x⌋, ⌈⌈x⌉⌉ = ⌈x⌉, {{x}} = {x}. The result of nested floor or ceiling functions is the innermost function: ⌊⌈x⌉⌋ = ⌈x⌉ and ⌈⌊x⌋⌉ = ⌊x⌋, due to the identity property for integers.
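These definitions and identities are easy to check numerically. The following short Python sketch (an illustrative addition, not part of the original article) exercises them with the standard math module:

```python
import math

x = 2.4
assert math.floor(x) == 2 and math.ceil(x) == 3
assert math.floor(-x) == -3 and math.ceil(-x) == -2

# Negating the argument switches floor and ceiling and changes the sign.
assert math.floor(-x) == -math.ceil(x)
assert math.ceil(-x) == -math.floor(x)

# Fractional part {x} = x - floor(x) always lies in [0, 1).
frac = x - math.floor(x)
assert 0 <= frac < 1

# For integers, floor and ceiling are the identity.
n = 7
assert math.floor(n) == math.ceil(n) == n
```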
Quotients If m and n are integers and n ≠ 0, the floor and ceiling of the quotient m/n satisfy a number of exact identities. If n is a positive integer, then for any real x and integer m, ⌊(x + m)/n⌋ = ⌊(⌊x⌋ + m)/n⌋ and ⌈(x + m)/n⌉ = ⌈(⌈x⌉ + m)/n⌉. If m is positive, n = ⌈n/m⌉ + ⌈(n − 1)/m⌉ + ⋯ + ⌈(n − m + 1)/m⌉ = ⌊n/m⌋ + ⌊(n + 1)/m⌋ + ⋯ + ⌊(n + m − 1)/m⌋. For m = 2 these imply n = ⌊n/2⌋ + ⌈n/2⌉. More generally, for positive m (see Hermite's identity), ⌈mx⌉ = ⌈x⌉ + ⌈x − 1/m⌉ + ⋯ + ⌈x − (m − 1)/m⌉ and ⌊mx⌋ = ⌊x⌋ + ⌊x + 1/m⌋ + ⋯ + ⌊x + (m − 1)/m⌋. The following can be used to convert floors to ceilings and vice versa (m positive): ⌈n/m⌉ = ⌊(n + m − 1)/m⌋ = ⌊(n − 1)/m⌋ + 1 and ⌊n/m⌋ = ⌈(n − m + 1)/m⌉ = ⌈(n + 1)/m⌉ − 1. For all m and n strictly positive integers: Σ_{k=1}^{n−1} ⌊km/n⌋ = ((m − 1)(n − 1) + gcd(m, n) − 1)/2, which, for positive and coprime m and n, reduces to Σ_{k=1}^{n−1} ⌊km/n⌋ = (m − 1)(n − 1)/2, and similarly for the ceiling and fractional part functions (still for positive and coprime m and n): Σ_{k=1}^{n−1} ⌈km/n⌉ = (m − 1)(n − 1)/2 + n − 1 and Σ_{k=1}^{n−1} {km/n} = (n − 1)/2. Since the right-hand side of the general case is symmetrical in m and n, this implies that Σ_{k=1}^{n−1} ⌊km/n⌋ = Σ_{k=1}^{m−1} ⌊kn/m⌋. More generally, if m and n are positive, Σ_{k=0}^{n−1} ⌊(km + x)/n⌋ = ((m − 1)(n − 1) + gcd(m, n) − 1)/2 + gcd(m, n)⌊x/gcd(m, n)⌋ for any real x. This is sometimes called a reciprocity law. Division by positive integers gives rise to an interesting and sometimes useful property. Assuming n ≥ 1 and m an integer, ⌊x/n⌋ ≥ m if and only if x ≥ mn. Similarly, ⌈x/n⌉ ≤ m if and only if x ≤ mn. Indeed, keeping in mind that ⌊y⌋ ≥ m is equivalent to y ≥ m for integer m, we have ⌊x/n⌋ ≥ m ⟺ x/n ≥ m ⟺ x ≥ mn. The second equivalence involving the ceiling function can be proved similarly. Nested divisions For positive integer n, and arbitrary real numbers m, x: ⌊⌊x/m⌋/n⌋ = ⌊x/(mn)⌋ and ⌈⌈x/m⌉/n⌉ = ⌈x/(mn)⌉. Continuity and series expansions None of the functions discussed in this article are continuous, but all are piecewise linear: the functions ⌊x⌋, ⌈x⌉, and {x} have discontinuities at the integers. ⌊x⌋ is upper semi-continuous and ⌈x⌉ and {x} are lower semi-continuous. Since none of the functions discussed in this article are continuous, none of them have a power series expansion. Since floor and ceiling are not periodic, they do not have uniformly convergent Fourier series expansions. The fractional part function has the Fourier series expansion {x} = 1/2 − (1/π) Σ_{k=1}^{∞} sin(2πkx)/k for x not an integer. At points of discontinuity, a Fourier series converges to a value that is the average of its limits on the left and the right, unlike the floor, ceiling and fractional part functions: for y fixed and x a multiple of y the Fourier series given converges to y/2, rather than to x mod y = 0. At points of continuity the series converges to the true value. Using the formula {x} = x − ⌊x⌋ gives ⌊x⌋ = x − 1/2 + (1/π) Σ_{k=1}^{∞} sin(2πkx)/k for x not an integer.
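The quotient identities above lend themselves to quick numerical testing. The following Python sketch (illustrative only, not from the article) checks the floor-to-ceiling conversion and Hermite's identity over a range of values:

```python
from math import floor, ceil

# ceil(n/m) == floor((n + m - 1)/m) == floor((n - 1)/m) + 1 for positive m.
# Python's // operator is floor division, so it implements the floor directly.
for n in range(-20, 21):
    for m in range(1, 8):
        assert ceil(n / m) == (n + m - 1) // m == (n - 1) // m + 1

# Hermite's identity: floor(m*x) == sum of floor(x + k/m) for k = 0..m-1.
for m in range(1, 6):
    for x in (0.3, 1.75, -2.4, 5.0):
        assert floor(m * x) == sum(floor(x + k / m) for k in range(m))
```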
A typical (mid-tread) uniform quantizer with a quantization step size equal to some value Δ can be expressed as Q(x) = Δ⌊x/Δ + 1/2⌋. Number of digits The number of digits in base b of a positive integer k is ⌊log_b k⌋ + 1. Number of strings without repeated characters The number of possible strings of arbitrary length that don't use any character twice is given by ⌊e·n!⌋, where: n > 0 is the number of letters in the alphabet (e.g., 26 in English); the falling factorial n(n − 1)⋯(n − k + 1) denotes the number of strings of length k that don't use any character twice; n! denotes the factorial of n; and e = 2.718... is Euler's number. For n = 26, this comes out to 1096259850353149530222034277. Factors of factorials Let n be a positive integer and p a positive prime number. The exponent of the highest power of p that divides n! is given by a version of Legendre's formula: Σ_{i=1}^{∞} ⌊n/p^i⌋ = (n − (a_0 + a_1 + ⋯ + a_k))/(p − 1), where n = a_k p^k + ⋯ + a_1 p + a_0 is the way of writing n in base p. This is a finite sum, since the floors are zero when p^i > n. Beatty sequence The Beatty sequence shows how every positive irrational number gives rise to a partition of the natural numbers into two sequences via the floor function. Euler's constant (γ) There are formulas for Euler's constant γ = 0.57721 56649 ... that involve the floor and ceiling, e.g. γ = ∫₁^∞ (1/⌊x⌋ − 1/x) dx and the Vacca series γ = Σ_{k=2}^{∞} (−1)^k ⌊log₂ k⌋/k. Riemann zeta function (ζ) The fractional part function also shows up in integral representations of the Riemann zeta function. It is straightforward to prove (using integration by parts) that if f(x) is any function with a continuous derivative in the closed interval [a, b], Σ_{a<n≤b} f(n) = ∫_a^b f(x) dx + ∫_a^b {x} f′(x) dx + {a}f(a) − {b}f(b). Letting f(x) = x^{−s} for real part of s greater than 1 and letting a and b be integers, and letting b approach infinity gives ζ(s) = s/(s − 1) − s ∫₁^∞ {x} x^{−s−1} dx. This formula is valid for all s with real part greater than −1 (except s = 1, where there is a pole) and combined with the Fourier expansion for {x} can be used to extend the zeta function to the entire complex plane and to prove its functional equation. For s = σ + it in the critical strip 0 < σ < 1, ζ(s) = −s ∫₀^∞ {x} x^{−s−1} dx. In 1947 van der Pol used this representation to construct an analogue computer for finding roots of the zeta function. Formulas for prime numbers The floor function appears in several formulas characterizing prime numbers. For example, since ⌊n/m⌋ − ⌊(n − 1)/m⌋ is equal to 1 if m divides n, and to 0 otherwise, it follows that a positive integer n is a prime if and only if Σ_{m=1}^{∞} (⌊n/m⌋ − ⌊(n − 1)/m⌋) = 2. One may also give formulas for producing the prime numbers. For example, let p_n be the n-th prime, and for any integer r > 1, define the real number α by the sum α = Σ_{m=1}^{∞} p_m r^{−m²}. Then p_n = ⌊r^{n²} α⌋ − r^{2n−1} ⌊r^{(n−1)²} α⌋. A similar result is that there is a number θ = 1.3064... (Mills' constant) with the property that ⌊θ^{3^n}⌋ for n = 1, 2, 3, ... are all prime. There is also a number ω = 1.9287800... with the property that the iterated exponentials ⌊2^ω⌋, ⌊2^{2^ω}⌋, ⌊2^{2^{2^ω}}⌋, ... are all prime. Let π(x) be the number of primes less than or equal to x. It is a straightforward deduction from Wilson's theorem that π(n) = Σ_{j=2}^{n} ⌊((j − 1)! + 1)/j − ⌊(j − 1)!/j⌋⌋. Also, if n ≥ 2, a similar floor-function expression for π(n) holds. None of the formulas in this section are of any practical use. Solved problems Ramanujan submitted these problems to the Journal of the Indian Mathematical Society. If n is a positive integer, prove identities such as ⌊n/3⌋ + ⌊(n + 2)/6⌋ + ⌊(n + 4)/6⌋ = ⌊n/2⌋ + ⌊(n + 3)/6⌋ and ⌊√n + √(n + 1)⌋ = ⌊√(4n + 2)⌋. Some generalizations to the above floor function identities have been proven. Unsolved problem The study of Waring's problem has led to an unsolved problem: Are there any positive integers k ≥ 6 such that 3^k − 2^k ⌊(3/2)^k⌋ > 2^k − ⌊(3/2)^k⌋ − 2? Mahler has proved there can only be a finite number of such k; none are known. Computer implementations In most programming languages, the simplest method to convert a floating point number to an integer does not do floor or ceiling, but truncation. The reason for this is historical, as the first machines used ones' complement and truncation was simpler to implement (floor is simpler in two's complement).
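Python makes the truncation-versus-floor distinction directly visible: int() truncates toward zero, while the floor-division operator // and math.floor round toward negative infinity. A brief illustrative sketch (not from the article):

```python
import math

# Truncation versus floor for a negative quotient, -7/2 = -3.5:
assert int(-7 / 2) == -3          # truncation toward zero
assert -7 // 2 == -4              # floor division
assert math.floor(-3.5) == -4

# The extended mod operation x mod y = x - y*floor(x/y) keeps the result
# between 0 and y for positive y; Python's % operator already behaves this way.
x, y = -7, 3
assert x % y == x - y * math.floor(x / y) == 2
```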
FORTRAN was defined to require this behavior and thus almost all processors implement conversion this way. Some consider this to be an unfortunate historical design decision that has led to bugs handling negative offsets and graphics on the negative side of the origin. An arithmetic right-shift of a signed integer x by n is the same as ⌊x/2^n⌋. Division by a power of 2 is often written as a right-shift, not for optimization as might be assumed, but because the floor of negative results is required. Assuming such shifts are "premature optimization" and replacing them with division can break software. Many programming languages (including C, C++, C#, Java, Julia, PHP, R, and Python) provide standard functions for floor and ceiling, usually called floor and ceil, or less commonly ceiling. The language APL uses ⌊x for floor. The J Programming Language, a follow-on to APL that is designed to use standard keyboard symbols, uses <. for floor and >. for ceiling. ALGOL uses entier for floor. In Microsoft Excel the function INT rounds down rather than toward zero, while FLOOR rounds toward zero, the opposite of what "int" and "floor" do in other languages. Since 2010 FLOOR has been changed to error if the number is negative. In the OpenDocument file format, as used by OpenOffice.org, LibreOffice and others, INT and FLOOR both do floor, and FLOOR has a third argument to reproduce Excel's earlier behavior. See also Bracket (mathematics) Integer-valued function Step function Modulo operation Citations References Nicholas J. Higham, Handbook of writing for the mathematical sciences, SIAM, p. 25. ISO/IEC. ISO/IEC 9899::1999(E): Programming languages — C (2nd ed), 1999; Section 6.3.1.4, p. 43. Michael Sullivan. Precalculus, 8th edition, p. 86. External links Štefan Porubský, "Integer rounding functions", Interactive Information Portal for Algorithmic Mathematics, Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic, retrieved 24 October 2008 Special functions Mathematical notation Unary operations
Floor and ceiling functions
[ "Mathematics" ]
2,685
[ "Functions and mappings", "Special functions", "Unary operations", "Mathematical objects", "Combinatorics", "Mathematical relations", "nan" ]
54,316
https://en.wikipedia.org/wiki/List%20of%20mathematical%20functions
In mathematics, some functions or groups of functions are important enough to deserve their own names. This is a listing of articles which explain some of these functions in more detail. There is a large theory of special functions which developed out of statistics and mathematical physics. A modern, abstract point of view contrasts large function spaces, which are infinite-dimensional and within which most functions are 'anonymous', with special functions picked out by properties such as symmetry, or relationship to harmonic analysis and group representations. See also List of types of functions Elementary functions Elementary functions are functions built from basic operations (e.g. addition, exponentials, logarithms...) Algebraic functions Algebraic functions are functions that can be expressed as the solution of a polynomial equation with integer coefficients. Polynomials: Can be generated solely by addition, multiplication, and raising to the power of a positive integer. Constant function: polynomial of degree zero, graph is a horizontal straight line Linear function: First degree polynomial, graph is a straight line. Quadratic function: Second degree polynomial, graph is a parabola. Cubic function: Third degree polynomial. Quartic function: Fourth degree polynomial. Quintic function: Fifth degree polynomial. Rational functions: A ratio of two polynomials. nth root Square root: Yields a number whose square is the given one. Cube root: Yields a number whose cube is the given one. Elementary transcendental functions Transcendental functions are functions that are not algebraic. Exponential function: raises a fixed number to a variable power. Hyperbolic functions: formally similar to the trigonometric functions. Inverse hyperbolic functions: inverses of the hyperbolic functions, analogous to the inverse circular functions. Logarithms: the inverses of exponential functions; useful to solve equations involving exponentials. Natural logarithm Common logarithm Binary logarithm Power functions: raise a variable number to a fixed power; also known as allometric functions; note: if the power is a rational number it is not strictly a transcendental function. Periodic functions Trigonometric functions: sine, cosine, tangent, cotangent, secant, cosecant, exsecant, excosecant, versine, coversine, vercosine, covercosine, haversine, hacoversine, havercosine, hacovercosine, inverse trigonometric functions, etc.; used in geometry and to describe periodic phenomena. See also Gudermannian function. Special functions Piecewise special functions Arithmetic functions Sigma function: Sums of powers of divisors of a given natural number. Euler's totient function: Number of numbers coprime to (and not bigger than) a given one. Prime-counting function: Number of primes less than or equal to a given number. Partition function: Order-independent count of ways to write a given positive integer as a sum of positive integers. Möbius μ function: Sum of the nth primitive roots of unity, it depends on the prime factorization of n. Prime omega functions Chebyshev functions Liouville function, λ(n) = (−1)^Ω(n) Von Mangoldt function, Λ(n) = log p if n is a positive power of the prime p Carmichael function Antiderivatives of elementary functions Logarithmic integral function: Integral of the reciprocal of the logarithm, important in the prime number theorem.
Exponential integral Trigonometric integral: Including Sine Integral and Cosine Integral Inverse tangent integral Error function: An integral important for normal random variables. Fresnel integral: related to the error function; used in optics. Dawson function: occurs in probability. Faddeeva function Gamma and related functions Gamma function: A generalization of the factorial function. Barnes G-function Beta function: Corresponding binomial coefficient analogue. Digamma function, Polygamma function Incomplete beta function Incomplete gamma function K-function Multivariate gamma function: A generalization of the Gamma function useful in multivariate statistics. Student's t-distribution Pi function Elliptic and related functions Bessel and related functions Riemann zeta and related functions Hypergeometric and related functions Hypergeometric functions: Versatile family of power series. Confluent hypergeometric function Associated Legendre functions Meijer G-function Fox H-function Iterated exponential and related functions Hyper operators Iterated logarithm Pentation Super-logarithms Tetration Other standard special functions Lambert W function: Inverse of f(w) = w exp(w). Lamé function Mathieu function Mittag-Leffler function Painlevé transcendents Parabolic cylinder function Arithmetic–geometric mean Miscellaneous functions Ackermann function: in the theory of computation, a computable function that is not primitive recursive. Dirac delta function: everywhere zero except for x = 0; total integral is 1. Not a function but a distribution, though sometimes informally referred to as a function, particularly by physicists and engineers. Dirichlet function: an indicator function that matches 1 to rational numbers and 0 to irrationals. It is nowhere continuous. Thomae's function: a function that is continuous at all irrational numbers and discontinuous at all rational numbers. It is also a modification of the Dirichlet function and is sometimes called the Riemann function. Kronecker delta function: a function of two variables, usually integers, which is 1 if they are equal, and 0 otherwise. Minkowski's question mark function: Derivatives vanish on the rationals. Weierstrass function: an example of a continuous function that is nowhere differentiable See also List of types of functions Test functions for optimization List of mathematical abbreviations List of special functions and eponyms External links Special functions: A programmable special functions calculator. Special functions at EqWorld: The World of Mathematical Equations. Functions
List of mathematical functions
[ "Mathematics" ]
1,230
[ "Discrete mathematics", "Functions and mappings", "Mathematical analysis", "Calculus", "Mathematical objects", "Mathematical relations", "Number theory" ]
54,342
https://en.wikipedia.org/wiki/Cellular%20automaton
A cellular automaton (pl. cellular automata, abbrev. CA) is a discrete model of computation studied in automata theory. Cellular automata are also called cellular spaces, tessellation automata, homogeneous structures, cellular structures, tessellation structures, and iterative arrays. Cellular automata have found application in various areas, including physics, theoretical biology and microstructure modeling. A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off (in contrast to a coupled map lattice). The grid can be in any finite number of dimensions. For each cell, a set of cells called its neighborhood is defined relative to the specified cell. An initial state (time t = 0) is selected by assigning a state for each cell. A new generation is created (advancing t by 1), according to some fixed rule (generally, a mathematical function) that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood. Typically, the rule for updating the state of cells is the same for each cell and does not change over time, and is applied to the whole grid simultaneously, though exceptions are known, such as the stochastic cellular automaton and asynchronous cellular automaton. The concept was originally discovered in the 1940s by Stanislaw Ulam and John von Neumann while they were contemporaries at Los Alamos National Laboratory. While studied by some throughout the 1950s and 1960s, it was not until the 1970s and Conway's Game of Life, a two-dimensional cellular automaton, that interest in the subject expanded beyond academia. In the 1980s, Stephen Wolfram engaged in a systematic study of one-dimensional cellular automata, or what he calls elementary cellular automata; his research assistant Matthew Cook showed that one of these rules is Turing-complete. The primary classifications of cellular automata, as outlined by Wolfram, are numbered one to four. They are, in order, automata in which patterns generally stabilize into homogeneity, automata in which patterns evolve into mostly stable or oscillating structures, automata in which patterns evolve in a seemingly chaotic fashion, and automata in which patterns become extremely complex and may last for a long time, with stable local structures. This last class is thought to be computationally universal, or capable of simulating a Turing machine. Special types of cellular automata are reversible, where only a single configuration leads directly to a subsequent one, and totalistic, in which the future value of individual cells only depends on the total value of a group of neighboring cells. Cellular automata can simulate a variety of real-world systems, including biological and chemical ones. Overview One way to simulate a two-dimensional cellular automaton is with an infinite sheet of graph paper along with a set of rules for the cells to follow. Each square is called a "cell" and each cell has two possible states, black and white. The neighborhood of a cell is the nearby, usually adjacent, cells. The two most common types of neighborhoods are the von Neumann neighborhood and the Moore neighborhood. The former, named after the founding cellular automaton theorist, consists of the four orthogonally adjacent cells. The latter includes the von Neumann neighborhood as well as the four diagonally adjacent cells. For such a cell and its Moore neighborhood, there are 512 (= 2^9) possible patterns.
For each of the 512 possible patterns, the rule table would state whether the center cell will be black or white on the next time interval. Conway's Game of Life is a popular version of this model. Another common neighborhood type is the extended von Neumann neighborhood, which includes the two closest cells in each orthogonal direction, for a total of eight. The general equation for the total number of automata possible is k^(k^s), where k is the number of possible states for a cell, and s is the number of neighboring cells (including the cell to be calculated itself) used to determine the cell's next state. Thus, in the two-dimensional system with a Moore neighborhood, the total number of automata possible would be 2^(2^9) = 2^512, or approximately 1.34 × 10^154. It is usually assumed that every cell in the universe starts in the same state, except for a finite number of cells in other states; the assignment of state values is called a configuration. More generally, it is sometimes assumed that the universe starts out covered with a periodic pattern, and only a finite number of cells violate that pattern. The latter assumption is common in one-dimensional cellular automata. Cellular automata are often simulated on a finite grid rather than an infinite one. In two dimensions, the universe would be a rectangle instead of an infinite plane. The obvious problem with finite grids is how to handle the cells on the edges. How they are handled will affect the values of all the cells in the grid. One possible method is to allow the values in those cells to remain constant. Another method is to define neighborhoods differently for these cells. One could say that they have fewer neighbors, but then one would also have to define new rules for the cells located on the edges. These cells are usually handled with periodic boundary conditions resulting in a toroidal arrangement: when one goes off the top, one comes in at the corresponding position on the bottom, and when one goes off the left, one comes in on the right. (This essentially simulates an infinite periodic tiling, and in the field of partial differential equations is sometimes referred to as periodic boundary conditions.) This can be visualized as taping the left and right edges of the rectangle to form a tube, then taping the top and bottom edges of the tube to form a torus (doughnut shape). Universes of other dimensions are handled similarly. This solves boundary problems with neighborhoods, but another advantage is that it is easily programmable using modular arithmetic functions. For example, in a 1-dimensional cellular automaton like the examples below, the neighborhood of a cell x_i^t is {x_{i−1}^{t−1}, x_i^{t−1}, x_{i+1}^{t−1}}, where t is the time step (vertical), and i is the index (horizontal) in one generation. History Stanislaw Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model. At the same time, John von Neumann, Ulam's colleague at Los Alamos, was working on the problem of self-replicating systems. Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model. As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and of the great cost in providing the robot with a "sea of parts" from which to build its replicant. Von Neumann wrote a paper entitled "The general and logical theory of automata" for the Hixon Symposium in 1948.
Ulam was the one who suggested using a discrete system for creating a reductionist model of self-replication. Nils Aall Barricelli performed many of the earliest explorations of these models of artificial life. Ulam and von Neumann created a method for calculating liquid motion in the late 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbors' behaviors. Thus was born the first system of cellular automata. Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a cellular automaton with a small neighborhood (only those cells that touch are neighbors; for von Neumann's cellular automata, only orthogonal cells), and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe by designing a 200,000 cell configuration that could do so. This design is known as the tessellation model, and is called a von Neumann universal constructor. Also in the 1940s, Norbert Wiener and Arturo Rosenblueth developed a model of excitable media with some of the characteristics of a cellular automaton. Their specific motivation was the mathematical description of impulse conduction in cardiac systems. However their model is not a cellular automaton because the medium in which signals propagate is continuous, and wave fronts are curves. A true cellular automaton model of excitable media was developed and studied by J. M. Greenberg and S. P. Hastings in 1978; see Greenberg-Hastings cellular automaton. The original work of Wiener and Rosenblueth contains many insights and continues to be cited in modern research publications on cardiac arrhythmia and excitable systems. In the 1960s, cellular automata were studied as a particular type of dynamical system and the connection with the mathematical field of symbolic dynamics was established for the first time. In 1969, Gustav A. Hedlund compiled many results following this point of view in what is still considered as a seminal paper for the mathematical study of cellular automata. The most fundamental result is the characterization in the Curtis–Hedlund–Lyndon theorem of the set of global rules of cellular automata as the set of continuous endomorphisms of shift spaces. In 1969, German computer pioneer Konrad Zuse published his book Calculating Space, proposing that the physical laws of the universe are discrete by nature, and that the entire universe is the output of a deterministic computation on a single cellular automaton; "Zuse's Theory" became the foundation of the field of study called digital physics. Also in 1969 computer scientist Alvy Ray Smith completed a Stanford PhD dissertation on Cellular Automata Theory, the first mathematical treatment of CA as a general class of computers. Many papers came from this dissertation: He showed the equivalence of neighborhoods of various shapes, how to reduce a Moore to a von Neumann neighborhood or how to reduce any neighborhood to a von Neumann neighborhood. He proved that two-dimensional CA are computation universal, introduced 1-dimensional CA, and showed that they too are computation universal, even with simple neighborhoods. 
He showed how to subsume the complex von Neumann proof of construction universality (and hence self-reproducing machines) into a consequence of computation universality in a 1-dimensional CA. As the introduction to the German edition of von Neumann's book on CA, he wrote a survey of the field with dozens of references to papers, by many authors in many countries over a decade or so of work, often overlooked by modern CA researchers. In the 1970s a two-state, two-dimensional cellular automaton named Game of Life became widely known, particularly among the early computing community. Invented by John Conway and popularized by Martin Gardner in a Scientific American article, its rules are as follows: Any live cell with fewer than two live neighbours dies, as if caused by underpopulation. Any live cell with two or three live neighbours lives on to the next generation. Any live cell with more than three live neighbours dies, as if by overpopulation. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction. Despite its simplicity, the system achieves an impressive diversity of behavior, fluctuating between apparent randomness and order. One of the most apparent features of the Game of Life is the frequent occurrence of gliders, arrangements of cells that essentially move themselves across the grid. It is possible to arrange the automaton so that the gliders interact to perform computations, and after much effort it has been shown that the Game of Life can emulate a universal Turing machine. It was viewed as a largely recreational topic, and little follow-up work was done outside of investigating the particularities of the Game of Life and a few related rules in the early 1970s. Stephen Wolfram independently began working on cellular automata in mid-1981 after considering how complex patterns in nature seemed to form in violation of the second law of thermodynamics. His investigations were initially spurred by a desire to model systems such as the neural networks found in brains. He published his first paper in Reviews of Modern Physics investigating elementary cellular automata (Rule 30 in particular) in June 1983. The unexpected complexity of the behavior of these simple rules led Wolfram to suspect that complexity in nature may be due to similar mechanisms. His investigations, however, led him to realize that cellular automata were poor at modelling neural networks. Additionally, during this period Wolfram formulated the concepts of intrinsic randomness and computational irreducibility, and suggested that rule 110 may be universal—a fact proved later by Wolfram's research assistant Matthew Cook in the 1990s. Classification Wolfram, in A New Kind of Science and several papers dating from the mid-1980s, defined four classes into which cellular automata and several other simple computational models can be divided depending on their behavior. While earlier studies in cellular automata tended to try to identify types of patterns for specific rules, Wolfram's classification was the first attempt to classify the rules themselves. In order of complexity the classes are: Class 1: Nearly all initial patterns evolve quickly into a stable, homogeneous state. Any randomness in the initial pattern disappears. Class 2: Nearly all initial patterns evolve quickly into stable or oscillating structures. Some of the randomness in the initial pattern may filter out, but some remains. Local changes to the initial pattern tend to remain local.
Class 3: Nearly all initial patterns evolve in a pseudo-random or chaotic manner. Any stable structures that appear are quickly destroyed by the surrounding noise. Local changes to the initial pattern tend to spread indefinitely. Class 4: Nearly all initial patterns evolve into structures that interact in complex and interesting ways, with the formation of local structures that are able to survive for long periods of time. Class 2 type stable or oscillating structures may be the eventual outcome, but the number of steps required to reach this state may be very large, even when the initial pattern is relatively simple. Local changes to the initial pattern may spread indefinitely. Wolfram has conjectured that many class 4 cellular automata, if not all, are capable of universal computation. This has been proven for Rule 110 and Conway's Game of Life. These definitions are qualitative in nature and there is some room for interpretation. According to Wolfram, "...with almost any general classification scheme there are inevitably cases which get assigned to one class by one definition and another class by another definition. And so it is with cellular automata: there are occasionally rules...that show some features of one class and some of another." Wolfram's classification has been empirically matched to a clustering of the compressed lengths of the outputs of cellular automata. There have been several attempts to classify cellular automata in formally rigorous classes, inspired by Wolfram's classification. For instance, Culik and Yu proposed three well-defined classes (and a fourth one for the automata not matching any of these), which are sometimes called Culik–Yu classes; membership in these proved undecidable. Wolfram's class 2 can be partitioned into two subgroups of stable (fixed-point) and oscillating (periodic) rules. The idea that there are 4 classes of dynamical system came originally from Nobel-prize winning chemist Ilya Prigogine who identified these 4 classes of thermodynamical systems: (1) systems in thermodynamic equilibrium, (2) spatially/temporally uniform systems, (3) chaotic systems, and (4) complex far-from-equilibrium systems with dissipative structures (see figure 1 in the 1974 paper of Nicolis, Prigogine's student). Reversible A cellular automaton is reversible if, for every current configuration of the cellular automaton, there is exactly one past configuration (preimage). If one thinks of a cellular automaton as a function mapping configurations to configurations, reversibility implies that this function is bijective. If a cellular automaton is reversible, its time-reversed behavior can also be described as a cellular automaton; this fact is a consequence of the Curtis–Hedlund–Lyndon theorem, a topological characterization of cellular automata. For cellular automata in which not every configuration has a preimage, the configurations without preimages are called Garden of Eden patterns. For one-dimensional cellular automata there are known algorithms for deciding whether a rule is reversible or irreversible. However, for cellular automata of two or more dimensions reversibility is undecidable; that is, there is no algorithm that takes as input an automaton rule and is guaranteed to determine correctly whether the automaton is reversible. The proof by Jarkko Kari is related to the tiling problem by Wang tiles. Reversible cellular automata are often used to simulate such physical phenomena as gas and fluid dynamics, since they obey the laws of thermodynamics. 
Such cellular automata have rules specially constructed to be reversible. Such systems have been studied by Tommaso Toffoli, Norman Margolus and others. Several techniques can be used to explicitly construct reversible cellular automata with known inverses. Two common ones are the second-order cellular automaton and the block cellular automaton, both of which involve modifying the definition of a cellular automaton in some way. Although such automata do not strictly satisfy the definition given above, it can be shown that they can be emulated by conventional cellular automata with sufficiently large neighborhoods and numbers of states, and can therefore be considered a subset of conventional cellular automata. Conversely, it has been shown that every reversible cellular automaton can be emulated by a block cellular automaton. Totalistic A special class of cellular automata are totalistic cellular automata. The state of each cell in a totalistic cellular automaton is represented by a number (usually an integer value drawn from a finite set), and the value of a cell at time t depends only on the sum of the values of the cells in its neighborhood (possibly including the cell itself) at time t − 1. If the state of the cell at time t depends on both its own state and the total of its neighbors at time t − 1 then the cellular automaton is properly called outer totalistic. Conway's Game of Life is an example of an outer totalistic cellular automaton with cell values 0 and 1; outer totalistic cellular automata with the same Moore neighborhood structure as Life are sometimes called life-like cellular automata. Related automata There are many possible generalizations of the cellular automaton concept. One way is by using something other than a rectangular (cubic, etc.) grid. For example, if a plane is tiled with regular hexagons, those hexagons could be used as cells. In many cases the resulting cellular automata are equivalent to those with rectangular grids with specially designed neighborhoods and rules. Another variation would be to make the grid itself irregular, such as with Penrose tiles. Also, rules can be probabilistic rather than deterministic. Such cellular automata are called probabilistic cellular automata. A probabilistic rule gives, for each pattern at time t, the probabilities that the central cell will transition to each possible state at time t + 1. Sometimes a simpler rule is used; for example: "The rule is the Game of Life, but on each time step there is a 0.001% probability that each cell will transition to the opposite color." The neighborhood or rules could change over time or space. For example, initially the new state of a cell could be determined by the horizontally adjacent cells, but for the next generation the vertical cells would be used. In cellular automata, the new state of a cell is not affected by the new state of other cells. This could be changed so that, for instance, a 2 by 2 block of cells can be determined by itself and the cells adjacent to itself. There are continuous automata. These are like totalistic cellular automata, but instead of the rule and states being discrete (e.g. a table, using states {0,1,2}), continuous functions are used, and the states become continuous (usually values in [0,1]). The state of a location is a finite number of real numbers. Certain cellular automata can yield diffusion in liquid patterns in this way. Continuous spatial automata have a continuum of locations. The state of a location is a finite number of real numbers.
Time is also continuous, and the state evolves according to differential equations. One important example is reaction–diffusion textures, differential equations proposed by Alan Turing to explain how chemical reactions could create the stripes on zebras and spots on leopards. When these are approximated by cellular automata, they often yield similar patterns. MacLennan considers continuous spatial automata as a model of computation. There are known examples of continuous spatial automata, which exhibit propagating phenomena analogous to gliders in the Game of Life. Graph rewriting automata are extensions of cellular automata based on graph rewriting systems. Elementary cellular automata The simplest nontrivial cellular automaton would be one-dimensional, with two possible states per cell, and a cell's neighbors defined as the adjacent cells on either side of it. A cell and its two neighbors form a neighborhood of 3 cells, so there are 2^3 = 8 possible patterns for a neighborhood. A rule consists of deciding, for each pattern, whether the cell will be a 1 or a 0 in the next generation. There are then 2^8 = 256 possible rules. These 256 cellular automata are generally referred to by their Wolfram code, a standard naming convention invented by Wolfram that gives each rule a number from 0 to 255. A number of papers have analyzed and compared the distinct cases among the 256 cellular automata (many are trivially isomorphic). The rule 30, rule 90, rule 110, and rule 184 cellular automata are particularly interesting. The images below show the history of rules 30 and 110 when the starting configuration consists of a 1 (at the top of each image) surrounded by 0s. Each row of pixels represents a generation in the history of the automaton, with t=0 being the top row. Each pixel is colored white for 0 and black for 1. Rule 30 exhibits class 3 behavior, meaning even simple input patterns such as that shown lead to chaotic, seemingly random histories. Rule 110, like the Game of Life, exhibits what Wolfram calls class 4 behavior, which is neither completely random nor completely repetitive. Localized structures appear and interact in various complicated-looking ways. In the course of the development of A New Kind of Science, as a research assistant to Wolfram in 1994, Matthew Cook proved that some of these structures were rich enough to support universality. This result is interesting because rule 110 is an extremely simple one-dimensional system, and difficult to engineer to perform specific behavior. This result therefore provides significant support for Wolfram's view that class 4 systems are inherently likely to be universal. Cook presented his proof at a Santa Fe Institute conference on Cellular Automata in 1998, but Wolfram blocked the proof from being included in the conference proceedings, as Wolfram did not want the proof announced before the publication of A New Kind of Science. In 2004, Cook's proof was finally published in Wolfram's journal Complex Systems (Vol. 15, No. 1), over ten years after Cook came up with it. Rule 110 has been the basis for some of the smallest universal Turing machines. Rule space An elementary cellular automaton rule is specified by 8 bits, and all elementary cellular automaton rules can be considered to sit on the vertices of the 8-dimensional unit hypercube. This unit hypercube is the cellular automaton rule space.
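An elementary cellular automaton can be implemented in a few lines of code. The following Python sketch (an illustrative implementation added here, not code from the article) evolves any of the 256 Wolfram-coded rules; the Wolfram code is used directly as a lookup table, one bit per neighborhood pattern, and the boundaries wrap toroidally via modular indexing as described in the Overview:

```python
def step(cells, rule):
    """Advance one generation of an elementary CA.

    cells: list of 0/1 states; rule: Wolfram code 0-255.
    """
    n = len(cells)
    out = []
    for i in range(n):
        # Read the 3-cell neighborhood as a number 0-7 ...
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # ... and look up the corresponding bit of the rule number.
        out.append((rule >> pattern) & 1)
    return out

# Rule 30 from a single live cell, as in the histories described above.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 30)
```

Passing 110 instead of 30 reproduces the class 4 behavior discussed in the text; since each rule is just an 8-bit number, the Hamming distance between two rules mentioned under Rule space is simply the number of differing bits, e.g. bin(r1 ^ r2).count("1") in Python.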
For next-nearest-neighbor cellular automata, a rule is specified by 2⁵ = 32 bits, and the cellular automaton rule space is a 32-dimensional unit hypercube. A distance between two rules can be defined by the number of steps required to move from one vertex, which represents the first rule, to another vertex, representing the second rule, along the edges of the hypercube. This rule-to-rule distance is also called the Hamming distance. Cellular automaton rule space allows us to ask whether rules with similar dynamical behavior are "close" to each other. Graphically drawing a high-dimensional hypercube on the 2-dimensional plane remains a difficult task, and one crude locator of a rule in the hypercube is the number of 1 bits in the 8-bit string for elementary rules (or the 32-bit string for the next-nearest-neighbor rules). Drawing the rules in different Wolfram classes in these slices of the rule space shows that class 1 rules tend to have a lower number of 1 bits, thus located in one region of the space, whereas class 3 rules tend to have a higher proportion (50%) of 1 bits. For larger cellular automaton rule spaces, it has been shown that class 4 rules are located between the class 1 and class 3 rules. This observation is the foundation for the phrase edge of chaos, and is reminiscent of the phase transition in thermodynamics.

Applications
Biology
Several biological processes occur as, or can be simulated by, cellular automata. Some examples of biological phenomena modeled by cellular automata with a simple state space are:
Patterns of some seashells, like the ones in the genera Conus and Cymbiola, are generated by natural cellular automata. The pigment cells reside in a narrow band along the shell's lip. Each cell secretes pigments according to the activating and inhibiting activity of its neighbor pigment cells, obeying a natural version of a mathematical rule. The cell band leaves the colored pattern on the shell as it grows slowly. For example, the widespread species Conus textile bears a pattern resembling Wolfram's rule 30 cellular automaton.
Plants regulate their intake and loss of gases via a cellular automaton mechanism. Each stoma on the leaf acts as a cell.
Moving wave patterns on the skin of cephalopods can be simulated with a two-state, two-dimensional cellular automaton, each state corresponding to either an expanded or retracted chromatophore.
Threshold automata have been invented to simulate neurons, and complex behaviors such as recognition and learning can be simulated.
Fibroblasts bear similarities to cellular automata, as each fibroblast only interacts with its neighbors.
Additionally, biological phenomena which require explicit modeling of the agents' velocities (for example, those involved in collective cell migration) may be modeled by cellular automata with a more complex state space and rules, such as biological lattice-gas cellular automata. These include phenomena of great medical importance, such as:
Characterization of different modes of metastatic invasion.
The role of heterogeneity in the development of aggressive carcinomas.
Phenotypic switching during tumor proliferation.

Chemistry
The Belousov–Zhabotinsky reaction is a spatio-temporal chemical oscillator that can be simulated by means of a cellular automaton. In the 1950s A. M. Zhabotinsky (extending the work of B. P.
Belousov) discovered that when a thin, homogeneous layer of a mixture of malonic acid, acidified bromate, and a ceric salt was left undisturbed, fascinating geometric patterns such as concentric circles and spirals propagate across the medium. In the "Computer Recreations" section of the August 1988 issue of Scientific American, A. K. Dewdney discussed a cellular automaton developed by Martin Gerhardt and Heike Schuster of the University of Bielefeld (Germany). This automaton produces wave patterns that resemble those in the Belousov–Zhabotinsky reaction.

Physics
Probabilistic cellular automata are used in statistical and condensed matter physics to study phenomena like fluid dynamics and phase transitions. The Ising model is a prototypical example, in which each cell can be in either of two states called "up" and "down", making an idealized representation of a magnet. By adjusting the parameters of the model, the proportion of cells being in the same state can be varied, in ways that help explicate how ferromagnets become demagnetized when heated. Moreover, results from studying the demagnetization phase transition can be transferred to other phase transitions, like the evaporation of a liquid into a gas; this convenient cross-applicability is known as universality. The phase transition in the two-dimensional Ising model and other systems in its universality class has been of particular interest, as it requires conformal field theory to understand in depth. Other cellular automata that have been of significance in physics include lattice gas automata, which simulate fluid flows.

Computer science, coding, and communication
Cellular automaton processors are physical implementations of CA concepts, which can process information computationally. Processing elements are arranged in a regular grid of identical cells. The grid is usually a square tiling, or tessellation, of two or three dimensions; other tilings are possible, but not yet used. Cell states are determined only by interactions with adjacent neighbor cells. No means exists to communicate directly with cells farther away. One such cellular automaton processor array configuration is the systolic array. Cell interaction can be via electric charge, magnetism, vibration (phonons at quantum scales), or any other physically useful means. This can be done in several ways so that no wires are needed between any elements. This is very unlike processors used in most computers today (von Neumann designs) which are divided into sections with elements that can communicate with distant elements over wires.
Rule 30 was originally suggested as a possible block cipher for use in cryptography. Two-dimensional cellular automata can be used for constructing a pseudorandom number generator. Cellular automata have been proposed for public-key cryptography. The one-way function is the evolution of a finite CA whose inverse is believed to be hard to find. Given the rule, anyone can easily calculate future states, but it appears to be very difficult to calculate previous states. Cellular automata have also been applied to design error correction codes.
Other problems that can be solved with cellular automata include:
Firing squad synchronization problem
Majority problem

Generative art and music
Cellular automata have been used in generative music and evolutionary music composition and procedural terrain generation in video games.
Maze generation

Specific rules
Specific cellular automata rules include:
Brian's Brain
Codd's cellular automaton
CoDi
Conway's Game of Life
Day and Night
Langton's ant
Langton's loops
Lenia
Nobili cellular automata
Rule 90
Rule 184
Seeds
Turmite
Von Neumann cellular automaton
Wireworld

See also
References
Citations
Works cited
Further reading
Proposes reaction-diffusion, a type of continuous automaton.

External links
Mirek's Cellebration – Home to free MCell and MJCell cellular automata explorer software and rule libraries. The software supports a large number of 1D and 2D rules. The site provides both an extensive rules lexicon and many image galleries loaded with examples of rules. MCell is a Windows application, while MJCell is a Java applet. Source code is available.
Golly supports von Neumann, Nobili, GOL, and a great many other systems of cellular automata. Developed by Tomas Rokicki and Andrew Trevorrow. This is the only simulator currently available that can demonstrate von Neumann type self-replication.
Wolfram Atlas – An atlas of various types of one-dimensional cellular automata.
Conway Life
Cellular automaton FAQ from the newsgroup comp.theory.cell-automata
"Neighbourhood Survey" (includes discussion on triangular grids, and larger neighborhood CAs)
Cosma Shalizi's Cellular Automata Notebook contains an extensive list of academic and professional reference material.

Systems theory
Dynamical systems
Computational fields of study
Cellular automaton
[ "Physics", "Mathematics", "Technology" ]
6,682
[ "Computational fields of study", "Recreational mathematics", "Cellular automata", "Mechanics", "Computing and society", "Dynamical systems" ]
54,347
https://en.wikipedia.org/wiki/Complement%20%28set%20theory%29
In set theory, the complement of a set A, often denoted by A^c (or A′), is the set of elements not in A.
When all elements in the universe, i.e. all elements under consideration, are considered to be members of a given set U, the absolute complement of A is the set of elements in U that are not in A.
The relative complement of A with respect to a set B, also termed the set difference of B and A, written B \ A, is the set of elements in B that are not in A.

Absolute complement
Definition
If A is a set, then the absolute complement of A (or simply the complement of A) is the set of elements not in A (within a larger set that is implicitly defined). In other words, let U be a set that contains all the elements under study; if there is no need to mention U, either because it has been previously specified, or it is obvious and unique, then the absolute complement of A is the relative complement of A in U:
A^c = U \ A.
The absolute complement of A is usually denoted by A^c. Other notations include A′, U \ A, and ∁_U A.

Examples
Assume that the universe is the set of integers. If A is the set of odd numbers, then the complement of A is the set of even numbers. If B is the set of multiples of 3, then the complement of B is the set of numbers congruent to 1 or 2 modulo 3 (or, in simpler terms, the integers that are not multiples of 3).
Assume that the universe is the standard 52-card deck. If the set A is the suit of spades, then the complement of A is the union of the suits of clubs, diamonds, and hearts. If the set B is the union of the suits of clubs and diamonds, then the complement of B is the union of the suits of hearts and spades.
When the universe is the universe of sets described in formalized set theory, the absolute complement of a set is generally not itself a set, but rather a proper class. For more info, see universal set.

Properties
Let A and B be two sets in a universe U. The following identities capture important properties of absolute complements:
De Morgan's laws: (A ∪ B)^c = A^c ∩ B^c and (A ∩ B)^c = A^c ∪ B^c.
Complement laws: A ∪ A^c = U, A ∩ A^c = ∅, ∅^c = U, U^c = ∅, and if A ⊆ B then B^c ⊆ A^c (this follows from the equivalence of a conditional with its contrapositive).
Involution or double complement law: (A^c)^c = A.
Relationships between relative and absolute complements: A \ B = A ∩ B^c and (A \ B)^c = A^c ∪ B.
Relationship with a set difference: A^c \ B^c = B \ A.
The first two complement laws above show that if A is a non-empty, proper subset of U, then {A, A^c} is a partition of U.

Relative complement
Definition
If A and B are sets, then the relative complement of A in B, also termed the set difference of B and A, is the set of elements in B but not in A. The relative complement of A in B is denoted B ∖ A according to the ISO 31-11 standard. It is sometimes written B − A, but this notation is ambiguous, as in some contexts (for example, Minkowski set operations in functional analysis) it can be interpreted as the set of all elements b − a, where b is taken from B and a from A. Formally:
B \ A = {x ∈ B : x ∉ A}.

Examples
If ℝ is the set of real numbers and ℚ is the set of rational numbers, then ℝ \ ℚ is the set of irrational numbers.

Properties
Let A, B, and C be three sets in a universe U. The following identities capture notable properties of relative complements:
C \ (A ∩ B) = (C \ A) ∪ (C \ B).
C \ (A ∪ B) = (C \ A) ∩ (C \ B).
C \ (B \ A) = (C ∩ A) ∪ (C \ B), with the important special case C \ (C \ A) = C ∩ A demonstrating that intersection can be expressed using only the relative complement operation.
If A ⊆ B, then C \ B ⊆ C \ A.
A \ B = A is equivalent to A ∩ B = ∅.
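These identities are easy to experiment with computationally. The following Python sketch (an illustration with an arbitrarily chosen finite universe, not part of the original article) checks De Morgan's laws, the double complement law, and the link between relative and absolute complements:

# Check complement identities over a small, arbitrarily chosen universe.
U = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def complement(s, universe=U):
    """Absolute complement of s within the universe."""
    return universe - s  # Python's '-' on sets is the relative complement

# De Morgan's laws
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)
# Involution (double complement) law
assert complement(complement(A)) == A
# Relationship between relative and absolute complements: A \ B = A ∩ B^c
assert A - B == A & complement(B)
print("all checked identities hold for this example")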
Complementary relation
A binary relation R is defined as a subset of a product of sets X × Y. The complementary relation R̄ is the set complement of R in X × Y; it can be written R̄ = (X × Y) \ R. Here, R is often viewed as a logical matrix with rows representing the elements of X and columns the elements of Y. The truth of aRb corresponds to 1 in row a, column b. Producing the complementary relation to R then corresponds to switching all 1s to 0s, and 0s to 1s, for the logical matrix of the complement. Together with composition of relations and converse relations, complementary relations and the algebra of sets are the elementary operations of the calculus of relations.

LaTeX notation
In the LaTeX typesetting language, the command \setminus is usually used for rendering a set difference symbol, which is similar to a backslash symbol. When rendered, the \setminus command looks identical to \backslash, except that it has a little more space in front and behind the slash, akin to the LaTeX sequence \mathbin{\backslash}. A variant \smallsetminus is available in the amssymb package, but this symbol is not included separately in Unicode. The symbol ∁ (as opposed to C) is produced by \complement. (It corresponds to the Unicode symbol U+2201 ∁.)

See also
Notes
References
External links

Basic concepts in set theory
Operations on sets
Complement (set theory)
[ "Mathematics" ]
913
[ "Basic concepts in set theory", "Operations on sets" ]
54,356
https://en.wikipedia.org/wiki/Boolean%20ring
In mathematics, a Boolean ring R is a ring for which x² = x for all x in R, that is, a ring that consists of only idempotent elements. An example is the ring of integers modulo 2.
Every Boolean ring gives rise to a Boolean algebra, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨, which would constitute a semiring). Conversely, every Boolean algebra gives rise to a Boolean ring. Boolean rings are named after the founder of Boolean algebra, George Boole.

Notation
There are at least four different and incompatible systems of notation for Boolean rings and algebras:
In commutative algebra the standard notation is to use x + y for the ring sum of x and y, and use xy for their product.
In logic, a common notation is to use x ∧ y for the meet (same as the ring product) and use x ∨ y for the join, given in terms of ring notation (given just above) by x ∨ y = x + y + xy.
In set theory and logic it is also common to use x · y for the meet, and x + y for the join x ∨ y. This use of + is different from the use in ring theory.
A rare convention is to use xy for the product and x ⊕ y for the ring sum, in an effort to avoid the ambiguity of +.
Historically, the term "Boolean ring" has been used to mean a "Boolean ring possibly without an identity", and "Boolean algebra" has been used to mean a Boolean ring with an identity. The existence of the identity is necessary to consider the ring as an algebra over the field of two elements: otherwise there cannot be a (unital) ring homomorphism of the field of two elements into the Boolean ring. (This is the same as the old use of the terms "ring" and "algebra" in measure theory.)

Examples
One example of a Boolean ring is the power set of any set X, where the addition in the ring is symmetric difference, and the multiplication is intersection. As another example, we can also consider the set of all finite or cofinite subsets of X, again with symmetric difference and intersection as operations. More generally with these operations any field of sets is a Boolean ring. By Stone's representation theorem every Boolean ring is isomorphic to a field of sets (treated as a ring with these operations).

Relation to Boolean algebras
Since the join operation in a Boolean algebra is often written additively, it makes sense in this context to denote ring addition by ⊕, a symbol that is often used to denote exclusive or.
Given a Boolean ring R, for x and y in R we can define
x ∧ y = xy,
x ∨ y = x ⊕ y ⊕ xy,
¬x = 1 ⊕ x.
These operations then satisfy all of the axioms for meets, joins, and complements in a Boolean algebra. Thus every Boolean ring becomes a Boolean algebra. Similarly, every Boolean algebra becomes a Boolean ring thus:
xy = x ∧ y,
x ⊕ y = (x ∧ ¬y) ∨ (¬x ∧ y).
If a Boolean ring is translated into a Boolean algebra in this way, and then the Boolean algebra is translated into a ring, the result is the original ring. The analogous result holds beginning with a Boolean algebra.
A map between two Boolean rings is a ring homomorphism if and only if it is a homomorphism of the corresponding Boolean algebras. Furthermore, a subset of a Boolean ring is a ring ideal (prime ring ideal, maximal ring ideal) if and only if it is an order ideal (prime order ideal, maximal order ideal) of the Boolean algebra. The quotient ring of a Boolean ring modulo a ring ideal corresponds to the factor algebra of the corresponding Boolean algebra modulo the corresponding order ideal.

Properties of Boolean rings
Every Boolean ring R satisfies x ⊕ x = 0 for all x in R, because we know
x ⊕ x = (x ⊕ x)² = x² ⊕ x² ⊕ x² ⊕ x² = x ⊕ x ⊕ x ⊕ x,
and since (R, ⊕) is an abelian group, we can subtract x ⊕ x from both sides of this equation, which gives x ⊕ x = 0.
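As a quick computational sanity check on these definitions (an illustration, not taken from the article's references), the following Python sketch realizes the Boolean ring of subsets of a three-element set, with symmetric difference as the ring sum and intersection as the product:

from itertools import chain, combinations

# The Boolean ring of subsets of X: '^' is symmetric difference (ring sum),
# '&' is intersection (ring product).
X = {0, 1, 2}

def powerset(s):
    """All subsets of s, as frozensets."""
    items = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))]

R = powerset(X)
zero = frozenset()  # the empty set is the additive identity

for a in R:
    assert (a & a) == a     # idempotence: x * x = x
    assert (a ^ a) == zero  # characteristic 2: x + x = 0
print("idempotence and x + x = 0 hold for all", len(R), "elements")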
A similar proof shows that every Boolean ring is commutative:
x ⊕ y = (x ⊕ y)² = x² ⊕ xy ⊕ yx ⊕ y² = x ⊕ xy ⊕ yx ⊕ y,
and this yields xy ⊕ yx = 0, which means xy = yx (using the first property above).
The property x ⊕ x = 0 shows that any Boolean ring is an associative algebra over the field F₂ with two elements, in precisely one way. In particular, any finite Boolean ring has as cardinality a power of two. Not every unital associative algebra over F₂ is a Boolean ring: consider for instance the polynomial ring F₂[X].
The quotient ring R/I of any Boolean ring R modulo any ideal I is again a Boolean ring. Likewise, any subring of a Boolean ring is a Boolean ring. Any localization of a Boolean ring by a set is a Boolean ring, since every element in the localization is idempotent. The maximal ring of quotients (in the sense of Utumi and Lambek) of a Boolean ring is a Boolean ring, since every partial endomorphism is idempotent.
Every prime ideal P in a Boolean ring R is maximal: the quotient ring R/P is an integral domain and also a Boolean ring, so it is isomorphic to the field F₂, which shows the maximality of P. Since maximal ideals are always prime, prime ideals and maximal ideals coincide in Boolean rings.
Every finitely generated ideal of a Boolean ring is principal (indeed, (x, y) = (x ⊕ y ⊕ xy)). Furthermore, as all elements are idempotents, Boolean rings are commutative von Neumann regular rings and hence absolutely flat, which means that every module over them is flat.

Unification
Unification in Boolean rings is decidable, that is, algorithms exist to solve arbitrary equations over Boolean rings. Both unification and matching in finitely generated free Boolean rings are NP-complete, and both are NP-hard in finitely presented Boolean rings. (In fact, as any unification problem in a Boolean ring can be rewritten as an equivalent matching problem, the problems are equivalent.) Unification in Boolean rings is unitary if all the uninterpreted function symbols are nullary and finitary otherwise (i.e. if the function symbols not occurring in the signature of Boolean rings are all constants then there exists a most general unifier, and otherwise the minimal complete set of unifiers is finite).

See also
Ring sum normal form

Notes
Citations
References
External links
John Armstrong, Boolean Rings

Ring theory
Boolean algebra
Boolean ring
[ "Mathematics" ]
1,274
[ "Boolean algebra", "Fields of abstract algebra", "Mathematical logic", "Ring theory" ]
54,390
https://en.wikipedia.org/wiki/Disassembler
A disassembler is a computer program that translates machine language into assembly language—the inverse operation to that of an assembler. The output of disassembly is typically formatted for human-readability rather than for input to an assembler, making disassemblers primarily a reverse-engineering tool. Common uses include analyzing the output of high-level programming language compilers and their optimizations, recovering source code when the original is lost, performing malware analysis, modifying software (such as binary patching), and software cracking.
A disassembler differs from a decompiler, which targets a high-level language rather than an assembly language.
Assembly language source code generally permits the use of constants and programmer comments. These are usually removed from the assembled machine code by the assembler. If so, a disassembler operating on the machine code would produce disassembly lacking these constants and comments; the disassembled output becomes more difficult for a human to interpret than the original annotated source code. Some disassemblers provide a built-in code commenting feature where the generated output is enriched with comments regarding called API functions or parameters of called functions. Some disassemblers make use of the symbolic debugging information present in object files such as ELF. For example, IDA allows the human user to make up mnemonic symbols for values or regions of code in an interactive session: human insight applied to the disassembly process often parallels human creativity in the code writing process.

Challenges
It is not always possible to distinguish executable code from data within a binary. While common executable formats, such as ELF and PE, separate code and data into distinct sections, flat binaries do not, making it unclear whether a given location contains executable instructions or non-executable data. This ambiguity might complicate the disassembly process. Additionally, CPUs often allow dynamic jumps computed at runtime, which makes it impossible to identify all possible locations in the binary that might be executed as instructions. On computer architectures with variable-width instructions, such as in many CISC architectures, more than one valid disassembly may exist for the same binary. Disassemblers also cannot handle code that changes during execution, as static analysis cannot account for runtime modifications.
Encryption, packing, or obfuscation are often applied to computer programs, especially as part of digital rights management to deter reverse engineering and cracking. These techniques pose additional challenges for disassembly, as the code must first be unpacked or decrypted before meaningful analysis can begin.

Examples of disassemblers
A disassembler can be either stand-alone or interactive. A stand-alone disassembler generates an assembly language file upon execution, which can then be examined. In contrast, an interactive disassembler immediately reflects any changes made by the user. For example, if the disassembler initially treats a section of the program as data rather than code, the user can specify it as code. The disassembled code will then be updated and displayed instantly, allowing the user to analyze it and make further changes during the same session.
Any interactive debugger will include some way of viewing the disassembly of the program being debugged. Often, the same disassembly tool will be packaged as a standalone disassembler distributed along with the debugger.
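As a concrete illustration of stand-alone (batch) disassembly, the following sketch uses the third-party Capstone library for Python. The byte string is an arbitrary x86-64 fragment chosen for this example; a real tool would additionally have to decide which bytes are code at all, per the challenges described above:

# Minimal linear-sweep disassembly with Capstone (pip install capstone).
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Arbitrary x86-64 fragment: push rbp; mov rbp, rsp; mov eax, 42; pop rbp; ret
code = b"\x55\x48\x89\xe5\xb8\x2a\x00\x00\x00\x5d\xc3"
base_address = 0x1000  # assumed load address, used only for display

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, base_address):
    print("0x%x:\t%s\t%s" % (insn.address, insn.mnemonic, insn.op_str))

A linear sweep like this simply decodes bytes in order, which is why data interleaved with code, or a wrong starting offset, can yield a plausible-looking but incorrect listing on variable-width instruction sets.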
One example of such a pairing is objdump, part of GNU Binutils, which is related to the interactive debugger gdb.

Binary Ninja
DEBUG
Interactive Disassembler (IDA)
Ghidra
Hiew
Hopper Disassembler
PE Explorer Disassembler
Netwide Disassembler (Ndisasm), companion to the Netwide Assembler (NASM)
OLIVER (CICS interactive test/debug) includes disassemblers for Assembler, COBOL, and PL/1
x64dbg, a debugger for Windows that also performs dynamic disassembly
OllyDbg is a 32-bit assembler level analysing debugger
Radare2
Rizin and Cutter (graphical interface for Rizin)
SIMON (batch interactive test/debug) includes disassemblers for Assembler, COBOL, and PL/1
Sourcer, a commenting 16-bit/32-bit disassembler for DOS, OS/2 and Windows by V Communications in the 1990s

Disassemblers and emulators
A dynamic disassembler can be integrated into the output of an emulator or hypervisor to trace the real-time execution of machine instructions, displaying them line by line. In this setup, along with the disassembled machine code, the disassembler can show changes to registers, data, or other state elements (such as condition codes) caused by each instruction. This provides powerful debugging information for problem resolution. However, the output size can become quite large, particularly if the tracing is active throughout the entire execution of a program. These features were first introduced in the early 1970s by OLIVER as part of its CICS debugging product and are now incorporated into the XPEDITER product from Compuware.

Length disassembler
A length disassembler, also known as a length disassembler engine (LDE), is a tool that, given a sequence of bytes (instructions), outputs the number of bytes taken by the parsed instruction. Notable open source projects for the x86 architecture include ldisasm, Tiny x86 Length Disassembler and Extended Length Disassembler Engine for x86-64.

See also
Control-flow graph
Data-flow analysis
Decompiler

References
Further reading
External links
List of x86 disassemblers in Wikibooks
Transformation Wiki on disassembly
Boomerang – A general, open source, retargetable decompiler of machine code programs
Online Disassembler, a free online disassembler of ARM, MIPS, PPC, and x86 code

Debugging
Reverse engineering
Disassembler
[ "Engineering" ]
1,307
[ "Reverse engineering", "Disassemblers" ]
54,399
https://en.wikipedia.org/wiki/TI-89%20series
The TI-89 and the TI-89 Titanium are graphing calculators developed by Texas Instruments (TI). They are differentiated from most other TI graphing calculators by their computer algebra system, which allows symbolic manipulation of algebraic expressions—equations can be solved in terms of variables—whereas the TI-83/84 series can only give a numeric result.

TI-89
The TI-89 is a graphing calculator developed by Texas Instruments in 1998. The unit features a 160×100 pixel resolution LCD and a large amount of flash memory, and includes TI's Advanced Mathematics Software. The TI-89 is one of the highest model lines in TI's calculator products, along with the TI-Nspire. In the summer of 2004, the standard TI-89 was replaced by the TI-89 Titanium.
The TI-89 runs on a 32-bit microprocessor, the Motorola 68000, which nominally runs at 10 or 12 MHz, depending on the calculator's hardware version. The calculator has 256 kB of RAM (190 kB of which are available to the user) and 2 MB of flash memory (700 kB of which is available to the user). The RAM and Flash ROM are used to store expressions, variables, programs, text files, and lists.
The TI-89 is essentially a TI-92 Plus with a limited keyboard and smaller screen. It was created partially in response to the fact that while calculators are allowed on many standardized tests, the TI-92 was not, due to the QWERTY layout of its keyboard. Additionally, some people found the TI-92 unwieldy and overly large. The TI-89 is significantly smaller—about the same size as most other graphing calculators. It has a flash ROM, a feature present on the TI-92 Plus but not on the original TI-92.

User features
The major advantage of the TI-89 over other TI calculators is its built-in computer algebra system, or CAS. The calculator can evaluate and simplify algebraic expressions symbolically. For example, entering x^2-4x+4 returns (x − 2)². The answer is "prettyprinted" by default; that is, displayed as it would be written by hand (e.g. the aforementioned (x − 2)² rather than x^2-4x+4). The TI-89's abilities include:
Algebraic factoring of expressions, including partial fraction decomposition.
Algebraic simplification; for example, the CAS can combine multiple terms into one fraction by finding a common denominator.
Evaluation of trigonometric expressions to exact values. For example, sin(60°) returns √3/2 instead of 0.86603.
Solving equations for a certain variable. The CAS can solve for one variable in terms of others; it can also solve systems of equations. For equations such as quadratics where there are multiple solutions, it returns all of them. Equations with infinitely many solutions are solved by introducing arbitrary constants: solve(tan(x+2)=0,x) returns x = 2·(90·@n1 − 1), with the @n1 representing any integer.
Symbolic and numeric differentiation and integration. Derivatives and definite integrals are evaluated exactly when possible, and approximately otherwise.
Calculate greatest common divisor (gcd) and least common multiple (lcm)
Probability theory: factorial, combination, permutation, binomial distribution, normal distribution
PrettyPrint (like an equation editor or LaTeX)
Mathematical constants such as π, e, and i are shown as symbols
Draw 2D and 3D graphs
Calculate Taylor polynomials
Calculate the limit of a function, including infinite limits and limits from one direction
Vector calculation
Matrix calculation
Calculate series (summation or infinite product)
Calculate chi-squared tests
Calculate with complex numbers
Factoring polynomials: factor(polynomial) or cfactor(polynomial)
Solve equations: solve(equation, variable) or csolve(equation, variable)
Solve first or second order differential equations: deSolve(differential equation, independent variable, dependent variable)
Multiply and divide SI units (the underscore character, _, entered with the "diamond" and "MODE" keys)
A number of regressions: LinReg, QuadReg, CubicReg, QuartReg, ExpReg, LnReg, PowerReg, Logistic, SinReg
In addition to the standard two-dimensional function plots, it can also produce graphs of parametric equations, polar equations, sequence plots, differential equation fields, and three-dimensional (two independent variable) functions.

Programming
The TI-89 is directly programmable in a language called TI-BASIC 89, TI's derivative of BASIC for calculators. With the use of a PC, it is also possible to develop more complex programs in Motorola 68000 assembly language or C, translate them to machine language, and copy them to the calculator. Two software development kits for C programming are available; one is TI Flash Studio, the official TI SDK, and the other is TIGCC, a third-party SDK based on GCC. In addition, there is a third-party flash application called GTC that allows the writing and compilation of C programs directly on the calculator. It is built on TIGCC, with some modifications. Numerous BASIC extensions are also present, the most notable of which is NewProg.
Since the TI-89's release in 1998, thousands of programs for math, science, or entertainment have been developed. Many video games have also been developed. Many are generic clones of Tetris, Minesweeper, and other classic games, but some programs are more advanced: for example, a ZX Spectrum emulator, a chess-playing program, a symbolic circuit simulator, and a clone of Link's Awakening. Some of the most popular and well-known games are Phoenix, Drugwars, and Snake. Many calculator games and other useful programs can be found on TI-program sharing sites. Ticalc.org is a major one that offers thousands of calculator programs.

Hardware versions
There are four hardware versions of the TI-89. These versions are normally referred to as HW1, HW2, HW3, and HW4 (released in May 2006). Entering the key sequence [F1] [A] displays the hardware version. Older versions (before HW2) don't display anything about the hardware version in the about menu. The differences in the hardware versions are not well documented by Texas Instruments. HW1 and HW2 correspond to the original TI-89; HW3 and HW4 are only present in the TI-89 Titanium.
The most significant difference between HW1 and HW2 is in the way the calculator handles the display. In HW1 calculators there is a video buffer that stores all of the information that should be displayed on the screen, and every time the screen is refreshed the calculator accesses this buffer and flushes it to the display (direct memory access). In HW2 and later calculators, a region of memory is directly aliased to the display controller (memory-mapped I/O).
This allows for slightly faster memory access, as the HW1's DMA controller used about 10% of the bus bandwidth. However, it interferes with a trick some programs use to implement grayscale graphics by rapidly switching between two or more displays (page-flipping). On the HW1, the DMA controller's base address can be changed (a single write into a memory-mapped hardware register) and the screen will automatically use a new section of memory at the beginning of the next frame. In HW2, the new page must be written to the screen by software. The effect of this is to cause increased flickering in grayscale mode, enough to make the 7-level grayscale supported on the HW1 unusable (although 4-level grayscale works on both calculators).
HW2 calculators are slightly faster because TI increased the nominal speed of the processor from 10 MHz to 12 MHz. It is believed that TI increased the speed of HW4 calculators to 16 MHz, though many users disagree about this finding. The measured statistics are closer to 14 MHz.
Another difference between HW1 and HW2 calculators is assembly program size limitations. The size limitation on HW2 calculators has varied with the AMS version of the calculator. As of AMS 2.09 the limit is 24 kB. Some earlier versions limited assembly programs to 8 kB, and the earliest AMS versions had no limit. The latest AMS version has a 64 kB limit. HW1 calculators have no hardware to enforce the limits, so it is easy to bypass them in software. There are unofficial patches and kernels that can be installed on HW2 calculators to remove the limitations.

TI-89 Titanium
The TI-89 Titanium was released on June 1, 2004, and has largely replaced the popular classic TI-89. The TI-89 Titanium is referred to as HW3 and uses the corresponding AMS 3.x. In 2006, new calculators were upgraded to HW4, which was supposed to offer increases in RAM and speeds up to 16 MHz, but some benchmarks made by users reported speeds between 12.85 and 14 MHz.
The touted advantages of the TI-89 Titanium over the original TI-89 include two times the flash memory (with over four times as much available to the user). The TI-89 Titanium is essentially a Voyage 200, without an integrated keyboard. The TI-89 Titanium also has a USB On-The-Go port, for connectivity to other TI-89 Titanium calculators, or to a computer (to store programs or update the operating system). The TI-89 Titanium also features some pre-loaded applications, such as "CellSheet", a spreadsheet program also offered with other TI calculators. The Titanium has a slightly updated CAS, which adds a few more mathematical functions, most notably implicit differentiation. The Titanium also has a slightly differing case design from that of the TI-89 (the Titanium's case design is similar to that of the TI-84 Plus).
There are some minor compatibility issues with C and assembly programs developed for the original TI-89. Some have to be recompiled to work on the Titanium due to various small hardware changes, though in most cases the problems can be fixed by using a utility such as GhostBuster, by Olivier Armand and Kevin Kofler. This option is generally preferred as it requires no knowledge of the program, works without the need of the program's source code, is automated, and doesn't require additional computer software. In some cases, only one character needs to be changed (the ROM base on the TI-89 is at 0x200000, whereas on the TI-89 Titanium it is at 0x800000) by hand or by patcher. Most, if not all, of these problems are caused by the mirror memory (ghost space) or lack thereof.
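The one-character fix mentioned above can be pictured as a byte-level patch. The following Python sketch is a toy illustration of that idea only; it is not the GhostBuster utility, the instruction stream is made up, and a real patcher would consult relocation or symbol information rather than blindly replacing bytes:

# Toy retargeting of a hard-coded ROM base in a 68000 binary image.
# The 68000 is big-endian, so the address appears as a big-endian 32-bit constant.
OLD_BASE = (0x200000).to_bytes(4, "big")  # TI-89 ROM base
NEW_BASE = (0x800000).to_bytes(4, "big")  # TI-89 Titanium ROM base

def retarget(image: bytes) -> bytes:
    """Replace literal occurrences of the old ROM base (naive; may hit false positives)."""
    return image.replace(OLD_BASE, NEW_BASE)

program = bytes.fromhex("4e71 0020 0000 4e75")  # nop, the constant 0x00200000, rts
patched = retarget(program)
assert patched.hex() == "4e71008000004e75"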
Use in schools
United Kingdom
The Joint Council for Qualifications publish examination instructions on behalf of the main examination boards in England, Wales and Northern Ireland. These instructions state that a calculator used in an examination must not be designed to offer symbolic algebra manipulation, symbolic differentiation or integration. This precludes use of the TI-89 or TI-89 Titanium in examinations, but it may be used as part of classroom study. The SQA give the same instructions for examinations in Scotland.
United States
In the United States, the TI-89 is allowed by the College Board on all calculator-permitted tests, including the SAT, some SAT Subject Tests, and the AP Calculus, Physics, Chemistry, and Statistics exams. However, the calculator is banned from use on the ACT, the PLAN, and in some classrooms. The TI-92 series, with otherwise comparable features, has a QWERTY keyboard that results in it being classified as a computer device rather than as a calculator.

See also
Comparison of Texas Instruments graphing calculators
TI-Nspire

References
External links
Instruction Manual
Using the TI-89 Graphing Calculator

Computer algebra systems
Graphing calculators
Products introduced in 1998
Texas Instruments programmable calculators
68k-based mobile devices
Products introduced in 2004
TI-89 series
[ "Mathematics" ]
2,542
[ "Computer algebra systems", "Mathematical software" ]
54,416
https://en.wikipedia.org/wiki/Trolleybus
A trolleybus (also known as trolley bus, trolley coach, trackless trolley, trackless tram (in the 1910s and 1920s) or trolley) is an electric bus that draws power from dual overhead wires (generally suspended from roadside posts) using spring-loaded trolley poles. Two wires, and two trolley poles, are required to complete the electrical circuit. This differs from a tram or streetcar, which normally uses the track as the return path, needing only one wire and one pole (or pantograph). They are also distinct from other kinds of electric buses, which usually rely on batteries. Power is most commonly supplied as 600-volt direct current, but there are exceptions.
Currently, around 300 trolleybus systems are in operation, in cities and towns in 43 countries. Altogether, more than 800 trolleybus systems have existed, but not more than about 400 concurrently.

History
The trolleybus dates back to 29 April 1882, when Dr. Ernst Werner Siemens demonstrated his "Elektromote" in a Berlin suburb. This experiment continued until 13 June 1882, after which there were few developments in Europe, although separate experiments were conducted in the United States. In 1899, another vehicle which could run either on or off rails was demonstrated in Berlin. The next development was when Louis Lombard-Gérin operated an experimental line at the Paris Exhibition of 1900 after four years of trials, with a circular route around Lake Daumesnil that carried passengers. Routes followed in six places including Eberswalde and Fontainebleau. Max Schiemann on 10 July 1901 opened the world's fourth passenger-carrying trolleybus system, which operated at Bielatal (Biela Valley, near Dresden), Germany. Schiemann built and operated the Bielatal system, and is credited with developing the under-running trolley current collection system, with two horizontally parallel overhead wires and rigid trolley poles spring-loaded to hold them up to the wires. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current collection system. In the early days there were many other methods of current collection. The Cédès-Stoll (Mercédès-Électrique-Stoll) system was first operated near Dresden between 1902 and 1904, and 18 systems followed. The Lloyd-Köhler or Bremen system was tried out in Bremen with 5 further installations, and the Cantono Frigerio system was used in Italy. Throughout this period, trackless freight systems and electric canal boats were also built.
Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain, on 20 June 1911. Though officially opened on 20 June, the public was reportedly not admitted to the Bradford route until 24 June. Bradford was also the last city to operate trolleybuses in the UK; the system closed on 26 March 1972. The last rear-entrance trolleybus in service in Britain was also in Bradford and is now owned by the Bradford Trolleybus Association. Birmingham was the first UK city to replace a tram route with trolleybuses, while Wolverhampton, under the direction of Charles Owen Silvers, became world-famous for its trolleybus designs. There were 50 trolleybus systems in the UK, London's being the largest. By the time trolleybuses arrived in Britain in 1911, the Schiemann system was well established and was the most common, although the Cédès-Stoll (Mercédès-Électrique-Stoll) system was tried in West Ham (in 1912) and in Keighley (in 1913). Smaller trackless trolley systems were built in the US early as well.
The first non-experimental system was a seasonal municipal line installed near Nantasket Beach in 1904; the first year-round commercial line was built to open a hilly property to development just outside Los Angeles in 1910. The trackless trolley was often seen as an interim step, leading to streetcars. In the US, some systems subscribed to the all-four concept of using buses, trolleybuses, streetcars (trams, trolleys), and rapid transit subway and/or elevated lines (metros), as appropriate, for routes ranging from the lightly used to the heaviest trunk line. Buses and trolleybuses in particular were seen as entry systems that could later be upgraded to rail as appropriate. In a similar fashion, many cities in Britain originally viewed trolleybus routes as extensions to tram (streetcar) routes where the cost of constructing or restoring track could not be justified at the time, though this attitude changed markedly (to viewing them as outright replacements for tram routes) in the years after 1918. Trackless trolleys were the dominant form of new post-World War I electric traction, with extensive systems in, among others, Los Angeles, Chicago, Boston, Rhode Island, and Atlanta; San Francisco and Philadelphia still maintain an "all-four" fleet. Some trolleybus lines in the United States (and in Britain, as noted above) came into existence when a trolley or tram route did not have sufficient ridership to warrant track maintenance or reconstruction. In a similar manner, a proposed tram scheme in Leeds, United Kingdom, was changed to a trolleybus scheme to cut costs.
Trolleybuses are uncommon today in North America, but their use is widespread in Europe and Russia. They remain common in many countries which were part of the Soviet Union. Generally trolleybuses occupy a position in usage between street railways (trams) and motorbuses. Worldwide, around 300 cities or metropolitan areas on five continents are served by trolleybuses (further detail under Use and preservation, below). This mode of transport operates in large cities, such as Belgrade, Lyon, Pyongyang, São Paulo, Seattle, Sofia, St. Petersburg, and Zurich, as well as in smaller ones such as Dayton, Gdynia, Lausanne, Limoges, Modena, and Salzburg. As of 2020, Kyiv has, due to its history in the former Soviet Union, the largest trolleybus system in the world in terms of route length, while another formerly Soviet city, Minsk, has the largest system in terms of number of routes (which also date back to the Soviet era). Landskrona has the smallest system in terms of route length, while Mariánské Lázně is the smallest city to be served by trolleybuses. Opened in 1914, Shanghai's trolleybus system is the oldest operating system in the world. With a length of 86 km, route #52 of the Crimean Trolleybus is the longest trolleybus line in the world. See also Trolleybus usage by country.
Transit authorities in some cities have reduced or discontinued the use of trolleybuses in recent years, while others, wanting to add or expand use of zero-emission vehicles in an urban environment, have opened new systems or are planning new systems. For example, new systems opened in Lecce, Italy, in 2012; in Malatya, Turkey, in 2015; and in Marrakesh, Morocco, in 2017. Beijing and Shanghai have been expanding their respective systems, with Beijing expanding to a 31-line system operated with a fleet of over 1,250 trolleybuses. Trolleybuses have been long encouraged in North Korea, with the newest city to have a network being Manpo in December 2019.
Since 2022, the city of Prague has been constructing a new trolleybus system. Meanwhile, in 2023, plans for a trolleybus line in Berlin were scrapped in favour of a solution with battery-powered vehicles.

Vehicle design
Modern design vehicles

Advantages
Comparison to trams
Cheaper infrastructure: The initial start-up cost of trams is much higher, due to rail, signals, and other infrastructure. Trolleybuses can pull over to the kerb like other buses, eliminating the need for special boarding stations or boarding islands in the middle of the street, thus stations can be moved as needed.
Better hill climbing: Trolleybuses' rubber tyres have better adhesion than trams' steel wheels on steel rails, giving them better hill-climbing capability and braking.
Easier traffic avoidance: Unlike trams (where side tracks are often unavailable), an out-of-service vehicle can be moved to the side of the road and its trolley poles lowered. The ability to drive a substantial distance from the power wires allows trackless vehicles to avoid obstacles, although it also means a possibility that the vehicle may steer or skid far enough that the trolley pole can no longer reach the wire, stranding the vehicle. Trackless trolleys also are able to avoid collisions by manoeuvring around obstacles, similar to motor buses and other road vehicles, while trams can only change speed.
Quietness: Trolleybuses are generally quieter than trams.
Easier training: The control of trolleybuses is relatively similar to motorbuses; the potential operator pool for all buses is much larger than for trams.

Comparison to motorbuses

Disadvantages
Comparison to trams
Note: As there are numerous variations of tram and light-rail technology, the disadvantages listed may be applicable only with a specific technology or design.
Like any bus, a trolleybus has much less capacity than a tram.
More control required: Trolleybuses must be driven like motorbuses, requiring directional control by the driver.
Higher rolling resistance: Rubber-tired vehicles generally have more rolling resistance than steel wheels, which decreases energy efficiency.
Less efficient use of right-of-way: Lanes must be wider for unguided buses than for streetcars, since unguided buses can drift side-to-side. The use of guidance rail allows trams running in parallel lanes to pass closer together than drivers could safely steer.
Difficulties with platform loading: Implementation of level platform loading with minimal gap, either at design stage or afterwards, is easier and cheaper to implement with rail vehicles.
Wear of rubber tires leads to significant rubber pollution.

Comparison to motorbuses
Difficult to re-route: When compared to motorbuses, trolleybuses have greater difficulties with temporary or permanent re-routings, wiring for which is not usually readily available outside of downtown areas where the buses may be re-routed via adjacent business area streets where other trolleybus routes operate. This problem was highlighted in Vancouver in July 2008, when an explosion closed several roads in the city's downtown core. Because of the closure, trolleys were forced to detour several miles off their route in order to stay on the wires, leaving major portions of their routes not in service and off-schedule.
Aesthetics: The jumble of overhead wires may be seen as unsightly. Intersections often have a "webbed ceiling" appearance, due to multiple crossing and converging sets of trolley wires.
Dewirements: Trolley poles sometimes come off the wire.
Dewirements are relatively rare in modern systems with well-maintained overhead wires, hangers, fittings and contact shoes. Trolleybuses are equipped with special insulated pole ropes which drivers use to reconnect the trolley poles with the overhead wires. When approaching switches, trolleybuses usually must decelerate in order to avoid dewiring, and this deceleration can potentially add slightly to traffic congestion. In 1998, a dewirement in Shenyang on poorly maintained infrastructure killed 5 people and ultimately led to the demise of the trolleybus network.
Unable to overtake other trolleybuses: Trolleybuses cannot overtake one another in regular service unless two separate sets of wires with a switch are provided or the vehicles are equipped with off-wire capability, with the latter an increasingly common feature of new trolleybuses.
Higher capital cost of equipment: Trolleybuses are often long-lived equipment, with limited market demand. This generally leads to higher prices relative to internal combustion buses. The long equipment life may also complicate upgrades.
More training required: Drivers must learn how to prevent dewiring, slowing down at turns and through switches in the overhead wire system, for example.
Overhead wires create obstruction: Trolleybus systems employ overhead wires above the roads, often shared with other vehicles. The wires can restrict tall motor vehicles such as delivery trucks ("lorries") and double decker buses from using or crossing roads fitted with overhead wires, as such vehicles would hit the wires or pass dangerously close to them, risking damage and dangerous electrical faults. The wires also may impede positioning of overhead signage and create a hazard to activities such as road repairs using tall excavators or piling rigs, use of scaffolding, etc.

Off-wire power developments
With the re-introduction of hybrid designs, trolleybuses are no longer tied to overhead wires. The Public Service Company of New Jersey, with Yellow Coach, developed "All Service Vehicles", trackless trolleys capable of operating as gas-electric buses when off wire, and used them successfully between 1935 and 1948. Since the 1980s, systems such as Muni in San Francisco, TransLink in Vancouver, and Beijing, among others, have bought trolleybuses equipped with batteries to allow them to operate fairly long distances away from the wires. Supercapacitors can also be used to move buses short distances.
Trolleybuses can optionally be equipped either with limited off-wire capability—a small diesel engine or battery pack—for auxiliary or emergency use only, or full dual-mode capability. A simple auxiliary power unit can allow a trolleybus to get around a route blockage or can reduce the amount (or complexity) of overhead wiring needed at operating garages (depots). This capability has become increasingly common in newer trolleybuses, particularly in China, North America and Europe, where the vast majority of new trolleybuses delivered since the 1990s are fitted with at least limited off-wire capability. These have gradually replaced older trolleybuses which lacked such capability.
In Philadelphia, new trackless trolleys equipped with small hybrid diesel-electric power units for operating short distances off-wire were placed in service by SEPTA in 2008. This is instead of the trolleys using a conventional diesel drive train or battery-only system for their off-wire movement.
King County Metro in Seattle, Washington, and the MBTA on Boston's Silver Line have used dual-mode buses that run on electric power from overhead wires on a fixed right-of-way and on diesel power on city streets. Metro used special-order articulated Breda buses, introduced in 1990, and most were retired in 2005. A limited number of the Breda dual-mode buses had their diesel engines removed, and operated exclusively as trolleybuses until 2016. Since 2004, the MBTA has used dual-mode buses on its Silver Line (Waterfront) route. The last of these were replaced by diesel hybrid and battery-electric buses in June 2023.

In Motion Charging
IMC (In Motion Charging) trolleybuses are equipped with a lightweight battery, the size of which is adapted to the profile of the line served. This battery allows them to run beyond the overhead lines: they can operate with a mix of electric wire and batteries (for example, 60% of the time on the wire and 40% on the battery). With the development of battery technology in recent years, trolleybuses with extended off-wire capability through on-board batteries are becoming popular. The on-board battery is charged while the vehicle is in motion under the overhead wires and then allows off-wire travel for significant distances, often in excess of 15 km. Such trolleybuses are called, among others, trolleybuses with In-Motion Charging, hybrid trolleybuses, battery trolleybuses and electric buses with dynamic charging. The main advantages of this technology over conventional battery electric buses are reduced cost and weight of the battery due to its smaller size, no delays for charging at end stops as the vehicle charges while in motion, and reduced need for dedicated charging stations that take up public space. This new development allows the extension of trolleybus routes or the electrification of bus routes without the need to build overhead wires along the whole length of the route. Cities that utilize such trolleybuses include Beijing, Ostrava, Shanghai, Mexico City, Saint Petersburg, and Bergen. The new trolleybus systems in Marrakesh, Baoding and Prague are based exclusively on battery trolleybuses.
In 2020, the city of Berlin, Germany announced plans to build a new trolleybus system with 15 routes and 190 battery trolleybuses. However, in early 2023 it was announced that the planned lines would use battery-powered electric buses instead.
IMC trolleybuses thus offer a flexible, high-capacity form of public transport: combining the trolleybus concept with that of the battery electric bus, they charge dynamically from the overhead contact network, can run on batteries for up to half of their route, and operate electrically like a tramcar but without a fixed limit on range. One such charging system, IMC500, transfers energy from the infrastructure to the vehicle at up to 500 kW; for example, two 160 kW traction motors can be supplied in parallel with around 200 kW of battery charging.

Other considerations
With increasing diesel fuel costs and problems caused by particulate matter and NOx emissions in cities, trolleybuses can be an attractive alternative, either as the primary transit mode or as a supplement to rapid transit and commuter rail networks. Trolleybuses are quieter than internal combustion engine vehicles. While mainly a benefit, this also provides much less warning of a trolleybus's approach. A speaker attached to the front of the vehicle can raise the noise to a desired "safe" level.
This noise can be directed to pedestrians in front of the vehicle, as opposed to motor noise, which typically comes from the rear of a bus and is more noticeable to bystanders than to pedestrians. Trolleybuses can share overhead wires and other electrical infrastructure (such as substations) with tramways. This can result in cost savings when trolleybuses are added to a transport system that already has trams, though this refers only to potential savings over the cost of installing and operating trolleybuses alone.

The two parallel wires
The wires are attached to poles next to the street and carefully stretched and mounted so that they are the same width apart and same height over the road (usually about 18 to 20 feet (~5.7 m)). The pair of wires is insulated from the poles and provides about 500 to 600 volts to the bus below.

Wire switches
Trolleybus wire switches (called "frogs" in the UK) are used where a trolleybus line branches into two or where two lines join. A switch may be either in a "straight through" or "turnout" position; it normally remains in the "straight through" position unless it has been triggered, and reverts to it after a few seconds or after the pole shoe passes through and strikes a release lever (in Boston, the resting or "default" position is the "leftmost" position). Triggering is typically accomplished by a pair of contacts, one on each wire close to and before the switch assembly, which power a pair of electromagnets, one in each frog with diverging wires ("frog" generally refers to one fitting that guides one trolley wheel/shoe onto a desired wire or across one wire; occasionally, "frog" has been used to refer to the entire switch assembly). Multiple branches may be handled by installing more than one switch assembly. For example, to provide straight-through, left-turn or right-turn branches at an intersection, one switch is installed some distance from the intersection to choose the wires over the left-turn lane, and another switch is mounted closer to or in the intersection to choose between straight through and a right turn (this would be the arrangement in countries such as the United States, where traffic directionality is right-handed; in left-handed traffic countries such as the United Kingdom and New Zealand, the first switch (before the intersection) would be used to access the right-turn lanes, and the second switch (usually in the intersection) would be for the left-turn).
Three common types of switches exist: power-on/power-off, Selectric, and Fahslabend. A power-on/power-off switch is triggered if the trolleybus is drawing considerable power from the overhead wires, usually by accelerating, at the moment the poles pass over the contacts (the contacts are lined up on the wires in this case). If the trolleybus "coasts" through the switch, the switch will not activate. Some trolleybuses, such as those in Philadelphia and Vancouver, have a manual "power-coast" toggle switch that turns the power on or off. This allows a switch to be triggered in situations that would otherwise be impossible, such as activating a switch while braking, or accelerating through a switch without activating it. One variation of the toggle switch will simulate accelerating by causing a larger power draw (through a resistance grid), but will not simulate coasting and prevent activation of the switch by cutting the power.
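In software terms, the trigger condition of a power-on/power-off switch amounts to a simple predicate on two inputs. The sketch below is a loose Python illustration only; the names and the threshold are invented and do not describe any real switch controller:

# Loose model of a power-on/power-off ("power/coast") wire switch trigger.
TRIGGER_CURRENT_AMPS = 50.0  # invented threshold for "drawing considerable power"

def switch_triggers(current_draw_amps, poles_on_contacts):
    """The frog diverts only if the bus draws power while its poles cross the contacts."""
    return poles_on_contacts and current_draw_amps >= TRIGGER_CURRENT_AMPS

assert switch_triggers(120.0, True)    # accelerating through the contacts: turnout
assert not switch_triggers(0.0, True)  # coasting through: stays straight-through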
A Selectric switch has a similar design, but the contacts on the wires are skewed, often at a 45-degree angle, rather than being lined up. This skew means that a trolleybus going straight through will not trigger the switch, but a trolleybus making a turn will have its poles match the contacts in a matching skew (with one pole shoe ahead of the other), which will trigger the switch regardless of power draw (accelerating versus coasting). For a Fahslabend switch, the trolleybus' turn indicator control (or a separate driver-controlled switch) causes a coded radio signal to be sent from a transmitter, often attached to a trolley pole. The receiver is attached to the switch and causes it to trigger if the correct code is received. This has the advantage that the driver does not need to be accelerating the bus (as with a power-on/power-off switch) or trying to make a sharp turn (as with a Selectric switch). Trailing switches (where two sets of wires merge) do not require action by the operator. The frog runners are pushed into the desired position by the trolley shoe, or the frog is shaped so the shoe is guided onto the exit wire without any moving parts. Manufacturing Well over 200 different trolleybus makers have existed – mostly commercial manufacturers, though in some cases (particularly in communist countries) trolleybuses were built by the publicly owned operating companies or authorities. Of the defunct or former trolleybus manufacturers, the largest producers in North America and Western Europe – ones whose production totalled more than 1,000 units each – included the U.S. companies Brill (approx. 3,250 total), Pullman-Standard (2,007), and Marmon-Herrington (1,624); the English companies AEC (approx. 1,750), British United Traction (BUT) (1,573), Leyland (1,420) and Sunbeam (1,379); France's Vétra (more than 1,750); and the Italian builders Alfa Romeo (2,044) and Fiat (approx. 1,700). The largest former trolleybus manufacturer was Trolza (formerly Uritsky, or ZiU), which built over 65,000 trolleybuses between 1951 and its bankruptcy in 2017. Also, Canadian Car and Foundry built 1,114 trolleybuses based on designs by Brill. As of the 2010s, at least 30 trolleybus manufacturers exist. They include companies that have been building trolleybuses for several decades, such as Škoda since 1936 and New Flyer, among others, along with several younger companies. Current trolleybus manufacturers in western and central Europe include Solaris, Van Hool, and Hess, among others. ZiU/Trolza's production went mostly to Russia and other CIS countries; after its bankruptcy, its facilities were partially loaned out to PC Transport Systems. Škoda is Western and Central Europe's largest trolleybus builder and the second largest in the world, having produced over 14,000 trolleybuses since 1936, mostly for export, and it also supplies trolleybus electrical equipment for other bus builders such as Solaris, SOR and Breda. In Mexico, trolleybus production ended when MASA, which had built more than 860 trolleybuses since 1979, was acquired in 1998 by Volvo. However, Dina, which is now that country's largest bus and truck manufacturer, began building trolleybuses in 2013. Transition to low-floor designs A significant change to trolleybus designs starting in the early 1990s was the introduction of low-floor models, which began only a few years after the first such models were introduced for motorbuses.
These have gradually replaced high-floor designs, and by 2012, every existing trolleybus system in Western Europe had purchased low-floor trolleybuses, with the La Spezia (Italy) system being the last one to do so, and several systems in other parts of the world had also purchased low-floor vehicles. In the United States, some transit agencies had already begun to accommodate persons in wheelchairs by purchasing buses with wheelchair lifts, and early examples of fleets of lift-equipped trolleybuses included 109 AM General trolleybuses built for the Seattle trolleybus system in 1979 and the retrofitting of lifts in 1983 to 64 Flyer E800s in the Dayton system's fleet. The Americans with Disabilities Act of 1990 required that all new transit vehicles placed into service after 1 July 1993 be accessible to such passengers. Trolleybuses in other countries also began to introduce better access for the disabled in the 1990s, when the first two low-floor trolleybus models were introduced in Europe, both built in 1991: a "Swisstrolley" demonstrator built by Switzerland's NAW/Hess and an N6020 demonstrator built by Neoplan. The first production-series low-floor trolleybuses were built in 1992: 13 by NAW for the Geneva system and 10 by Gräf & Stift. By 1995, such vehicles were also being made by several other European manufacturers, including Škoda, Breda, Ikarus, and Van Hool. The first Solaris "Trollino" made its debut in early 2001. In the former Soviet Union countries, Belarus' Belkommunmash built its first low-floor trolleybus (model AKSM-333) in 1999, and other manufacturers in the former Soviet countries joined the trend in the early 2000s. However, because the lifespan of a trolleybus is typically longer than that of a motorbus, and budget allocations and purchases typically factored in that longevity, the introduction of low-floor vehicles put pressure on operators to retire high-floor trolleybuses that were only a few years old and replace them with low-floor trolleybuses. Responses varied, with some systems keeping their high-floor fleets, and others retiring them early but, in many instances, selling them second-hand for continued use in countries where there was a demand for low-cost second-hand trolleybuses, in particular in Romania and Bulgaria. The Lausanne system dealt with this dilemma in the 1990s by purchasing new low-floor passenger trailers to be towed by its high-floor trolleybuses, a choice later also made by Lucerne. Outside Europe, 14 vehicles built by, and for, the Shanghai trolleybus system in mid-1999 were the first reported low-floor trolleybuses in Asia. Wellington, New Zealand, took delivery of its first low-floor trolleybus in March 2003, and by the end of 2009 had renewed its entire fleet with such vehicles. Unlike Europe, where low floor means "100%" low floor from front to back, most "low floor" buses on other continents are actually only low-entry or part-low floor. In the Americas, the first low-floor trolleybus was a Busscar vehicle supplied to the São Paulo EMTU system in 2001. In North America, wheelchair lifts were again chosen for disabled access in new trolleybuses delivered to San Francisco in 1992–94, to Dayton in 1996–1999, and to Seattle in 2001–2002, but the first low-floor trolleybus was built in 2003, with the first of 28 Neoplan vehicles for the Boston system.
Subsequently, the Vancouver system and the Philadelphia system have converted entirely to low-floor vehicles, and in 2013 the Seattle and Dayton systems both placed orders for their first low-floor trolleybuses. Outside São Paulo, almost all trolleybuses currently in service in Latin America are high-floor models built before 2000. However, in 2013, the first domestically manufactured low-floor trolleybuses were introduced in both Argentina and Mexico. With regard to non-passenger aspects of vehicle design, the transition from high-floor to low-floor has meant that some equipment previously placed under the floor has been moved to the roof. Some transit operators have needed to modify their maintenance facilities to accommodate this change, a one-time expense. Double-decker trolleybuses Since the end of 1997, no double-decker trolleybuses have been in service anywhere in the world, but, in the past, several manufacturers made such vehicles. Most builders of double-deck trolleybuses were in the United Kingdom, but there were a few, usually solitary, instances of such trolleybuses being built in other countries, including in Germany, by Henschel (for Hamburg); in Italy, by Lancia (for Porto, Portugal); in Russia, by the Yaroslavl motor plant (for Moscow); and in Spain, by Maquitrans (for Barcelona). British manufacturers of double-deck trolleybuses included AEC, BUT, Crossley, Guy, Leyland, Karrier, Sunbeam and others. In 2001, Citybus (Hong Kong) converted a Dennis Dragon (#701) into a double-decker trolleybus, and it was tested on a 300-metre track in Wong Chuk Hang in that year. Hong Kong decided not to build a trolleybus system, and the testing of this prototype did not lead to any further production of vehicles. Use and preservation There are currently 300 cities or metropolitan areas where trolleybuses are operated, and more than 500 additional trolleybus systems have existed in the past. For an overview, by country, see Trolleybus usage by country, and for complete lists of trolleybus systems by location, with dates of opening and (where applicable) closure, see List of trolleybus systems and the related lists indexed there. Of the systems existing as of 2012, the majority are located in Europe and Asia, including 85 in Russia and 43 in Ukraine. However, eight systems exist in North America and nine in South America. Trolleybuses have been preserved in most of the countries where they have operated. The United Kingdom has the largest number of preserved trolleybuses, with more than 110, while the United States has around 70. Most preserved vehicles are on static display only, but a few museums are equipped with a trolleybus line, allowing trolleybuses to operate for visitors. Museums with operational trolleybus routes include three in the UK – the Trolleybus Museum at Sandtoft, the East Anglia Transport Museum, and the Black Country Living Museum – and three in the United States – the Illinois Railway Museum, the Seashore Trolley Museum, and the Shore Line Trolley Museum – but operation of trolleybuses does not necessarily occur on a regular schedule of dates at these museums. See also Battery electric bus Bus rapid transit Dual-mode bus Electric bus Electric vehicle battery Electromote Guided bus Gyrobus List of trolleybus manufacturers List of trolleybus systems Parallel overhead lines Traction substation Trolleytruck Notes Further reading Bruce, Ashley R., Lombard-Gerin and Inventing the Trolleybus (2017). Trolleybooks (UK). Cheape, Charles W.
Moving the masses: urban public transit in New York, Boston, and Philadelphia, 1880-1912 (Harvard University Press, 1980) Dunbar, Charles S. (1967). Buses, Trolleys & Trams. Paul Hamlyn Ltd. (UK) [republished 2004, ISBN 9780753709702] McKay, John P. Tramways and Trolleys: The Rise of Urban Mass Transport in Europe (1976) Murray, Alan (2000). World Trolleybus Encyclopaedia. Trolleybooks (UK). Porter, Harry; and Worris, Stanley F.X. (1979). Trolleybus Bulletin No. 109: Databook II. North American Trackless Trolley Association (defunct) Sebree, Mac; and Ward, Paul (1973). Transit's Stepchild, The Trolley Coach (Interurbans Special 58). Los Angeles: Interurbans. LCCN 73-84356 Sebree, Mac; and Ward, Paul (1974). The Trolley Coach in North America (Interurbans Special 59). Los Angeles: Interurbans. LCCN 74-20367 Periodicals Trolleybus Magazine. National Trolleybus Association (UK), bi-monthly Trackless, Bradford Trolleybus Association, quarterly Trolleybus, British Trolleybus Society (UK), monthly External links TrolleyMotion (in German): an international action group to promote modern trolleybus systems, and a database of systems around the world British Trolleybuses Trolleybuses in Latin America North American trolleybus pictures Trolleybuses in Europe Urban Electric Transit - Database/Photo gallery Buses by type Electric buses Sustainable transport Articles containing video clips
Trolleybus
[ "Physics" ]
6,859
[ "Physical systems", "Transport", "Sustainable transport" ]
54,419
https://en.wikipedia.org/wiki/Tangram
The tangram (Chinese: 七巧板; pinyin: qīqiǎobǎn) is a dissection puzzle consisting of seven flat polygons, called tans, which are put together to form shapes. The objective is to replicate a pattern (given only an outline), generally found in a puzzle book, using all seven pieces without overlap. Alternatively, the tans can be used to create original minimalist designs that are either appreciated for their inherent aesthetic merits or used as the basis for challenging others to replicate their outlines. It is reputed to have been invented in China sometime around the late 18th century and then carried over to America and Europe by trading ships shortly after. It became very popular in Europe for a time, and then again during World War I. It is one of the most widely recognized dissection puzzles in the world and has been used for various purposes, including amusement, art, and education. Etymology The origin of the English word 'tangram' is unclear. One conjecture holds that it is a compound of the Greek element '-gram', derived from γράμμα ('written character, letter, that which is drawn'), with the 'tan-' element being variously conjectured to be Chinese t'an 'to extend' or Cantonese t'ang 'Chinese'. Alternatively, the word may be a derivative of the archaic English 'tangram' meaning "an odd, intricately contrived thing". In either case, the first known use of the word is believed to be found in the 1848 book Geometrical Puzzle for the Young by mathematician and future Harvard University president Thomas Hill. Hill likely coined the term in the same work, and he vigorously promoted the word in numerous articles advocating for the puzzle's use in education; in 1864 the word received official recognition in the English language when it was included in Noah Webster's American Dictionary. History Origins Despite its relatively recent emergence in the West, there is a much older tradition of dissection amusements in China which likely played a role in its inspiration. In particular, the modular banquet tables of the Song dynasty bear an uncanny resemblance to the playing pieces of the tangram, and there were books dedicated to arranging them together to form pleasing patterns. Several Chinese sources broadly report that the well-known Song dynasty polymath Huang Bosi 黄伯思 developed a form of entertainment for his dinner guests based on creative arrangements of six small tables called 宴几 or 燕几 (feast tables or swallow tables, respectively). One diagram shows these as oblong rectangles, and other reports suggest a seventh table was added later, perhaps by a later inventor. According to Western sources, however, the tangram's historical Chinese inventor is unknown except through the pen name Yang-cho-chu-shih (Dim-witted (?) recluse, recluse = 处士). It is believed that the puzzle was originally introduced in a book titled Ch'i ch'iao t'u, which was already reported as lost in 1815 by Shan-chiao in his book New Figures of the Tangram. Nevertheless, it is generally believed that the puzzle was invented about 20 years earlier. The prominent third-century mathematician Liu Hui made use of construction proofs in his works, and some bear a striking resemblance to the subsequently developed banquet tables, which in turn seem to anticipate the tangram. While there is no reason to suspect that tangrams were used in the proof of the Pythagorean theorem, as is sometimes reported, it is likely that this style of geometric reasoning went on to exert an influence on Chinese cultural life that led directly to the puzzle.
The early years of attempting to date the Tangram were confused by the popular but fraudulently written history by famed puzzle maker Samuel Loyd in his 1903 The Eighth Book of Tan. This work contains many whimsical features that aroused both interest and suspicion amongst contemporary scholars who attempted to verify the account. By 1910 it was clear that it was a hoax. A letter dated from this year from the Oxford Dictionary editor Sir James Murray, on behalf of a number of Chinese scholars, to the prominent puzzlist Henry Dudeney reads "The result has been to show that the man Tan, the god Tan, and the Book of Tan are entirely unknown to Chinese literature, history or tradition." Along with its many other strange details, The Eighth Book of Tan's claim that the puzzle was created 4,000 years ago had to be regarded as entirely baseless and false. Reaching the Western world (1815–1820s) The earliest extant tangram was given to the Philadelphia shipping magnate and congressman Francis Waln in 1802, but it was not until over a decade later that Western audiences, at large, would be exposed to the puzzle. In 1815, American Captain M. Donnaldson was given a pair of author Sang-Hsia-koi's books on the subject (one problem book and one solution book) when his ship, Trader, docked there. They were then brought with the ship to Philadelphia, in February 1816. The first tangram book to be published in America was based on the pair brought by Donnaldson. The puzzle eventually reached England, where it became very fashionable. The craze quickly spread to other European countries. This was mostly due to a pair of British tangram books, The Fashionable Chinese Puzzle, and the accompanying solution book, Key. Soon, tangram sets were being exported in great number from China, made of various materials, from glass, to wood, to tortoise shell. Many of these unusual and exquisite tangram sets made their way to Denmark. Danish interest in tangrams skyrocketed around 1818, when two books on the puzzle were published, to much enthusiasm. The first of these was Mandarinen (About the Chinese Game), a non-fictional work about the history and popularity of tangrams written by a student at Copenhagen University. The second, Det nye chinesiske Gaadespil (The new Chinese Puzzle Game), consisted of 339 puzzles copied from The Eighth Book of Tan, as well as one original. One contributing factor in the popularity of the game in Europe was that although the Catholic Church forbade many forms of recreation on the sabbath, it made no objection to puzzle games such as the tangram. Second craze in Germany (1891–1920s) Tangrams were first introduced to the German public by industrialist Friedrich Adolf Richter around 1891. The sets were made out of stone or false earthenware, and marketed under the name "The Anchor Puzzle". More internationally, the First World War saw a great resurgence of interest in tangrams, on the homefront and in the trenches of both sides. During this time, it occasionally went under the name of "The Sphinx", an alternative title for the "Anchor Puzzle" sets. Paradoxes A tangram paradox is a dissection fallacy: two figures composed with the same set of pieces, one of which seems to be a proper subset of the other. One famous paradox is that of the two monks, attributed to Henry Dudeney, which consists of two similar shapes, one with a foot and the other missing it. In reality, the area of the foot is compensated for in the second figure by a subtly larger body.
The two-monks paradox – two similar shapes but one missing a foot: The Magic Dice Cup tangram paradox – from Sam Loyd's book The 8th Book of Tan (1903). Each of these cups was composed using the same seven geometric shapes. But the first cup is whole, and the others contain vacancies of different sizes. (Notice that the one on the left is slightly shorter than the other two. The one in the middle is ever-so-slightly wider than the one on the right, and the one on the left is narrower still.) Clipped square tangram paradox – from Loyd's book The Eighth Book of Tan (1903): Number of configurations Over 6500 different tangram problems have been created from 19th century texts alone, and the current number is ever-growing. Fu Traing Wang and Chuan-Chih Hsiung proved in 1942 that there are only thirteen convex tangram configurations (segments drawn between any two points on the configuration are always completely contained inside the configuration, i.e., configurations with no recesses in the outline). Pieces Choosing a unit of measurement so that the seven pieces can be assembled to form a square of side one unit and having area one square unit, the seven pieces are: 2 large right triangles (hypotenuse 1, sides √2/2, area 1/4 each) 1 medium right triangle (hypotenuse √2/2, sides 1/2, area 1/8) 2 small right triangles (hypotenuse 1/2, sides √2/4, area 1/16 each) 1 square (sides √2/4, area 1/8) 1 parallelogram (sides of 1/2 and √2/4, height of 1/4, area 1/8) The areas of the seven pieces sum to 1, the area of the assembled square. Of these seven pieces, the parallelogram is unique in that it has no reflection symmetry but only rotational symmetry, and so its mirror image can be obtained only by flipping it over. Thus, it is the only piece that may need to be flipped when forming certain shapes. See also Tangram (video game) Egg of Columbus (tangram puzzle) Mathematical puzzle Ostomachion Tiling puzzle Pickagram (3D Magnetic Tangram Puzzle) Attribute blocks References Sources Further reading Anno, Mitsumasa. Anno's Math Games (three volumes). New York: Philomel Books, 1987. (v. 1), (v. 2), (v. 3). Botermans, Jack, et al. The World of Games: Their Origins and History, How to Play Them, and How to Make Them (translation of Wereld vol spelletjes). New York: Facts on File, 1989. Dudeney, H. E. Amusements in Mathematics. New York: Dover Publications, 1958. Gardner, Martin. "Mathematical Games—on the Fanciful History and the Creative Challenges of the Puzzle Game of Tangrams", Scientific American Aug. 1974, p. 98–103. Gardner, Martin. "More on Tangrams", Scientific American Sep. 1974, p. 187–191. Gardner, Martin. The 2nd Scientific American Book of Mathematical Puzzles and Diversions. New York: Simon & Schuster, 1961. Loyd, Sam. Sam Loyd's Book of Tangram Puzzles (The 8th Book of Tan Part I). Mineola, New York: Dover Publications, 1968. Slocum, Jerry, et al. Puzzles of Old and New: How to Make and Solve Them. De Meern, Netherlands: Plenary Publications International (Europe); Amsterdam, Netherlands: ADM International; Seattle: Distributed by University of Washington Press, 1986. External links Past & Future: The Roots of Tangram and Its Developments Turning Your Set of Tangram Into A Magic Math Puzzle by puzzle designer G. Sarcone Tiling puzzles Chinese games Mathematical manipulatives Single-player games Geometric dissection Chinese ancient games Chinese inventions Polyforms 19th-century fads and trends
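As a quick numeric sanity check of the piece dimensions listed in the Pieces section above, the following short Python sketch (added for illustration; the variable names are arbitrary) confirms that the seven areas sum to the unit square's area of 1.

```python
# Verify that the seven tangram piece areas add up to 1 (the unit square).
from math import isclose, sqrt

large_triangle = 0.5 * (sqrt(2) / 2) ** 2   # legs sqrt(2)/2 -> area 1/4
medium_triangle = 0.5 * (1 / 2) ** 2        # legs 1/2       -> area 1/8
small_triangle = 0.5 * (sqrt(2) / 4) ** 2   # legs sqrt(2)/4 -> area 1/16
square = (sqrt(2) / 4) ** 2                 # side sqrt(2)/4 -> area 1/8
parallelogram = (1 / 2) * (1 / 4)           # base 1/2, height 1/4 -> area 1/8

total = (2 * large_triangle + medium_triangle + 2 * small_triangle
         + square + parallelogram)
assert isclose(total, 1.0)
print(total)  # 1.0
```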
Tangram
[ "Physics", "Mathematics" ]
2,253
[ "Tessellation", "Recreational mathematics", "Tiling puzzles", "Mathematical manipulatives", "Symmetry" ]
54,420
https://en.wikipedia.org/wiki/Ry%C5%8Dji%20Noyori
Ryōji Noyori is a Japanese chemist. He won the Nobel Prize in Chemistry in 2001, sharing half of the prize with William S. Knowles for the study of chirally catalyzed hydrogenations; the second half of the prize went to K. Barry Sharpless for his study of chirally catalyzed oxidation reactions (Sharpless epoxidation). Education and career Ryōji Noyori was born in Kobe, Japan. Early in his school days Ryoji was interested in physics. His interest was kindled by the famous physicist Hideki Yukawa (1949 Nobel Prize in Physics winner), a close friend of his father. Later, he became fascinated with chemistry after hearing a presentation on nylon at an industrial exposition. He saw the power of chemistry as being the ability to "produce high value from almost nothing". He was a student at the School of Engineering (Department of Industrial Chemistry) of Kyoto University, where he graduated in 1961. He subsequently obtained a Master's degree in Industrial Chemistry from the Graduate School of Engineering of Kyoto University. Between 1963 and 1967, he was a research associate at the School of Engineering of Kyoto University, and an instructor in the research group of Hitoshi Nozaki. Noyori obtained a Doctor of Engineering degree (DEng) from Kyoto University in 1967. He became an associate professor at Nagoya University in 1968. After postdoctoral work with Elias J. Corey at Harvard he returned to Nagoya, becoming a full professor in 1972. He is still based at Nagoya, and served as president of RIKEN, a multi-site national research initiative with an annual budget of $800 million, from 2003 to 2015. Research Noyori believes strongly in the power of catalysis and of green chemistry; in a 2005 article he argued for the pursuit of "practical elegance in synthesis". In this article he stated that "our ability to devise straightforward and practical chemical syntheses is indispensable to the survival of our species." Elsewhere he has said that "Research is for nations and mankind, not for researchers themselves." He encourages scientists to be politically active: "Researchers must spur public opinions and government policies toward constructing the sustainable society in the 21st century." Noyori is currently chairman of the Education Rebuilding Council, which was set up by Japan's PM Shinzō Abe after he came to power in 2006. Noyori is most famous for asymmetric hydrogenation using as catalysts complexes of rhodium and ruthenium, particularly those based on the BINAP ligand. Asymmetric hydrogenation of an alkene in the presence of ((S)-BINAP)Ru(OAc)2 is used for the commercial production of enantiomerically pure (97% ee) naproxen, a nonsteroidal anti-inflammatory drug. The antibacterial agent levofloxacin is manufactured by asymmetric hydrogenation of ketones in the presence of a Ru(II) BINAP halide complex. He has also worked on other asymmetric processes. Each year 3000 tonnes (after new expansion) of menthol are produced (in 94% ee) by Takasago International Corporation, using Noyori's method for isomerisation of allylic amines. More recently, with Philip G. Jessop, Noyori has developed an industrial process for the manufacture of N,N-dimethylformamide from hydrogen, dimethylamine and supercritical carbon dioxide in the presence of a catalyst. Recognition The Ryoji Noyori Prize is named in his honour.
In 2000 Noyori became Honorary Doctor at the University of Rennes 1, where he taught in 1995, and in 2005 he became Honorary Doctor at the Technical University of Munich and RWTH Aachen University, Germany. Noyori was elected a Foreign Member of the Royal Society (ForMemRS) in 2005, and he received an honorary doctorate from the Institute of Chemical Technology, Mumbai (formerly known as UDCT) on 23 February 2018. He has also been awarded: 1978 – Matsunaga prize 1982 – Chunichi Culture Award 1985 – The Chemical Society of Japan Award 1991 – John G. Kirkwood Award, American Chemical Society and Yale University 1992 – Asahi Prize 1993 – Tetrahedron Prize 1995 – Japan Academy Prize (academics) 1997 – Arthur C. Cope Award 1997 – Chirality Medal 1999 – King Faisal International Prize 2001 – Wolf Prize in Chemistry 2001 – Nobel Prize for Chemistry 2009 – Lomonosov Gold Medal See also List of Japanese Nobel laureates List of Nobel laureates affiliated with Kyoto University References External links including the Nobel Lecture December 8, 2001 Asymmetric Catalysis: Science and Technology 1938 births Living people People from Kobe Harvard University staff Kyoto University alumni 20th-century Japanese chemists Nobel laureates in Chemistry Recipients of the Order of Culture Wolf Prize in Chemistry laureates Recipients of the Lomonosov Gold Medal Foreign members of the Royal Society Members of the Pontifical Academy of Sciences Foreign associates of the National Academy of Sciences Foreign members of the Russian Academy of Sciences Foreign members of the Chinese Academy of Sciences Japanese Nobel laureates Academic staff of Nagoya University Nagoya University alumni Riken personnel Stereochemists 21st-century Japanese chemists
Ryōji Noyori
[ "Chemistry", "Technology" ]
1,082
[ "Science and technology awards", "Stereochemistry", "Recipients of the Lomonosov Gold Medal", "Stereochemists" ]
54,423
https://en.wikipedia.org/wiki/Phase%20transition
In physics, chemistry, and other related fields like biology, a phase transition (or phase change) is the physical process of transition between one state of a medium and another. Commonly the term is used to refer to changes among the basic states of matter: solid, liquid, and gas, and in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties. During a phase transition of a given medium, certain properties of the medium change as a result of the change of external conditions, such as temperature or pressure. This can be a discontinuous change; for example, a liquid may become gas upon heating to its boiling point, resulting in an abrupt change in volume. The identification of the external conditions at which a transformation occurs defines the phase transition point. Types of phase transition States of matter Phase transitions commonly refer to when a substance transforms between one of the four states of matter and another. At the phase transition point for a substance, for instance the boiling point, the two phases involved (liquid and vapor) have identical free energies and are therefore equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above the boiling point the gaseous form is the more stable. The common transitions between the solid, liquid, and gaseous phases of a single component, due to the effects of temperature and/or pressure, are melting, freezing, vaporization, condensation, sublimation, and deposition. For a single component, the most stable phase at different temperatures and pressures can be shown on a phase diagram. Such a diagram usually depicts states in equilibrium. A phase transition usually occurs when the pressure or temperature changes and the system crosses from one region to another, like water turning from liquid to solid as soon as the temperature drops below the freezing point. As an exception to the usual case, it is sometimes possible to change the state of a system diabatically (as opposed to adiabatically) in such a way that it can be brought past a phase transition point without undergoing a phase transition. The resulting state is metastable, i.e., less stable than the phase to which the transition would have occurred, but not unstable either. This occurs in superheating and supercooling, for example. Metastable states do not appear on usual phase diagrams. Structural Phase transitions can also occur when a solid changes to a different structure without changing its chemical makeup. In elements, this is known as allotropy, whereas in compounds it is known as polymorphism. The change from one crystal structure to another, from a crystalline solid to an amorphous solid, or from one amorphous structure to another (polyamorphism) are all examples of solid to solid phase transitions. The martensitic transformation occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations. Order–disorder transitions, such as those in alpha-titanium aluminides, are a further example. As with states of matter, there is also a metastable to equilibrium phase transformation for structural phase transitions. A metastable polymorph which forms rapidly due to lower surface energy will transform to an equilibrium phase given sufficient thermal input to overcome an energetic barrier. Magnetic Phase transitions can also describe the change between different kinds of magnetic ordering.
The most well-known is the transition between the ferromagnetic and paramagnetic phases of magnetic materials, which occurs at what is called the Curie point. Another example is the transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide. A simplified but highly useful model of magnetic phase transitions is provided by the Ising model. Mixtures Phase transitions involving solutions and mixtures are more complicated than transitions involving a single compound. While chemically pure compounds exhibit a single melting temperature between solid and liquid phases, mixtures can either have a single melting point, known as congruent melting, or they have different liquidus and solidus temperatures, resulting in a temperature span where solid and liquid coexist in equilibrium. This is often the case in solid solutions, where the two components are isostructural. There are also a number of phase transitions involving three phases: a eutectic transformation, in which a two-component single-phase liquid is cooled and transforms into two solid phases. The same process, but beginning with a solid instead of a liquid, is called a eutectoid transformation. A peritectic transformation, in which a two-component single-phase solid is heated and transforms into a solid phase and a liquid phase. A peritectoid reaction is like a peritectic reaction, except that it involves only solid phases. A monotectic reaction consists of a change from one liquid to a combination of a solid and a second liquid, where the two liquids display a miscibility gap. Separation into multiple phases can occur via spinodal decomposition, in which a single phase is cooled and separates into two different compositions. Non-equilibrium mixtures can occur, such as in supersaturation. Other examples Other phase changes include: Transition to a mesophase between solid and liquid, such as one of the "liquid crystal" phases. The dependence of the adsorption geometry on coverage and temperature, such as for hydrogen on iron (110). The emergence of superconductivity in certain metals and ceramics when cooled below a critical temperature. The emergence of metamaterial properties in artificial photonic media as their parameters are varied (Zhou, W., and Fan, S., eds., Semiconductors and Semimetals, Vol. 100: Photonic Crystal Metasurface Optoelectronics, Elsevier, 2019). Quantum condensation of bosonic fluids (Bose–Einstein condensation). The superfluid transition in liquid helium is an example of this. The breaking of symmetries in the laws of physics during the early history of the universe as its temperature cooled. Isotope fractionation occurs during a phase transition: the ratio of light to heavy isotopes in the involved molecules changes. When water vapor condenses (an equilibrium fractionation), the heavier water isotopes (18O and 2H) become enriched in the liquid phase while the lighter isotopes (16O and 1H) tend toward the vapor phase. Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables (cf. phases). This condition generally stems from the interactions of a large number of particles in a system, and does not appear in systems that are small. Phase transitions can occur for non-thermodynamic systems, where temperature is not a parameter.
Examples include: quantum phase transitions, dynamic phase transitions, and topological (structural) phase transitions. In these types of systems other parameters take the place of temperature. For instance, connection probability replaces temperature for percolating networks. Classifications Ehrenfest classification Paul Ehrenfest classified phase transitions based on the behavior of the thermodynamic free energy as a function of other thermodynamic variables. Under this scheme, phase transitions were labeled by the lowest derivative of the free energy that is discontinuous at the transition. First-order phase transitions exhibit a discontinuity in the first derivative of the free energy with respect to some thermodynamic variable. The various solid/liquid/gas transitions are classified as first-order transitions because they involve a discontinuous change in density, which is the (inverse of the) first derivative of the free energy with respect to pressure. Second-order phase transitions are continuous in the first derivative (the order parameter, which is the first derivative of the free energy with respect to the external field, is continuous across the transition) but exhibit a discontinuity in a second derivative of the free energy. These include the ferromagnetic phase transition in materials such as iron, where the magnetization, which is the first derivative of the free energy with respect to the applied magnetic field strength, increases continuously from zero as the temperature is lowered below the Curie temperature. The magnetic susceptibility, the second derivative of the free energy with respect to the field, changes discontinuously. Under the Ehrenfest classification scheme, there could in principle be third, fourth, and higher-order phase transitions. For example, the Gross–Witten–Wadia phase transition in 2-d lattice quantum chromodynamics is a third-order phase transition. The Curie point of many ferromagnets is also a third-order transition, as shown by the specific heat having a sudden change in slope. In practice, only the first- and second-order phase transitions are typically observed. The second-order phase transition was for a while controversial, as it seems to require two sheets of the Gibbs free energy to osculate exactly, which is so unlikely as to never occur in practice. Cornelis Gorter replied to the criticism by pointing out that the Gibbs free energy surface might have two sheets on one side, but only one sheet on the other side, creating a forked appearance (pp. 146–150). The Ehrenfest classification implicitly allows for continuous phase transformations, where the bonding character of a material changes, but there is no discontinuity in any free energy derivative. An example of this occurs at the supercritical liquid–gas boundaries. The first example of a phase transition which did not fit into the Ehrenfest classification was the exact solution of the Ising model, discovered in 1944 by Lars Onsager. The exact specific heat differed from the earlier mean-field approximations, which had predicted that it has a simple discontinuity at critical temperature. Instead, the exact specific heat had a logarithmic divergence at the critical temperature. In the following decades, the Ehrenfest classification was replaced by a simplified classification scheme that is able to incorporate such transitions.
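Stated in formulas, writing the Ehrenfest scheme in terms of the Gibbs free energy G(T, p) (standard thermodynamic identities, added here for clarity; this notation is not used elsewhere in the article):

```latex
S = -\left(\frac{\partial G}{\partial T}\right)_p, \qquad
V = \left(\frac{\partial G}{\partial p}\right)_T, \qquad
C_p = -T\left(\frac{\partial^2 G}{\partial T^2}\right)_p .
```

Under this scheme, a first-order transition shows a jump in a first derivative such as the entropy S or the volume V (and hence a latent heat L = T ΔS), while a second-order transition leaves S and V continuous but shows a discontinuity in a second derivative such as the heat capacity C_p.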
Modern classifications In the modern classification scheme, phase transitions are divided into two broad categories, named similarly to the Ehrenfest classes: First-order phase transitions are those that involve a latent heat. During such a transition, a system either absorbs or releases a fixed (and typically large) amount of energy per volume. During this process, the temperature of the system will stay constant as heat is added: the system is in a "mixed-phase regime" in which some parts of the system have completed the transition and others have not (Faghri, A., and Zhang, Y., Fundamentals of Multiphase Heat Transfer and Flow, Springer, New York, NY, 2020). Familiar examples are the melting of ice or the boiling of water (the water does not instantly turn into vapor, but forms a turbulent mixture of liquid water and vapor bubbles). Yoseph Imry and Michael Wortis showed that quenched disorder can broaden a first-order transition. That is, the transformation is completed over a finite range of temperatures, but phenomena like supercooling and superheating survive and hysteresis is observed on thermal cycling. Second-order phase transitions are also called "continuous phase transitions". They are characterized by a divergent susceptibility, an infinite correlation length, and a power-law decay of correlations near criticality. Examples of second-order phase transitions are the ferromagnetic transition, the superconducting transition (for a Type-I superconductor the phase transition is second-order at zero external field, and for a Type-II superconductor the phase transition is second-order for both normal-state–mixed-state and mixed-state–superconducting-state transitions) and the superfluid transition. In contrast to viscosity, the thermal expansion and heat capacity of amorphous materials show a relatively sudden change at the glass transition temperature, which enables accurate detection using differential scanning calorimetry measurements. Lev Landau gave a phenomenological theory of second-order phase transitions. Apart from isolated, simple phase transitions, there exist transition lines as well as multicritical points, when varying external parameters like the magnetic field or composition. Several transitions are known as infinite-order phase transitions. They are continuous but break no symmetries. The most famous example is the Kosterlitz–Thouless transition in the two-dimensional XY model. Many quantum phase transitions, e.g., in two-dimensional electron gases, belong to this class. The liquid–glass transition is observed in many polymers and other liquids that can be supercooled far below the melting point of the crystalline phase. This is atypical in several respects. It is not a transition between thermodynamic ground states: it is widely believed that the true ground state is always crystalline. Glass is a quenched disorder state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon: on cooling a liquid, internal degrees of freedom successively fall out of equilibrium. Some theoretical methods predict an underlying phase transition in the hypothetical limit of infinitely long relaxation times. No direct experimental evidence supports the existence of these transitions. Characteristic properties Phase coexistence A disorder-broadened first-order transition occurs over a finite range of temperatures where the fraction of the low-temperature equilibrium phase grows from zero to one (100%) as the temperature is lowered.
This continuous variation of the coexisting fractions with temperature raised interesting possibilities. On cooling, some liquids vitrify into a glass rather than transform to the equilibrium crystal phase. This happens if the cooling rate is faster than a critical cooling rate, and is attributed to the molecular motions becoming so slow that the molecules cannot rearrange into the crystal positions. This slowing down happens below a glass-formation temperature Tg, which may depend on the applied pressure. If the first-order freezing transition occurs over a range of temperatures, and Tg falls within this range, then there is an interesting possibility that the transition is arrested when it is partial and incomplete. Extending these ideas to first-order magnetic transitions being arrested at low temperatures resulted in the observation of incomplete magnetic transitions, with two magnetic phases coexisting, down to the lowest temperature. First reported in the case of a ferromagnetic to anti-ferromagnetic transition, such persistent phase coexistence has now been reported across a variety of first-order magnetic transitions. These include colossal-magnetoresistance manganite materials, magnetocaloric materials, magnetic shape memory materials, and other materials. The interesting feature of these observations of Tg falling within the temperature range over which the transition occurs is that the first-order magnetic transition is influenced by magnetic field, just like the structural transition is influenced by pressure. The relative ease with which magnetic fields can be controlled, in contrast to pressure, raises the possibility that one can study the interplay between Tg and Tc in an exhaustive way. Phase coexistence across first-order magnetic transitions will then enable the resolution of outstanding issues in understanding glasses. Critical points In any system containing liquid and gaseous phases, there exists a special combination of pressure and temperature, known as the critical point, at which the transition between liquid and gas becomes a second-order transition. Near the critical point, the fluid is sufficiently hot and compressed that the distinction between the liquid and gaseous phases is almost non-existent. This is associated with the phenomenon of critical opalescence, a milky appearance of the liquid due to density fluctuations at all possible wavelengths (including those of visible light). Symmetry Phase transitions often involve a symmetry breaking process. For instance, the cooling of a fluid into a crystalline solid breaks continuous translation symmetry: each point in the fluid has the same properties, but each point in a crystal does not have the same properties (unless the points are chosen from the lattice points of the crystal lattice). Typically, the high-temperature phase contains more symmetries than the low-temperature phase due to spontaneous symmetry breaking, with the exception of certain accidental symmetries (e.g. the formation of heavy virtual particles, which only occurs at low temperatures). Order parameters An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. At the critical point, the order parameter susceptibility will usually diverge. An example of an order parameter is the net magnetization in a ferromagnetic system undergoing a phase transition.
For liquid/gas transitions, the order parameter is the difference of the densities. From a theoretical perspective, order parameters arise from symmetry breaking. When this happens, one needs to introduce one or more extra variables to describe the state of the system. For example, in the ferromagnetic phase, one must provide the net magnetization, whose direction was spontaneously chosen when the system cooled below the Curie point. However, note that order parameters can also be defined for non-symmetry-breaking transitions. Some phase transitions, such as superconducting and ferromagnetic, can have order parameters for more than one degree of freedom. In such phases, the order parameter may take the form of a complex number, a vector, or even a tensor, the magnitude of which goes to zero at the phase transition. There also exist dual descriptions of phase transitions in terms of disorder parameters. These indicate the presence of line-like excitations such as vortex or defect lines. Relevance in cosmology Symmetry-breaking phase transitions play an important role in cosmology. As the universe expanded and cooled, the vacuum underwent a series of symmetry-breaking phase transitions. For example, the electroweak transition broke the SU(2)×U(1) symmetry of the electroweak field into the U(1) symmetry of the present-day electromagnetic field. This transition is important to explain the asymmetry between the amount of matter and antimatter in the present-day universe, according to electroweak baryogenesis theory. Progressive phase transitions in an expanding universe are implicated in the development of order in the universe, as is illustrated by the work of Eric Chaisson and David Layzer. See also relational order theories and order and disorder. Critical exponents and universality classes Continuous phase transitions are easier to study than first-order transitions due to the absence of latent heat, and they have been discovered to have many interesting properties. The phenomena associated with continuous phase transitions are called critical phenomena, due to their association with critical points. Continuous phase transitions can be characterized by parameters known as critical exponents. The most important one is perhaps the exponent describing the divergence of the thermal correlation length on approaching the transition. For instance, let us examine the behavior of the heat capacity near such a transition. We vary the temperature T of the system while keeping all the other thermodynamic variables fixed and find that the transition occurs at some critical temperature Tc. When T is near Tc, the heat capacity C typically has a power-law behavior: C ∝ |Tc − T|^(−α). The heat capacity of amorphous materials has such a behaviour near the glass transition temperature, where the universal critical exponent α = 0.59. A similar behavior, but with the exponent ν instead of α, applies for the correlation length. The exponent ν is positive. This differs from α, whose actual value depends on the type of phase transition we are considering. The critical exponents are not necessarily the same above and below the critical temperature. When a continuous symmetry is explicitly broken down to a discrete symmetry by irrelevant (in the renormalization group sense) anisotropies, then some exponents (such as γ, the exponent of the susceptibility) are not identical. For −1 < α < 0, the heat capacity has a "kink" at the transition temperature.
This is the behavior of liquid helium at the lambda transition from a normal state to the superfluid state, for which experiments have found α = −0.013 ± 0.003. At least one experiment was performed in the zero-gravity conditions of an orbiting satellite to minimize pressure differences in the sample. This experimental value of α agrees with theoretical predictions based on variational perturbation theory. For 0 < α < 1, the heat capacity diverges at the transition temperature (though, since α < 1, the enthalpy stays finite). An example of such behavior is the 3D ferromagnetic phase transition. In the three-dimensional Ising model for uniaxial magnets, detailed theoretical studies have yielded the exponent α ≈ +0.110. Some model systems do not obey a power-law behavior. For example, mean field theory predicts a finite discontinuity of the heat capacity at the transition temperature, and the two-dimensional Ising model has a logarithmic divergence. However, these systems are limiting cases and an exception to the rule. Real phase transitions exhibit power-law behavior. Several other critical exponents, β, γ, δ, ν, and η, are defined by examining the power-law behavior of a measurable physical quantity near the phase transition. Exponents are related by scaling relations, such as the Rushbrooke relation α + 2β + γ = 2 and the Fisher relation γ = ν(2 − η). It can be shown that there are only two independent exponents, e.g. ν and η. It is a remarkable fact that phase transitions arising in different systems often possess the same set of critical exponents. This phenomenon is known as universality. For example, the critical exponents at the liquid–gas critical point have been found to be independent of the chemical composition of the fluid. More impressively, but understandably from above, they are an exact match for the critical exponents of the ferromagnetic phase transition in uniaxial magnets. Such systems are said to be in the same universality class. Universality is a prediction of the renormalization group theory of phase transitions, which states that the thermodynamic properties of a system near a phase transition depend only on a small number of features, such as dimensionality and symmetry, and are insensitive to the underlying microscopic properties of the system. Again, the divergence of the correlation length is the essential point. Critical phenomena There are also other critical phenomena; e.g., besides static functions there is also critical dynamics. As a consequence, at a phase transition one may observe critical slowing down or speeding up. Connected to the previous phenomenon is also the phenomenon of enhanced fluctuations before the phase transition, as a consequence of the lower degree of stability of the initial phase of the system. The large static universality classes of a continuous phase transition split into smaller dynamic universality classes. In addition to the critical exponents, there are also universal relations for certain static or dynamic functions of the magnetic fields and temperature differences from the critical value. Phase transitions in biological systems Phase transitions play many important roles in biological systems. Examples include the lipid bilayer formation, the coil-globule transition in the process of protein folding and DNA melting, liquid crystal-like transitions in the process of DNA condensation, and cooperative ligand binding to DNA and proteins with the character of a phase transition.
In biological membranes, gel to liquid crystalline phase transitions play a critical role in physiological functioning of biomembranes. In the gel phase, due to low fluidity of membrane lipid fatty-acyl chains, membrane proteins have restricted movement and thus are restrained in the exercise of their physiological role. Plants depend critically on photosynthesis by chloroplast thylakoid membranes, which are exposed to cold environmental temperatures. Thylakoid membranes retain innate fluidity even at relatively low temperatures because of the high degree of fatty-acyl disorder allowed by their high content of linolenic acid, an 18-carbon chain with three double bonds. The gel-to-liquid crystalline phase transition temperature of biological membranes can be determined by many techniques, including calorimetry, fluorescence, spin label electron paramagnetic resonance and NMR, by recording measurements of the parameter concerned at a series of sample temperatures. A simple method for its determination from 13-C NMR line intensities has also been proposed. It has been proposed that some biological systems might lie near critical points. Examples include neural networks in the salamander retina, bird flocks, gene expression networks in Drosophila, and protein folding. However, it is not clear whether or not alternative reasons could explain some of the phenomena supporting arguments for criticality. It has also been suggested that biological organisms share two key properties of phase transitions: the change of macroscopic behavior and the coherence of a system at a critical point. Phase transitions are a prominent feature of motor behavior in biological systems. Spontaneous gait transitions, as well as fatigue-induced motor task disengagements, show typical critical behavior as an intimation of the sudden qualitative change of the previously stable motor behavioral pattern. The characteristic feature of second-order phase transitions is the appearance of fractals in some scale-free properties. It has long been known that protein globules are shaped by interactions with water. The 20 amino acids that form side groups on protein peptide chains range from hydrophilic to hydrophobic, causing the former to lie near the globular surface, while the latter lie closer to the globular center. Twenty fractals were discovered in solvent associated surface areas of > 5000 protein segments. The existence of these fractals proves that proteins function near critical points of second-order phase transitions. In groups of organisms under stress (when approaching critical transitions), correlations tend to increase, while at the same time, fluctuations also increase. This effect is supported by many experiments and observations of groups of people, mice, trees, and grassy plants. Phase transitions in social systems Phase transitions have been hypothesised to occur in social systems viewed as dynamical systems. A hypothesis proposed in the 1990s and 2000s in the context of peace and armed conflict is that when a conflict that is non-violent shifts to a phase of armed conflict, this is a phase transition from latent to manifest phases within the dynamical system. Experimental A variety of methods are applied for studying the various effects. Selected examples are: Hall effect (measurement of magnetic transitions) Mössbauer spectroscopy (simultaneous measurement of magnetic and non-magnetic transitions.
Limited up to about 800–1000 °C) Neutron diffraction Perturbed angular correlation (simultaneous measurement of magnetic and non-magnetic transitions. No temperature limits. Over 2000 °C already performed, theoretically possible up to the highest-melting crystalline materials, such as tantalum hafnium carbide at 4215 °C.) Raman spectroscopy SQUID (measurement of magnetic transitions) Thermogravimetry (very common) X-ray diffraction See also References Further reading Anderson, P.W., Basic Notions of Condensed Matter Physics, Perseus Publishing (1997). Faghri, A., and Zhang, Y., Fundamentals of Multiphase Heat Transfer and Flow, Springer Nature Switzerland AG, 2020. Goldenfeld, N., Lectures on Phase Transitions and the Renormalization Group, Perseus Publishing (1992). M.R. Khoshbin-e-Khoshnazar, Ice Phase Transition as a sample of finite system phase transition, (Physics Education (India) Volume 32. No. 2, Apr–Jun 2016) Kleinert, H., Gauge Fields in Condensed Matter, Vol. I, "Superfluidity and Vortex lines; Disorder Fields, Phase Transitions", pp. 1–742, World Scientific (Singapore, 1989); Paperback (readable online at physik.fu-berlin.de). Krieger, Martin H., Constitutions of matter: mathematically modelling the most everyday of physical phenomena, University of Chicago Press, 1996. Contains a detailed pedagogical discussion of Onsager's solution of the 2-D Ising Model. Landau, L.D. and Lifshitz, E.M., Statistical Physics Part 1, vol. 5 of Course of Theoretical Physics, Pergamon Press, 3rd Ed. (1994). Mussardo G., "Statistical Field Theory. An Introduction to Exactly Solved Models of Statistical Physics", Oxford University Press, 2010. Schroeder, Manfred R., Fractals, chaos, power laws: minutes from an infinite paradise, New York: W. H. Freeman, 1991. Very well-written book in "semi-popular" style—not a textbook—aimed at an audience with some training in mathematics and the physical sciences. Explains what scaling in phase transitions is all about, among other things. H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena (Oxford University Press, Oxford and New York 1971). Yeomans J. M., Statistical Mechanics of Phase Transitions, Oxford University Press, 1992. External links Interactive Phase Transitions on lattices with Java applets Universality classes from Sklogwiki Physical phenomena Critical phenomena
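To make the Ising model mentioned above concrete, here is a minimal Monte Carlo sketch of the two-dimensional model using the Metropolis algorithm. It is an illustrative toy, not production code: the lattice size, sweep count and temperatures are arbitrary choices, and J = kB = 1 is assumed.

```python
# Metropolis Monte Carlo for the 2-D Ising model (J = kB = 1, zero field).
# Near Tc ~ 2.269 the magnetization (the order parameter) drops from ~1 to ~0.
import numpy as np

rng = np.random.default_rng(0)

def sweep(spins, T):
    """One Metropolis sweep: attempt L*L random single-spin flips."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest neighbours with periodic boundaries.
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb          # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]

L = 16
for T in (1.5, 2.27, 3.5):                 # below, near and above Tc
    spins = np.ones((L, L), dtype=int)
    for _ in range(400):                   # crude equilibration
        sweep(spins, T)
    print(f"T = {T:4.2f}   |m| = {abs(spins.mean()):.2f}")
```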
Phase transition
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
6,037
[ "Physical phenomena", "Phase transitions", "Critical phenomena", "Phases of matter", "Condensed matter physics", "Statistical mechanics", "Matter", "Dynamical systems" ]
54,427
https://en.wikipedia.org/wiki/Computer%20algebra%20system
A computer algebra system (CAS) or symbolic algebra system (SAS) is any mathematical software with the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists. The development of computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work on algorithms over mathematical objects such as polynomials.

Computer algebra systems may be divided into two classes: specialized and general-purpose. Specialized systems are devoted to a specific part of mathematics, such as number theory, group theory, or the teaching of elementary mathematics. General-purpose computer algebra systems aim to be useful to a user working in any scientific field that requires manipulation of mathematical expressions. To be useful, a general-purpose computer algebra system must include various features such as:
a user interface allowing a user to enter and display mathematical formulas, typically from a keyboard, menu selections, mouse or stylus;
a programming language and an interpreter (the result of a computation commonly has an unpredictable form and an unpredictable size, so user intervention is frequently needed);
a simplifier, which is a rewrite system for simplifying mathematical formulas;
a memory manager, including a garbage collector, needed because of the huge size of the intermediate data that may appear during a computation;
arbitrary-precision arithmetic, needed because of the huge size of the integers that may occur;
a large library of mathematical algorithms and special functions.
The library must provide not only for the needs of the users, but also for the needs of the simplifier. For example, the computation of polynomial greatest common divisors is systematically used for the simplification of expressions involving fractions. This large amount of required computer capability explains the small number of general-purpose computer algebra systems. Significant systems include Axiom, GAP, Maxima, Magma, Maple, Mathematica, and SageMath.

History
In the 1950s, while computers were mainly used for numerical computations, there were some research projects into using them for symbolic manipulation. Computer algebra systems began to appear in the 1960s and evolved out of two quite different sources: the requirements of theoretical physicists and research into artificial intelligence. A prime example of the first development was the pioneering work conducted by the later Nobel Prize laureate in physics Martinus Veltman, who in 1963 designed Schoonschip (Dutch for "clean ship"), a program for symbolic mathematics, especially high-energy physics. Other early systems include FORMAC. Using Lisp as the programming basis, Carl Engelman created MATHLAB in 1964 at MITRE within an artificial-intelligence research environment. MATHLAB was later made available to users on PDP-6 and PDP-10 systems running TOPS-10 or TENEX at universities, and today it can still be used on SIMH emulations of the PDP-10. MATHLAB ("mathematical laboratory") should not be confused with MATLAB ("matrix laboratory"), a system for numerical computation built 15 years later at the University of New Mexico.

In 1987, Hewlett-Packard introduced the first hand-held calculator CAS with the HP-28 series. Other early handheld calculators with symbolic algebra capabilities included the Texas Instruments TI-89 series and TI-92 calculator, and the Casio CFX-9970G.
The first popular computer algebra systems were muMATH, Reduce, Derive (based on muMATH), and Macsyma; a copyleft version of Macsyma is called Maxima. Reduce became free software in 2008. Commercial systems include Mathematica and Maple, which are commonly used by research mathematicians, scientists, and engineers. Freely available alternatives include SageMath (which can act as a front-end to several other free and nonfree CAS). Other significant systems include Axiom, GAP, Maxima and Magma.

The movement to web-based applications in the early 2000s saw the release of WolframAlpha, an online search engine and CAS that includes the capabilities of Mathematica. More recently, computer algebra systems have been implemented using artificial neural networks, though as of 2020 they are not commercially available.

Symbolic manipulations
The symbolic manipulations supported typically include:
simplification to a smaller expression or some standard form, including automatic simplification with assumptions and simplification with constraints
substitution of symbols or numeric values for certain expressions
change of form of expressions: expanding products and powers, partial and full factorization, rewriting as partial fractions, constraint satisfaction, rewriting trigonometric functions as exponentials, transforming logic expressions, etc.
partial and total differentiation
some indefinite and definite integration (see symbolic integration), including multidimensional integrals
symbolic constrained and unconstrained global optimization
solution of linear and some non-linear equations over various domains
solution of some differential and difference equations
taking some limits
integral transforms
series operations such as expansion, summation and products
matrix operations including products, inverses, etc.
statistical computation
theorem proving and verification, which is very useful in the area of experimental mathematics
optimized code generation
In the above, the word some indicates that the operation cannot always be performed.

Additional capabilities
Many also include:
a programming language, allowing users to implement their own algorithms
arbitrary-precision numeric operations
exact integer arithmetic and number theory functionality
editing of mathematical expressions in two-dimensional form
plotting graphs and parametric plots of functions in two and three dimensions, and animating them
drawing charts and diagrams
APIs for linking to external programs such as databases, or for using the computer algebra system from within a programming language
string manipulation such as matching and searching
add-ons for use in applied mathematics, such as physics, bioinformatics, computational chemistry and packages for physical computation
solvers for differential equations
Some include:
graphic production and editing, such as computer-generated imagery and signal processing (e.g., image processing)
sound synthesis
Some computer algebra systems focus on specialized disciplines; these are typically developed in academia and are free. They can be inefficient for numeric operations compared to numeric systems.
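Most of the manipulations listed above can be demonstrated in a few lines with an existing general-purpose system. The following sketch uses the open-source Python library SymPy; the calls shown (symbols, expand, factor, apart, diff, integrate, limit) are standard SymPy functions, but the snippet is an illustration of typical CAS operations, not a description of any particular system's internals.

from sympy import (symbols, expand, factor, apart, diff, integrate,
                   limit, sin, exp, oo)

x, y = symbols('x y')

# Change of form: expanding products and powers, and factoring back.
p = expand((x + y)**3)              # x**3 + 3*x**2*y + 3*x*y**2 + y**3
q = factor(x**2 - y**2)             # (x - y)*(x + y)

# Rewriting as partial fractions.
r = apart(1/(x**2 - 1), x)          # 1/(2*(x - 1)) - 1/(2*(x + 1))

# Partial differentiation and definite symbolic integration.
d = diff(sin(x)*exp(y), x)          # exp(y)*cos(x)
i = integrate(exp(-x), (x, 0, oo))  # 1

# Taking a limit.
l = limit(sin(x)/x, x, 0)           # 1

Each result is an exact symbolic expression; arbitrary-precision numeric evaluation (for example via an expression's evalf method) is available separately, reflecting the division of labor between the simplifier, the library, and the arithmetic layer described above.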
Types of expressions
The expressions manipulated by the CAS typically include polynomials in multiple variables; standard functions of expressions (sine, exponential, etc.); various special functions (Γ, ζ, erf, Bessel functions, etc.); arbitrary functions of expressions; optimization; derivatives, integrals, simplifications, sums, and products of expressions; truncated series with expressions as coefficients; matrices of expressions; and so on. Numeric domains supported typically include floating-point representation of real numbers, integers (of unbounded size), complex numbers (in floating-point representation), interval representation of reals, rational numbers (exact representation) and algebraic numbers.

Use in education
There have been many advocates for increasing the use of computer algebra systems in primary and secondary-school classrooms. The primary reason for such advocacy is that computer algebra systems represent real-world mathematics better than paper-and-pencil or hand-calculator based mathematics. This push for increasing computer usage in mathematics classrooms has been supported by some boards of education, and it has even been mandated in the curriculum of some regions.

Computer algebra systems have been extensively used in higher education. Many universities offer either specific courses on their use, or implicitly expect students to use them for their course work. The companies that develop computer algebra systems have pushed to increase their prevalence among university and college programs.

CAS-equipped calculators are not permitted on the ACT, the PLAN, and in some classrooms, though they may be permitted on all of the College Board's calculator-permitted tests, including the SAT, some SAT Subject Tests, and the AP Calculus, Chemistry, Physics, and Statistics exams.

Mathematics used in computer algebra systems
Knuth–Bendix completion algorithm
Root-finding algorithms
Symbolic integration via e.g. the Risch algorithm or Risch–Norman algorithm
Hypergeometric summation via e.g. Gosper's algorithm
Limit computation via e.g. Gruntz's algorithm
Polynomial factorization, e.g. over finite fields via Berlekamp's algorithm or the Cantor–Zassenhaus algorithm
Greatest common divisors via e.g. the Euclidean algorithm
Gaussian elimination
Gröbner bases via e.g. Buchberger's algorithm, a generalization of the Euclidean algorithm and Gaussian elimination
Padé approximants
Schwartz–Zippel lemma and testing polynomial identities
Chinese remainder theorem
Diophantine equations
Landau's algorithm (nested radicals)
Derivatives of elementary functions and special functions (e.g., see derivatives of the incomplete gamma function)
Cylindrical algebraic decomposition
Quantifier elimination over real numbers via cylindrical algebraic decomposition

See also
List of computer algebra systems
Scientific computation
Statistical package
Automated theorem proving
Algebraic modeling language
Constraint-logic programming
Satisfiability modulo theories

References

External links
Curriculum and Assessment in an Age of Computer Algebra Systems – from the Education Resources Information Center Clearinghouse for Science, Mathematics, and Environmental Education, Columbus, Ohio.
Richard J. Fateman. "Essays in algebraic simplification". Technical report MIT-LCS-TR-095, 1972. (Of historical interest in showing the direction of research in computer algebra; available at the MIT LCS website.)

Algebra education
Computer algebra system
[ "Mathematics" ]
1,891
[ "Computer algebra systems", "Algebra education", "Algebra", "Mathematical software" ]
54,432
https://en.wikipedia.org/wiki/Unification%20%28computer%20science%29
In logic and computer science, specifically automated reasoning, unification is an algorithmic process of solving equations between symbolic expressions, each of the form Left-hand side = Right-hand side. For example, using x, y, z as variables, and taking f to be an uninterpreted function, the singleton equation set { f(1,y) = f(x,2) } is a syntactic first-order unification problem that has the substitution { x ↦ 1, y ↦ 2 } as its only solution.

Conventions differ on what values variables may assume and which expressions are considered equivalent. In first-order syntactic unification, variables range over first-order terms and equivalence is syntactic. This version of unification has a unique "best" answer and is used in logic programming and in programming language type system implementation, especially in Hindley–Milner based type inference algorithms. In higher-order unification, possibly restricted to higher-order pattern unification, terms may include lambda expressions, and equivalence is up to beta-reduction. This version is used in proof assistants and higher-order logic programming, for example Isabelle, Twelf, and λProlog. Finally, in semantic unification or E-unification, equality is subject to background knowledge and variables range over a variety of domains. This version is used in SMT solvers, term rewriting algorithms, and cryptographic protocol analysis.

Formal definition
A unification problem is a finite set E = { l1 ≐ r1, ..., ln ≐ rn } of equations to solve, where the li and ri are in a set T of terms or expressions. Depending on which expressions or terms are allowed to occur in an equation set or unification problem, and on which expressions are considered equal, several frameworks of unification are distinguished. If higher-order variables, that is, variables representing functions, are allowed in an expression, the process is called higher-order unification, otherwise first-order unification. If a solution is required to make both sides of each equation literally equal, the process is called syntactic or free unification, otherwise semantic or equational unification, or E-unification, or unification modulo theory. If the right side of each equation is closed (contains no free variables), the problem is called (pattern) matching. The left side (with variables) of each equation is called the pattern.

Prerequisites
Formally, a unification approach presupposes:
An infinite set V of variables. For higher-order unification, it is convenient to choose V disjoint from the set of lambda-term bound variables.
A set T of terms such that V ⊆ T. For first-order unification, T is usually the set of first-order terms (terms built from variable and function symbols). For higher-order unification, T consists of first-order terms and lambda terms (terms containing some higher-order variables).
A mapping vars assigning to each term t the set vars(t) ⊆ V of free variables occurring in t.
A theory or equivalence relation ≡ on T, indicating which terms are considered equal. For first-order E-unification, ≡ reflects the background knowledge about certain function symbols; for example, if ⊕ is considered commutative, t ≡ u holds if u results from t by swapping the arguments of ⊕ at some (possibly all) occurrences. In the most typical case, where there is no background knowledge at all, only literally, or syntactically, identical terms are considered equal.
In this case, ≡ is called the free theory (because it is a free object), the empty theory (because the set of equational sentences, or the background knowledge, is empty), the theory of uninterpreted functions (because unification is done on uninterpreted terms), or the theory of constructors (because all function symbols just build up data terms, rather than operating on them). For higher-order unification, usually t ≡ u holds if t and u are alpha equivalent.

As an example of how the set of terms and the theory affect the set of solutions, the syntactic first-order unification problem { y = cons(2,y) } has no solution over the set of finite terms. However, it has the single solution { y ↦ cons(2,cons(2,cons(2,...))) } over the set of infinite tree terms. Similarly, the semantic first-order unification problem { a⋅x = x⋅a } has each substitution of the form { x ↦ a⋅...⋅a } as a solution in a semigroup, i.e. if (⋅) is considered associative. But the same problem, viewed in an abelian group, where (⋅) is considered also commutative, has any substitution at all as a solution.

As an example of higher-order unification, the singleton set { a = y(x) } is a syntactic second-order unification problem, since y is a function variable. One solution is { x ↦ a, y ↦ (identity function) }; another one is { y ↦ (constant function mapping each value to a), x ↦ (any value) }.

Substitution
A substitution is a mapping σ: V → T from variables to terms; the notation { x1 ↦ t1, ..., xk ↦ tk } refers to a substitution mapping each variable xi to the term ti, for i = 1, ..., k, and every other variable to itself; the xi must be pairwise distinct. Applying that substitution to a term t is written in postfix notation as t { x1 ↦ t1, ..., xk ↦ tk }; it means to (simultaneously) replace every occurrence of each variable xi in the term t by ti. The result tσ of applying a substitution σ to a term t is called an instance of that term t. As a first-order example, applying the substitution { x ↦ h(a,y), z ↦ b } to the term f(x, a, g(z), y) yields f(h(a,y), a, g(b), y).

Generalization, specialization
If a term t has an instance equivalent to a term u, that is, if tσ ≡ u for some substitution σ, then t is called more general than u, and u is called more special than, or subsumed by, t. For example, x ⊕ a is more general than a ⊕ b if ⊕ is commutative, since then (x ⊕ a) { x ↦ b } = b ⊕ a ≡ a ⊕ b.

If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings, of each other. For example, f(x1, a, g(z1), y1) is a variant of f(x2, a, g(z2), y2), since applying the substitution { x1 ↦ x2, y1 ↦ y2, z1 ↦ z2 } to the former yields the latter, and applying { x2 ↦ x1, y2 ↦ y1, z2 ↦ z1 } to the latter yields the former. However, f(x1, a, g(z1), y1) is not a variant of f(x2, a, g(x2), x2), since no substitution can transform the latter term into the former one; the latter term is therefore properly more special than the former one.

For arbitrary ≡, a term may be both more general and more special than a structurally different term. For example, if ⊕ is idempotent, that is, if always x ⊕ x ≡ x, then the term x ⊕ y is more general than z, and vice versa, although x ⊕ y and z are of different structure.

A substitution σ is more special than, or subsumed by, a substitution τ if tσ is subsumed by tτ for each term t. We also say that τ is more general than σ. More formally, take a nonempty infinite set A of auxiliary variables such that no equation li ≐ ri in the unification problem contains variables from A. Then a substitution σ is subsumed by another substitution τ if there is a substitution θ such that tσ = tτθ for all terms t. For instance { x ↦ a, y ↦ a } is subsumed by τ = { x ↦ y }, using θ = { y ↦ a }, but σ = { x ↦ a } is not subsumed by τ = { x ↦ y }, as f(x,y)σ = f(a,y) is not an instance of f(x,y)τ = f(y,y).

Solution set
A substitution σ is a solution of the unification problem E if liσ ≡ riσ for i = 1, ..., n. Such a substitution is also called a unifier of E.
For example, if ⊕ is associative, the unification problem { x ⊕ a ≐ a ⊕ x } has the solutions { x ↦ a }, { x ↦ a ⊕ a }, { x ↦ a ⊕ a ⊕ a }, etc., while the problem { x ⊕ a ≐ a } has no solution.

For a given unification problem E, a set S of unifiers is called complete if each solution substitution is subsumed by some substitution in S. A complete substitution set always exists (e.g. the set of all solutions), but in some frameworks (such as unrestricted higher-order unification) the problem of determining whether any solution exists (i.e., whether the complete substitution set is nonempty) is undecidable. The set S is called minimal if none of its members subsumes another one. Depending on the framework, a complete and minimal substitution set may have zero, one, finitely many, or infinitely many members, or may not exist at all due to an infinite chain of redundant members. Thus, in general, unification algorithms compute a finite approximation of the complete set, which may or may not be minimal, although most algorithms avoid redundant unifiers when possible. For first-order syntactical unification, Martelli and Montanari gave an algorithm that reports unsolvability or computes a single unifier that by itself forms a complete and minimal substitution set, called the most general unifier.

Syntactic unification of first-order terms
Syntactic unification of first-order terms is the most widely used unification framework. It is based on T being the set of first-order terms (over some given set V of variables, C of constants and Fn of n-ary function symbols) and on ≡ being syntactic equality. In this framework, each solvable unification problem has a complete, and obviously minimal, singleton solution set { σ }. Its member σ is called the most general unifier (mgu) of the problem. The terms on the left and the right hand side of each potential equation become syntactically equal when the mgu is applied, i.e. lσ = rσ. Any unifier of the problem is subsumed by the mgu σ. The mgu is unique up to variants: if S1 and S2 are both complete and minimal solution sets of the same syntactical unification problem, then S1 = { σ1 } and S2 = { σ2 } for some substitutions σ1 and σ2, and xσ1 is a variant of xσ2 for each variable x occurring in the problem.

For example, the unification problem { x ≐ z, y ≐ f(x) } has the unifier { x ↦ z, y ↦ f(z) }, because
x { x ↦ z, y ↦ f(z) } = z = z { x ↦ z, y ↦ f(z) }, and
y { x ↦ z, y ↦ f(z) } = f(z) = f(x) { x ↦ z, y ↦ f(z) }.
This is also the most general unifier. Other unifiers for the same problem are e.g. { x ↦ f(x1), y ↦ f(f(x1)), z ↦ f(x1) }, { x ↦ f(f(x1)), y ↦ f(f(f(x1))), z ↦ f(f(x1)) }, and so on; there are infinitely many similar unifiers. As another example, the problem g(x,x) ≐ f(y) has no solution with respect to ≡ being literal identity, since any substitution applied to the left and right hand side will keep the outermost g and f, respectively, and terms with different outermost function symbols are syntactically different.

Unification algorithms
Jacques Herbrand discussed the basic concepts of unification and sketched an algorithm in 1930, but most authors attribute the first unification algorithm to John Alan Robinson. Robinson's algorithm had worst-case exponential behavior in both time and space. Numerous authors have proposed more efficient unification algorithms.
Algorithms with worst-case linear-time behavior were discovered independently by Martelli and Montanari and by Paterson and Wegman. Later presentations use a similar technique to Paterson–Wegman and hence are linear, but, like most linear-time unification algorithms, they are slower than the Robinson version on small inputs due to the overhead of preprocessing the inputs and postprocessing the output, such as construction of a DAG representation. De Champeaux's algorithm is also of linear complexity in the input size but is competitive with the Robinson algorithm on small inputs. The speedup is obtained by using an object-oriented representation of the predicate calculus that avoids the need for pre- and post-processing, instead making variable objects responsible for creating a substitution and for dealing with aliasing. De Champeaux claims that the ability to add functionality to predicate calculus represented as programmatic objects provides opportunities for optimizing other logic operations as well.

The following algorithm is commonly presented and originates from Martelli and Montanari. Given a finite set G of potential equations, the algorithm applies rules to transform it into an equivalent set of equations of the form { x1 ≐ u1, ..., xm ≐ um }, where x1, ..., xm are distinct variables and u1, ..., um are terms containing none of the xi. A set of this form can be read as a substitution. If there is no solution, the algorithm terminates with ⊥; other authors use "Ω" or "fail" in that case. The operation of substituting all occurrences of variable x in problem G with term t is denoted G { x ↦ t }. For simplicity, constant symbols are regarded as function symbols having zero arguments. The rules are:
G ∪ { t ≐ t } ⇒ G     (delete)
G ∪ { f(s1,...,sn) ≐ f(t1,...,tn) } ⇒ G ∪ { s1 ≐ t1, ..., sn ≐ tn }     (decompose)
G ∪ { f(s1,...,sn) ≐ g(t1,...,tm) } ⇒ ⊥  if f ≠ g or n ≠ m     (conflict)
G ∪ { f(s1,...,sn) ≐ x } ⇒ G ∪ { x ≐ f(s1,...,sn) }     (swap)
G ∪ { x ≐ t } ⇒ G { x ↦ t } ∪ { x ≐ t }  if x ∈ vars(G) and x ∉ vars(t)     (eliminate)
G ∪ { x ≐ f(s1,...,sn) } ⇒ ⊥  if x ∈ vars(f(s1,...,sn))     (check)

Occurs check
An attempt to unify a variable x with a term containing x as a strict subterm, x ≐ f(..., x, ...), would lead to an infinite term as the solution for x, since x would occur as a subterm of itself. In the set of (finite) first-order terms as defined above, the equation x ≐ f(..., x, ...) has no solution; hence the eliminate rule may only be applied if x ∉ vars(t). Since that additional check, called the occurs check, slows down the algorithm, it is omitted in most Prolog systems, for example. From a theoretical point of view, omitting the check amounts to solving equations over infinite trees, see #Unification of infinite terms below.

Proof of termination
For the proof of termination of the algorithm, consider a triple ⟨nvar, nlhs, neqn⟩, where nvar is the number of variables that occur more than once in the equation set, nlhs is the number of function symbols and constants on the left hand sides of potential equations, and neqn is the number of equations. When rule eliminate is applied, nvar decreases, since x is eliminated from G and kept only in { x ≐ t }. Applying any other rule can never increase nvar again. When rule decompose, conflict, or swap is applied, nlhs decreases, since at least the left hand side's outermost f disappears. Applying either of the remaining rules delete or check cannot increase nlhs, but decreases neqn. Hence, any rule application decreases the triple with respect to the lexicographical order, which is possible only a finite number of times.
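The rule set above translates directly into a small program. The following Python rendering is an illustrative sketch, not code from any standard library: terms are modeled as strings (variables) or (function symbol, argument tuple) pairs, the helper names are this sketch's own, and the eliminate step is performed eagerly with an occurs check.

def is_var(t):
    # Variables are plain strings; compound terms are
    # (function_symbol, (arg1, ..., argn)) tuples; constants have n = 0.
    return isinstance(t, str)

def occurs(x, t):
    # Occurs check: does variable x occur anywhere in term t?
    if is_var(t):
        return x == t
    return any(occurs(x, arg) for arg in t[1])

def substitute(t, x, s):
    # Replace every occurrence of variable x in term t by term s.
    if is_var(t):
        return s if t == x else t
    return (t[0], tuple(substitute(arg, x, s) for arg in t[1]))

def unify(equations):
    # Returns a most general unifier as a dict, or None (i.e. ⊥) on failure.
    subst = {}
    eqs = list(equations)
    while eqs:
        lhs, rhs = eqs.pop()
        if lhs == rhs:                                   # delete
            continue
        if is_var(lhs):
            if occurs(lhs, rhs):                         # check
                return None
            eqs = [(substitute(a, lhs, rhs), substitute(b, lhs, rhs))
                   for a, b in eqs]                      # eliminate
            subst = {v: substitute(t, lhs, rhs) for v, t in subst.items()}
            subst[lhs] = rhs
        elif is_var(rhs):                                # swap
            eqs.append((rhs, lhs))
        elif lhs[0] != rhs[0] or len(lhs[1]) != len(rhs[1]):
            return None                                  # conflict
        else:                                            # decompose
            eqs.extend(zip(lhs[1], rhs[1]))
    return subst

For the example problem { x ≐ z, y ≐ f(x) } from above, unify([('x', 'z'), ('y', ('f', ('x',)))]) returns {'y': ('f', ('z',)), 'x': 'z'}, i.e. the most general unifier { x ↦ z, y ↦ f(z) }, and unify([('x', ('f', ('x',)))]) returns None because the occurs check fires.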
Conor McBride observes that "by expressing the structure which unification exploits" in a dependently typed language such as Epigram, Robinson's unification algorithm can be made recursive on the number of variables, in which case a separate termination proof becomes unnecessary.

Examples of syntactic unification of first-order terms
In the Prolog syntactical convention, a symbol starting with an upper-case letter is a variable name and a symbol starting with a lower-case letter is a function symbol; the comma is used as the logical and operator. For mathematical notation, x, y, z are used as variables, f, g as function symbols, and a, b as constants.

The most general unifier of a syntactic first-order unification problem of size n may have a size exponential in n. For example, the problem { x1 ≐ f(x0,x0), x2 ≐ f(x1,x1), ..., xn ≐ f(xn−1,xn−1) } has a most general unifier in which the term substituted for xn contains 2^n occurrences of x0. In order to avoid the exponential time complexity caused by such blow-up, advanced unification algorithms work on directed acyclic graphs (dags) rather than trees.

Application: unification in logic programming
The concept of unification is one of the main ideas behind logic programming. Specifically, unification is a basic building block of resolution, a rule of inference for determining formula satisfiability. In Prolog, the equality symbol = implies first-order syntactic unification. It represents the mechanism of binding the contents of variables and can be viewed as a kind of one-time assignment.

In Prolog:
A variable can be unified with a constant, a term, or another variable, thus effectively becoming its alias. In many modern Prolog dialects and in first-order logic, a variable cannot be unified with a term that contains it; this is the so-called occurs check.
Two constants can be unified only if they are identical.
Similarly, a term can be unified with another term if the top function symbols and arities of the terms are identical and if the parameters can be unified simultaneously. Note that this is a recursive behavior.
Most operations, including +, -, *, /, are not evaluated by =. So for example 1+2 = 3 is not satisfiable, because the two sides are syntactically different. The use of integer arithmetic constraints (#=) introduces a form of E-unification for which these operations are interpreted and evaluated.

Application: type inference
Type inference algorithms are typically based on unification, particularly Hindley–Milner type inference, which is used by the functional languages Haskell and ML. For example, when attempting to infer the type of the Haskell expression True : ['x'], the compiler will use the type a -> [a] -> [a] of the list construction function (:), the type Bool of the first argument True, and the type [Char] of the second argument ['x']. The polymorphic type variable a will be unified with Bool, and the second argument [a] will be unified with [Char]. a cannot be both Bool and Char at the same time, therefore this expression is not correctly typed; a small executable replay of this failure is sketched at the end of this section.

Like for Prolog, an algorithm for type inference can be given:
Any type variable unifies with any type expression, and is instantiated to that expression. A specific theory might restrict this rule with an occurs check.
Two type constants unify only if they are the same type.
Two type constructions unify only if they are applications of the same type constructor and all of their component types recursively unify.

Application: Feature Structure Unification
Unification has been used in different research areas of computational linguistics.
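Treating type expressions as first-order terms, the True : ['x'] failure above can be replayed with the unify sketch from earlier in the article. The encodings below (Bool, Char, List) are this illustration's own convention, not Haskell's internal representation.

# Types as terms, reusing the unify function defined above:
# type variables are strings, type constructors are tuples.
Bool = ('Bool', ())
Char = ('Char', ())
def List(t):
    return ('List', (t,))

# (:) has type a -> [a] -> [a].  Applying it to True and ['x'] forces
# a ≐ Bool (first argument) and [a] ≐ [Char] (second argument).
print(unify([('a', Bool), (List('a'), List(Char))]))
# None: 'a' cannot be both Bool and Char, so True : ['x'] is ill-typed.

The unifier decomposes [a] ≐ [Char] to a ≐ Char, eliminates a, and then fails with a conflict between Bool and Char, mirroring the compiler's type error.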
Order-sorted unification
Order-sorted logic allows one to assign a sort, or type, to each term, and to declare a sort s1 a subsort of another sort s2, commonly written as s1 ⊆ s2. For example, when reasoning about biological creatures, it is useful to declare a sort dog to be a subsort of a sort animal. Wherever a term of some sort s is required, a term of any subsort of s may be supplied instead. For example, assuming a function declaration mother: animal → animal, and a constant declaration lassie: dog, the term mother(lassie) is perfectly valid and has the sort animal. In order to supply the information that the mother of a dog is in turn a dog, another declaration mother: dog → dog may be issued; this is called function overloading, similar to overloading in programming languages.

Walther gave a unification algorithm for terms in order-sorted logic, requiring for any two declared sorts s1, s2 that their intersection s1 ∩ s2 be declared, too: if x1 and x2 are variables of sorts s1 and s2, respectively, the equation x1 ≐ x2 has the solution { x1 = x, x2 = x }, where x: s1 ∩ s2. After incorporating this algorithm into a clause-based automated theorem prover, he could solve a benchmark problem by translating it into order-sorted logic, thereby reducing its size by an order of magnitude, as many unary predicates turned into sorts.

Smolka generalized order-sorted logic to allow for parametric polymorphism. In his framework, subsort declarations are propagated to complex type expressions. As a programming example, a parametric sort list(X) may be declared (with X being a type parameter as in a C++ template), and from a subsort declaration int ⊆ float the relation list(int) ⊆ list(float) is automatically inferred, meaning that each list of integers is also a list of floats.

Schmidt-Schauß generalized order-sorted logic to allow for term declarations. As an example, assuming subsort declarations even ⊆ int and odd ⊆ int, a term declaration like ∀ i : int. (i + i) : even makes it possible to declare a property of integer addition that could not be expressed by ordinary overloading.

Unification of infinite terms
Unification over infinite (rational) trees arises when the occurs check is omitted (see the occurs check discussion above); unification algorithms for infinite terms were developed and applied in the context of Prolog II.

E-unification
E-unification is the problem of finding solutions to a given set of equations, taking into account some equational background knowledge E. The latter is given as a set of universal equalities. For some particular sets E, equation-solving algorithms (a.k.a. E-unification algorithms) have been devised; for others it has been proven that no such algorithms can exist.

For example, if a and b are distinct constants, the equation x ⊕ a ≐ a ⊕ b has no solution with respect to purely syntactic unification, where nothing is known about the operator ⊕. However, if ⊕ is known to be commutative, then the substitution { x ↦ b } solves the above equation, since
(x ⊕ a) { x ↦ b } = b ⊕ a     (by substitution application)
≡ a ⊕ b     (by commutativity of ⊕).
The background knowledge E could state the commutativity of ⊕ by the universal equality "u ⊕ v = v ⊕ u for all u, v".

Particular background knowledge sets E
It is said that unification is decidable for a theory if a unification algorithm has been devised for it that terminates for any input problem. It is said that unification is semi-decidable for a theory if a unification algorithm has been devised for it that terminates for any solvable input problem, but may keep searching forever for solutions of an unsolvable input problem.
Unification is decidable for the following theories, among others:
A (associativity)
C (commutativity)
A,C (associativity and commutativity)
A,C,I (A,C with idempotence)
A,C,U (A,C with a unit element)
A,U (associativity with a unit element, i.e. the theory of a monoid)
Boolean rings
Abelian groups, even if the signature is expanded by arbitrary additional symbols (but not axioms)
K4 modal algebras

Unification is semi-decidable for, among others, theories combining associativity with distributivity axioms, and for commutative rings.

One-sided paramodulation
If there is a convergent term rewriting system R available for E, the one-sided paramodulation algorithm can be used to enumerate all solutions of given equations. Its rules, operating on a pair G; S of a goal equation set G and a substitution S, are:
G ∪ { f(s1,...,sn) ≐ f(t1,...,tn) }; S ⇒ G ∪ { s1 ≐ t1, ..., sn ≐ tn }; S     (decompose)
G ∪ { x ≐ t }; S ⇒ G { x ↦ t }; S { x ↦ t } ∪ { x ↦ t }  if the variable x does not occur in t     (eliminate)
G ∪ { f(s1,...,sn) ≐ t }; S ⇒ G ∪ { s1 ≐ u1, ..., sn ≐ un, r ≐ t }; S  if f(u1,...,un) → r is a rule from R     (mutate)
G ∪ { f(s1,...,sn) ≐ y }; S ⇒ G ∪ { s1 ≐ y1, ..., sn ≐ yn, y ≐ f(y1,...,yn) }; S  if y1,...,yn are new variables     (imitate)

Starting with G being the unification problem to be solved and S being the identity substitution, rules are applied nondeterministically until the empty set appears as the actual G, in which case the actual S is a unifying substitution. Depending on the order in which the paramodulation rules are applied, on the choice of the actual equation from G, and on the choice of R's rules in mutate, different computation paths are possible. Only some lead to a solution, while others end at a G ≠ {} where no further rule is applicable (e.g. G = { f(...) ≐ g(...) }).

For an example, a term rewrite system R is used that defines the append operator app of lists built from cons and nil, by the rules app(nil,z) → z (rule 1) and app(x.y,z) → x.app(y,z) (rule 2); here cons(x,y) is written in infix notation as x.y for brevity. For example, app(a.b.nil,c.d.nil) → a.app(b.nil,c.d.nil) → a.b.app(nil,c.d.nil) → a.b.c.d.nil demonstrates the concatenation of the lists a.b.nil and c.d.nil, employing rewrite rule 2, 2, and 1. The equational theory E corresponding to R is the congruence closure of R, both viewed as binary relations on terms. For example, app(a.b.nil,c.d.nil) ≡ a.b.c.d.nil ≡ app(a.b.c.d.nil,nil). The paramodulation algorithm enumerates solutions to equations with respect to that E when fed with the example R.

For the unification problem { app(x,app(y,x)) ≐ a.a.nil }, a successful computation path applies mutate and decompose steps, consistently renaming the rewrite rules' variables before each use (v2, v3, ... being computer-generated variable names introduced for this purpose) to avoid name clashes; from its last step, the unifying substitution S = { y ↦ nil, x ↦ a.nil } can be obtained. In fact, app(x,app(y,x)) { y ↦ nil, x ↦ a.nil } = app(a.nil,app(nil,a.nil)) ≡ app(a.nil,a.nil) ≡ a.app(nil,a.nil) ≡ a.a.nil solves the given problem. A second successful computation path, obtainable by choosing "mutate(1), mutate(2), mutate(2), mutate(1)", leads to the substitution S = { y ↦ a.a.nil, x ↦ nil }. No other path leads to a success.
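The append rewrite system used in the example above is small enough to run directly. The following Python sketch is an illustration only; the term encoding and helper names are this sketch's own, and it merely normalizes ground terms with the two rules rather than performing paramodulation or unification.

def cons(h, t): return ('cons', (h, t))
def app(s, t):  return ('app', (s, t))
nil = ('nil', ())

def rewrite(term):
    # Normalize a ground term with the two append rules:
    #   rule 1: app(nil, z)  -> z
    #   rule 2: app(x.y, z)  -> x.app(y, z)
    head, args = term
    args = tuple(rewrite(s) for s in args)     # normalize subterms first
    if head == 'app':
        left, right = args
        if left == nil:                         # rule 1
            return right
        if left[0] == 'cons':                   # rule 2
            x, y = left[1]
            return cons(x, rewrite(app(y, right)))
    return (head, args)

a, b, c, d = ('a', ()), ('b', ()), ('c', ()), ('d', ())
result = rewrite(app(cons(a, cons(b, nil)), cons(c, cons(d, nil))))
# result encodes a.b.c.d.nil, matching the derivation in the text.

Running rewrite on app(x, app(y, x)) with either computed unifier substituted in (x ↦ a.nil, y ↦ nil, or x ↦ nil, y ↦ a.a.nil) reproduces a.a.nil in both cases, confirming the two solutions found by paramodulation.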
Narrowing
If R is a convergent term rewriting system for E, an approach alternative to the previous section consists in successive application of "narrowing steps"; this will eventually enumerate all solutions of a given equation. A narrowing step consists in choosing a nonvariable subterm of the current term, syntactically unifying it with the left-hand side of a rule from R, and replacing the instantiated rule's right-hand side into the instantiated term. Formally, if l → r is a renamed copy of a rewrite rule from R, having no variables in common with a term s, and the subterm s|p is not a variable and is unifiable with l via the mgu σ, then s can be narrowed to the term t = sσ[rσ]p, that is, to the term sσ with the subterm at position p replaced by rσ. The situation that s can be narrowed to t is commonly denoted as s ↝ t. Intuitively, a sequence of narrowing steps t1 ↝ t2 ↝ ... ↝ tn can be thought of as a sequence of rewrite steps t1 → t2 → ... → tn, but with the initial term t1 being further and further instantiated, as necessary to make each of the used rules applicable.

The above example paramodulation computation corresponds to a narrowing sequence whose last term, v2.v2.nil, can be syntactically unified with the original right-hand side term a.a.nil.

The narrowing lemma ensures that whenever an instance of a term s can be rewritten to a term t by a convergent term rewriting system, then s and t can be narrowed and rewritten to terms s′ and t′, respectively, such that t′ is an instance of s′. Formally: whenever sσ →* t holds for some substitution σ, then there exist terms s′, t′ such that s ↝* s′ and t →* t′ and s′τ = t′ for some substitution τ.

Higher-order unification
Many applications require one to consider the unification of typed lambda-terms instead of first-order terms. Such unification is often called higher-order unification. Higher-order unification is undecidable, and such unification problems do not have most general unifiers. For example, the unification problem { f(a,b,a) ≐ d(b,a,c) }, where the only variable is f, has the solutions { f ↦ λx.λy.λz. d(y,x,c) }, { f ↦ λx.λy.λz. d(y,z,c) }, { f ↦ λx.λy.λz. d(y,a,c) }, { f ↦ λx.λy.λz. d(b,x,c) }, { f ↦ λx.λy.λz. d(b,z,c) } and { f ↦ λx.λy.λz. d(b,a,c) }; an exhaustive enumeration of these six unifiers is sketched below.

A well-studied branch of higher-order unification is the problem of unifying simply typed lambda terms modulo the equality determined by αβη conversions. Gérard Huet gave a semi-decidable (pre-)unification algorithm that allows a systematic search of the space of unifiers (generalizing the unification algorithm of Martelli–Montanari with rules for terms containing higher-order variables) and that seems to work sufficiently well in practice. Huet and Gilles Dowek have written articles surveying this topic.

Several subsets of higher-order unification are well-behaved, in that they are decidable and have a most general unifier for solvable problems. One such subset is the previously described first-order terms. Higher-order pattern unification, due to Dale Miller, is another such subset. The higher-order logic programming languages λProlog and Twelf have switched from full higher-order unification to implementing only the pattern fragment; perhaps surprisingly, pattern unification is sufficient for almost all programs, if each non-pattern unification problem is suspended until a subsequent substitution puts the unification into the pattern fragment. A superset of pattern unification called functions-as-constructors unification is also well-behaved.
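The six unifiers of { f(a,b,a) ≐ d(b,a,c) } listed above can be enumerated mechanically: a binder λx.λy.λz is sought for f, so each argument position of d may be filled either by a bound variable or by a constant, subject to matching d(b,a,c) under f's actual arguments (a,b,a). The following self-contained Python sketch (its encodings and names are illustrative only) performs this brute-force search.

from itertools import product

a, b, c = ('a', ()), ('b', ()), ('c', ())   # constants as 0-ary terms
binding = {'x': a, 'y': b, 'z': a}          # f's actual arguments a, b, a

def value(t):
    # Value of a candidate body term under the binding of x, y, z.
    return binding.get(t, t) if isinstance(t, str) else t

candidates = ['x', 'y', 'z', a, b, c]       # possible fillers per position
target = (b, a, c)                          # arguments of d(b,a,c)
solutions = [triple for triple in product(candidates, repeat=3)
             if tuple(value(t) for t in triple) == target]
print(len(solutions))   # 6: first slot from {y, b}, second from {x, z, a},
                        # third must be the constant c

Each triple (p, q, r) found corresponds to one unifier { f ↦ λx.λy.λz. d(p,q,r) }. The combinatorial flavor of this enumeration hints at why unrestricted higher-order unification is so badly behaved, and why restricted fragments are attractive in practice.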
The Zipperposition theorem prover has an algorithm integrating these well-behaved subsets into a full higher-order unification algorithm.

In computational linguistics, one of the most influential theories of elliptical constructions is that ellipses are represented by free variables whose values are then determined using higher-order unification. For instance, the semantic representation of "Jon likes Mary and Peter does too" is like(j, m) ∧ R(p), and the value of R (the semantic representation of the ellipsis) is determined by the equation like(j, m) = R(j). The process of solving such equations is called higher-order unification; here, one solution is R ↦ λx. like(x, m), which yields like(p, m) for the second conjunct.

Wayne Snyder gave a generalization of both higher-order unification and E-unification, i.e. an algorithm to unify lambda-terms modulo an equational theory.

See also
Rewriting
Admissible rule
Explicit substitution in lambda calculus
Mathematical equation solving
Dis-unification: solving inequations between symbolic expressions
Anti-unification: computing a least general generalization (lgg) of two terms, dual to computing a most general instance (mgu)
Subsumption lattice, a lattice having unification as meet and anti-unification as join
Ontology alignment (uses unification with semantic equivalence)

Notes

References

Further reading
Franz Baader and Wayne Snyder (2001). "Unification Theory". In John Alan Robinson and Andrei Voronkov, editors, Handbook of Automated Reasoning, volume I, pages 447–533. Elsevier Science Publishers.
Gilles Dowek (2001). "Higher-order Unification and Matching". In Handbook of Automated Reasoning.
Franz Baader and Tobias Nipkow (1998). Term Rewriting and All That. Cambridge University Press.
Franz Baader and Jörg H. Siekmann (1993). "Unification Theory". In Handbook of Logic in Artificial Intelligence and Logic Programming.
Jean-Pierre Jouannaud and Claude Kirchner (1991). "Solving Equations in Abstract Algebras: A Rule-Based Survey of Unification". In Computational Logic: Essays in Honor of Alan Robinson.
Nachum Dershowitz and Jean-Pierre Jouannaud, "Rewrite Systems", in: Jan van Leeuwen (ed.), Handbook of Theoretical Computer Science, volume B: Formal Models and Semantics, Elsevier, 1990, pp. 243–320.
Jörg H. Siekmann (1990). "Unification Theory". In Claude Kirchner (editor), Unification. Academic Press.
Gérard Huet and Derek C. Oppen (1980). "Equations and Rewrite Rules: A Survey". Technical report. Stanford University.
Claude Kirchner and Hélène Kirchner. Rewriting, Solving, Proving. In preparation.

Automated theorem proving Logic programming Rewriting systems Logic in computer science Type theory
Unification (computer science)
[ "Mathematics" ]
7,242
[ "Mathematical structures", "Logic in computer science", "Automated theorem proving", "Unification (computer science)", "Mathematical logic", "Mathematical objects", "Computational mathematics", "Equations", "Type theory" ]
54,439
https://en.wikipedia.org/wiki/Metabolic%20syndrome
Metabolic syndrome is a clustering of at least three of the following five medical conditions: abdominal obesity, high blood pressure, high blood sugar, high serum triglycerides, and low serum high-density lipoprotein (HDL). Metabolic syndrome is associated with the risk of developing cardiovascular disease and type 2 diabetes. In the U.S., about 25% of the adult population has metabolic syndrome, a proportion that increases with age, particularly among racial and ethnic minorities. Insulin resistance, metabolic syndrome, and prediabetes are closely related to one another and have overlapping aspects. The syndrome is thought to be caused by an underlying disorder of energy utilization and storage, but the cause of the syndrome is an area of ongoing medical research. Researchers debate whether a diagnosis of metabolic syndrome implies differential treatment or increases the risk of cardiovascular disease beyond what is suggested by the sum of its individual components.

Signs and symptoms
The key sign of metabolic syndrome is central obesity, also known as visceral, male-pattern or apple-shaped adiposity. It is characterized by adipose tissue accumulation predominantly around the waist and trunk. Other signs of metabolic syndrome include high blood pressure, decreased fasting serum HDL cholesterol, elevated fasting serum triglyceride level, impaired fasting glucose, insulin resistance, or prediabetes. Associated conditions include hyperuricemia; fatty liver (especially in concurrent obesity) progressing to nonalcoholic fatty liver disease; polycystic ovarian syndrome in women and erectile dysfunction in men; and acanthosis nigricans.

Neck circumference
Neck circumference has been used as a simple and reliable surrogate index of upper-body subcutaneous fat accumulation. Neck circumferences above sex-specific cut-off values (higher for men than for women) are considered high-risk for metabolic syndrome; persons with large neck circumferences have more than double the risk of metabolic syndrome. In adults with overweight or obesity, clinically significant weight loss may protect against COVID-19. Neck circumference has also been associated with the risk of being mechanically ventilated in COVID-19 patients, with a 26% increased risk for each centimeter increase in neck circumference; moreover, hospitalized COVID-19 patients with a "large neck phenotype" on admission had more than double the risk of death.

Complications
Metabolic syndrome can lead to several serious and chronic complications, including type 2 diabetes, cardiovascular diseases, stroke, kidney disease and nonalcoholic fatty liver disease. Furthermore, in a 2023 systematic review and meta-analysis of over 13 million individuals, metabolic syndrome was associated with a significantly increased risk of surgical complications across most types of surgery.

Causes
The mechanisms of the complex pathways of metabolic syndrome are under investigation. The pathophysiology is very complex and has been only partially elucidated. Most people affected by the condition are older, obese, sedentary, and have a degree of insulin resistance. Stress can also be a contributing factor. The most important risk factors are diet (particularly sugar-sweetened beverage consumption), genetics, aging, sedentary behavior or low physical activity, disrupted chronobiology/sleep, mood disorders/psychotropic medication use, and excessive alcohol use.
The pathogenic role played in the syndrome by the excessive expansion of adipose tissue occurring under sustained overeating, and its resulting lipotoxicity, was reviewed by Vidal-Puig. Recent studies have highlighted the global prevalence of metabolic syndrome, driven by the rise in obesity and type 2 diabetes. The World Health Organization (WHO) and other major health organizations define metabolic syndrome with criteria that include central obesity, insulin resistance, hypertension, and dyslipidemia. As of 2015, metabolic syndrome affects approximately 25% of the global population, with rates significantly higher in urban areas due to increased consumption of high-calorie, low-nutrient diets and decreased physical activity. The condition is associated with a threefold increase in the risk of type 2 diabetes and cardiovascular disease, accounting for a substantial burden of non-communicable diseases globally (Saklayen, 2018).

There is debate regarding whether obesity or insulin resistance is the cause of the metabolic syndrome or whether they are consequences of a more far-reaching metabolic derangement. Markers of systemic inflammation, including C-reactive protein, are often increased, as are fibrinogen, interleukin 6, tumor necrosis factor-alpha (TNF-α), and others. Some have pointed to a variety of causes, including increased uric acid levels caused by dietary fructose. Research shows that Western dietary habits are a factor in the development of metabolic syndrome, with high consumption of food that is not biochemically suited to humans. Weight gain is associated with metabolic syndrome. Rather than total adiposity, the core clinical component of the syndrome is visceral and/or ectopic fat (i.e., fat in organs not designed for fat storage), whereas the principal metabolic abnormality is insulin resistance. The continuous provision of energy via dietary carbohydrate, lipid, and protein fuels, unmatched by physical activity and energy demand, creates a backlog of the products of mitochondrial oxidation, a process associated with progressive mitochondrial dysfunction and insulin resistance.

Stress
Recent research indicates that prolonged chronic stress can contribute to metabolic syndrome by disrupting the hormonal balance of the hypothalamic-pituitary-adrenal axis (HPA-axis). A dysfunctional HPA-axis causes high cortisol levels to circulate, which raises glucose and insulin levels, which in turn cause insulin-mediated effects on adipose tissue, ultimately promoting visceral adiposity, insulin resistance, dyslipidemia and hypertension, with direct effects on the bone, causing "low turnover" osteoporosis. HPA-axis dysfunction may explain the reported risk indication of abdominal obesity for cardiovascular disease (CVD), type 2 diabetes and stroke. Psychosocial stress is also linked to heart disease.

Obesity
Central obesity is a key feature of the syndrome, as both a sign and a cause: the increasing adiposity often reflected in high waist circumference may both result from and contribute to insulin resistance. However, despite the importance of obesity, affected people who are of normal weight may also be insulin-resistant and have the syndrome.

Sedentary lifestyle
Physical inactivity is a predictor of CVD events and related mortality.
Many components of metabolic syndrome are associated with a sedentary lifestyle, including increased adipose tissue (predominantly central); reduced HDL cholesterol; and a trend toward increased triglycerides, blood pressure, and glucose in the genetically susceptible. Compared with individuals who watched television or videos or used their computers for less than one hour daily, those who carried out these behaviors for more than four hours daily have a twofold increased risk of metabolic syndrome.

Aging
Metabolic syndrome affects 60% of the U.S. population older than age 50. Within that demographic, the percentage of women having the syndrome is higher than that of men. The age dependency of the syndrome's prevalence is seen in most populations around the world.

Diabetes mellitus type 2
The metabolic syndrome quintuples the risk of type 2 diabetes mellitus. Type 2 diabetes is considered a complication of metabolic syndrome. In people with impaired glucose tolerance or impaired fasting glucose, the presence of metabolic syndrome doubles the risk of developing type 2 diabetes. It is likely that prediabetes and metabolic syndrome denote the same disorder, defined by different sets of biological markers. The presence of metabolic syndrome is associated with a higher prevalence of CVD than is found in people with type 2 diabetes or impaired glucose tolerance without the syndrome. Hypoadiponectinemia has been shown to increase insulin resistance and is considered a risk factor for developing metabolic syndrome.

Coronary heart disease
The approximate prevalence of metabolic syndrome in people with coronary artery disease (CAD) is 50%, with a prevalence of 37% in people with premature coronary artery disease (by age 45), particularly in women. With appropriate cardiac rehabilitation and changes in lifestyle (e.g., nutrition, physical activity, weight reduction, and, in some cases, drugs), the prevalence of the syndrome can be reduced.

Lipodystrophy
Lipodystrophic disorders in general are associated with metabolic syndrome. Both genetic (e.g., Berardinelli–Seip congenital lipodystrophy, Dunnigan familial partial lipodystrophy) and acquired (e.g., HIV-related lipodystrophy in people treated with highly active antiretroviral therapy) forms of lipodystrophy may give rise to severe insulin resistance and many of metabolic syndrome's components.

Rheumatic diseases
Research has associated metabolic syndrome with rheumatic diseases as a comorbidity: both psoriasis and psoriatic arthritis have been found to be associated with metabolic syndrome.

Chronic obstructive pulmonary disease
Metabolic syndrome is seen as a comorbidity in up to 50 percent of those with chronic obstructive pulmonary disease (COPD). It may pre-exist or may be a consequence of the lung pathology of COPD.

Pathophysiology
It is common for there to be a development of visceral fat, after which the adipocytes (fat cells) of the visceral fat increase plasma levels of TNF-α and alter levels of other substances (e.g., adiponectin, resistin, and PAI-1). TNF-α has been shown to cause the production of inflammatory cytokines and also possibly to trigger cell signaling by interaction with a TNF-α receptor that may lead to insulin resistance. An experiment with rats fed a diet containing 33% sucrose has been proposed as a model for the development of metabolic syndrome. The sucrose first elevated blood levels of triglycerides, which induced visceral fat and ultimately resulted in insulin resistance.
The progression from visceral fat to increased TNF-α to insulin resistance has some parallels to the human development of metabolic syndrome. The increase in adipose tissue also increases the number of immune cells, which play a role in inflammation. Chronic inflammation contributes to an increased risk of hypertension, atherosclerosis and diabetes.

The involvement of the endocannabinoid system in the development of metabolic syndrome is indisputable. Endocannabinoid overproduction may induce reward system dysfunction and cause executive dysfunctions (e.g., impaired delay discounting), in turn perpetuating unhealthy behaviors. The brain is crucial in the development of metabolic syndrome, modulating peripheral carbohydrate and lipid metabolism.

Metabolic syndrome can be induced by overfeeding with sucrose or fructose, particularly concomitantly with a high-fat diet. The resulting oversupply of omega-6 fatty acids, particularly arachidonic acid (AA), is an important factor in the pathogenesis of metabolic syndrome. Arachidonic acid (with its precursor, linoleic acid) serves as a substrate for the production of inflammatory mediators known as eicosanoids, whereas the arachidonic acid-containing compound diacylglycerol (DAG) is a precursor to the endocannabinoid 2-arachidonoylglycerol (2-AG), while fatty acid amide hydrolase (FAAH) mediates the metabolism of anandamide into arachidonic acid. Anandamide can also be produced from N-acylphosphatidylethanolamine via several pathways. Anandamide and 2-AG can also be hydrolyzed into arachidonic acid, potentially leading to increased eicosanoid synthesis.

Diagnosis
NCEP
As of 2023, the U.S. National Cholesterol Education Program Adult Treatment Panel III (2001) continues to be the most widely used clinical definition. It requires at least three of the following:
Central obesity: waist circumference ≥ 102 cm or 40 inches (male), ≥ 88 cm or 35 inches (female)
Dyslipidemia: TG ≥ 1.7 mmol/L (150 mg/dL)
Dyslipidemia: HDL-C < 40 mg/dL (male), < 50 mg/dL (female)
Blood pressure ≥ 130/85 mmHg (or treated for hypertension)
Fasting plasma glucose ≥ 6.1 mmol/L (110 mg/dL)

2009 Interim Joint Statement
The International Diabetes Federation Task Force on Epidemiology and Prevention; the National Heart, Lung, and Blood Institute; the American Heart Association; the World Heart Federation; the International Atherosclerosis Society; and the International Association for the Study of Obesity published an interim joint statement to harmonize the definition of the metabolic syndrome in 2009. According to this statement, the criteria for clinical diagnosis of the metabolic syndrome are three or more of the following:
Elevated waist circumference, with population- and country-specific definitions
Elevated triglycerides (≥ 150 mg/dL (1.7 mmol/L))
Reduced HDL-C (≤ 40 mg/dL (1.0 mmol/L) in males, ≤ 50 mg/dL (1.3 mmol/L) in females)
Elevated blood pressure (systolic ≥ 130 and/or diastolic ≥ 85 mm Hg)
Elevated fasting glucose (≥ 100 mg/dL (5.55 mmol/L))
This definition recognizes that the risk associated with a particular waist measurement differs in different populations. However, for international comparisons and to facilitate etiological research, the organizations agree that it is critical that a commonly agreed-upon set of criteria be used worldwide, with agreed-upon cut points for different ethnic groups and sexes. There are many people in the world of mixed ethnicity, and in those cases, pragmatic decisions will have to be made.
Therefore, an international criterion of overweight may be more appropriate than ethnic-specific criteria of abdominal obesity for the anthropometric component of this syndrome, which results from an excess lipid storage in adipose tissue, skeletal muscle and liver.

The report notes that previous definitions of the metabolic syndrome by the International Diabetes Federation (IDF) and the revised National Cholesterol Education Program (NCEP) are very similar, and they identify individuals with a given set of symptoms as having metabolic syndrome. There are two differences, however. First, the IDF definition states that if body mass index (BMI) is greater than 30 kg/m2, central obesity can be assumed, and waist circumference does not need to be measured; this potentially excludes any subject without increased waist circumference if BMI is less than 30, whereas the NCEP definition indicates that metabolic syndrome can be diagnosed based on other criteria. Second, the IDF uses geography-specific cut points for waist circumference, while the NCEP uses only one set of cut points for waist circumference regardless of geography.

WHO
The World Health Organization (1999) requires the presence of any one of diabetes mellitus, impaired glucose tolerance, impaired fasting glucose or insulin resistance, AND two of the following:
Blood pressure ≥ 140/90 mmHg
Dyslipidemia: triglycerides (TG) ≥ 1.695 mmol/L and HDL cholesterol ≤ 0.9 mmol/L (male), ≤ 1.0 mmol/L (female)
Central obesity: waist-to-hip ratio > 0.90 (male), > 0.85 (female), or BMI > 30 kg/m2
Microalbuminuria: urinary albumin excretion ratio ≥ 20 μg/min or albumin-to-creatinine ratio ≥ 30 mg/g

EGIR
The European Group for the Study of Insulin Resistance (1999) requires that subjects have insulin resistance (defined, for purposes of clinical practicality, as the top 25% of fasting insulin values among nondiabetic individuals) AND two or more of the following:
Central obesity: waist circumference ≥ 94 cm or 37 inches (male), ≥ 80 cm or 31.5 inches (female)
Dyslipidemia: TG ≥ 2.0 mmol/L (177 mg/dL) and/or HDL-C < 1.0 mmol/L (38.61 mg/dL), or treated for dyslipidemia
Blood pressure ≥ 140/90 mmHg or antihypertensive medication
Fasting plasma glucose ≥ 6.1 mmol/L (110 mg/dL)

Cardiometabolic index
The cardiometabolic index (CMI) is a tool used to calculate the risk of type 2 diabetes, non-alcoholic fatty liver disease, and other metabolic issues. It is calculated from the waist-to-height ratio and the triglycerides-to-HDL cholesterol ratio. CMI can also be used to investigate connections between cardiovascular disease and erectile dysfunction. When following an anti-inflammatory diet (low-glycemic carbohydrates, fruits, vegetables, fish, less red meat and processed foods), the markers may drop, together with a significant reduction in body weight and adipose tissue.

Other
High-sensitivity C-reactive protein has been developed and used as a marker to predict coronary vascular disease in metabolic syndrome, and it was recently used as a predictor for nonalcoholic fatty liver disease (steatohepatitis) in correlation with serum markers that indicate lipid and glucose metabolism. Fatty liver disease and steatohepatitis can be considered manifestations of metabolic syndrome, indicative of abnormal energy storage as fat in ectopic distribution. Reproductive disorders (such as polycystic ovary syndrome in women of reproductive age), and erectile dysfunction or decreased total testosterone (low testosterone-binding globulin) in men, can be attributed to metabolic syndrome.
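The harmonized 2009 definition above is a simple counting rule (three or more of five criteria), which the following minimal Python sketch makes explicit. It is a didactic illustration of the published thresholds only, not clinical software; in the actual statement, drug treatment for an abnormality also counts toward the corresponding criterion, and the waist cut-off is population-specific, so it is left as a parameter here.

def meets_harmonized_2009(waist_cm, waist_cutoff_cm, tg_mg_dl, hdl_mg_dl,
                          male, systolic, diastolic, glucose_mg_dl):
    # 2009 interim joint statement: metabolic syndrome = >= 3 of 5 criteria.
    # (Drug treatment for any abnormality also counts; omitted for brevity.)
    criteria = [
        waist_cm >= waist_cutoff_cm,         # elevated waist (population-specific)
        tg_mg_dl >= 150,                     # elevated triglycerides
        hdl_mg_dl <= (40 if male else 50),   # reduced HDL-C
        systolic >= 130 or diastolic >= 85,  # elevated blood pressure
        glucose_mg_dl >= 100,                # elevated fasting glucose
    ]
    return sum(criteria) >= 3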
Prevention Various strategies have been proposed to prevent the development of metabolic syndrome. These include increased physical activity (such as walking 30 minutes every day) and a healthy, reduced-calorie diet. Many studies support the value of a healthy lifestyle as above. However, one study stated these potentially beneficial measures are effective in only a minority of people, primarily because of a lack of compliance with lifestyle and diet changes. The International Obesity Taskforce states that interventions on a sociopolitical level are required to reduce development of the metabolic syndrome in populations. The Caerphilly Heart Disease Study followed 2,375 male subjects over 20 years and suggested the daily intake of an Imperial pint (~568 mL) of milk or equivalent dairy products more than halved the risk of metabolic syndrome. Some subsequent studies support the authors' findings, while others dispute them. A systematic review of four randomized controlled trials found that, in the short term, a paleolithic nutritional pattern improved three of five measurable components of the metabolic syndrome in participants with at least one of the components. Management Diet Dietary carbohydrate restriction reduces blood glucose levels, contributes to weight loss, and reduces the use of several medications that may be prescribed for metabolic syndrome. Studies suggest that meal timing and frequency can significantly affect the risk of developing metabolic syndrome. Research indicates that individuals who maintain regular meal timings and avoid eating late at night have a reduced risk of developing this condition. Medications Generally, the individual disorders that compose the metabolic syndrome are treated separately. Diuretics and ACE inhibitors may be used to treat hypertension. Various cholesterol medications may be useful if LDL cholesterol, triglycerides, and/or HDL cholesterol is abnormal. Epidemiology Approximately 20–25 percent of the world's adult population has the cluster of risk factors that is metabolic syndrome. In 2000, approximately 32% of U.S. adults had metabolic syndrome. In more recent years that figure has climbed to 34%. In young children, there is no consensus on how to measure metabolic syndrome, since age-specific cut points and reference values that would indicate "high risk" have not been well established. A continuous cardiometabolic risk summary score is often used for children instead of a dichotomous measure of metabolic syndrome. Other conditions, as well as particular gut microbiome profiles, also appear to be associated with metabolic syndrome, with some degree of sex specificity. History In 1921, Joslin first reported the association of diabetes with hypertension and hyperuricemia. In 1923, Kylin reported additional studies on the above triad. In 1947, Vague observed that upper body obesity appeared to predispose to diabetes, atherosclerosis, gout and calculi. In the late 1950s, the term metabolic syndrome was first used. In 1967, Avogaro, Crepaldi and coworkers described six moderately obese people with diabetes, hypercholesterolemia, and marked hypertriglyceridemia, all of which improved when the affected people were put on a hypocaloric, low-carbohydrate diet. In 1977, Haller used the term metabolic syndrome for associations of obesity, diabetes mellitus, hyperlipoproteinemia, hyperuricemia, and hepatic steatosis when describing the additive effects of risk factors on atherosclerosis. 
The same year, Singer used the term for associations of obesity, gout, diabetes mellitus, and hypertension with hyperlipoproteinemia. In 1977 and 1978, Gerald B. Phillips developed the concept that risk factors for myocardial infarction concur to form a "constellation of abnormalities" (i.e., glucose intolerance, hyperinsulinemia, hypercholesterolemia, hypertriglyceridemia, and hypertension) associated not only with heart disease, but also with aging, obesity and other clinical states. He suggested there must be an underlying linking factor, the identification of which could lead to the prevention of cardiovascular disease; he hypothesized that this factor was sex hormones. In 1988, in his Banting lecture, Gerald M. Reaven proposed insulin resistance as the underlying factor and named the constellation of abnormalities syndrome X. Reaven did not include abdominal obesity, which has also been hypothesized as the underlying factor, as part of the condition. See also Metabolic disorder Portal-visceral hypothesis References Metabolic disorders Endocrine diseases Medical conditions related to obesity Syndromes affecting the endocrine system Syndromes with obesity
Metabolic syndrome
[ "Chemistry" ]
4,683
[ "Metabolic disorders", "Metabolism" ]
54,445
https://en.wikipedia.org/wiki/Bird%20of%20prey
Birds of prey or predatory birds, also known as (although not the same as) raptors, are hypercarnivorous bird species that actively hunt and feed on other vertebrates (mainly mammals, reptiles and other smaller birds). In addition to speed and strength, these predators have keen eyesight for detecting prey from a distance or during flight, strong feet with sharp talons for grasping or killing prey, and powerful, curved beaks for tearing off flesh. Although predatory birds primarily hunt live prey, many species (such as fish eagles, vultures and condors) also scavenge and eat carrion. Although the term "bird of prey" could theoretically be taken to include all birds that actively hunt and eat other animals, ornithologists typically use the narrower definition followed in this page, excluding many piscivorous predators such as storks, cranes, herons, gulls, skuas, penguins, and kingfishers, as well as many primarily insectivorous birds such as passerines (e.g. shrikes), nightjars, frogmouths, and songbirds such as crows and ravens, along with opportunistic predators among predominantly frugivorous or herbivorous ratites such as cassowaries and rheas. Some extinct predatory telluravian birds had talons similar to those of modern birds of prey, including mousebird relatives (Sandcoleidae) and Messelasturidae, indicating possible common descent. Some Enantiornithes also had such talons, indicating possible convergent evolution, as enantiornithines were not modern birds. Common names The term raptor is derived from the Latin word rapio, meaning "to seize or take by force". The common names for various birds of prey are based on structure, but many of the traditional names do not reflect the evolutionary relationships between the groups. Eagles tend to be large, powerful birds with long, broad wings and massive feet. Booted eagles have legs and feet feathered to the toes and build very large stick nests. Falcons and kestrels are medium-size birds of prey with long pointed wings, and many are particularly swift flyers. They belong to the family Falconidae, only distantly related to the Accipitriformes below. Caracaras are a distinct subgroup of the Falconidae unique to the New World, and most common in the Neotropics – their broad wings, naked faces and generalist appetites suggest some level of convergence with either Buteo or the vulturine birds, or both. True hawks are medium-sized birds of prey that usually belong to the genus Accipiter (see below). They are mainly woodland birds that hunt by sudden dashes from a concealed perch. They usually have long tails for tight steering. Buzzards are medium-large raptors with robust bodies and broad wings, or, alternatively, any bird of the genus Buteo (also commonly known as "hawks" in North America, while "buzzard" is colloquially used for vultures). Harriers are large, slender hawk-like birds with long tails and long thin legs. Most use a combination of keen eyesight and hearing to hunt small vertebrates, gliding on their long broad wings and circling low over grasslands and marshes. Kites have long wings and relatively weak legs. They spend much of their time soaring. They will take live vertebrate prey, but mostly feed on insects or even carrion. The osprey is a single species found worldwide that specializes in catching fish and builds large stick nests. Owls are variable-sized, typically night-specialized hunting birds. They fly almost silently due to their special feather structure that reduces turbulence. 
They have particularly acute hearing and nocturnal eyesight. The secretarybird is a single species with a large body and long, stilted legs, endemic to the open grasslands of Sub-Saharan Africa. Vultures are scavengers and carrion-eating raptors of two distinct biological families: the Old World vultures (Accipitridae), which occur only in the Eastern Hemisphere, and the New World vultures (Cathartidae), which occur only in the Western Hemisphere. Members of both groups have heads either partly or fully devoid of feathers. Many of these English language group names originally referred to particular species encountered in Britain. As English-speaking people travelled further, the familiar names were applied to new birds with similar characteristics. Names that have generalised this way include: kite (Milvus milvus), sparrowhawk or sparhawk (Accipiter nisus), goshawk (Accipiter gentilis), kestrel (Falco tinnunculus), hobby (Falco subbuteo), harrier (simplified from "hen-harrier", Circus cyaneus), buzzard (Buteo buteo). Some names have not generalised, and refer to single species (or groups of closely related (sub)species), such as the merlin (Falco columbarius). Systematics Historical classifications The taxonomy of Carl Linnaeus grouped birds (class Aves) into orders, genera, and species, with no formal ranks between genus and order. He placed all birds of prey into a single order, Accipitres, subdividing this into four genera: Vultur (vultures), Falco (eagles, hawks, falcons, etc.), Strix (owls), and Lanius (shrikes). This approach was followed by subsequent authors such as Gmelin, Latham and Turton. Louis Pierre Vieillot used additional ranks: order, tribe, family, genus, species. Birds of prey (order Accipitres) were divided into diurnal and nocturnal tribes; the owls remained monogeneric (family Ægolii, genus Strix), whilst the diurnal raptors were divided into three families: Vulturini, Gypaëti, and Accipitrini. Thus Vieillot's families were similar to the Linnaean genera, with the difference that shrikes were no longer included amongst the birds of prey. In addition to the original Vultur and Falco (now reduced in scope), Vieillot adopted four genera from Savigny: Phene, Haliæetus, Pandion, and Elanus. He also introduced five new genera of vultures (Gypagus, Catharista, Daptrius, Ibycter, Polyborus) and eleven new genera of accipitrines (Aquila, Circaëtus, Circus, Buteo, Milvus, Ictinia, Physeta, Harpia, Spizaëtus, Asturina, Sparvius). Falconimorphae is a deprecated superorder within Raptores, formerly composed of the orders Falconiformes and Strigiformes. The clade was invalidated after 2012; Falconiformes is now placed in Eufalconimorphae, while Strigiformes is placed in Afroaves. Modern systematics The order Accipitriformes is believed to have originated 44 million years ago, when it split from the common ancestor of the secretarybird (Sagittarius serpentarius) and the accipitrid species. The phylogeny of Accipitriformes is complex and difficult to unravel. Widespread paraphylies were observed in many phylogenetic studies. More recent and detailed studies show similar results. However, according to the findings of a 2014 study, the sister relationship between larger clades of Accipitriformes was well supported (e.g., the relationship of Harpagus kites to buzzards and sea eagles, and of these latter two with Accipiter hawks, as sister taxa of the clade containing Aquilinae and Harpiinae). 
The diurnal birds of prey are formally classified into five families of two different orders (Accipitriformes and Falconiformes). Accipitridae: hawks, eagles, buzzards, harriers, kites, and Old World vultures Pandionidae: the osprey Sagittariidae: the secretarybird Falconidae: falcons, caracaras, and forest falcons Cathartidae: New World vultures, including condors These families were traditionally grouped together in a single order Falconiformes but are now split into two orders, the Falconiformes and Accipitriformes. The Cathartidae are sometimes placed in a separate order Cathartiformes. Formerly, they were sometimes placed in the order Ciconiiformes. The secretarybird and/or osprey are sometimes listed as subfamilies of Accipitridae: Sagittariinae and Pandioninae, respectively. Australia's letter-winged kite is a member of the family Accipitridae, although it is a nocturnal bird. The nocturnal birds of prey—the owls—are classified separately as members of two extant families of the order Strigiformes: Strigidae: "typical owls" Tytonidae: barn and bay owls Phylogeny Below is a simplified phylogeny of Telluraves, the clade to which the birds of prey belong along with passerines and several near-passerine lineages. The orders in bold text are birds of prey orders; this is to show the paraphyly of the group as well as their relationships to other birds. A recent phylogenomic study by Wu et al. (2024) found an alternative phylogeny for the placement of the birds of prey. Their analysis found support for a clade consisting of the Strigiformes and Accipitriformes, named Hieraves. Hieraves was also recovered as the sister clade to Australaves (which includes the Cariamiformes and Falconiformes along with Psittacopasserae). Below is their phylogeny from the study. Possible inclusion of Cariamiformes Cariamiformes is an order of telluravian birds consisting of the living seriemas and extinct terror birds. Jarvis et al. 2014 suggested including them in the category of birds of prey, and McClure et al. 2019 considered seriemas to be birds of prey. The Peregrine Fund also considers seriemas to be birds of prey. Like most birds of prey, seriemas and terror birds prey on vertebrates. However, seriemas were not traditionally considered birds of prey, and they are still not considered birds of prey in general parlance. They were traditionally classified in the order Gruiformes, but later research has reclassified them into Cariamiformes. The bodies of seriemas are also shaped somewhat differently from those of birds of prey. Their legs and necks are significantly longer than those of typical raptors, although the secretarybird (traditionally considered a raptor) also has comparably long legs. The beaks of seriemas are hooked (as in raptors), but are longer than those of typical raptors. Migration Migratory behaviour evolved multiple times within accipitrid raptors. The earliest event occurred roughly 14 to 12 million years ago, one of the oldest dates yet published for the origin of migration in birds of prey. For example, a previous reconstruction of migratory behaviour in one Buteo clade, with a result of the origin of migration around 5 million years ago, was also supported by that study. Migratory species of raptors may have had a southern origin, because it seems that all of the major lineages within Accipitridae had an origin in one of the biogeographic realms of the Southern Hemisphere. 
The appearance of migratory behaviour occurred in the tropics, in parallel with the range expansion of migratory species to temperate habitats. Similar results of southern origin in other taxonomic groups can be found in the literature. Distribution and biogeographic history strongly determine the origin of migration in birds of prey. Based on some comparative analyses, diet breadth also has an effect on the evolution of migratory behaviour in this group, but its relevance needs further investigation. The evolution of migration in animals seems to be a complex and difficult topic with many unanswered questions. A recent study discovered new connections between migration and the ecology and life history of raptors. A brief overview from the abstract of the published paper shows that "clutch size and hunting strategies have been proved to be the most important variables in shaping distribution areas, and also the geographic dissimilarities may mask important relationships between life history traits and migratory behaviours. The West Palearctic-Afrotropical and the North-South American migratory systems are fundamentally different from the East Palearctic-Indomalayan system, owing to the presence versus absence of ecological barriers." Maximum entropy modelling can help answer the question of why a species winters at one location while others winter elsewhere. Temperature and precipitation related factors differ in the limitation of species distributions. "This suggests that the migratory behaviours differ among the three main migratory routes for these species", which may have important conservation consequences for the protection of migratory raptors. Sexual dimorphism Birds of prey (raptors) are known to display patterns of sexual dimorphism. It is commonly believed that the dimorphisms found in raptors occur due to sexual selection or environmental factors. In general, hypotheses in favor of ecological factors being the cause of sexual dimorphism in raptors are rejected. This is because the ecological model is less parsimonious, meaning that its explanation is more complex than that of the sexual selection model. Additionally, ecological models are much harder to test because a great deal of data is required. Dimorphisms can also be the product of intrasexual selection between males and females. It appears that both sexes of the species play a role in the sexual dimorphism within raptors; females tend to compete with other females to find good places to nest and attract males, while males compete with other males for adequate hunting ground so as to appear the healthiest mate. It has also been proposed that sexual dimorphism is merely the product of disruptive selection, a stepping stone in the process of speciation, especially if the traits that define gender are independent across a species. Sexual dimorphism can be viewed as something that can accelerate the rate of speciation. In non-predatory birds, males are typically larger than females. However, in birds of prey, the opposite is the case. For instance, the kestrel is a type of falcon in which males are the primary providers and the females are responsible for nurturing the young. In this species, the smaller the kestrels are, the less food they need, and thus they can survive in harsher environments. This is particularly true of the male kestrels. 
It has become more energetically favorable for male kestrels to remain smaller than their female counterparts because smaller males have an agility advantage when it comes to defending the nest and hunting. Larger females are favored because they can incubate larger numbers of offspring, while also being able to brood a larger clutch size. Olfaction It is a long-standing belief that birds lack any sense of smell, but it has become clear that many birds do have functional olfactory systems. Despite this, most raptors are still considered to primarily rely on vision, with raptor vision being extensively studied. A 2020 review of the existing literature combining anatomical, genetic, and behavioural studies showed that, in general, raptors have functional olfactory systems that they are likely to use in a range of different contexts. Persecution Birds of prey have been historically persecuted both directly and indirectly. In the Danish Faroe Islands, rewards (Naebbetold, by royal decree from 1741) were given in return for the bills of birds of prey shown by hunters. In Britain, kites and buzzards were seen as destroyers of game and killed; for instance, in 1684–5 alone as many as 100 kites were killed. Rewards for their killing were also in force in the Netherlands from 1756. From 1705 to 1800, it has been estimated that 624,087 birds of prey were killed in a part of Germany that included Hannover, Luneburg, Lauenburg and Bremen, with 14,125 claws deposited in 1796–97 alone. Many species also develop lead poisoning after accidental consumption of lead shot when feeding on animals that had been shot by hunters. Lead pellets lodged in birds that have survived being shot also cause reduced fitness and premature deaths. Attacks on humans Some evidence supports the contention that the African crowned eagle occasionally views human children as prey, with a witness account of one attack (in which the victim, a seven-year-old boy, survived and the eagle was killed), and the discovery of part of a human child skull in a nest. This would make it the only living bird known to prey on humans, although other birds such as ostriches and cassowaries have killed humans in self-defense and a lammergeier might have killed Aeschylus by accident. Many stories of Brazilian indigenous peoples speak about children mauled by Uiruuetê, the harpy eagle's name in the Tupi language. Various large raptors, such as golden eagles, are reported attacking human beings, but it is unclear whether they intend to eat them or whether they have ever been successful in killing one. Some fossil evidence indicates large birds of prey occasionally preyed on prehistoric hominids. The Taung Child, an early human found in Africa, is believed to have been killed by an eagle-like bird similar to the crowned eagle. The Haast's eagle may have preyed on early humans in New Zealand, and this conclusion would be consistent with Maori folklore. Leptoptilos robustus might have preyed on both Homo floresiensis and anatomically modern humans, and the Malagasy crowned eagle, teratorns, Woodward's eagle and Caracara major are similar in size to the Haast's eagle, implying that they could similarly have posed a threat to a human being. Vision Birds of prey have exceptionally acute vision and rely heavily on it for a number of tasks. They use their high visual acuity to obtain food, navigate their surroundings, recognize and flee from predators, find mates, and construct nests. 
They accomplish these tasks with eyes that are large in relation to their skulls, which allows a larger image to be projected onto the retina. The visual acuity of some large raptors, such as eagles and Old World vultures, is the highest known among vertebrates; the wedge-tailed eagle has twice the visual acuity of a typical human and six times that of the common ostrich, the vertebrate with the largest eyes. There are two regions in the retina, called the deep and shallow fovea, that are specialized for acute vision. These regions contain the highest density of photoreceptors and provide the highest points of visual acuity. The deep fovea points forward at an approximate 45° angle, while the shallow fovea points approximately 15° to the right or left of the head axis. Several raptor species repeatedly cock their heads into three distinct positions while observing an object. The first is straight ahead, with the head pointed towards the object. The second and third are sideways to the right or left of the object, with the head axis positioned approximately 40° to one side of the object. This movement is believed to be associated with lining up the incoming image to fall on the deep fovea. Raptors will choose which head position to use depending on the distance to the object. At distances as close as 8 m, they used primarily binocular vision. At distances greater than 21 m, they spent more time using monocular vision. At distances greater than 40 m, they spent 80% or more of the time using their monocular vision. This suggests that raptors tilt their heads to rely on the highly acute deep fovea. Like all birds, raptors possess tetrachromacy; however, due to their emphasis on visual acuity, many diurnal birds of prey have little ability to see ultraviolet light, as this produces chromatic aberration which decreases the clarity of vision. See also Origin of birds Explanatory notes References Further reading Olsen, Jerry 2014, Australian High Country raptors, CSIRO Publishing, Melbourne. Remsen, J. V. Jr., C. D. Cadena, A. Jaramillo, M. Nores, J. F. Pacheco, M. B. Robbins, T. S. Schulenberg, F. G. Stiles, D. F. Stotz, and K. J. Zimmer. [Version 2007-04-05.] A classification of the bird species of South America. American Ornithologists' Union. Accessed 2007-04-10. External links Explore Birds of Prey with The Peregrine Fund Explore Birds of Prey on the Internet Bird Collection Bird of Prey Pictures Global Raptor Information Network The Arboretum at Flagstaff's Wild Birds of Prey Program Raptor Resource Project Paraphyletic groups
Bird of prey
[ "Biology" ]
4,348
[ "Phylogenetics", "Paraphyletic groups" ]
54,473
https://en.wikipedia.org/wiki/Rubiales%20%28plant%29
Rubiales was an order of flowering plants in the Cronquist system, including the families Rubiaceae and Theligonaceae. The latest APG system (2016) does not recognize this order and places the families within Gentianales. References Historically recognized angiosperm orders
Rubiales (plant)
[ "Biology" ]
59
[ "Plant stubs", "Plants" ]
54,493
https://en.wikipedia.org/wiki/Kuratowski%27s%20theorem
In graph theory, Kuratowski's theorem is a mathematical forbidden graph characterization of planar graphs, named after Kazimierz Kuratowski. It states that a finite graph is planar if and only if it does not contain a subgraph that is a subdivision of K5 (the complete graph on five vertices) or of K3,3 (a complete bipartite graph on six vertices, three of which connect to each of the other three, also known as the utility graph). Statement A planar graph is a graph whose vertices can be represented by points in the Euclidean plane, and whose edges can be represented by simple curves in the same plane connecting the points representing their endpoints, such that no two curves intersect except at a common endpoint. Planar graphs are often drawn with straight line segments representing their edges, but by Fáry's theorem this makes no difference to their graph-theoretic characterization. A subdivision of a graph is a graph formed by subdividing its edges into paths of one or more edges. Kuratowski's theorem states that a finite graph G is planar if it is not possible to subdivide the edges of K5 or K3,3, and then possibly add additional edges and vertices, to form a graph isomorphic to G. Equivalently, a finite graph is planar if and only if it does not contain a subgraph that is homeomorphic to K5 or K3,3. Kuratowski subgraphs If G is a graph that contains a subgraph H that is a subdivision of K5 or K3,3, then H is known as a Kuratowski subgraph of G. With this notation, Kuratowski's theorem can be expressed succinctly: a graph is planar if and only if it does not have a Kuratowski subgraph. The two graphs K5 and K3,3 are nonplanar, as may be shown either by a case analysis or an argument involving Euler's formula. Additionally, subdividing a graph cannot turn a nonplanar graph into a planar graph: if a subdivision of a graph G has a planar drawing, the paths of the subdivision form curves that may be used to represent the edges of G itself. Therefore, a graph that contains a Kuratowski subgraph cannot be planar. The more difficult direction in proving Kuratowski's theorem is to show that, if a graph is nonplanar, it must contain a Kuratowski subgraph. Algorithmic implications A Kuratowski subgraph of a nonplanar graph can be found in linear time, as measured by the size of the input graph. This allows the correctness of a planarity testing algorithm to be verified for nonplanar inputs, as it is straightforward to test whether a given subgraph is or is not a Kuratowski subgraph. Usually, non-planar graphs contain a large number of Kuratowski subgraphs. The extraction of these subgraphs is needed, e.g., in branch and cut algorithms for crossing minimization. It is possible to extract a large number of Kuratowski subgraphs in time dependent on their total size. History Kazimierz Kuratowski published his theorem in 1930. The theorem was independently proved by Orrin Frink and Paul Smith, also in 1930, but their proof was never published. The special case of cubic planar graphs (for which the only minimal forbidden subgraph is K3,3) was also independently proved by Karl Menger in 1930. Since then, several new proofs of the theorem have been discovered. In the Soviet Union, Kuratowski's theorem was known as either the Pontryagin–Kuratowski theorem or the Kuratowski–Pontryagin theorem, as the theorem was reportedly proved independently by Lev Pontryagin around 1927. However, as Pontryagin never published his proof, this usage has not spread to other places. 
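As a small illustration of the algorithmic point above, the sketch below uses the NetworkX library, whose check_planarity function can return a counterexample for nonplanar inputs; that counterexample is precisely a Kuratowski subgraph, i.e., a subdivision of K5 or K3,3. This assumes a recent NetworkX release and is meant only to show the certificate idea, not any particular linear-time algorithm.

import networkx as nx

G = nx.petersen_graph()  # the Petersen graph is a standard nonplanar example

is_planar, certificate = nx.check_planarity(G, counterexample=True)
if is_planar:
    # For planar inputs the certificate is a combinatorial embedding.
    print("planar")
else:
    # For nonplanar inputs the certificate is a Kuratowski subgraph of G.
    print("nonplanar; Kuratowski subgraph edges:", sorted(certificate.edges()))

Verifying such a certificate is straightforward by hand: suppress the degree-two vertices of the returned subgraph and check that what remains is K5 or K3,3.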
Related results A closely related result, Wagner's theorem, characterizes the planar graphs by their minors in terms of the same two forbidden graphs K5 and K3,3. Every Kuratowski subgraph is a special case of a minor of the same type, and while the reverse is not true, it is not difficult to find a Kuratowski subgraph (of one type or the other) from one of these two forbidden minors; therefore, these two theorems are equivalent. An extension is the Robertson–Seymour theorem. See also Kelmans–Seymour conjecture, that 5-connected nonplanar graphs contain a subdivision of K5 References Planar graphs Theorems in graph theory
Kuratowski's theorem
[ "Mathematics" ]
918
[ "Statements about planar graphs", "Planar graphs", "Theorems in discrete mathematics", "Planes (geometry)", "Theorems in graph theory" ]
54,513
https://en.wikipedia.org/wiki/Opal
Opal is a hydrated amorphous form of silica (SiO2·nH2O); its water content may range from 3% to 21% by weight, but is usually between 6% and 10%. Due to its amorphous structure, it is classified as a mineraloid, unlike crystalline forms of silica, which are considered minerals. It is deposited at a relatively low temperature and may occur in the fissures of almost any kind of rock, being most commonly found with limonite, sandstone, rhyolite, marl, and basalt. The name opal is believed to be derived from the Sanskrit word upala, which means 'jewel', and later the Greek derivative opallios. There are two broad classes of opal: precious and common. Precious opal displays play-of-color (iridescence); common opal does not. Play-of-color is defined as "a pseudochromatic optical effect resulting in flashes of colored light from certain minerals, as they are turned in white light." The internal structure of precious opal causes it to diffract light, resulting in play-of-color. Depending on the conditions in which it formed, opal may be transparent, translucent, or opaque, and the background color may be white, black, or nearly any color of the visual spectrum. Black opal is considered the rarest, while white, gray, and green opals are the most common. Precious opal Precious opal shows a variable interplay of internal colors, and though it is a mineraloid, it has an internal structure. At microscopic scales, precious opal is composed of silica spheres some 150 to 300 nm in diameter in a hexagonal or cubic close-packed lattice. It was shown by J. V. Sanders in the mid-1960s that these ordered silica spheres produce the internal colors by causing the interference and diffraction of light passing through the microstructure of the opal. The regularity of the sizes and the packing of these spheres is a prime determinant of the quality of precious opal. Where the distance between the regularly packed planes of spheres is around half the wavelength of a component of visible light, the light of that wavelength may be subject to diffraction from the grating created by the stacked planes. The colors that are observed are determined by the spacing between the planes and the orientation of planes with respect to the incident light. The process can be described by Bragg's law of diffraction. Visible light cannot pass through large thicknesses of the opal. This is the basis of the optical band gap in a photonic crystal. In addition, microfractures may be filled with secondary silica and form thin lamellae inside the opal during its formation. The term opalescence is commonly used to describe this unique and beautiful phenomenon, which in gemology is termed play of color. In gemology, opalescence is applied to the hazy-milky-turbid sheen of common or potch opal which does not show a play of color. Opalescence is a form of adularescence. For gemstone use, most opal is cut and polished to form a cabochon. "Natural" opal refers to polished stones consisting wholly of precious opal. Opals too thin to produce a "natural" opal may be combined with other materials to form "composite" gems. An opal doublet consists of a relatively thin layer of precious opal, backed by a layer of dark-colored material, most commonly ironstone, dark or black common opal (potch), onyx, or obsidian. The darker backing emphasizes the play of color and results in a more attractive display than a lighter potch. An opal triplet is similar to a doublet but has a third layer, a domed cap of clear quartz or plastic on the top. 
The cap takes a high polish and acts as a protective layer for the opal. The top layer also acts as a magnifier, to emphasize the play of color of the opal beneath, which is often an inferior specimen or an extremely thin section of precious opal. Triplet opals tend to have a more artificial appearance and are not classed as precious gemstones, but rather "composite" gemstones. Jewelry applications of precious opal can be somewhat limited by opal's sensitivity to heat, due primarily to its relatively high water content, and by its predisposition to scratching. Combined with modern techniques of polishing, a doublet opal can produce an effect similar to natural black or boulder opal at a fraction of the price. Doublet opal also has the added benefit of having genuine opal as the top visible and touchable layer, unlike triplet opals. Common opal Besides the gemstone varieties that show a play of color, the other kinds of common opal include the milk opal, milky bluish to greenish (which can sometimes be of gemstone quality); resin opal, which is honey-yellow with a resinous luster; wood opal, which is caused by the replacement of the organic material in wood with opal; menilite, which is brown or grey; hyalite, a colorless glass-clear opal sometimes called Muller's glass; geyserite, also called siliceous sinter, deposited around hot springs or geysers; and diatomaceous earth, the accumulations of diatom shells or tests. Common opal often displays a hazy-milky-turbid sheen from within the stone. In gemology, this optical effect is strictly defined as opalescence, which is a form of adularescence. Varieties of common opal "Girasol opal" is a term sometimes mistakenly used to refer to fire opals, as well as to a type of transparent to semitransparent milky quartz from Madagascar which displays an asterism, or star effect, when cut properly. However, the true girasol opal is a type of hyalite opal that exhibits a bluish glow or sheen that follows the light source around. It is not a play of color as seen in precious opal, but rather an effect from microscopic inclusions. It is also sometimes referred to as water opal when it is from Mexico. The two most notable locations of this type of opal are Oregon and Mexico. A Peruvian opal (also called blue opal) is a semi-opaque to opaque blue-green stone found in Peru, which is often cut to include the matrix in the more opaque stones. It does not display a play of color. Blue opal also comes from Oregon and Idaho in the Owyhee region, as well as from Nevada around the Virgin Valley. Opal is also formed by diatoms. Diatoms are a form of algae that, when they die, often form layers at the bottoms of lakes, bays, or oceans. Their cell walls are made up of hydrated silicon dioxide, which gives them structural coloration and therefore the appearance of tiny opals when viewed under a microscope. These cell walls, or "tests", form the grains of diatomaceous earth. This sedimentary rock is white, opaque, and chalky in texture. Diatomite has multiple industrial uses, such as filtration and adsorption, owing to its fine particle size and very porous nature, and it is used in gardening to increase water absorption. History Opal was rare and very valuable in antiquity. In Europe, it was a gem prized by royalty. Until the opening of vast deposits in Australia in the 19th century, the only known source was beyond the Roman frontier in Slovakia. Opal is the national gemstone of Australia. 
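Returning briefly to the diffraction physics described under Precious opal, the short Python sketch below evaluates the Bragg condition to estimate which wavelength is reflected for a given spacing between sphere planes. The 250 nm spacing is an illustrative assumption, and the simple form used here ignores the refractive index of the silica-water composite, so the numbers are only indicative of how the color shifts as a stone is turned.

import math

def bragg_wavelength_nm(spacing_nm, angle_deg, order=1):
    # Bragg condition: order * wavelength = 2 * spacing * sin(angle),
    # with the angle measured between the incident ray and the planes.
    return 2.0 * spacing_nm * math.sin(math.radians(angle_deg)) / order

# Sweeping the viewing angle shows the reflected color changing,
# e.g. about 433 nm (blue) at 60 degrees and 500 nm (green) at 90 degrees.
for angle in (30, 60, 90):
    print(angle, "deg ->", round(bragg_wavelength_nm(250, angle)), "nm")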
Sources The primary sources of opal are Australia and Ethiopia, but because of inconsistent and widely varying accountings of their respective levels of extraction, it is difficult to accurately state what proportion of the global supply of opal comes from either country. Australian opal has been cited as accounting for 95–97% of the world's supply of precious opal, with the state of South Australia accounting for 80% of the world's supply. In 2012, Ethiopian opal production was estimated to be by the United States Geological Survey. USGS data from the same period (2012), reveals Australian opal production to be $41 million. Because of the units of measurement, it is not possible to directly compare Australian and Ethiopian opal production, but these data and others suggest that the traditional percentages given for Australian opal production may be overstated. Yet, the validity of data in the USGS report appears to conflict with that of Laurs et al. and Mesfin, who estimated the 2012 Ethiopian opal output (from Wegeltena) to be only . Australia The town of Coober Pedy in South Australia is a major source of opal. The world's largest and most valuable gem opal "Olympic Australis" was found in August 1956 at the "Eight Mile" opal field in Coober Pedy. It weighs and is long, with a height of and a width of . The Mintabie Opal Field in South Australia located about northwest of Coober Pedy has also produced large quantities of crystal opal and the rarer black opal. Over the years, it has been sold overseas incorrectly as Coober Pedy opal. The black opal is said to be some of the best examples found in Australia. Andamooka in South Australia is also a major producer of matrix opal, crystal opal, and black opal. Another Australian town, Lightning Ridge in New South Wales, is the main source of black opal, opal containing a predominantly dark background (dark gray to blue-black displaying the play of color), collected from the Griman Creek Formation. Boulder opal consists of concretions and fracture fillings in a dark siliceous ironstone matrix. It is found sporadically in western Queensland, from Kynuna in the north, to Yowah and Koroit in the south. Its largest quantities are found around Jundah and Quilpie in South West Queensland. Australia also has opalized fossil remains, including dinosaur bones in New South Wales and South Australia, and marine creatures in South Australia. Ethiopia It has been reported that Northern African opal was used to make tools as early as 4000 BC. The first published report of gem opal from Ethiopia appeared in 1994, with the discovery of precious opal in the Menz Gishe District, North Shewa Province. The opal, found mostly in the form of nodules, was of volcanic origin and was found predominantly within weathered layers of rhyolite. This Shewa Province opal was mostly dark brown in color and had a tendency to crack. These qualities made it unpopular in the gem trade. In 2008, a new opal deposit was found approximately 180 km north of Shewa Province, near the town of Wegeltena, in Ethiopia's Wollo Province. The Wollo Province opal was different from the previous Ethiopian opal finds in that it more closely resembled the sedimentary opals of Australia and Brazil, with a light background and often vivid play-of-color. Wollo Province opal, more commonly referred to as "Welo" or "Wello" opal, has become the dominant Ethiopian opal in the gem trade. 
Virgin Valley, Nevada The Virgin Valley opal fields of Humboldt County in northern Nevada produce a wide variety of precious black, crystal, white, fire, and lemon opal. The black fire opal is the official gemstone of Nevada. Most of the precious opal is partial wood replacement. The precious opal is hosted and found in situ within a subsurface horizon or zone of bentonite, which is considered a "lode" deposit. Opals which have weathered out of the in situ deposits are alluvial and considered placer deposits. Miocene-age opalised teeth, bones, fish, and a snake head have been found. Some of the opal has high water content and may desiccate and crack when dried. The largest producing mines of Virgin Valley have been the famous Rainbow Ridge, Royal Peacock, Bonanza, Opal Queen, and WRT Stonetree/Black Beauty mines. The largest unpolished black opal in the Smithsonian Institution, known as the "Roebling opal", came out of the tunneled portion of the Rainbow Ridge Mine in 1917, and weighs . The largest polished black opal in the Smithsonian Institution comes from the Royal Peacock opal mine in the Virgin Valley, weighing , known as the "Black Peacock". Mexico Fire opal is a transparent to translucent opal with warm body colors of yellow to orange to red. Although fire opals do not usually show any play of color, they occasionally exhibit bright green flashes. The most famous source of fire opals is the state of Querétaro in Mexico; these opals are commonly called Mexican fire opals. Fire opals that do not show a play of color are sometimes referred to as jelly opals. Mexican opals are sometimes cut in their rhyolitic host material if it is hard enough to allow cutting and polishing. This type of Mexican opal is referred to as a Cantera opal. Another type of opal from Mexico, referred to as Mexican water opal, is a colorless opal that exhibits either a bluish or golden internal sheen. Opal occurs in significant quantity and variety in central Mexico, where mining and production first originated in the state of Querétaro. In this region the opal deposits are located mainly in the mountain ranges of three municipalities: Colón, Tequisquiapan, and Ezequiel Montes. From the 1960s through the mid-1970s, the Querétaro mines were worked intensively. Today's opal miners report that it was much easier then to find quality opals with a lot of fire and play of color, whereas today gem-quality opals are very hard to come by and command hundreds of US dollars or more. The orange-red background color is characteristic of all "fire opals", including "Mexican fire opal". The oldest mine in Querétaro is Santa Maria del Iris. This mine was opened around 1870 and has been reopened at least 28 times since. There are currently about 100 mines in the regions around Querétaro, but most of them are now closed. The best quality of opals came from the mine Santa Maria del Iris, followed by La Hacienda la Esperanza, Fuentezuelas, La Carbonera, and La Trinidad. Important deposits in the state of Jalisco were not discovered until the late 1950s. In 1957, Alfonso Ramirez (of Querétaro) accidentally discovered the first opal mine in Jalisco: La Unica, located on the outer area of the volcano of Tequila, near the Huitzicilapan farm in Magdalena. By 1960 there were around 500 known opal mines in this region alone. 
Another region of the country that also produces opals (of lesser quality) is Guerrero, which produces an opaque opal similar to the opals from Australia (some of these opals are carefully treated with heat to improve their colors, so high-quality opals from this area may be suspect). There are also some small opal mines in Morelos, Durango, Chihuahua, Baja California, Guanajuato, Puebla, Michoacán, and Estado de México. Other locations Another source of white base opal or creamy opal in the United States is Spencer, Idaho. A high percentage of the opal found there occurs in thin layers. Other significant deposits of precious opal around the world can be found in the Czech Republic, Canada, Slovakia, Hungary, Turkey, Indonesia, Brazil (in Pedro II, Piauí), Honduras (more precisely in Erandique), Guatemala, and Nicaragua. In late 2008, NASA announced the discovery of opal deposits on Mars. Fossil opal Wood opal, also known as xylopal, is a type of petrified wood that has developed an opalescent sheen or, more rarely, in which the wood has been completely replaced by opal. Other names for this opalized sheen-like wood are opalized wood and opalized petrified wood. It is often used as a gemstone. Synthetic opal Opals of all varieties have been synthesized experimentally and commercially. The discovery of the ordered sphere structure of precious opal led to its synthesis by Pierre Gilson in 1974. The resulting material is distinguishable from natural opal by its regularity; under magnification, the patches of color are seen to be arranged in a "lizard skin" or "chicken wire" pattern. Furthermore, synthetic opals do not fluoresce under ultraviolet light. Synthetics are also generally lower in density and are often highly porous. Opals which have been created in a laboratory are often termed "lab-created opals", which, while classifiable as man-made and synthetic, are very different from their resin-based counterparts which are also considered man-made and synthetic. The term "synthetic" implies that a stone has been created to be chemically and structurally indistinguishable from a genuine one, and genuine opal contains no resins or polymers. The finest modern lab-created opals do not exhibit the lizard skin or columnar patterning of earlier lab-created varieties, and their patterns are non-directional. They can still be distinguished from genuine opals, however, by their lack of inclusions and the absence of any surrounding non-opal matrix. While many genuine opals are cut and polished without a matrix, the presence of irregularities in their play-of-color continues to mark them as distinct from even the best lab-created synthetics. Other research in macroporous structures has yielded highly ordered materials that have similar optical properties to opals and have been used in cosmetics. Synthetic opals are also deeply investigated in photonics for sensing and light management purposes. Local atomic structure The lattice of spheres of opal that cause interference with light is several hundred times larger than the fundamental structure of crystalline silica. As a mineraloid, no unit cell describes the structure of opal. Nevertheless, opals can be roughly divided into those that show no signs of crystalline order (amorphous opal) and those that show signs of the beginning of crystalline order, commonly termed cryptocrystalline or microcrystalline opal. 
Dehydration experiments and infrared spectroscopy have shown that most of the H2O in the formula SiO2·nH2O of opals is present in the familiar form of clusters of molecular water. Isolated water molecules and silanol groups (SiOH) generally form a lesser proportion of the total and can reside near the surface or in defects inside the opal. The structure of low-pressure polymorphs of anhydrous silica consists of frameworks of fully corner-bonded tetrahedra of SiO4. The higher-temperature silica polymorphs cristobalite and tridymite are frequently the first to crystallize from amorphous anhydrous silica, and the local structures of microcrystalline opals also appear to be closer to those of cristobalite and tridymite than to quartz. The structures of tridymite and cristobalite are closely related and can be described as hexagonal and cubic close-packed layers. It is therefore possible to have intermediate structures in which the layers are not regularly stacked. Microcrystalline opal Microcrystalline opal or opal-CT has been interpreted as consisting of clusters of stacked cristobalite and tridymite over very short length scales. The spheres of opal in microcrystalline opal are themselves made up of tiny nanocrystalline blades of cristobalite and tridymite. Microcrystalline opal has occasionally been further subdivided in the literature. Water content may be as high as 10 wt%. Opal-CT, also called lussatine or lussatite, is interpreted as consisting of localized order of α-cristobalite with a lot of stacking disorder. Typical water content is about 1.5 wt%. Noncrystalline opal Two broad categories of noncrystalline opals, sometimes just referred to as "opal-A" ("A" stands for "amorphous"), have been proposed. The first of these is opal-AG, consisting of aggregated spheres of silica, with water filling the space in between. Precious opal and potch opal are generally varieties of this, the difference being in the regularity of the sizes of the spheres and their packing. The second "opal-A" is opal-AN, or water-containing amorphous silica-glass. Hyalite is another name for this. Noncrystalline silica in siliceous sediments is reported to gradually transform to opal-CT and then opal-C as a result of diagenesis, due to the increasing overburden pressure in sedimentary rocks, as some of the stacking disorder is removed. Opal surface chemical groups The surface of opal in contact with water is covered by siloxane bonds (≡Si–O–Si≡) and silanol groups (≡Si–OH). This makes the opal surface very hydrophilic and capable of forming numerous hydrogen bonds. Etymology The word 'opal' is adapted from the Latin term opalus. The origin of this word in turn is a matter of debate, but most modern references suggest it is adapted from the Sanskrit word upala, meaning 'precious stone'. As references to the gem are made by Pliny the Elder, one theory attributes the name's origin to Roman mythology: opalus is held to have been adapted from Ops, the wife of Saturn and goddess of fertility. (The portion of Saturnalia devoted to Ops was "Opalia", similar to opalus.) Another common claim was that the term was adapted from the Ancient Greek word opallios. This word has two meanings: one is related to "seeing" and forms the basis of English words like "opaque"; the other is "other", as in "alias" and "alter". It is claimed that opallios combined these uses, meaning "to see a change in color". 
However, historians have noted the first appearances of opallios do not occur until after the Romans had taken over the Greek states in 180 BC, and they had previously used the term paederos. However, the argument for the Sanskrit origin is strong. The term opalus first appears in Roman references around 250 BC, at a time when the opal was valued above all other gems. The opals were supplied by traders from the Bosporus, who claimed the gems were being supplied from India. Before this, the stone was referred to by a variety of names, but these fell from use after 250 BC. Historical superstitions In the Middle Ages, opal was considered a stone that could provide great luck because it was believed to possess all the virtues of each gemstone whose color was represented in the color spectrum of the opal. It was also said to grant invisibility if wrapped in a fresh bay leaf and held in the hand. As a result, the opal was seen as the patron gemstone for thieves during the medieval period. Following the publication of Sir Walter Scott's Anne of Geierstein in 1829, opal acquired a less auspicious reputation. In Scott's novel, the Baroness of Arnheim wears an opal talisman with supernatural powers. When a drop of holy water falls on the talisman, the opal turns into a colorless stone and the Baroness dies soon thereafter. Due to the popularity of Scott's novel, people began to associate opals with bad luck and death. Within a year of the novel's publication in April 1829, the sale of opals in Europe dropped by 50%, and remained low for the next 20 years or so. Even as recently as the beginning of the 20th century, it was believed that when a Russian saw an opal among other goods offered for sale, he or she should not buy anything more, as the opal was believed to embody the evil eye. Opal is considered the birthstone for people born in October. Examples The Olympic Australis, the world's largest and most valuable gem opal, found in Coober Pedy The Andamooka Opal, presented to Queen Elizabeth II, also known as the Queen's Opal The Addyman Plesiosaur from Andamooka, "the finest known opalised skeleton on Earth" The Burning of Troy, the now-lost opal presented to Joséphine de Beauharnais by Napoleon I of France and the first named opal The Flame Queen Opal Opal cameo (jewellery-case) of a profile head of a helmeted warrior, attributed to Wilhelm Schmidt The Halley's Comet Opal, the world's largest uncut black opal Although the clock faces above the information stand in Grand Central Terminal in New York City are often said to be opal, they are in fact opalescent glass The Roebling Opal, Smithsonian Institution The Galaxy Opal, listed as the "World's Largest Polished Opal" in the 1992 Guinness Book of Records The Rainbow Virgin, "the finest crystal opal specimen ever unearthed" The Sea of Opal, the largest black opal in the world The Fire of Australia, assumed to be "the finest uncut opal in existence" Beverly the Bug, the first known example of an opal with an insect inclusion See also Biogenic silica Labradorite Uncut Gems (2019 film) References External links Farlang opal Hist. References Localities, anecdotes by Theophrastus, Isaac Newton, Georg Agricola etc. ICA's Opal Page: International Colored Stone Association Opal Fossils from the South Australian Museum Accessed 19 October 2016. 
Opal Mineral data and specimen images Mineralogy Database Opalworld Australian Opal Fields – Map of precious opal deposits Emblems of South Australia Glass in nature Hydrates National symbols of Australia Silica polymorphs Symbols of New South Wales
Opal
[ "Chemistry", "Materials_science" ]
5,465
[ "Silica polymorphs", "Polymorphism (materials science)", "Hydrates" ]
54,536
https://en.wikipedia.org/wiki/Citric%20acid
Citric acid is an organic compound with the formula C6H8O7. It is a colorless weak organic acid. It occurs naturally in citrus fruits. In biochemistry, it is an intermediate in the citric acid cycle, which occurs in the metabolism of all aerobic organisms. More than two million tons of citric acid are manufactured every year. It is used widely as an acidifier, flavoring, preservative, and chelating agent. A citrate is a derivative of citric acid; that is, the salts, esters, and the polyatomic anion found in solutions and salts of citric acid. An example of the former, a salt, is trisodium citrate; an example of an ester is triethyl citrate. When the citrate trianion is part of a salt, its formula is written as C6H5O7^3− or C3H5O(COO)3^3−. Natural occurrence and industrial production Citric acid occurs in a variety of fruits and vegetables, most notably citrus fruits. Lemons and limes have particularly high concentrations of the acid; it can constitute as much as 8% of the dry weight of these fruits (about 47 g/L in the juices). The concentrations of citric acid in citrus fruits range from 0.005 mol/L for oranges and grapefruits to 0.30 mol/L in lemons and limes; these values vary within species depending upon the cultivar and the circumstances under which the fruit was grown. Citric acid was first isolated in 1784 by the chemist Carl Wilhelm Scheele, who crystallized it from lemon juice. Industrial-scale citric acid production first began in 1890, based on the Italian citrus fruit industry, where the juice was treated with hydrated lime (calcium hydroxide) to precipitate calcium citrate, which was isolated and converted back to the acid using diluted sulfuric acid. In 1893, C. Wehmer discovered that Penicillium mold could produce citric acid from sugar. However, microbial production of citric acid did not become industrially important until World War I disrupted Italian citrus exports. In 1917, American food chemist James Currie discovered that certain strains of the mold Aspergillus niger could be efficient citric acid producers, and the pharmaceutical company Pfizer began industrial-level production using this technique two years later, followed by Citrique Belge in 1929. In this production technique, which is still the major industrial route to citric acid used today, cultures of Aspergillus niger are fed on a sucrose- or glucose-containing medium to produce citric acid. The source of sugar is corn steep liquor, molasses, hydrolyzed corn starch, or another inexpensive carbohydrate solution. After the mold is filtered out of the resulting suspension, citric acid is isolated by precipitating it with calcium hydroxide to yield calcium citrate salt, from which citric acid is regenerated by treatment with sulfuric acid, as in the direct extraction from citrus fruit juice. In 1977, a patent was granted to Lever Brothers for the chemical synthesis of citric acid starting either from aconitate or isocitrate (also called alloisocitrate) calcium salts under high-pressure conditions; this produced citric acid in near-quantitative conversion under what appeared to be a reverse, non-enzymatic Krebs cycle reaction. Global production was in excess of 2,000,000 tons in 2018. More than 50% of this volume was produced in China. More than 50% was used as an acidity regulator in beverages, some 20% in other food applications, 20% for detergent applications, and 10% for applications other than food, such as cosmetics, pharmaceuticals, and in the chemical industry. 
Chemical characteristics Citric acid can be obtained as an anhydrous (water-free) form or as a monohydrate. The anhydrous form crystallizes from hot water, while the monohydrate forms when citric acid is crystallized from cold water. The monohydrate can be converted to the anhydrous form at about 78 °C. Citric acid also dissolves in absolute (anhydrous) ethanol (76 parts of citric acid per 100 parts of ethanol) at 15 °C. It decomposes with loss of carbon dioxide above about 175 °C. Citric acid is a triprotic acid, with pKa values, extrapolated to zero ionic strength, of 3.128, 4.761, and 6.396 at 25 °C. The pKa of the hydroxyl group has been found, by means of 13C NMR spectroscopy, to be 14.4. The speciation diagram shows that solutions of citric acid are buffer solutions between about pH 2 and pH 8. In biological systems around pH 7, the two species present are the citrate ion and mono-hydrogen citrate ion. The SSC 20X hybridization buffer is an example in common use. Tables compiled for biochemical studies are available. For example, the pH of a 1 mM solution of citric acid is about 3.2. The pH of fruit juices from citrus fruits like oranges and lemons depends on the citric acid concentration, with a higher concentration of citric acid resulting in a lower pH. Acid salts of citric acid can be prepared by careful adjustment of the pH before crystallizing the compound. See, for example, sodium citrate. The citrate ion forms complexes with metallic cations. The stability constants for the formation of these complexes are quite large because of the chelate effect. Consequently, it forms complexes even with alkali metal cations. However, when a chelate complex is formed using all three carboxylate groups, the chelate rings have 7 and 8 members, which are generally less stable thermodynamically than smaller chelate rings. In consequence, the hydroxyl group can be deprotonated, forming part of a more stable 5-membered ring, as in ammonium ferric citrate. Citric acid can be esterified at one or more of its three carboxylic acid groups to form any of a variety of mono-, di-, tri-, and mixed esters. Biochemistry Citric acid cycle Citrate is an intermediate in the citric acid cycle, also known as the TCA (TriCarboxylic Acid) cycle or the Krebs cycle, a central metabolic pathway for animals, plants, and bacteria. In the Krebs cycle, citrate synthase catalyzes the condensation of oxaloacetate with acetyl CoA to form citrate. Citrate then acts as the substrate for aconitase and is converted into aconitic acid. The cycle ends with regeneration of oxaloacetate. This series of chemical reactions is the source of two-thirds of the food-derived energy in higher organisms. The chemical energy released is made available in the form of adenosine triphosphate (ATP). Hans Adolf Krebs received the 1953 Nobel Prize in Physiology or Medicine for the discovery. Other biological roles Citrate can be transported out of the mitochondria and into the cytoplasm, then broken down into acetyl-CoA for fatty acid synthesis, and into oxaloacetate. Citrate is a positive modulator of this conversion, and allosterically regulates the enzyme acetyl-CoA carboxylase, which is the regulating enzyme in the conversion of acetyl-CoA into malonyl-CoA (the commitment step in fatty acid synthesis). In short, citrate is transported into the cytoplasm, converted into acetyl-CoA, which is then converted into malonyl-CoA by acetyl-CoA carboxylase, which is allosterically modulated by citrate.
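As a numerical companion to the acid-base behaviour described under Chemical characteristics above, the following Python sketch computes the mole fractions of the four protonation states of citric acid (abbreviated H3A here) at a given pH from the three pKa values quoted in the text. It is an illustration only: it assumes ideal behaviour (activity coefficients of one) and ignores the weakly acidic hydroxyl proton.

    PKAS = [3.128, 4.761, 6.396]          # successive pKa values at 25 °C (from the text)
    KAS = [10.0 ** -pk for pk in PKAS]

    def speciation(ph):
        """Return mole fractions of (H3A, H2A-, HA2-, A3-) at the given pH."""
        h = 10.0 ** -ph
        terms = [1.0]                     # relative concentrations, [H3A] as the reference
        for ka in KAS:
            terms.append(terms[-1] * ka / h)   # each dissociation multiplies by Ka/[H+]
        total = sum(terms)
        return [t / total for t in terms]

    for ph in (2, 4, 7):
        print(ph, ["%.2f" % f for f in speciation(ph)])

At pH 7 the output is dominated by the HA2- and A3- fractions, consistent with the statement above that citrate and mono-hydrogen citrate are the species present in biological systems around that pH.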
High concentrations of cytosolic citrate can inhibit phosphofructokinase, the catalyst of a rate-limiting step of glycolysis. This effect is advantageous: high concentrations of citrate indicate that there is a large supply of biosynthetic precursor molecules, so there is no need for phosphofructokinase to continue to send molecules of its substrate, fructose 6-phosphate, into glycolysis. Citrate acts by augmenting the inhibitory effect of high concentrations of ATP, another sign that there is no need to carry out glycolysis. Citrate is a vital component of bone, helping to regulate the size of apatite crystals. Applications Food and drink Because it is one of the stronger edible acids, the dominant use of citric acid is as a flavoring and preservative in food and beverages, especially soft drinks and candies. Within the European Union it is denoted by E number E330. Citrate salts of various metals are used to deliver those minerals in a biologically available form in many dietary supplements. Citric acid has 247 kcal per 100 g. In the United States the purity requirements for citric acid as a food additive are defined by the Food Chemicals Codex, which is published by the United States Pharmacopoeia (USP). Citric acid can be added to ice cream as an emulsifying agent to keep fats from separating, to caramel to prevent sucrose crystallization, or in recipes in place of fresh lemon juice. Citric acid is used with sodium bicarbonate in a wide range of effervescent formulae, both for ingestion (e.g., powders and tablets) and for personal care (e.g., bath salts, bath bombs, and cleaning of grease). Citric acid in dry powdered form is commonly sold in markets and groceries as "sour salt", due to its physical resemblance to table salt. It is used in culinary applications, as an alternative to vinegar or lemon juice, where a pure acid is needed. Citric acid can be used in food coloring to balance the pH level of a normally basic dye. Cleaning and chelating agent Citric acid is an excellent chelating agent, binding metals by making them soluble. It is used to remove and discourage the buildup of limescale from boilers and evaporators. It can be used to treat water, which makes it useful in improving the effectiveness of soaps and laundry detergents. By chelating the metals in hard water, it lets these cleaners produce foam and work better without need for water softening. Citric acid is the active ingredient in some bathroom and kitchen cleaning solutions. A solution with a six percent concentration of citric acid will remove hard water stains from glass without scrubbing. Citric acid can be used in shampoo to wash out wax and coloring from the hair. Illustrative of its chelating abilities, citric acid was the first successful eluant used for total ion-exchange separation of the lanthanides, during the Manhattan Project in the 1940s. In the 1950s, it was replaced by the far more efficient EDTA. In industry, it is used to dissolve rust from steel, and to passivate stainless steels. Cosmetics, pharmaceuticals, dietary supplements, and foods Citric acid is used as an acidulant in creams, gels, and liquids. Used in foods and dietary supplements, it may be classified as a processing aid if it was added for a technical or functional effect (e.g. acidulant, chelator, viscosifier, etc.). If it is still present in insignificant amounts, and the technical or functional effect is no longer present, it may be exempt from labeling (21 CFR §101.100(c)).
Citric acid is an alpha hydroxy acid and is an active ingredient in chemical skin peels. Citric acid is commonly used as a buffer to increase the solubility of brown heroin. Citric acid is used as one of the active ingredients in the production of facial tissues with antiviral properties. Other uses The buffering properties of citrates are used to control pH in household cleaners and pharmaceuticals. Citric acid is used as an odorless alternative to white vinegar for fabric dyeing with acid dyes. Sodium citrate is a component of Benedict's reagent, used for both qualitative and quantitative identification of reducing sugars. Citric acid can be used as an alternative to nitric acid in passivation of stainless steel. Citric acid can be used as a lower-odor stop bath as part of the process for developing photographic film. Photographic developers are alkaline, so a mild acid is used to neutralize and stop their action quickly, but commonly used acetic acid leaves a strong vinegar odor in the darkroom. Citric acid is an excellent soldering flux, either dry or as a concentrated solution in water. It should be removed after soldering, especially with fine wires, as it is mildly corrosive. It dissolves and rinses quickly in hot water. Alkali citrate can be used as an inhibitor of kidney stones by increasing urine citrate levels, useful for prevention of calcium stones, and increasing urine pH, useful for preventing uric acid and cystine stones. Synthesis of other organic compounds Citric acid is a versatile precursor to many other organic compounds. Dehydration routes give itaconic acid and its anhydride. Citraconic acid can be produced via thermal isomerization of itaconic acid anhydride. The required itaconic acid anhydride is obtained by dry distillation of citric acid. Aconitic acid can be synthesized by dehydration of citric acid using sulfuric acid: (HO2CCH2)2C(OH)CO2H → HO2CCH=C(CO2H)CH2CO2H + H2O Acetonedicarboxylic acid can also be prepared by decarboxylation of citric acid in fuming sulfuric acid. Safety Although a weak acid, exposure to pure citric acid can cause adverse effects. Inhalation may cause cough, shortness of breath, or sore throat. Over-ingestion may cause abdominal pain and sore throat. Exposure of concentrated solutions to skin and eyes can cause redness and pain. Long-term or repeated consumption may cause erosion of tooth enamel. Compendial status British Pharmacopoeia Japanese Pharmacopoeia See also Closely related acids: isocitric acid, aconitic acid, and propane-1,2,3-tricarboxylic acid (tricarballylic acid, carballylic acid) Acids in wine Explanatory notes References External links Alpha hydroxy acids Chelating agents Citrus Condiments E-number additives Food acidity regulators Household chemicals Photographic chemicals Tricarboxylic acids
Citric acid
[ "Chemistry" ]
3,103
[ "Chelating agents", "Process chemicals" ]
54,648
https://en.wikipedia.org/wiki/Solar%20flare
A solar flare is a relatively intense, localized emission of electromagnetic radiation in the Sun's atmosphere. Flares occur in active regions and are often, but not always, accompanied by coronal mass ejections, solar particle events, and other eruptive solar phenomena. The occurrence of solar flares varies with the 11-year solar cycle. Solar flares are thought to occur when stored magnetic energy in the Sun's atmosphere accelerates charged particles in the surrounding plasma. This results in the emission of electromagnetic radiation across the electromagnetic spectrum. The extreme ultraviolet and X-ray radiation from solar flares is absorbed by the daylight side of Earth's upper atmosphere, in particular the ionosphere, and does not reach the surface. This absorption can temporarily increase the ionization of the ionosphere which may interfere with short-wave radio communication. The prediction of solar flares is an active area of research. Flares also occur on other stars, where the term stellar flare applies. Physical description Solar flares are eruptions of electromagnetic radiation originating in the Sun's atmosphere. They affect all layers of the solar atmosphere (photosphere, chromosphere, and corona). The plasma medium is heated to >10^7 kelvin, while electrons, protons, and heavier ions are accelerated to near the speed of light. Flares emit electromagnetic radiation across the electromagnetic spectrum, from radio waves to gamma rays. Flares occur in active regions, often around sunspots, where intense magnetic fields penetrate the photosphere to link the corona to the solar interior. Flares are powered by the sudden (timescales of minutes to tens of minutes) release of magnetic energy stored in the corona. The same energy releases may also produce coronal mass ejections (CMEs), although the relationship between CMEs and flares is not well understood. Associated with solar flares are flare sprays. They involve faster ejections of material than eruptive prominences, and reach velocities of 20 to 2000 kilometers per second. Cause Flares occur when accelerated charged particles, mainly electrons, interact with the plasma medium. Evidence suggests that the phenomenon of magnetic reconnection leads to this extreme acceleration of charged particles. On the Sun, magnetic reconnection may happen on solar arcades – a type of prominence consisting of a series of closely occurring loops following magnetic lines of force. These lines of force quickly reconnect into a lower arcade of loops leaving a helix of magnetic field unconnected to the rest of the arcade. The sudden release of energy in this reconnection is the origin of the particle acceleration. The unconnected magnetic helical field and the material that it contains may violently expand outwards forming a coronal mass ejection. This also explains why solar flares typically erupt from active regions on the Sun where magnetic fields are much stronger. Although there is a general agreement on the source of a flare's energy, the mechanisms involved are not well understood. It is not clear how the magnetic energy is transformed into the kinetic energy of the particles, nor is it known how some particles can be accelerated to the GeV range (10^9 electronvolts) and beyond. There are also some inconsistencies regarding the total number of accelerated particles, which sometimes seems to be greater than the total number in the coronal loop.
Post-eruption loops and arcades After the eruption of a solar flare, post-eruption loops made of hot plasma begin to form across the neutral line separating regions of opposite magnetic polarity near the flare's source. These loops extend from the photosphere up into the corona and form along the neutral line at increasingly greater distances from the source as time progresses. The existence of these hot loops is thought to be continued by prolonged heating present after the eruption and during the flare's decay stage. In sufficiently powerful flares, typically of C-class or higher, the loops may combine to form an elongated arch-like structure known as a post-eruption arcade. These structures may last anywhere from multiple hours to multiple days after the initial flare. In some cases, dark sunward-traveling plasma voids known as supra-arcade downflows may form above these arcades. Frequency The frequency of occurrence of solar flares varies with the 11-year solar cycle. It can typically range from several per day during solar maxima to less than one every week during solar minima. Additionally, more powerful flares are less frequent than weaker ones. For example, X10-class (severe) flares occur on average about eight times per cycle, whereas M1-class (minor) flares occur on average about 2000 times per cycle. Erich Rieger and coworkers discovered in 1984 an approximately 154-day period in the occurrence of gamma-ray-emitting solar flares, present at least since solar cycle 19. The period has since been confirmed in most heliophysics data and the interplanetary magnetic field and is commonly known as the Rieger period. The period's resonance harmonics also have been reported from most data types in the heliosphere. The frequency distributions of various flare phenomena can be characterized by power-law distributions. For example, the peak fluxes of radio, extreme ultraviolet, and hard and soft X-ray emissions; total energies; and flare durations have been found to follow power-law distributions. Classification Soft X-ray The modern classification system for solar flares uses the letters A, B, C, M, or X, according to the peak flux in watts per square metre (W/m2) of soft X-rays with wavelengths of 0.1 to 0.8 nanometres (1 to 8 ångströms), as measured by GOES satellites in geosynchronous orbit. The strength of an event within a class is noted by a numerical suffix ranging from 1 up to, but excluding, 10, which is also the factor for that event within the class. Hence, an X2 flare is twice the strength of an X1 flare, an X3 flare is three times as powerful as an X1. M-class flares are a tenth the size of X-class flares with the same numeric suffix. An X2 is four times more powerful than an M5 flare. X-class flares with a peak flux that exceeds 10^-3 W/m2 may be noted with a numerical suffix equal to or greater than 10. This system was originally devised in 1970 and included only the letters C, M, and X. These letters were chosen to avoid confusion with other optical classification systems. The A and B classes were added in the 1990s as instruments became more sensitive to weaker flares. Around the same time, the backronym moderate for M-class flares and extreme for X-class flares began to be used. Importance An earlier classification system, sometimes referred to as the flare importance, was based on H-alpha spectral observations. The scheme uses both the intensity and emitting surface. The classification in intensity is qualitative, referring to the flares as: faint (f), normal (n), or brilliant (b).
The emitting surface is measured in terms of millionths of the hemisphere and is described below. (The total hemisphere area AH = 15.5 × 10^12 km2.) A flare is then classified by taking S or a number that represents its size together with a letter that represents its peak intensity, e.g.: Sn is a normal subflare. Duration A common measure of flare duration is the full width at half maximum (FWHM) time of flux in the soft X-ray bands measured by GOES. The FWHM time spans from when a flare's flux first reaches halfway between its maximum flux and the background flux to when it again reaches this value as the flare decays. Using this measure, the duration of a flare ranges from approximately tens of seconds to several hours with a median duration of approximately 6 and 11 minutes in the 0.05–0.4 nm and 0.1–0.8 nm bands, respectively. Flares can also be classified based on their duration as either impulsive or long duration events (LDE). The time threshold separating the two is not well defined. The SWPC regards events requiring 30 minutes or more to decay to half maximum as LDEs, whereas Belgium's Solar-Terrestrial Centre of Excellence regards events with duration greater than 60 minutes as LDEs. Effects The electromagnetic radiation emitted during a solar flare propagates away from the Sun at the speed of light with intensity inversely proportional to the square of the distance from its source region. The excess ionizing radiation, namely X-ray and extreme ultraviolet (XUV) radiation, is known to affect planetary atmospheres and is of relevance to human space exploration and the search for extraterrestrial life. Solar flares also affect other objects in the Solar System. Research into these effects has primarily focused on the atmosphere of Mars and, to a lesser extent, that of Venus. The impacts on other planets in the Solar System are little studied in comparison. As of 2024, research on their effects on Mercury has been limited to modeling of the response of ions in the planet's magnetosphere, and their impact on Jupiter and Saturn has only been studied in the context of X-ray radiation back scattering off of the planets' upper atmospheres. Ionosphere Enhanced XUV irradiance during solar flares can result in increased ionization, dissociation, and heating in the ionospheres of Earth and Earth-like planets. On Earth, these changes to the upper atmosphere, collectively referred to as sudden ionospheric disturbances, can interfere with short-wave radio communication and global navigation satellite systems (GNSS) such as GPS, and subsequent expansion of the upper atmosphere can increase drag on satellites in low Earth orbit leading to orbital decay over time. Flare-associated XUV photons interact with and ionize neutral constituents of planetary atmospheres via the process of photoionization. The electrons that are freed in this process, referred to as photoelectrons to distinguish them from the ambient ionospheric electrons, are left with kinetic energies equal to the photon energy in excess of the ionization threshold. In the lower ionosphere where flare impacts are greatest and transport phenomena are less important, the newly liberated photoelectrons lose energy primarily via thermalization with the ambient electrons and neutral species and via secondary ionization due to collisions with the latter, or so-called photoelectron impact ionization. In the process of thermalization, photoelectrons transfer energy to neutral species, resulting in heating and expansion of the neutral atmosphere.
The greatest increases in ionization occur in the lower ionosphere where wavelengths with the greatest relative increase in irradiance—the highly penetrative X-ray wavelengths—are absorbed, corresponding to Earth's E and D layers and Mars's M1 layer. Radio blackouts The temporary increase in ionization of the daylight side of Earth's atmosphere, in particular the D layer of the ionosphere, can interfere with short-wave radio communications that rely on its level of ionization for skywave propagation. Skywave, or skip, refers to the propagation of radio waves reflected or refracted off of the ionized ionosphere. When ionization is higher than normal, radio waves are degraded or completely absorbed by losing energy from the more frequent collisions with free electrons. The level of ionization of the atmosphere correlates with the strength of the associated solar flare in soft X-ray radiation. The Space Weather Prediction Center, a part of the United States National Oceanic and Atmospheric Administration, classifies radio blackouts by the peak soft X-ray intensity of the associated flare. Solar flare effect During non-flaring or solar quiet conditions, electric currents flow through the ionosphere's dayside E layer inducing small-amplitude diurnal variations in the geomagnetic field. These ionospheric currents can be strengthened during large solar flares due to increases in electrical conductivity associated with enhanced ionization of the E and D layers. The subsequent increase in the induced geomagnetic field variation is referred to as a solar flare effect (sfe) or historically as a magnetic crochet. The latter term derives from the French word for hook, reflecting the hook-like disturbances in magnetic field strength observed by ground-based magnetometers. These disturbances are on the order of a few nanoteslas and last for a few minutes, which is relatively minor compared to those induced during geomagnetic storms. Health Low Earth orbit For astronauts in low Earth orbit, an expected radiation dose from the electromagnetic radiation emitted during a solar flare is about 0.05 gray, which is not immediately lethal on its own. Of much more concern for astronauts is the particle radiation associated with solar particle events. Mars The impacts of solar flare radiation on Mars are relevant to exploration and the search for life on the planet. Models of its atmosphere indicate that the most energetic solar flares previously recorded may have provided acute doses of radiation that would have been harmful or even lethal to mammals and other higher organisms on Mars's surface. Furthermore, flares energetic enough to provide lethal doses, while not yet observed on the Sun, are thought to occur and have been observed on other Sun-like stars. Observational history Flares produce radiation across the electromagnetic spectrum, although with different intensity. They are not very intense in visible light, but they can be very bright at particular spectral lines. They normally produce bremsstrahlung in X-rays and synchrotron radiation in radio. Optical observations Solar flares were first observed by Richard Carrington and Richard Hodgson independently on 1 September 1859 by projecting the image of the solar disk produced by an optical telescope through a broad-band filter. It was an extraordinarily intense white light flare, a flare emitting a high amount of light in the visual spectrum.
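The soft X-ray classification and the FWHM duration measure described under Classification above both lend themselves to short worked examples. The Python sketch below is illustrative only (it is not an SWPC or GOES tool, and the sample numbers are invented); the class thresholds are the decade boundaries stated in the text, from A at 10^-8 W/m2 up to X at 10^-4 W/m2.

    def flare_class(peak_flux):
        """Map a GOES peak soft X-ray flux in W/m^2 to its letter class, e.g. 'M5.0'."""
        for letter, base in (("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7)):
            if peak_flux >= base:
                return f"{letter}{peak_flux / base:.1f}"
        return f"A{peak_flux / 1e-8:.1f}"          # only X suffixes may reach 10 or more

    def fwhm_duration(times, fluxes, background):
        """Time between the first and last samples at or above half maximum."""
        half = background + (max(fluxes) - background) / 2.0
        above = [t for t, f in zip(times, fluxes) if f >= half]
        return above[-1] - above[0] if above else 0.0

    print(flare_class(5e-5))                        # M5.0
    print(flare_class(2e-4))                        # X2.0
    minutes = list(range(11))
    flux = [1, 1, 2, 5, 9, 10, 8, 5, 3, 2, 1]       # invented flux values; background ~1
    print(fwhm_duration(minutes, flux, 1.0))        # 2: at or above half max from t=4 to t=6

Note how an X2 flare (2e-4 W/m2) registers four times the peak flux of an M5 flare (5e-5 W/m2), matching the ratio quoted in the text.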
Since flares produce copious amounts of radiation at H-alpha, adding a narrow (≈1 Å) passband filter centered at this wavelength to the optical telescope allows the observation of not very bright flares with small telescopes. For years Hα was the main, if not the only, source of information about solar flares. Other passband filters are also used. Radio observations During World War II, on February 25 and 26, 1942, British radar operators observed radiation that Stanley Hey interpreted as solar emission. Their discovery did not go public until the end of the conflict. The same year, Southworth also observed the Sun in radio, but as with Hey, his observations were only known after 1945. In 1943, Grote Reber was the first to report radioastronomical observations of the Sun at 160 MHz. The fast development of radioastronomy revealed new peculiarities of the solar activity like storms and bursts related to the flares. Today, ground-based radiotelescopes observe the Sun from c. 15 MHz up to 400 GHz. Space telescopes Because the Earth's atmosphere absorbs much of the electromagnetic radiation emitted by the Sun with wavelengths shorter than 300 nm, space-based telescopes allowed for the observation of solar flares in previously unobserved high-energy spectral lines. Since the 1970s, the GOES series of satellites have been continuously observing the Sun in soft X-rays, and their observations have become the standard measure of flares, diminishing the importance of the H-alpha classification. Additionally, space-based telescopes allow for the observation of extremely long wavelengths—as long as a few kilometres—which cannot propagate through the ionosphere. Examples of large solar flares The most powerful flare ever observed is thought to be the flare associated with the 1859 Carrington Event. While no soft X-ray measurements were made at the time, the magnetic crochet associated with the flare was recorded by ground-based magnetometers allowing the flare's strength to be estimated after the event. Using these magnetometer readings, its soft X-ray class has been estimated to be greater than X10 and around X45 (±5). In modern times, the largest solar flare measured with instruments occurred on 4 November 2003. This event saturated the GOES detectors, and because of this, its classification is only approximate. Initially, extrapolating the GOES curve, it was estimated to be X28. Later analysis of the ionospheric effects suggested increasing this estimate to X45. This event produced the first clear evidence of a new spectral component above 100 GHz. Prediction Current methods of flare prediction are problematic, and there is no certain indication that an active region on the Sun will produce a flare. However, many properties of active regions and their sunspots correlate with flaring. For example, magnetically complex regions (based on line-of-sight magnetic field) referred to as delta spots frequently produce the largest flares. A simple scheme of sunspot classification based on the McIntosh system for sunspot groups, or related to a region's fractal complexity is commonly used as a starting point for flare prediction. Predictions are usually stated in terms of probabilities for occurrence of flares above M- or X-class within 24 or 48 hours. The U.S. National Oceanic and Atmospheric Administration (NOAA) issues forecasts of this kind. 
MAG4 was developed at the University of Alabama in Huntsville with support from the Space Radiation Analysis Group at Johnson Space Flight Center (NASA/SRAG) for forecasting M- and X-class flares, CMEs, fast CME, and solar energetic particle events. A physics-based method that can predict imminent large solar flares was proposed by Institute for Space-Earth Environmental Research (ISEE), Nagoya University. See also References External links NOAA Space Weather Prediction Center's near real-time solar flare data and resources: GOES X-Ray Flux (1-minute data) GOES Solar Ultraviolet Imager (SUVI) D Region Absorption Predictions (D-RAP) 3-Day Forecast Forecast Discussion Solar phenomena Space plasmas Articles containing video clips Cosmic doomsday
Solar flare
[ "Physics" ]
3,661
[ "Space plasmas", "Physical phenomena", "Astrophysics", "Solar phenomena", "Stellar phenomena" ]
54,665
https://en.wikipedia.org/wiki/Mixmaster%20anonymous%20remailer
Mixmaster is a Type II anonymous remailer which sends messages in fixed-size packets and reorders them, preventing anyone watching the messages go in and out of remailers from tracing them. It is an implementation of David Chaum's mix network. History Mixmaster was originally written by Lance Cottrell, and was maintained by Len Sassaman. Peter Palfrader is the current maintainer. Current Mixmaster software can be compiled to handle Cypherpunk messages as well; they are needed as reply blocks for nym servers. Support for Mixmaster was removed from the Neomutt fork of the Mutt mail client in 2024 because the project no longer appeared to be active. See also Anonymity Anonymous P2P Anonymous remailer Cypherpunk anonymous remailer (Type I) Mixminion (Type III) Onion routing Tor (network) Pseudonymous remailer (a.k.a. nym servers) Penet remailer Data privacy Traffic analysis References Further reading Email Security, Bruce Schneier () Computer Privacy Handbook, Andre Bacard () External links Mixmaster homepage Official Mixmaster Remailer FAQ Remailer FAQ Remailer Vulnerabilities A/I: PARANOIA REMAILER HOWTO Feraga.com: Howto use a Type II Anonymous Remailer (link not active 12 May 2010) Anonymity networks Internet Protocol based network software Routing Network architecture
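The two mechanisms named in the opening sentence, fixed-size packets and reordering, can be sketched in a few lines of Python. This is a toy illustration, not the actual Mixmaster implementation: real Type II remailers add layered encryption, dummy traffic, and stronger pool strategies, and the packet size and threshold below are arbitrary.

    import random

    PACKET_SIZE = 512      # every packet leaves at the same fixed length
    POOL_THRESHOLD = 4     # reorder only once several messages have accumulated

    pool = []

    def submit(message: bytes):
        """Pad a message to the fixed packet size and add it to the mix pool."""
        assert len(message) <= PACKET_SIZE, "fragmentation omitted in this sketch"
        pool.append(message.ljust(PACKET_SIZE, b"\x00"))
        return flush() if len(pool) >= POOL_THRESHOLD else []

    def flush():
        """Emit all pooled packets in random order, decoupling arrival from departure."""
        random.shuffle(pool)
        out, pool[:] = list(pool), []
        return out

Because every outgoing packet has the same size and the departure order is randomized, an observer watching traffic in and out of the remailer cannot match inputs to outputs by size or timing, which is the design goal described above.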
Mixmaster anonymous remailer
[ "Engineering" ]
305
[ "Network architecture", "Computer networks engineering" ]
54,666
https://en.wikipedia.org/wiki/Anonymous%20remailer
An anonymous remailer is a server that receives messages with embedded instructions on where to send them next, and that forwards them without revealing where they originally came from. There are cypherpunk anonymous remailers, mixmaster anonymous remailers, and nym servers, among others, which differ in how they work, in the policies they adopt, and in the type of attack on the anonymity of e-mail they can (or are intended to) resist. Remailing as discussed in this article applies to e-mails intended for particular recipients, not the general public. Anonymity in the latter case is more easily addressed by using any of several methods of anonymous publication. Types of remailer There are several strategies that affect the anonymity of the handled e-mail. In general, different classes of anonymous remailers differ with regard to the choices their designers/operators have made. These choices can be influenced by the legal ramifications of operating specific types of remailers. It must be understood that every data packet traveling on the Internet contains the node addresses (as raw IP bit strings) of both the sending and intended recipient nodes, and so no data packet can ever actually be anonymous at this level . In addition, all standards-based e-mail messages contain defined fields in their headers in which the source and transmitting entities (and Internet nodes as well) are required to be included. Some remailers change both types of address in messages they forward, and the list of forwarding nodes in e-mail messages as well, as the message passes through; in effect, they substitute 'fake source addresses' for the originals. The 'IP source address' for that packet may become that of the remailer server itself, and within an e-mail message (which is usually several packets), a nominal 'user' on that server. Some remailers forward their anonymized e-mail to still other remailers, and only after several such hops is the e-mail actually delivered to the intended address. There are, more or less, four types of remailers: Pseudonymous remailers A pseudonymous remailer simply takes away the e-mail address of the sender, gives a pseudonym to the sender, and sends the message to the intended recipient (that can be answered via that remailer). Cypherpunk remailers, also called Type I A Cypherpunk remailer sends the message to the recipient, stripping away the sender address on it. One can not answer a message sent via a Cypherpunk remailer. The message sent to the remailer can usually be encrypted, and the remailer will decrypt it and send it to the recipient address hidden inside the encrypted message. In addition, it is possible to chain two or three remailers, so that each remailer can't know who is sending a message to whom. Cypherpunk remailers do not keep logs of transactions. Mixmaster remailers, also called Type II In Mixmaster, the user composes an email to a remailer, which is relayed through each node in the network using SMTP, until it finally arrives at the final recipient. Mixmaster can only send emails one way. An email is sent anonymously to an individual, but for them to be able to respond, a reply address must be included in the body of the email. Also, Mixmaster remailers require the use of a computer program to write messages. Such programs are not supplied as a standard part of most operating systems or mail management systems. 
Mixminion remailers, also called Type III A Mixminion remailer attempts to address the following challenges in Mixmaster remailers: replies, forward anonymity, replay prevention and key rotation, exit policies, integrated directory servers and dummy traffic. They are currently available for the Linux and Windows platforms. Some implementations are open source. Traceable remailers Some remailers establish an internal list of actual senders and invented names such that a recipient can send mail to invented name AT some-remailer.example. When receiving traffic addressed to this user, the server software consults that list, and forwards the mail to the original sender, thus permitting anonymous—though traceable with access to the list—two-way communication. The famous "penet.fi" remailer in Finland did just that for several years. Because of the existence of such lists in this type of remailing server, it is possible to break the anonymity by gaining access to the list(s), by breaking into the computer, asking a court (or merely the police in some places) to order that the anonymity be broken, and/or bribing an attendant. This happened to penet.fi as a result of some traffic passed through it about Scientology. The Church claimed copyright infringement and sued penet.fi's operator. A court ordered the list be made available. Penet's operator shut it down after destroying its records (including the list) to retain identity confidentiality for its users; though not before being forced to supply the court with the real e-mail addresses of two of its users. More recent remailer designs use cryptography in an attempt to provide more or less the same service, but without so much risk of loss of user confidentiality. These are generally termed nym servers or pseudonymous remailers. The degree to which they remain vulnerable to forced disclosure (by courts or police) is and will remain unclear since new statutes/regulations and new cryptanalytic developments proceed apace. Multiple anonymous forwarding among cooperating remailers in different jurisdictions may retain, but cannot guarantee, anonymity against a determined attempt by one or more governments, or civil litigators. Untraceable remailers If users accept the loss of two-way interaction, identity anonymity can be made more secure. By not keeping any list of users and corresponding anonymizing labels for them, a remailer can ensure that any message that has been forwarded leaves no internal information behind that can later be used to break identity confidentiality. However, while being handled, messages remain vulnerable within the server (e.g., to Trojan software in a compromised server, to a compromised server operator, or to mis-administration of the server), and traffic analysis comparison of traffic into and out of such a server can suggest quite a lot—far more than almost any would credit. The Mixmaster strategy is designed to defeat such attacks, or at least to increase their cost (i.e., to 'attackers') beyond feasibility. If every message is passed through several servers (ideally in different legal and political jurisdictions), then attacks based on legal systems become considerably more difficult, if only because of 'Clausewitzian' friction among lawyers, courts, different statutes, organizational rivalries, legal systems, etc. 
And, since many different servers and server operators are involved, subversion of any (i.e., of either system or operator) also becomes less effective, since no one (most likely) will be able to subvert the entire chain of remailers. Random padding of messages, random delays before forwarding, and encryption of forwarding information between forwarding remailers increase the degree of difficulty for attackers still further, as message size and timing can be largely eliminated as traffic analysis clues, and lack of easily readable forwarding information renders ineffective simple automated traffic analysis algorithms. Web-based mailer There are also web services that allow users to send anonymous email messages. These services do not provide the anonymity of real remailers, but they are easier to use. When using a web-based anonymous email or anonymous remailer service, its reputation should first be analyzed, since the service stands between senders and recipients. Some of the aforementioned web services log the users' IP addresses to ensure they do not break the law; others offer superior anonymity with attachment functionality by choosing to trust that the users will not breach the website's terms of service (ToS). Remailer statistics In most cases, remailers are owned and operated by individuals, and are not as stable as they might ideally be. In fact, remailers can, and have, gone down without warning. It is important to use up-to-date statistics when choosing remailers. Remailer abuse and blocking by governments Although most re-mailer systems are used responsibly, the anonymity they provide can be exploited by entities or individuals whose reasons for anonymity are not necessarily benign. Such reasons could include support for violent extremist actions, sexual exploitation of children or more commonly to frustrate accountability for 'trolling' and harassment of targeted individuals, or companies (the Dizum.com re-mailer chain was abused as recently as May 2013 for this purpose). The response of some re-mailers to this abuse potential is often to disclaim responsibility (as dizum.com does), as owing to the technical design (and ethical principles) of many systems, it is impossible for the operators to physically unmask those using their systems. Some re-mailer systems go further and claim that it would be illegal for them to monitor for certain types of abuse at all. Until technical changes were made in the remailers concerned in the mid-2000s, some re-mailers (notably nym.alias.net based systems) were seemingly willing to use any genuine (and thus valid) but otherwise forged address. This loophole allowed trolls to mis-attribute controversial claims or statements with the aim of causing offence, upset or harassment to the genuine holder(s) of the address(es) forged. While re-mailers may disclaim responsibility, the comments posted via them have led to them being blocked in some countries. In 2014, dizum.com (a Netherlands-based remailer) was seemingly blocked by authorities in Pakistan, because of comments an (anonymous) user of that service had made concerning key figures in Islam.
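The layered forwarding common to the remailer types described above can be sketched in miniature. The toy Python below uses the third-party cryptography package's Fernet cipher purely for illustration; real remailers use PGP/GPG public-key encryption, padding, and reordering, and no single party would hold every hop's key the way this sketch's dictionary does. The hop names are hypothetical.

    from cryptography.fernet import Fernet

    hops = ["remailer-a", "remailer-b", "remailer-c"]     # hypothetical hop names
    keys = {hop: Fernet.generate_key() for hop in hops}   # stand-ins for per-hop public keys

    def wrap(message, route):
        """Sender: encrypt innermost-first so the first hop's layer ends up outermost."""
        packet = message.encode()
        for hop in reversed(route):
            packet = Fernet(keys[hop]).encrypt(packet)
        return packet

    def peel(packet, hop):
        """One remailer: strip exactly its own layer and learn only the next hop."""
        return Fernet(keys[hop]).decrypt(packet)

    packet = wrap("To: recipient@example.org\n\nhello", hops)
    for hop in hops:                                      # each hop peels one layer
        packet = peel(packet, hop)
    print(packet.decode())                                # only the final hop recovers this

No intermediate hop can read the plaintext or see the full route at once, which is why chaining remailers in different jurisdictions raises the cost of the legal and technical attacks discussed above.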
See also Anonymity Anonymity application Anonymous blogging Anonymous P2P Anonymous remailer Cypherpunk anonymous remailer (Type I) Mixmaster anonymous remailer (Type II) Mixminion anonymous remailer (Type III) Anonymous web browsing Data privacy Identity theft Internet privacy Personally identifiable information Privacy software and Privacy-enhancing technologies I2P I2P-Bote Java Anon Proxy Onion routing Tor (network) Pseudonymity, Pseudonymization Pseudonymous remailer (a.k.a. nym servers) Penet remailer Traffic analysis Winston Smith Project Mix network References Remailer Vulnerabilities Email Security, Bruce Schneier () Computer Privacy Handbook, Andre Bacard () Anonymous file sharing networks Internet Protocol based network software Routing Network architecture Cryptography
Anonymous remailer
[ "Mathematics", "Engineering" ]
2,269
[ "Cybersecurity engineering", "Cryptography", "Network architecture", "Applied mathematics", "Computer networks engineering" ]
54,679
https://en.wikipedia.org/wiki/Cypherpunk%20anonymous%20remailer
A Cypherpunk anonymous remailer, also known as a Type I remailer, is a type of anonymous remailer that receives messages encrypted with PGP or GPG, follows predetermined instructions to strip any identifying information, and forwards the messages to the desired recipient. Cypherpunk anonymous remailers are vulnerable to traffic analysis attacks, which take advantage of the predictable order in which messages are sent to recipients. This predictability can potentially reveal the identity of the sender. To address this weakness, Type II and Type III remailers were developed. Prior to the introduction of Mixmaster (Type II) remailers, users attempted to mitigate this issue by sending messages in batches or by using multiple remailers in sequence to further obscure the sender's identity. Mixmaster remailers were built upon the technology of Cypherpunk remailers, rendering the latter obsolescent. However, there are still websites and systems which rely on the general ideas of layered encryption and identity obfuscation behind Type I remailers. History The Cypherpunk movement emerged in the late 1980s and early 1990s, consisting of activists, cryptographers, and computer scientists who believed in the use of cryptography as a means to safeguard privacy and resist government interference. They played a crucial role in the development of privacy technologies, including remailers. Uses While they are mostly considered obsolete due to Mixmaster being the most common remailer type, Cypherpunk remailers are still applicable in niche applications for those who have no other accessible options. For example, sites that are censored or blocked by governments can use such remailers to circumvent this censorship. Cypherpunk remailers also require less setup and fewer resources to run, and can therefore be a suitable solution for those under time constraints or with few available assets. See also Anonymity Anonymous P2P Anonymous remailer Mixmaster anonymous remailer (Type II) Mixminion (Type III) Onion routing Tor (network) Pseudonymous remailer (a.k.a. nym servers) Penet remailer Data privacy Traffic analysis Notes The additional headers used in this context are referred to as "pseudo-headers" because they are not included in the RFC 822 headers specification for email. Messages sent to Cypherpunk remailers can be layered, meaning they pass through multiple Cypherpunk remailers to minimize the chances of identifying the sender. Some Cypherpunk remailers also function as Mixmaster anonymous remailers, enabling them to divide long Cypherpunk messages into Mixmaster packets and forward them to the next remailer if it supports Mixmaster functionality. Many users of Cypherpunk remailers may choose to repeat steps 1–4 to add additional layers of protection to their messages, routing them through multiple remailers for enhanced privacy and security. Further reading Email Security, Bruce Schneier () Computer Privacy Handbook, Andre Bacard () External links About.com: Send Email Anonymously – Chaining Remailers with PGP Feraga.com: How to use a Type I Anonymous Remailer (link not active 12 May 2010, see archive version) References Internet Protocol based network software Anonymity networks Routing Network architecture
Cypherpunk anonymous remailer
[ "Engineering" ]
694
[ "Network architecture", "Computer networks engineering" ]
54,681
https://en.wikipedia.org/wiki/NP-hardness
In computational complexity theory, a computational problem H is called NP-hard if, for every problem L which can be solved in non-deterministic polynomial-time, there is a polynomial-time reduction from L to H. That is, assuming a solution for H takes 1 unit time, H's solution can be used to solve L in polynomial time. As a consequence, finding a polynomial time algorithm to solve a single NP-hard problem would give polynomial time algorithms for all the problems in the complexity class NP. As it is suspected, but unproven, that P≠NP, it is unlikely that any polynomial-time algorithms for NP-hard problems exist. A simple example of an NP-hard problem is the subset sum problem. Informally, if H is NP-hard, then it is at least as difficult to solve as the problems in NP. However, the opposite direction is not true: some problems are undecidable, and therefore even more difficult to solve than all problems in NP, but they are probably not NP-hard (unless P=NP). Definition A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time many-one reduction from L to H. Another definition is to require that there be a polynomial-time reduction from an NP-complete problem G to H. As any problem L in NP reduces in polynomial time to G, L reduces in turn to H in polynomial time so this new definition implies the previous one. It does not restrict the class NP-hard to decision problems, and it also includes search problems or optimization problems. Consequences If P ≠ NP, then NP-hard problems could not be solved in polynomial time. Some NP-hard optimization problems can be polynomial-time approximated up to some constant approximation ratio (in particular, those in APX) or even up to any approximation ratio (those in PTAS or FPTAS). There are many classes of approximability, each one enabling approximation up to a different level. Examples All NP-complete problems are also NP-hard (see List of NP-complete problems). For example, the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph—commonly known as the travelling salesman problem—is NP-hard. The subset sum problem is another example: given a set of integers, does any non-empty subset of them add up to zero? That is a decision problem and happens to be NP-complete. There are decision problems that are NP-hard but not NP-complete such as the halting problem. That is the problem which asks "given a program and its input, will it run forever?" That is a yes/no question and so is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments and when it finds one that satisfies the formula it halts and otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not in NP since all problems in NP are decidable in a finite number of operations, but the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor undecidable. For instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not in non-deterministic polynomial time (unless NP = PSPACE). NP-naming convention NP-hard problems do not have to be elements of the complexity class NP.
As NP plays a central role in computational complexity, it is used as the basis of several classes: NP Class of computational decision problems for which any given yes-solution can be verified as a solution in polynomial time by a deterministic Turing machine (or solvable by a non-deterministic Turing machine in polynomial time). NP-hard Class of problems which are at least as hard as the hardest problems in NP. Problems that are NP-hard do not have to be elements of NP; indeed, they may not even be decidable. NP-complete Class of decision problems which contains the hardest problems in NP. Each NP-complete problem has to be in NP. NP-easy At most as hard as NP, but not necessarily in NP. NP-equivalent Decision problems that are both NP-hard and NP-easy, but not necessarily in NP. NP-intermediate If P and NP are different, then there exist decision problems in the region of NP that fall between P and the NP-complete problems. (If P and NP are the same class, then NP-intermediate problems do not exist because in this case every NP-complete problem would fall in P, and by definition, every problem in NP can be reduced to an NP-complete problem.) Application areas NP-hard problems are often tackled with rules-based languages in areas including: Approximate computing Configuration Cryptography Data mining Decision support Phylogenetics Planning Process monitoring and control Rosters or schedules Routing/vehicle routing Scheduling See also Lists of problems List of unsolved problems Reduction (complexity) Unknowability References Complexity classes
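The subset sum problem cited under Examples makes the class definitions above concrete. The Python sketch below is illustrative only; it shows the asymmetry that characterizes NP: verifying a proposed certificate takes polynomial time, while the naive search for one tries all 2^n subsets.

    from itertools import combinations

    def verify(numbers, indices):
        """Polynomial-time certificate check: a non-empty index set summing to zero."""
        chosen = set(indices)
        return bool(chosen) and sum(numbers[i] for i in chosen) == 0

    def brute_force(numbers):
        """Exponential-time search over every non-empty subset."""
        for r in range(1, len(numbers) + 1):
            for combo in combinations(range(len(numbers)), r):
                if verify(numbers, combo):
                    return combo
        return None

    print(brute_force([3, -9, 8, 4, 5, 2]))   # (1, 3, 4): -9 + 4 + 5 == 0

A polynomial-time algorithm replacing brute_force would, via the reductions described above, yield polynomial-time algorithms for every problem in NP.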
NP-hardness
[ "Mathematics" ]
1,089
[ "NP-hard problems", "Mathematical problems", "Computational problems" ]
54,717
https://en.wikipedia.org/wiki/De%20Broglie%E2%80%93Bohm%20theory
The de Broglie–Bohm theory is an interpretation of quantum mechanics which postulates that, in addition to the wavefunction, an actual configuration of particles exists, even when unobserved. The evolution over time of the configuration of all particles is defined by a guiding equation. The evolution of the wave function over time is given by the Schrödinger equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992). The theory is deterministic and explicitly nonlocal: the velocity of any one particle depends on the value of the guiding equation, which depends on the configuration of all the particles under consideration. Measurements are a particular case of quantum processes described by the theory—for which it yields the same quantum predictions as other interpretations of quantum mechanics. The theory does not have a "measurement problem", due to the fact that the particles have a definite configuration at all times. The Born rule in de Broglie–Bohm theory is not a postulate. Rather, in this theory, the link between the probability density and the wave function has the status of a theorem, a result of a separate postulate, the "quantum equilibrium hypothesis", which is additional to the basic principles governing the wave function. There are several equivalent mathematical formulations of the theory. Overview De Broglie–Bohm theory is based on the following postulates: There is a configuration q of the universe, described by coordinates q^k, which is an element of the configuration space Q. The configuration space Q is different for different versions of pilot-wave theory. For example, this may be the space of positions Q_k of N particles, or, in case of field theory, the space of field configurations φ(x). The configuration evolves (for spin=0) according to the guiding equation dQ/dt (t) = j(Q(t), t)/|ψ(Q(t), t)|², where j = (1/m) Re(ψ* p̂ ψ) = (ħ/m) Im(ψ* ∇ψ) is the probability current or probability flux, and p̂ = −iħ∇ is the momentum operator. Here, ψ = ψ(q, t) is the standard complex-valued wavefunction from quantum theory, which evolves according to Schrödinger's equation iħ ∂ψ/∂t (q, t) = Hψ(q, t). This completes the specification of the theory for any quantum theory with Hamiltonian operator of type H = Σ_k p̂_k²/(2m_k) + V(q). The configuration is distributed according to |ψ(q, t)|² at some moment of time t, and this consequently holds for all times. Such a state is named quantum equilibrium. With quantum equilibrium, this theory agrees with the results of standard quantum mechanics. Even though this latter relation is frequently presented as an axiom of the theory, Bohm presented it as derivable from statistical-mechanical arguments in the original papers of 1952. This argument was further supported by the work of Bohm in 1953 and was substantiated by Vigier and Bohm's paper of 1954, in which they introduced stochastic fluid fluctuations that drive a process of asymptotic relaxation from quantum non-equilibrium to quantum equilibrium (ρ → |ψ|²). Double-slit experiment The double-slit experiment is an illustration of wave–particle duality. In it, a beam of particles (such as electrons) travels through a barrier that has two slits. If a detector screen is on the side beyond the barrier, the pattern of detected particles shows interference fringes characteristic of waves arriving at the screen from two sources (the two slits); however, the interference pattern is made up of individual dots corresponding to particles that had arrived on the screen. The system seems to exhibit the behaviour of both waves (interference patterns) and particles (dots on the screen).
If this experiment is modified so that one slit is closed, no interference pattern is observed. Thus, the state of both slits affects the final results. It can also be arranged to have a minimally invasive detector at one of the slits to detect which slit the particle went through. When that is done, the interference pattern disappears. In de Broglie–Bohm theory, the wavefunction is defined at both slits, but each particle has a well-defined trajectory that passes through exactly one of the slits. The final position of the particle on the detector screen and the slit through which the particle passes is determined by the initial position of the particle. This initial position is not knowable or controllable by the experimenter, so there is an appearance of randomness in the pattern of detection. In Bohm's 1952 papers he used the wavefunction to construct a quantum potential that, when included in Newton's equations, gave the trajectories of the particles streaming through the two slits. In effect the wavefunction interferes with itself and guides the particles by the quantum potential in such a way that the particles avoid the regions in which the interference is destructive and are attracted to the regions in which the interference is constructive, resulting in the interference pattern on the detector screen. To explain the behavior when the particle is detected to go through one slit, one needs to appreciate the role of the conditional wavefunction and how it results in the collapse of the wavefunction; this is explained below. The basic idea is that the environment registering the detection effectively separates the two wave packets in configuration space. Theory Pilot wave The de Broglie–Bohm theory describes a pilot wave in a configuration space and trajectories of particles as in classical mechanics but defined by non-Newtonian mechanics. At every moment of time there exists not only a wavefunction, but also a well-defined configuration of the whole universe (i.e., the system as defined by the boundary conditions used in solving the Schrödinger equation). The de Broglie–Bohm theory works on particle positions and trajectories like classical mechanics but the dynamics are different. In classical mechanics, the accelerations of the particles are imparted directly by forces, which exist in physical three-dimensional space. In de Broglie–Bohm theory, the quantum field "exerts a new kind of 'quantum-mechanical' force". Bohm hypothesized that each particle has a "complex and subtle inner structure" that provides the capacity to react to the information provided by the wavefunction by the quantum potential. Also, unlike in classical mechanics, physical properties (e.g., mass, charge) are spread out over the wavefunction in de Broglie–Bohm theory, not localized at the position of the particle. The wavefunction itself, and not the particles, determines the dynamical evolution of the system: the particles do not act back onto the wave function. As Bohm and Hiley worded it, "the Schrödinger equation for the quantum field does not have sources, nor does it have any other way by which the field could be directly affected by the condition of the particles [...] the quantum theory can be understood completely in terms of the assumption that the quantum field has no sources or other forms of dependence on the particles". P. Holland considers this lack of reciprocal action of particles and wave function to be one "[a]mong the many nonclassical properties exhibited by this theory".
Holland later called this a merely apparent lack of back reaction, due to the incompleteness of the description. In what follows below, the setup for one particle moving in ℝ³ is given, followed by the setup for N particles moving in 3 dimensions. In the first instance, configuration space and real space are the same, while in the second, real space is still ℝ³, but configuration space becomes ℝ^3N. While the particle positions themselves are in real space, the velocity field and wavefunction are on configuration space, which is how particles are entangled with each other in this theory. Extensions to this theory include spin and more complicated configuration spaces. We use variations of Q for particle positions, while ψ represents the complex-valued wavefunction on configuration space. Guiding equation For a spinless single particle moving in ℝ³, the particle's velocity is dQ/dt (t) = (ħ/m) Im(∇ψ/ψ)(Q(t), t). For many particles, labeled Q_k for the k-th particle, their velocities are dQ_k/dt (t) = (ħ/m_k) Im(∇_k ψ/ψ)(Q_1(t), ..., Q_N(t), t). The main fact to notice is that this velocity field depends on the actual positions of all of the particles in the universe. As explained below, in most experimental situations, the influence of all of those particles can be encapsulated into an effective wavefunction for a subsystem of the universe. Schrödinger's equation The one-particle Schrödinger equation governs the time evolution of a complex-valued wavefunction on ℝ³. The equation represents a quantized version of the total energy of a classical system evolving under a real-valued potential function V on ℝ³: iħ ∂ψ/∂t = −(ħ²/2m) ∇²ψ + Vψ. For many particles, the equation is the same except that ψ and V are now on configuration space, ℝ^3N: iħ ∂ψ/∂t = −Σ_k (ħ²/2m_k) ∇_k²ψ + Vψ. This is the same wavefunction as in conventional quantum mechanics. Relation to the Born rule In Bohm's original papers, he discusses how de Broglie–Bohm theory results in the usual measurement results of quantum mechanics. The main idea is that this is true if the positions of the particles satisfy the statistical distribution given by |ψ|². And that distribution is guaranteed to be true for all time by the guiding equation if the initial distribution of the particles satisfies |ψ|². For a given experiment, one can postulate this as being true and verify it experimentally. But, as argued by Dürr et al., one needs to argue that this distribution for subsystems is typical. The authors argue that |ψ|², by virtue of its equivariance under the dynamical evolution of the system, is the appropriate measure of typicality for initial conditions of the positions of the particles. The authors then prove that the vast majority of possible initial configurations will give rise to statistics obeying the Born rule (i.e., ρ = |ψ|²) for measurement outcomes. In summary, in a universe governed by the de Broglie–Bohm dynamics, Born rule behavior is typical. The situation is thus analogous to the situation in classical statistical physics. A low-entropy initial condition will, with overwhelmingly high probability, evolve into a higher-entropy state: behavior consistent with the second law of thermodynamics is typical. There are anomalous initial conditions that would give rise to violations of the second law; however in the absence of some very detailed evidence supporting the realization of one of those conditions, it would be quite unreasonable to expect anything but the actually observed uniform increase of entropy.
Similarly, just as for the second law, in the de Broglie–Bohm theory there are anomalous initial conditions that would produce measurement statistics in violation of the Born rule (conflicting with the predictions of standard quantum theory), but the typicality theorem shows that, absent some specific reason to believe one of those special initial conditions was in fact realized, the Born rule behavior is what one should expect. It is in this qualified sense that the Born rule is, for the de Broglie–Bohm theory, a theorem rather than (as in ordinary quantum theory) an additional postulate.

It can also be shown that a distribution of particles which is not distributed according to the Born rule (that is, a distribution "out of quantum equilibrium") and evolving under the de Broglie–Bohm dynamics is overwhelmingly likely to evolve dynamically into a state distributed as $|\psi|^2$.

The conditional wavefunction of a subsystem

In the formulation of the de Broglie–Bohm theory, there is only a wavefunction for the entire universe (which always evolves by the Schrödinger equation). Here, the "universe" is simply the system limited by the same boundary conditions used to solve the Schrödinger equation. However, once the theory is formulated, it is convenient to introduce a notion of wavefunction also for subsystems of the universe. Let us write the wavefunction of the universe as $\Psi(t)(q^{\mathrm{I}}, q^{\mathrm{II}})$, where $q^{\mathrm{I}}$ denotes the configuration variables associated to some subsystem (I) of the universe, and $q^{\mathrm{II}}$ denotes the remaining configuration variables. Denote respectively by $Q^{\mathrm{I}}(t)$ and $Q^{\mathrm{II}}(t)$ the actual configuration of subsystem (I) and of the rest of the universe. For simplicity, we consider here only the spinless case. The conditional wavefunction of subsystem (I) is defined by

$$\psi^{\mathrm{I}}(t)(q^{\mathrm{I}}) = \Psi(t)(q^{\mathrm{I}}, Q^{\mathrm{II}}(t)).$$

It follows immediately from the fact that $Q(t) = (Q^{\mathrm{I}}(t), Q^{\mathrm{II}}(t))$ satisfies the guiding equation that also the configuration $Q^{\mathrm{I}}(t)$ satisfies a guiding equation identical to the one presented in the formulation of the theory, with the universal wavefunction $\Psi$ replaced with the conditional wavefunction $\psi^{\mathrm{I}}$. Also, the fact that $Q(t)$ is random with probability density given by the square modulus of $\Psi(t)$ implies that the conditional probability density of $Q^{\mathrm{I}}(t)$ given $Q^{\mathrm{II}}(t)$ is given by the square modulus of the (normalized) conditional wavefunction $\psi^{\mathrm{I}}(t)$ (in the terminology of Dürr et al. this fact is called the fundamental conditional probability formula).

Unlike the universal wavefunction, the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation, but in many situations it does. For instance, if the universal wavefunction factors as

$$\Psi(t)(q^{\mathrm{I}}, q^{\mathrm{II}}) = \psi^{\mathrm{I}}(t)(q^{\mathrm{I}})\,\psi^{\mathrm{II}}(t)(q^{\mathrm{II}}),$$

then the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to $\psi^{\mathrm{I}}(t)$ (this is what standard quantum theory would regard as the wavefunction of subsystem (I)). If, in addition, the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then $\psi^{\mathrm{I}}(t)$ does satisfy a Schrödinger equation. More generally, assume that the universal wave function $\Psi$ can be written in the form

$$\Psi(t)(q^{\mathrm{I}}, q^{\mathrm{II}}) = \psi^{\mathrm{I}}(t)(q^{\mathrm{I}})\,\phi(t)(q^{\mathrm{II}}) + \Phi(t)(q^{\mathrm{I}}, q^{\mathrm{II}}),$$

where $\Psi$ solves the Schrödinger equation and $\Phi(t)(q^{\mathrm{I}}, Q^{\mathrm{II}}(t)) = 0$ for all $t$ and $q^{\mathrm{I}}$. Then, again, the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to $\psi^{\mathrm{I}}(t)$, and if the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then $\psi^{\mathrm{I}}(t)$ satisfies a Schrödinger equation.

The fact that the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation is related to the fact that the usual collapse rule of standard quantum theory emerges from the Bohmian formalism when one considers conditional wavefunctions of subsystems.
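The effective collapse produced by conditioning can be made concrete in a toy model. The Python sketch below is a hypothetical illustration rather than an implementation from the literature: the universal wavefunction has two branches whose environment factors have essentially disjoint supports, as after a recorded measurement, and slicing at the actual environment configuration leaves the system wavefunction proportional to a single branch. All packet shapes, positions, and widths are arbitrary.

import numpy as np

# Toy universal wavefunction Psi(x, y) = psi1(x) phi1(y) + psi2(x) phi2(y):
# x is the system coordinate, y a single collective "environment" coordinate.
x = np.linspace(-10, 10, 1001)
dx = x[1] - x[0]

def gauss(u, mu, s):
    return np.exp(-(u - mu)**2/(4*s**2))

psi1 = gauss(x, -3, 1.0)              # system packet for "outcome 1"
psi2 = gauss(x, +3, 1.0)              # system packet for "outcome 2"
phi1 = lambda yv: gauss(yv, -5, 0.5)  # environment pointer states with
phi2 = lambda yv: gauss(yv, +5, 0.5)  # (essentially) disjoint supports

def Psi(yv):                          # universal wavefunction at fixed y
    return psi1*phi1(yv) + psi2*phi2(yv)

Y = 5.2                               # actual Bohmian environment configuration

# Conditional wavefunction of the system: slice Psi at the actual Y.
psi_cond = Psi(Y)
psi_cond = psi_cond/np.sqrt(np.sum(np.abs(psi_cond)**2)*dx)

# Y lies in the support of phi2 only, so the "empty" branch psi1*phi1 drops
# out and psi_cond is proportional to psi2: an effective collapse.
norm2 = np.sqrt(np.sum(psi2**2)*dx)
print(np.sum(psi_cond*psi2)*dx/norm2)  # overlap ~ 1.0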
Extensions

Relativity

Pilot-wave theory is explicitly nonlocal, which is in ostensible conflict with special relativity. Various extensions of "Bohm-like" mechanics exist that attempt to resolve this problem. Bohm himself in 1953 presented an extension of the theory satisfying the Dirac equation for a single particle. However, this was not extensible to the many-particle case because it used an absolute time. A renewed interest in constructing Lorentz-invariant extensions of Bohmian theory arose in the 1990s; see Bohm and Hiley: The Undivided Universe and references therein. Another approach is given by Dürr et al., who use Bohm–Dirac models and a Lorentz-invariant foliation of space-time. Thus, Dürr et al. (1999) showed that it is possible to formally restore Lorentz invariance for the Bohm–Dirac theory by introducing additional structure. This approach still requires a foliation of space-time. While this is in conflict with the standard interpretation of relativity, the preferred foliation, if unobservable, does not lead to any empirical conflicts with relativity. In 2013, Dürr et al. suggested that the required foliation could be covariantly determined by the wavefunction.

The relation between nonlocality and preferred foliation can be better understood as follows. In de Broglie–Bohm theory, nonlocality manifests as the fact that the velocity and acceleration of one particle depends on the instantaneous positions of all other particles. On the other hand, in the theory of relativity the concept of instantaneousness does not have an invariant meaning. Thus, to define particle trajectories, one needs an additional rule that defines which space-time points should be considered instantaneous. The simplest way to achieve this is to introduce a preferred foliation of space-time by hand, such that each hypersurface of the foliation defines a hypersurface of equal time.

Initially, it had been considered impossible to set out a description of photon trajectories in the de Broglie–Bohm theory in view of the difficulties of describing bosons relativistically. In 1996, Partha Ghose presented a relativistic quantum-mechanical description of spin-0 and spin-1 bosons starting from the Duffin–Kemmer–Petiau equation, setting out Bohmian trajectories for massive bosons and for massless bosons (and therefore photons). In 2001, Jean-Pierre Vigier emphasized the importance of deriving a well-defined description of light in terms of particle trajectories in the framework of either the Bohmian mechanics or the Nelson stochastic mechanics. The same year, Ghose worked out Bohmian photon trajectories for specific cases. Subsequent weak-measurement experiments yielded trajectories that coincide with the predicted trajectories. The significance of these experimental findings is controversial.

Chris Dewdney and G. Horton have proposed a relativistically covariant, wave-functional formulation of Bohm's quantum field theory and have extended it to a form that allows the inclusion of gravity.

Nikolić has proposed a Lorentz-covariant formulation of the Bohmian interpretation of many-particle wavefunctions. He has developed a generalized relativistic-invariant probabilistic interpretation of quantum theory, in which $|\psi|^2$ is no longer a probability density in space, but a probability density in space-time. He uses this generalized probabilistic interpretation to formulate a relativistic-covariant version of de Broglie–Bohm theory without introducing a preferred foliation of space-time.
His work also covers the extension of the Bohmian interpretation to a quantization of fields and strings.

Roderick I. Sutherland at the University of Sydney has a Lagrangian formalism for the pilot wave and its beables. It draws on Yakir Aharonov's retrocausal weak measurements to explain many-particle entanglement in a special relativistic way without the need for configuration space. The basic idea was already published by Olivier Costa de Beauregard in the 1950s and is also used by John Cramer in his transactional interpretation, except that the beables exist between the von Neumann strong projection operator measurements. Sutherland's Lagrangian includes two-way action-reaction between pilot wave and beables. Therefore, it is a post-quantum non-statistical theory with final boundary conditions that violate the no-signal theorems of quantum theory. Just as special relativity is a limiting case of general relativity when the spacetime curvature vanishes, so, too, is statistical no-entanglement-signaling quantum theory with the Born rule a limiting case of the post-quantum action-reaction Lagrangian when the reaction is set to zero and the final boundary condition is integrated out.

Spin

To incorporate spin, the wavefunction becomes complex-vector-valued. The value space is called spin space; for a spin-1/2 particle, spin space can be taken to be $\mathbb{C}^2$. The guiding equation is modified by taking inner products in spin space to reduce the complex vectors to complex numbers. The Schrödinger equation is modified by adding a Pauli spin term:

$$i\hbar\frac{\partial\psi}{\partial t} = \left(-\sum_{k=1}^{N}\frac{\hbar^2}{2m_k}D_k^2 + V - \sum_{k=1}^{N}\frac{\mu_k}{s_k\hbar}\,\hat{\mathbf{S}}_k\cdot\mathbf{B}(\mathbf{q}_k)\right)\psi,$$

where
$m_k$, $e_k$, $\mu_k$ — the mass, charge and magnetic moment of the $k$-th particle;
$\hat{\mathbf{S}}_k$ — the appropriate spin operator acting in the $k$-th particle's spin space;
$s_k$ — the spin quantum number of the $k$-th particle ($s = 1/2$ for an electron);
$\mathbf{A}$ — the vector potential in $\mathbb{R}^3$;
$\mathbf{B} = \nabla\times\mathbf{A}$ — the magnetic field in $\mathbb{R}^3$;
$D_k = \nabla_k - \frac{ie_k}{\hbar}\mathbf{A}(\mathbf{q}_k)$ — the covariant derivative, involving the vector potential, ascribed to the coordinates of the $k$-th particle (in SI units);
$\psi$ — the wavefunction defined on the multidimensional configuration space; e.g. a system consisting of two spin-1/2 particles and one spin-1 particle has a wavefunction of the form $\psi : \mathbb{R}^9 \to \mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^3$, where $\otimes$ is a tensor product, so this spin space is 12-dimensional;
$(\cdot,\cdot)$ — the inner product in spin space $\mathbb{C}^d$: $(\phi,\chi) = \sum_{s=1}^{d}\phi_s^*\chi_s$.

Stochastic electrodynamics

Stochastic electrodynamics (SED) is an extension of the de Broglie–Bohm interpretation of quantum mechanics, with the electromagnetic zero-point field (ZPF) playing a central role as the guiding pilot-wave. Modern approaches to SED, like those proposed by the group around the late Gerhard Grössing, among others, consider wave and particle-like quantum effects as well-coordinated emergent systems. These emergent systems are the result of speculated and calculated sub-quantum interactions with the zero-point field.

Quantum field theory

In Dürr et al., the authors describe an extension of de Broglie–Bohm theory for handling creation and annihilation operators, which they refer to as "Bell-type quantum field theories". The basic idea is that configuration space becomes the (disjoint) space of all possible configurations of any number of particles. For part of the time, the system evolves deterministically under the guiding equation with a fixed number of particles. But under a stochastic process, particles may be created and annihilated. The distribution of creation events is dictated by the wavefunction. The wavefunction itself is evolving at all times over the full multi-particle configuration space.
Hrvoje Nikolić introduces a purely deterministic de Broglie–Bohm theory of particle creation and destruction, according to which particle trajectories are continuous, but particle detectors behave as if particles have been created or destroyed even when a true creation or destruction of particles does not take place.

Curved space

To extend de Broglie–Bohm theory to curved space (Riemannian manifolds in mathematical parlance), one simply notes that all of the elements of these equations make sense, such as gradients and Laplacians. Thus, we use equations that have the same form as above. Topological and boundary conditions may apply in supplementing the evolution of Schrödinger's equation. For a de Broglie–Bohm theory on curved space with spin, the spin space becomes a vector bundle over configuration space, and the potential in Schrödinger's equation becomes a local self-adjoint operator acting on that space. The field equations for the de Broglie–Bohm theory in the relativistic case with spin can also be given for curved space-times with torsion. In a general spacetime with curvature and torsion, the guiding equation for the four-velocity of an elementary fermion particle is

$$u^i = \frac{\bar{\psi}\,e^i_a\gamma^a\,\psi}{\bar{\psi}\psi},$$

where the wave function $\psi$ is a spinor, $\bar{\psi}$ is the corresponding adjoint, $\gamma^a$ are the Dirac matrices, and $e^i_a$ is a tetrad. If the wave function propagates according to the curved Dirac equation, then the particle moves according to the Mathisson–Papapetrou equations of motion, which are an extension of the geodesic equation. This relativistic wave-particle duality follows from the conservation laws for the spin tensor and energy-momentum tensor, and also from the covariant Heisenberg picture equation of motion.

Exploiting nonlocality

De Broglie and Bohm's causal interpretation of quantum mechanics was later extended by Bohm, Vigier, Hiley, Valentini and others to include stochastic properties. Bohm and other physicists, including Valentini, view the Born rule linking $\Psi$ to the probability density function $\rho = |\Psi|^2$ as representing not a basic law, but a result of a system having reached quantum equilibrium during the course of the time development under the Schrödinger equation. It can be shown that, once an equilibrium has been reached, the system remains in such equilibrium over the course of its further evolution: this follows from the continuity equation associated with the Schrödinger evolution of $\Psi$. It is less straightforward to demonstrate whether and how such an equilibrium is reached in the first place.

Antony Valentini has extended de Broglie–Bohm theory to include signal nonlocality that would allow entanglement to be used as a stand-alone communication channel without a secondary classical "key" signal to "unlock" the message encoded in the entanglement. This violates orthodox quantum theory but has the virtue of making the parallel universes of the chaotic inflation theory observable in principle. Unlike in de Broglie–Bohm theory, in Valentini's theory the wavefunction evolution also depends on the ontological variables. This introduces an instability, a feedback loop that pushes the hidden variables out of "sub-quantal heat death". The resulting theory becomes nonlinear and non-unitary. Valentini argues that the laws of quantum mechanics are emergent and form a "quantum equilibrium" that is analogous to thermal equilibrium in classical dynamics, such that other "quantum non-equilibrium" distributions may in principle be observed and exploited, for which the statistical predictions of quantum theory are violated.
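Relaxation toward quantum equilibrium of the kind Valentini describes can be observed in simple numerical experiments. The Python sketch below is an illustration in the spirit of such studies, not a reproduction of any published calculation: an ensemble that starts far from the $|\psi|^2$ distribution is carried by the guiding equation through a superposition of box eigenstates, and its coarse-grained density drifts toward $|\psi|^2$. The mode count, random phases, time step, and initial distribution are all arbitrary choices; units are $\hbar = m = 1$, and the naive Euler stepping is inaccurate near nodes.

import numpy as np

# Relaxation toward quantum equilibrium in a 1D box (hbar = m = L = 1).
L_box, modes = 1.0, 8
n = np.arange(1, modes + 1)
rng = np.random.default_rng(1)
c = np.exp(2j*np.pi*rng.random(modes))/np.sqrt(modes)  # random-phase superposition
E = (n*np.pi)**2/2                                     # box energy levels

def psi(xv, t):
    mod = np.sqrt(2/L_box)*np.sin(np.outer(xv, n)*np.pi/L_box)
    return mod @ (c*np.exp(-1j*E*t))

def dpsi_dx(xv, t):
    mod = np.sqrt(2/L_box)*(n*np.pi/L_box)*np.cos(np.outer(xv, n)*np.pi/L_box)
    return mod @ (c*np.exp(-1j*E*t))

# Nonequilibrium start: all particles bunched in [0.4, 0.6], unlike |psi|^2.
Q = rng.uniform(0.4, 0.6, size=5000)

dt, T = 2e-4, 2.0
for step in range(int(T/dt)):
    t = step*dt
    ps = psi(Q, t)
    v = np.imag(np.conj(ps)*dpsi_dx(Q, t))/(np.abs(ps)**2 + 1e-12)  # guiding eq.
    Q = np.clip(Q + v*dt, 1e-6, L_box - 1e-6)  # Euler step, kept inside the box

# Coarse-grained comparison of the relaxed ensemble with |psi(x, T)|^2:
hist, edges = np.histogram(Q, bins=25, range=(0, L_box), density=True)
centers = 0.5*(edges[:-1] + edges[1:])
print(np.round(np.c_[hist, np.abs(psi(centers, T))**2], 2))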
It is controversially argued that quantum theory is merely a special case of a much wider nonlinear physics, a physics in which non-local (superluminal) signalling is possible, and in which the uncertainty principle can be violated.

Results

Below are some highlights of the results that arise out of an analysis of de Broglie–Bohm theory. Experimental results agree with all of quantum mechanics' standard predictions insofar as it has them. But while standard quantum mechanics is limited to discussing the results of "measurements", de Broglie–Bohm theory governs the dynamics of a system without the intervention of outside observers (p. 117 in Bell). The basis for agreement with standard quantum mechanics is that the particles are distributed according to $|\psi|^2$. This is a statement of observer ignorance: the initial positions are represented by a statistical distribution, so deterministic trajectories will result in a statistical distribution.

Measuring spin and polarization

According to ordinary quantum theory, it is not possible to measure the spin or polarization of a particle directly; instead, the component in one direction is measured; the outcome from a single particle may be 1, meaning that the particle is aligned with the measuring apparatus, or −1, meaning that it is aligned the opposite way. An ensemble of particles prepared by a polarizer to be in state 1 will all measure polarized in state 1 in a subsequent apparatus. A polarized ensemble sent through a polarizer set at an angle to the first pass will result in some values of 1 and some of −1 with a probability that depends on the relative alignment. For a full explanation of this, see the Stern–Gerlach experiment.

In de Broglie–Bohm theory, the results of a spin experiment cannot be analyzed without some knowledge of the experimental setup. It is possible to modify the setup so that the trajectory of the particle is unaffected, but that the particle with one setup registers as spin-up, while in the other setup it registers as spin-down. Thus, for the de Broglie–Bohm theory, the particle's spin is not an intrinsic property of the particle; instead spin is, so to speak, in the wavefunction of the particle in relation to the particular device being used to measure the spin. This is an illustration of what is sometimes referred to as contextuality and is related to naive realism about operators. Interpretationally, measurement results are a deterministic property of the system and its environment, which includes information about the experimental setup including the context of co-measured observables; in no sense does the system itself possess the property being measured, as would have been the case in classical physics.

Measurements, the quantum formalism, and observer independence

De Broglie–Bohm theory gives the same results as (non-relativistic) quantum mechanics. It treats the wavefunction as a fundamental object in the theory, as the wavefunction describes how the particles move. This means that no experiment can distinguish between the two theories. This section outlines the ideas as to how the standard quantum formalism arises out of Bohmian mechanics.

Collapse of the wavefunction

De Broglie–Bohm theory is a theory that applies primarily to the whole universe. That is, there is a single wavefunction governing the motion of all of the particles in the universe according to the guiding equation. Theoretically, the motion of one particle depends on the positions of all of the other particles in the universe.
In some situations, such as in experimental systems, we can represent the system itself in terms of a de Broglie–Bohm theory in which the wavefunction of the system is obtained by conditioning on the environment of the system. Thus, the system can be analyzed with Schrödinger's equation and the guiding equation, with an initial $|\psi|^2$ distribution for the particles in the system (see the section on the conditional wavefunction of a subsystem for details).

It requires a special setup for the conditional wavefunction of a system to obey a quantum evolution. When a system interacts with its environment, such as through a measurement, the conditional wavefunction of the system evolves in a different way. The evolution of the universal wavefunction can become such that the wavefunction of the system appears to be in a superposition of distinct states. But if the environment has recorded the results of the experiment, then using the actual Bohmian configuration of the environment to condition on, the conditional wavefunction collapses to just one alternative, the one corresponding to the measurement results.

Collapse of the universal wavefunction never occurs in de Broglie–Bohm theory. Its entire evolution is governed by Schrödinger's equation, and the particles' evolutions are governed by the guiding equation. Collapse only occurs in a phenomenological way for systems that seem to follow their own Schrödinger equation. As this is an effective description of the system, it is a matter of choice what to include in the experimental system, and this choice will affect when "collapse" occurs.

Operators as observables

In the standard quantum formalism, measuring observables is generally thought of as measuring operators on the Hilbert space. For example, measuring position is considered to be a measurement of the position operator. This relationship between physical measurements and Hilbert space operators is, for standard quantum mechanics, an additional axiom of the theory. The de Broglie–Bohm theory, by contrast, requires no such measurement axioms (and measurement as such is not a dynamically distinct or special sub-category of physical processes in the theory). In particular, the usual operators-as-observables formalism is, for de Broglie–Bohm theory, a theorem. A major point of the analysis is that many of the measurements of the observables do not correspond to properties of the particles; they are (as in the case of spin discussed above) measurements of the wavefunction.

In the history of de Broglie–Bohm theory, the proponents have often had to deal with claims that this theory is impossible. Such arguments are generally based on inappropriate analysis of operators as observables. If one believes that spin measurements are indeed measuring the spin of a particle that existed prior to the measurement, then one does reach contradictions. De Broglie–Bohm theory deals with this by noting that spin is not a feature of the particle, but rather that of the wavefunction. As such, it only has a definite outcome once the experimental apparatus is chosen. Once that is taken into account, the impossibility theorems become irrelevant.

There are also objections to this theory based on what it says about particular situations usually involving eigenstates of an operator. For example, the ground state of hydrogen is a real wavefunction. According to the guiding equation, this means that the electron is at rest when in this state.
Nevertheless, it is distributed according to $|\psi|^2$, and no contradiction with experimental results is possible to detect.

Treating operators as observables leads many to believe that many operators are equivalent. De Broglie–Bohm theory, from this perspective, chooses the position observable as a favored observable rather than, say, the momentum observable. Again, the link to the position observable is a consequence of the dynamics. The motivation for de Broglie–Bohm theory is to describe a system of particles. This implies that the goal of the theory is to describe the positions of those particles at all times. Other observables do not have this compelling ontological status. Having definite positions explains having definite results such as flashes on a detector screen. Other observables would not lead to that conclusion, but there need not be any problem in defining a mathematical theory for other observables; see Hyman et al. for an exploration of the fact that a probability density and probability current can be defined for any set of commuting operators.

Hidden variables

De Broglie–Bohm theory is often referred to as a "hidden-variable" theory. Bohm used this description in his original papers on the subject, writing: "From the point of view of the usual interpretation, these additional elements or parameters [permitting a detailed causal and continuous description of all processes] could be called 'hidden' variables." Bohm and Hiley later stated that they found Bohm's choice of the term "hidden variables" to be too restrictive. In particular, they argued that a particle is not actually hidden but rather "is what is most directly manifested in an observation [though] its properties cannot be observed with arbitrary precision (within the limits set by uncertainty principle)". However, others nevertheless treat the term "hidden variable" as a suitable description.

Generalized particle trajectories can be extrapolated from numerous weak measurements on an ensemble of equally prepared systems, and such trajectories coincide with the de Broglie–Bohm trajectories. In particular, an experiment with two entangled photons, in which a set of Bohmian trajectories for one of the photons was determined using weak measurements and postselection, can be understood in terms of a nonlocal connection between that photon's trajectory and the other photon's polarization. However, not only the de Broglie–Bohm interpretation, but also many other interpretations of quantum mechanics that do not include such trajectories are consistent with such experimental evidence.

Different predictions

A specialized version of the double-slit experiment has been devised to test characteristics of the trajectory predictions. Experimental realization of this concept disagreed with the Bohm predictions where they differed from standard quantum mechanics. These conclusions have been the subject of debate.

Heisenberg's uncertainty principle

The Heisenberg uncertainty principle states that when two complementary measurements are made, there is a limit to the product of their accuracy. As an example, if one measures the position with an accuracy of $\Delta x$ and the momentum with an accuracy of $\Delta p$, then

$$\Delta x\,\Delta p \ge \frac{\hbar}{2}.$$

In de Broglie–Bohm theory, there is always a matter of fact about the position and momentum of a particle. Each particle has a well-defined trajectory, as well as a wavefunction. Observers have limited knowledge as to what this trajectory is (and thus of the position and momentum).
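The bound is saturated by a minimum-uncertainty Gaussian, which is easy to verify numerically. The short Python sketch below is illustrative only ($\hbar = 1$, arbitrary grid and width); it computes $\Delta x$ from $|\psi|^2$ and $\Delta p$ from the discrete Fourier transform of $\psi$.

import numpy as np

# Delta-x * Delta-p for a Gaussian wavepacket (hbar = 1): expect ~ 0.5.
N, Lg = 2048, 40.0
x = np.linspace(-Lg/2, Lg/2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.3
psi = (2*np.pi*sigma**2)**(-0.25)*np.exp(-x**2/(4*sigma**2))

rho_x = np.abs(psi)**2
mean_x = np.sum(rho_x*x)*dx
delta_x = np.sqrt(np.sum(rho_x*(x - mean_x)**2)*dx)

p = 2*np.pi*np.fft.fftfreq(N, d=dx)          # momentum grid (hbar = 1)
dp = 2*np.pi/(N*dx)
rho_p = np.abs(np.fft.fft(psi))**2
rho_p /= np.sum(rho_p)*dp                    # normalize |phi(p)|^2
mean_p = np.sum(rho_p*p)*dp
delta_p = np.sqrt(np.sum(rho_p*(p - mean_p)**2)*dp)

print(delta_x*delta_p)                       # ~ 0.5, i.e. hbar/2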
It is the lack of knowledge of the particle's trajectory that accounts for the uncertainty relation. What one can know about a particle at any given time is described by the wavefunction. Since the uncertainty relation can be derived from the wavefunction in other interpretations of quantum mechanics, it can be likewise derived (in the epistemic sense mentioned above) in the de Broglie–Bohm theory. To put the statement differently, the particles' positions are only known statistically. As in classical mechanics, successive observations of the particles' positions refine the experimenter's knowledge of the particles' initial conditions. Thus, with succeeding observations, the initial conditions become more and more restricted. This formalism is consistent with the normal use of the Schrödinger equation. For the derivation of the uncertainty relation, see Heisenberg uncertainty principle, noting that this article describes the principle from the viewpoint of the Copenhagen interpretation.

Quantum entanglement, Einstein–Podolsky–Rosen paradox, Bell's theorem, and nonlocality

De Broglie–Bohm theory highlighted the issue of nonlocality: it inspired John Stewart Bell to prove his now-famous theorem, which in turn led to the Bell test experiments.

In the Einstein–Podolsky–Rosen paradox, the authors describe a thought experiment that one could perform on a pair of particles that have interacted, the results of which they interpreted as indicating that quantum mechanics is an incomplete theory.

Decades later, John Bell proved Bell's theorem (see p. 14 in Bell), in which he showed that, if they are to agree with the empirical predictions of quantum mechanics, all such "hidden-variable" completions of quantum mechanics must either be nonlocal (as the Bohm interpretation is) or give up the assumption that experiments produce unique results (see counterfactual definiteness and many-worlds interpretation). In particular, Bell proved that any local theory with unique results must make empirical predictions satisfying a statistical constraint called "Bell's inequality".

Alain Aspect performed a series of Bell test experiments that tested Bell's inequality using an EPR-type setup. Aspect's results show experimentally that Bell's inequality is in fact violated, meaning that the relevant quantum-mechanical predictions are correct. In these Bell test experiments, entangled pairs of particles are created; the particles are separated, traveling to remote measuring apparatus. The orientation of the measuring apparatus can be changed while the particles are in flight, demonstrating the apparent nonlocality of the effect.

The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics. It is able to do this because it is manifestly nonlocal. It is often criticized or rejected based on this; Bell's attitude was: "It is a merit of the de Broglie–Bohm version to bring this [nonlocality] out so explicitly that it cannot be ignored."

The de Broglie–Bohm theory describes the physics in the Bell test experiments as follows: to understand the evolution of the particles, we need to set up a wave equation for both particles; the orientation of the apparatus affects the wavefunction. The particles in the experiment follow the guidance of the wavefunction. It is the wavefunction that carries the faster-than-light effect of changing the orientation of the apparatus.
Maudlin provides an analysis of exactly what kind of nonlocality is present and how it is compatible with relativity. Bell has shown that the nonlocality does not allow superluminal communication. Maudlin has shown this in greater detail.

Classical limit

Bohm's formulation of de Broglie–Bohm theory in a classical-looking version has the merit that the emergence of classical behavior seems to follow immediately for any situation in which the quantum potential is negligible, as noted by Bohm in 1952. Modern methods of decoherence are relevant to an analysis of this limit. See Allori et al. for steps towards a rigorous analysis.

Quantum trajectory method

Work by Robert E. Wyatt in the early 2000s attempted to use the Bohm "particles" as an adaptive mesh that follows the actual trajectory of a quantum state in time and space. In the "quantum trajectory" method, one samples the quantum wavefunction with a mesh of quadrature points. One then evolves the quadrature points in time according to the Bohm equations of motion. At each time step, one then re-synthesizes the wavefunction from the points, recomputes the quantum forces, and continues the calculation (a minimal one-dimensional sketch of this propagation loop is given at the end of this section). (QuickTime movies of this for H + H2 reactive scattering can be found on the Wyatt group web-site at UT Austin.) This approach has been adapted, extended, and used by a number of researchers in the chemical physics community as a way to compute semi-classical and quasi-classical molecular dynamics. A 2007 issue of The Journal of Physical Chemistry A was dedicated to Prof. Wyatt and his work on "computational Bohmian dynamics". Eric R. Bittner's group at the University of Houston has advanced a statistical variant of this approach that uses a Bayesian sampling technique to sample the quantum density and compute the quantum potential on a structureless mesh of points. This technique was recently used to estimate quantum effects in the heat capacity of small clusters Ne_n for n ≈ 100.

There remain difficulties using the Bohmian approach, mostly associated with the formation of singularities in the quantum potential due to nodes in the quantum wavefunction. In general, nodes forming due to interference effects lead to the case where $R \to 0$, so that the quantum potential $Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}$ diverges. This results in an infinite force on the sample particles, forcing them to move away from the node and often crossing the path of other sample points (which violates single-valuedness). Various schemes have been developed to overcome this; however, no general solution has yet emerged.

These methods, as does Bohm's Hamilton–Jacobi formulation, do not apply to situations in which the full dynamics of spin need to be taken into account. The properties of trajectories in the de Broglie–Bohm theory differ significantly from the Moyal quantum trajectories as well as the quantum trajectories from the unraveling of an open quantum system.

Similarities with the many-worlds interpretation

Kim Joris Boström has proposed a non-relativistic quantum mechanical theory that combines elements of de Broglie–Bohm mechanics and Everett's many-worlds. In particular, the unreal many-worlds interpretation of Hawking and Weinberg is similar to the Bohmian concept of unreal empty branch worlds. Many authors have expressed critical views of de Broglie–Bohm theory by comparing it to Everett's many-worlds approach. Many (but not all) proponents of de Broglie–Bohm theory (such as Bohm and Bell) interpret the universal wavefunction as physically real.
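The following is the minimal one-dimensional sketch of the quantum-trajectory propagation loop referred to above. It is a deliberately simplified caricature of the method, not Wyatt's adaptive least-squares algorithm: instead of moving-weighted-least-squares fits, the log-amplitude and velocity are fitted with global polynomials, which happens to be exact for the free Gaussian packet used here; units are $\hbar = m = 1$ and all parameters are arbitrary.

import numpy as np

# Minimal Lagrangian quantum-trajectory loop for a free Gaussian (hbar = m = 1):
# fluid elements carry density, the quantum force comes from a fit to ln(rho),
# and the density is updated along the trajectories via the continuity equation.
sigma0 = 1.0
x = np.linspace(-4.0, 4.0, 41)               # "quadrature points" (fluid elements)
x0 = x.copy()
rho = np.exp(-x**2/(2*sigma0**2))            # initial density (unnormalized)
v = np.zeros_like(x)                         # initial velocity field (real psi)

dt, steps = 1e-3, 2000
for _ in range(steps):
    coef = np.polyfit(x, 0.5*np.log(rho), 2) # fit C = ln R (here exactly quadratic)
    Cp = np.polyval(np.polyder(coef), x)     # C'
    Cpp = np.polyval(np.polyder(coef, 2), x) # C''
    # Quantum potential Q = -(1/2)(C'' + C'^2); with C''' = 0 the quantum
    # force -dQ/dx reduces to C'*C''. No classical force (free particle).
    v += (Cp*Cpp)*dt
    x += v*dt
    dvdx = np.polyfit(x, v, 1)[0]            # dv/dx from a linear fit
    rho *= np.exp(-dvdx*dt)                  # d(rho)/dt = -rho dv/dx along paths

t = dt*steps
spread = np.std(x)/np.std(x0)                # homothetic expansion factor
print(spread, np.sqrt(1 + (t/(2*sigma0**2))**2))  # should nearly coincide

The analytic factor compared against in the last line is the textbook spreading of a free Gaussian; production quantum-trajectory calculations replace the global fits with local weighted fits and must contend with the node problem described above.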
According to some supporters of Everett's theory, if the (never collapsing) wavefunction is taken to be physically real, then it is natural to interpret the theory as having the same many worlds as Everett's theory. In the Everettian view the role of the Bohmian particle is to act as a "pointer", tagging, or selecting, just one branch of the universal wavefunction (the assumption that this branch indicates which wave packet determines the observed result of a given experiment is called the "result assumption"); the other branches are designated "empty" and implicitly assumed by Bohm to be devoid of conscious observers. H. Dieter Zeh comments that these "empty" branches leave Bohm's theory containing the same "many worlds" of dynamically separate branches as the Everett interpretation, since it is based on the same global wavefunction. David Deutsch has expressed the same point more "acerbically", describing pilot-wave theories as "parallel-universes theories in a state of chronic denial". This conclusion has been challenged by Detlef Dürr and Justin Lazarovici:

The Bohmian, of course, cannot accept this argument. For her, it is decidedly the particle configuration in three-dimensional space and not the wave function on the abstract configuration space that constitutes a world (or rather, the world). Instead, she will accuse the Everettian of not having local beables (in Bell's sense) in her theory, that is, the ontological variables that refer to localized entities in three-dimensional space or four-dimensional spacetime. The many worlds of her theory thus merely appear as a grotesque consequence of this omission.

Occam's-razor criticism

Both Hugh Everett III and Bohm treated the wavefunction as a physically real field. Everett's many-worlds interpretation is an attempt to demonstrate that the wavefunction alone is sufficient to account for all our observations. When we see the particle detectors flash or hear the click of a Geiger counter, Everett's theory interprets this as our wavefunction responding to changes in the detector's wavefunction, which is responding in turn to the passage of another wavefunction (which we think of as a "particle", but is actually just another wave packet). No particle (in the Bohm sense of having a defined position and velocity) exists according to that theory. For this reason Everett sometimes referred to his own many-worlds approach as the "pure wave theory". Everett objected to Bohm's 1952 approach on grounds of simplicity: if one holds the wavefunction to be a real field, then the associated particle is superfluous, since the pure wave theory is itself satisfactory.

In the Everettian view, then, the Bohm particles are superfluous entities, similar to, and equally as unnecessary as, for example, the luminiferous ether, which was found to be unnecessary in special relativity. This argument is sometimes called the "redundancy argument", since the superfluous particles are redundant in the sense of Occam's razor.

According to Brown & Wallace, the de Broglie–Bohm particles play no role in the solution of the measurement problem. For these authors, the "result assumption" (see above) is inconsistent with the view that there is no measurement problem in the predictable outcome (i.e. single-outcome) case. They also say that a standard tacit assumption of de Broglie–Bohm theory (that an observer becomes aware of configurations of particles of ordinary objects by means of correlations between such configurations and the configuration of the particles in the observer's brain) is unreasonable. This conclusion has been challenged by Valentini, who argues that the entirety of such objections arises from a failure to interpret de Broglie–Bohm theory on its own terms.

According to Peter R. Holland, in a wider Hamiltonian framework, theories can be formulated in which particles do act back on the wave function.

Derivations

De Broglie–Bohm theory has been derived many times and in many ways.
Below are six derivations, all of which are very different and lead to different ways of understanding and extending this theory.

Schrödinger's equation can be derived by using Einstein's light quanta hypothesis: $E = \hbar\omega$, and de Broglie's hypothesis: $\mathbf{p} = \hbar\mathbf{k}$. The guiding equation can be derived in a similar fashion. We assume a plane wave: $\psi(\mathbf{x}, t) = A e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}$. Notice that $i\mathbf{k} = \nabla\psi/\psi$. Assuming that $\mathbf{p} = m\mathbf{v}$ for the particle's actual velocity, we have that $\mathbf{v} = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)$. Thus, we have the guiding equation. Notice that this derivation does not use Schrödinger's equation.

Preserving the density under the time evolution is another method of derivation. This is the method that Bell cites. It is this method that generalizes to many possible alternative theories. The starting point is the continuity equation $\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0$ for the density $\rho = |\psi|^2$. This equation describes a probability flow along a current. We take the velocity field associated with this current as the velocity field whose integral curves yield the motion of the particle.

A method applicable for particles without spin is to do a polar decomposition of the wavefunction and transform Schrödinger's equation into two coupled equations: the continuity equation from above and the Hamilton–Jacobi equation. This is the method used by Bohm in 1952. The decomposition and equations are as follows:

Decomposition: $\psi(\mathbf{x}, t) = R(\mathbf{x}, t)\,e^{iS(\mathbf{x}, t)/\hbar}$. Note that $R^2$ corresponds to the probability density $\rho$.
Continuity equation: $\frac{\partial\rho}{\partial t} + \nabla\cdot\!\left(\rho\,\frac{\nabla S}{m}\right) = 0$.
Hamilton–Jacobi equation: $\frac{\partial S}{\partial t} + \frac{|\nabla S|^2}{2m} + V - \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} = 0$.

The Hamilton–Jacobi equation is the equation derived from a Newtonian system with potential $V - \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}$ and velocity field $\frac{\nabla S}{m}$. The potential $V$ is the classical potential that appears in Schrödinger's equation, and the other term involving $R$ is the quantum potential, terminology introduced by Bohm. This leads to viewing the quantum theory as particles moving under the classical force modified by a quantum force. However, unlike standard Newtonian mechanics, the initial velocity field is already specified by $\frac{\nabla S}{m}$, which is a symptom of this being a first-order theory, not a second-order theory.

A fourth derivation was given by Dürr et al. In their derivation, they derive the velocity field by demanding the appropriate transformation properties given by the various symmetries that Schrödinger's equation satisfies, once the wavefunction is suitably transformed. The guiding equation is what emerges from that analysis.

A fifth derivation, given by Dürr et al., is appropriate for generalization to quantum field theory and the Dirac equation. The idea is that a velocity field can also be understood as a first-order differential operator acting on functions. Thus, if we know how it acts on functions, we know what it is. Then given the Hamiltonian operator $H$, the equation to satisfy for all functions $f$ (with associated multiplication operator $\hat{f}$) is

$$(\mathbf{v}\cdot\nabla f)(\mathbf{q}) = \operatorname{Re}\,\frac{\left(\psi, \frac{i}{\hbar}[H, \hat{f}]\,\psi\right)}{(\psi, \psi)}(\mathbf{q}),$$

where $(\cdot,\cdot)$ is the local Hermitian inner product on the value space of the wavefunction. This formulation allows for stochastic theories such as the creation and annihilation of particles.

A further derivation has been given by Peter R. Holland, on which he bases his quantum-physics textbook The Quantum Theory of Motion. It is based on three basic postulates and an additional fourth postulate that links the wavefunction to measurement probabilities:

A physical system consists in a spatiotemporally propagating wave and a point particle guided by it.
The wave is described mathematically by a solution $\psi$ to Schrödinger's wave equation.
The particle motion is described by a solution to $\dot{\mathbf{x}}(t) = \frac{1}{m}\nabla S(\mathbf{x}(t), t)$ in dependence on the initial condition $\mathbf{x}(t = 0) = \mathbf{x}_0$, with $S$ the phase of $\psi$.

The fourth postulate is subsidiary yet consistent with the first three: The probability to find the particle in the differential volume $\mathrm{d}^3x$ at time $t$ equals $|\psi(\mathbf{x}, t)|^2\,\mathrm{d}^3x$.

History

The theory was historically developed in the 1920s by de Broglie, who, in 1927, was persuaded to abandon it in favour of the then-mainstream Copenhagen interpretation. David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot-wave theory in 1952. Bohm's suggestions were not then widely received, partly due to reasons unrelated to their content, such as Bohm's youthful communist affiliations. The de Broglie–Bohm theory was widely deemed unacceptable by mainstream theorists, mostly because of its explicit non-locality. John Stewart Bell, author of the 1964 Bell's theorem, wrote in the theory's defence in 1982. Since the 1990s, there has been renewed interest in formulating extensions to de Broglie–Bohm theory, attempting to reconcile it with special relativity and quantum field theory, besides other features such as spin or curved spatial geometries.

De Broglie–Bohm theory has a history of different formulations and names. In this section, each stage is given a name and a main reference.

Pilot-wave theory

Louis de Broglie presented his pilot wave theory at the 1927 Solvay Conference, after close collaboration with Schrödinger, who developed his wave equation for de Broglie's theory. At the end of the presentation, Wolfgang Pauli pointed out that it was not compatible with a semi-classical technique Fermi had previously adopted in the case of inelastic scattering. Contrary to a popular legend, de Broglie actually gave the correct rebuttal that the particular technique could not be generalized for Pauli's purpose, although the audience might have been lost in the technical details and de Broglie's mild manner left the impression that Pauli's objection was valid. He was eventually persuaded to abandon this theory nonetheless because he was "discouraged by criticisms which [it] roused". De Broglie's theory already applies to multiple spin-less particles, but lacks an adequate theory of measurement, as no one understood quantum decoherence at the time. An analysis of de Broglie's presentation is given in Bacciagaluppi et al. Also, in 1932 John von Neumann published a no-hidden-variables proof in his book Mathematical Foundations of Quantum Mechanics, which was widely believed to prove that all hidden-variable theories are impossible. This sealed the fate of de Broglie's theory for the next two decades.

In 1926, Erwin Madelung had developed a hydrodynamic version of Schrödinger's equation, which is incorrectly considered as a basis for the density current derivation of the de Broglie–Bohm theory. The Madelung equations, being the quantum analog of the Euler equations of fluid dynamics, differ philosophically from the de Broglie–Bohm mechanics and are the basis of the stochastic interpretation of quantum mechanics.

Peter R. Holland has pointed out that, earlier in 1927, Einstein had actually submitted a preprint with a similar proposal but, not convinced, had withdrawn it before publication.
According to Holland, failure to appreciate key points of the de Broglie–Bohm theory has led to confusion, the key point being "that the trajectories of a many-body quantum system are correlated not because the particles exert a direct force on one another (à la Coulomb) but because all are acted upon by an entity – mathematically described by the wavefunction or functions of it – that lies beyond them". This entity is the quantum potential.

After publishing his popular textbook Quantum Theory that adhered entirely to the Copenhagen orthodoxy, Bohm was persuaded by Einstein to take a critical look at von Neumann's no-hidden-variables proof. The result was 'A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I and II' [Bohm 1952]. It was an independent origination of the pilot wave theory, and extended it to incorporate a consistent theory of measurement, and to address a criticism of Pauli that de Broglie did not properly respond to; it is taken to be deterministic (though Bohm hinted in the original papers that there should be disturbances to this, in the way Brownian motion disturbs Newtonian mechanics). This stage is known as the de Broglie–Bohm theory in Bell's work [Bell 1987] and is the basis for 'The Quantum Theory of Motion' [Holland 1993]. This stage applies to multiple particles, and is deterministic.

The de Broglie–Bohm theory is an example of a hidden-variables theory. Bohm originally hoped that hidden variables could provide a local, causal, objective description that would resolve or eliminate many of the paradoxes of quantum mechanics, such as Schrödinger's cat, the measurement problem and the collapse of the wavefunction. However, Bell's theorem complicates this hope, as it demonstrates that there can be no local hidden-variable theory that is compatible with the predictions of quantum mechanics. The Bohmian interpretation is causal but not local.

Bohm's paper was largely ignored or panned by other physicists. Albert Einstein, who had suggested that Bohm search for a realist alternative to the prevailing Copenhagen approach, did not consider Bohm's interpretation to be a satisfactory answer to the quantum nonlocality question, calling it "too cheap", while Werner Heisenberg considered it a "superfluous 'ideological superstructure' ". Wolfgang Pauli, who had been unconvinced by de Broglie in 1927, conceded to Bohm as follows:

I just received your long letter of 20th November, and I also have studied more thoroughly the details of your paper. I do not see any longer the possibility of any logical contradiction as long as your results agree completely with those of the usual wave mechanics and as long as no means is given to measure the values of your hidden parameters both in the measuring apparatus and in the observe [sic] system. As far as the whole matter stands now, your 'extra wave-mechanical predictions' are still a check, which cannot be cashed.

He subsequently described Bohm's theory as "artificial metaphysics".

According to physicist Max Dresden, when Bohm's theory was presented at the Institute for Advanced Study in Princeton, many of the objections were ad hominem, focusing on Bohm's sympathy with communists as exemplified by his refusal to give testimony to the House Un-American Activities Committee.

In 1979, Chris Philippidis, Chris Dewdney and Basil Hiley were the first to perform numerical computations on the basis of the quantum potential to deduce ensembles of particle trajectories.
Their work renewed the interest of physicists in the Bohm interpretation of quantum physics. Eventually John Bell began to defend the theory. In "Speakable and Unspeakable in Quantum Mechanics" [Bell 1987], several of the papers refer to hidden-variables theories (which include Bohm's). The trajectories of the Bohm model that would result for particular experimental arrangements were termed "surreal" by some. Still, in 2016, mathematical physicist Sheldon Goldstein said of Bohm's theory: "There was a time when you couldn't even talk about it because it was heretical. It probably still is the kiss of death for a physics career to be actually working on Bohm, but maybe that's changing."

Bohmian mechanics

Bohmian mechanics is the same theory, but with an emphasis on the notion of current flow, which is determined on the basis of the quantum equilibrium hypothesis that the probability follows the Born rule. The term "Bohmian mechanics" is also often used to include most of the further extensions past the spin-less version of Bohm. While de Broglie–Bohm theory has Lagrangians and Hamilton–Jacobi equations as a primary focus and backdrop, with the icon of the quantum potential, Bohmian mechanics considers the continuity equation as primary and has the guiding equation as its icon. They are mathematically equivalent in so far as the Hamilton–Jacobi formulation applies, i.e., to spin-less particles.

All of non-relativistic quantum mechanics can be fully accounted for in this theory. Recent studies have used this formalism to compute the evolution of many-body quantum systems, with a considerable increase in speed as compared to other quantum-based methods.

Causal interpretation and ontological interpretation

Bohm developed his original ideas, calling them the Causal Interpretation. Later he felt that causal sounded too much like deterministic and preferred to call his theory the Ontological Interpretation. The main reference is "The Undivided Universe" (Bohm, Hiley 1993).

This stage covers work by Bohm and in collaboration with Jean-Pierre Vigier and Basil Hiley. Bohm is clear that this theory is non-deterministic (the work with Hiley includes a stochastic theory). As such, this theory is not strictly speaking a formulation of de Broglie–Bohm theory, but it deserves mention here because the term "Bohm Interpretation" is ambiguous between this theory and de Broglie–Bohm theory.

In 1996, philosopher of science Arthur Fine gave an in-depth analysis of possible interpretations of Bohm's model of 1952.

William Simpson has suggested a hylomorphic interpretation of Bohmian mechanics, in which the cosmos is an Aristotelian substance composed of material particles and a substantial form. The wave function is assigned a dispositional role in choreographing the trajectories of the particles.

Hydrodynamic quantum analogs

Experiments on hydrodynamical analogs of quantum mechanics beginning with the work of Couder and Fort (2006) have purported to show that macroscopic classical pilot-waves can exhibit characteristics previously thought to be restricted to the quantum realm. Hydrodynamic pilot-wave analogs have been claimed to duplicate the double-slit experiment, tunneling, quantized orbits, and numerous other quantum phenomena, which have led to a resurgence in interest in pilot wave theories. The analogs have been compared to the Faraday wave. These results have been disputed: experiments fail to reproduce aspects of the double-slit experiments.
High-precision measurements in the tunneling case point to a different origin of the unpredictable crossing: rather than initial position uncertainty or environmental noise, interactions at the barrier seem to be involved. Another classical analog has been reported in surface gravity waves.

Surrealistic trajectories

In 1992, Englert, Scully, Süssmann, and Walther proposed experiments that would show particles taking paths that differ from the Bohm trajectories. They described the Bohm trajectories as "surrealistic"; their proposal was later referred to as ESSW after the last names of the authors. In 2016, Mahler et al. verified the ESSW predictions. However, they propose that the surrealistic effect is a consequence of the nonlocality inherent in Bohm's theory.

See also

Madelung equations
Local hidden-variable theory
Superfluid vacuum theory
Fluid analogs in quantum mechanics
Probability current

Notes

References

Sources

Bohmian mechanics on arxiv.org

Further reading

John S. Bell: Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy, Cambridge University Press, 2004.
David Bohm, Basil Hiley: The Undivided Universe: An Ontological Interpretation of Quantum Theory, Routledge Chapman & Hall, 1993.
Detlef Dürr, Sheldon Goldstein, Nino Zanghì: Quantum Physics Without Quantum Philosophy, Springer, 2012.
Detlef Dürr, Stefan Teufel: Bohmian Mechanics: The Physics and Mathematics of Quantum Theory, Springer, 2009.
Peter R. Holland: The Quantum Theory of Motion, Cambridge University Press, 1993 (re-printed 2000, transferred to digital printing 2004).

External links

"Pilot-Wave Hydrodynamics" Bush, J. W. M., Annual Review of Fluid Mechanics, 2015
"Bohmian Mechanics" (Stanford Encyclopedia of Philosophy)
"Bohmian-Mechanics.net", the homepage of the international research network on Bohmian Mechanics that was started by D. Dürr, S. Goldstein and N. Zanghì.
Workgroup Bohmian Mechanics at LMU Munich (D. Dürr)
Bohmian Mechanics Group at University of Innsbruck (G. Grübl)
"Pilot waves, Bohmian metaphysics, and the foundations of quantum mechanics", lecture course on de Broglie–Bohm theory by Mike Towler, Cambridge University.
"21st-century directions in de Broglie-Bohm theory and beyond", August 2010 international conference on de Broglie–Bohm theory. Site contains slides for all the talks – the latest cutting-edge deBB research.
"Observing the Trajectories of a Single Photon Using Weak Measurement"
"Bohmian trajectories are no longer 'hidden variables'"
The David Bohm Society
De Broglie–Bohm theory inspired visualization of atomic orbitals.

Interpretations of quantum mechanics
Quantum measurement
De Broglie–Bohm theory
[ "Physics" ]
12,868
[ "Interpretations of quantum mechanics", "Quantum measurement", "Quantum mechanics" ]
54,738
https://en.wikipedia.org/wiki/Interpretations%20of%20quantum%20mechanics
An interpretation of quantum mechanics is an attempt to explain how the mathematical theory of quantum mechanics might correspond to experienced reality. Quantum mechanics has held up to rigorous and extremely precise tests in an extraordinarily broad range of experiments. However, there exist a number of contending schools of thought over its interpretation. These views on interpretation differ on such fundamental questions as whether quantum mechanics is deterministic or stochastic, local or non-local, which elements of quantum mechanics can be considered real, and what the nature of measurement is, among other matters.

While some variation of the Copenhagen interpretation is commonly presented in textbooks, many other interpretations have been developed. Despite nearly a century of debate and experiment, no consensus has been reached among physicists and philosophers of physics concerning which interpretation best "represents" reality.

History

The definition of quantum theorists' terms, such as wave function and matrix mechanics, progressed through many stages. For instance, Erwin Schrödinger originally viewed the electron's wave function as its charge density smeared across space, but Max Born reinterpreted the absolute square value of the wave function as the electron's probability density distributed across space; the Born rule, as it is now called, matched experiment, whereas Schrödinger's charge density view did not. The views of several early pioneers of quantum mechanics, such as Niels Bohr and Werner Heisenberg, are often grouped together as the "Copenhagen interpretation", though physicists and historians of physics have argued that this terminology obscures differences between the views so designated. Copenhagen-type ideas were never universally embraced, and challenges to a perceived Copenhagen orthodoxy gained increasing attention in the 1950s with the pilot-wave interpretation of David Bohm and the many-worlds interpretation of Hugh Everett III. The physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear."

As a rough guide to development of the mainstream view during the 1990s and 2000s, a "snapshot" of opinions was collected in a poll by Schlosshauer et al. at the "Quantum Physics and the Nature of Reality" conference of July 2011. The authors reference a similarly informal poll carried out by Max Tegmark at the "Fundamental Problems in Quantum Theory" conference in August 1997. The main conclusion of the authors is that "the Copenhagen interpretation still reigns supreme", receiving the most votes in their poll (42%), besides the rise to mainstream notability of the many-worlds interpretations: "The Copenhagen interpretation still reigns supreme here, especially if we lump it together with intellectual offsprings such as information-based interpretations and the quantum Bayesian interpretation. In Tegmark's poll, the Everett interpretation received 17% of the vote, which is similar to the number of votes (18%) in our poll."

Some concepts originating from studies of interpretations have found more practical application in quantum information science.
Nature

More or less, all interpretations of quantum mechanics share two qualities:

They interpret a formalism—a set of equations and principles to generate predictions via input of initial conditions
They interpret a phenomenology—a set of observations, including those obtained by empirical research and those obtained informally, such as humans' experience of an unequivocal world

Two qualities vary among interpretations:

Epistemology—claims about the possibility, scope, and means toward relevant knowledge of the world
Ontology—claims about what things, such as categories and entities, exist in the world

In the philosophy of science, the distinction between knowledge and reality is termed epistemic versus ontic. A general law can be seen as a generalisation of the regularity of outcomes (epistemic), whereas a causal mechanism may be thought of as determining or regulating outcomes (ontic). A phenomenon can be interpreted either as ontic or as epistemic. For instance, indeterminism may be attributed to limitations of human observation and perception (epistemic), or may be explained as intrinsic physical randomness (ontic). Confusing the epistemic with the ontic—if for example one were to presume that a general law actually "governs" outcomes, and that the statement of a regularity has the role of a causal mechanism—is a category mistake.

In a broad sense, scientific theory can be viewed as offering an approximately true description or explanation of the natural world (scientific realism) or as providing nothing more than an account of our knowledge of the natural world (antirealism). A realist stance sees the epistemic as giving us a window onto the ontic, whereas an antirealist stance sees the epistemic as providing only a logically consistent picture of the ontic. In the first half of the 20th century, a key antirealist philosophy was logical positivism, which sought to exclude unobservable aspects of reality from scientific theory. Since the 1950s, antirealism has adopted a more modest approach, often in the form of instrumentalism, permitting talk of unobservables but ultimately discarding the very question of realism and positing scientific theory as a tool to help us make predictions, not to attain a deep metaphysical understanding of the world. The instrumentalist view is typified by David Mermin's famous slogan: "Shut up and calculate" (which is often misattributed to Richard Feynman).

Interpretive challenges

Abstract, mathematical nature of quantum field theories: the mathematical structure of quantum mechanics is abstract and does not result in a single, clear interpretation of its quantities.
Apparent indeterministic and irreversible processes: in classical field theory, a physical property at a given location in the field is readily derived. In most mathematical formulations of quantum mechanics, measurement (understood as an interaction with a given state) has a special role in the theory, as it is the sole process that can cause a nonunitary, irreversible evolution of the state.
Role of the observer in determining outcomes. Copenhagen-type interpretations imply that the wavefunction is a calculational tool, and represents reality only immediately after a measurement performed by an observer. Everettian interpretations grant that all possible outcomes are real, and that measurement-type interactions cause a branching process in which each possibility is realised.
Classically unexpected correlations between remote objects: entangled quantum systems, as illustrated in the EPR paradox, obey statistics that seem to violate principles of local causality by action at a distance.
Complementarity of proffered descriptions: complementarity holds that no set of classical physical concepts can simultaneously refer to all properties of a quantum system. For instance, wave description A and particulate description B can each describe quantum system S, but not simultaneously. This implies that the composition of physical properties of S does not obey the rules of classical propositional logic when using propositional connectives (see "Quantum logic"). Like contextuality, the "origin of complementarity lies in the non-commutativity of operators" that describe quantum objects.
Rapidly rising intricacy, far exceeding humans' present calculational capacity, as a system's size increases: since the state space of a quantum system is exponential in the number of subsystems, it is difficult to derive classical approximations.
Contextual behavior of systems locally: quantum contextuality demonstrates that classical intuitions, in which properties of a system hold definite values independent of the manner of their measurement, fail even for local systems. Also, physical principles such as Leibniz's principle of the identity of indiscernibles no longer apply in the quantum domain, signaling that most classical intuitions may be incorrect about the quantum world.

Influential interpretations

Copenhagen interpretation

The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg. It is one of the oldest attitudes towards quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught. There is no definitive historical statement of what the Copenhagen interpretation is, and there were in particular fundamental disagreements between the views of Bohr and Heisenberg. For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed, while Bohr offered an interpretation that is independent of a subjective observer, measurement, or collapse, relying instead on an "irreversible" or effectively irreversible process that imparts the classical behavior of "observation" or "measurement".

Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that certain pairs of complementary properties cannot all be observed or measured simultaneously. Moreover, properties result only from the act of "observing" or "measuring"; the theory avoids assuming definite values from unperformed experiments. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' mental arbitrariness. The statistical interpretation of wavefunctions due to Max Born differs sharply from Schrödinger's original intent, which was to have a theory with continuous time evolution and in which wavefunctions directly described physical reality.
Many worlds

The many-worlds interpretation is an interpretation of quantum mechanics in which a universal wavefunction obeys the same deterministic, reversible laws at all times; in particular there is no (indeterministic and irreversible) wavefunction collapse associated with measurement. The phenomena associated with measurement are claimed to be explained by decoherence, which occurs when states interact with the environment. More precisely, the parts of the wavefunction describing observers become increasingly entangled with the parts of the wavefunction describing their experiments. Although all possible outcomes of experiments continue to lie in the wavefunction's support, the times at which they become correlated with observers effectively "split" the universe into mutually unobservable alternate histories.

Quantum information theories

Quantum informational approaches have attracted growing support. They subdivide into two kinds:

Information ontologies, such as J. A. Wheeler's "it from bit". These approaches have been described as a revival of immaterialism.
Interpretations where quantum mechanics is said to describe an observer's knowledge of the world, rather than the world itself. This approach has some similarity with Bohr's thinking. Collapse (also known as reduction) is often interpreted as an observer acquiring information from a measurement, rather than as an objective event. These approaches have been appraised as similar to instrumentalism.

James Hartle writes,

The state is not an objective property of an individual system but is that information, obtained from a knowledge of how a system was prepared, which can be used for making predictions about future measurements. ... A quantum mechanical state being a summary of the observer's information about an individual physical system changes both by dynamical laws, and whenever the observer acquires new information about the system through the process of measurement. The existence of two laws for the evolution of the state vector ... becomes problematical only if it is believed that the state vector is an objective property of the system ... The "reduction of the wavepacket" does take place in the consciousness of the observer, not because of any unique physical process which takes place there, but only because the state is a construct of the observer and not an objective property of the physical system.

Relational quantum mechanics

The essential idea behind relational quantum mechanics, following the precedent of special relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. Consequently, if quantum mechanics is to be a complete theory, relational quantum mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, relational quantum mechanics holds that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above.
Thus the physical content of the theory has to do not with objects themselves, but with the relations between them.

QBism

QBism, which originally stood for "quantum Bayesianism", is an interpretation of quantum mechanics that takes an agent's actions and experiences as the central concerns of the theory. This interpretation is distinguished by its use of a subjective Bayesian account of probabilities to understand the quantum mechanical Born rule as a normative addition to good decision-making. QBism draws from the fields of quantum information and Bayesian probability and aims to eliminate the interpretational conundrums that have beset quantum theory. QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement. According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality—instead it represents the degrees of belief an agent has about the possible outcomes of measurements. For this reason, some philosophers of science have deemed QBism a form of anti-realism. The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it.

Consistent histories

The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle).

Ensemble interpretation

The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. That is, it claims to make the fewest assumptions associated with the standard mathematics. It takes the statistical interpretation of Born to the fullest extent. The interpretation states that the wave function does not apply to an individual system (for example, a single particle) but is an abstract statistical quantity that only applies to an ensemble (a vast multitude) of similarly prepared systems or particles. In the words of Einstein:

The most prominent current advocate of the ensemble interpretation is Leslie E. Ballentine, professor at Simon Fraser University and author of the textbook Quantum Mechanics, A Modern Development.

De Broglie–Bohm theory

The de Broglie–Bohm theory of quantum mechanics (also known as the pilot wave theory) is a theory proposed by Louis de Broglie and later extended by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction. The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single spacetime, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint.
The theory is considered to be a hidden-variable theory, and by embracing non-locality it evades the constraint of Bell's theorem, reproducing the experimentally observed violations of Bell's inequality. The measurement problem is resolved, since the particles have definite positions at all times. Collapse is explained as phenomenological.

Transactional interpretation

The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory. It describes the collapse of the wave function as resulting from a time-symmetric transaction between a possibility wave from the source to the receiver (the wave function) and a possibility wave from the receiver to the source (the complex conjugate of the wave function). This interpretation of quantum mechanics is unique in that it views not only the wave function but also the complex conjugate of the wave function, which appears in the Born rule for calculating the expected value of an observable, as a real entity.

Von Neumann–Wigner interpretation

In his treatise The Mathematical Foundations of Quantum Mechanics, John von Neumann deeply analyzed the so-called measurement problem. He concluded that the entire physical universe could be made subject to the Schrödinger equation (the universal wave function). He also described how measurement could cause a collapse of the wave function. This point of view was prominently expanded on by Eugene Wigner, who argued that the consciousness of a human experimenter (or perhaps even that of a dog) was critical for the collapse, but he later abandoned this interpretation. However, consciousness remains a mystery: its origin and place in nature are not well understood. Some specific proposals for consciousness-caused wave-function collapse have been shown to be unfalsifiable.

Quantum logic

Quantum logic can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical Boolean logic with the facts related to measurement and observation in quantum mechanics.

Modal interpretations of quantum theory

Modal interpretations of quantum mechanics were first conceived of in 1972 by Bas van Fraassen, in his paper "A formal approach to the philosophy of science". Van Fraassen introduced a distinction between a dynamical state, which describes what might be true about a system and which always evolves according to the Schrödinger equation, and a value state, which indicates what is actually true about a system at a given time. The term "modal interpretation" is now used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions, including proposals by Kochen, Dieks, Clifton, Dickson, and Bub. According to Michel Bitbol, Schrödinger's views on how to interpret quantum mechanics progressed through as many as four stages, ending with a non-collapse view that in some respects resembles the interpretations of Everett and van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wavefunction as ontic and treating it as epistemic became interchangeable.
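The role of the complex conjugate noted in the transactional interpretation above can be made explicit with the standard expectation-value formula. The formula itself is an interpretation-neutral textbook identity; only the reading of psi-star as a physically real advanced wave is Cramer's:

\langle A \rangle = \int \psi^*(x) \, \hat{A} \, \psi(x) \, dx

It is this explicit appearance of \psi^* alongside \psi that the transactional interpretation takes as motivation for treating both waves as real.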
Time-symmetric theories

Time-symmetric interpretations of quantum mechanics were first suggested by Walter Schottky in 1921. Several theories have been proposed that modify the equations of quantum mechanics to be symmetric with respect to time reversal. (See Wheeler–Feynman time-symmetric theory.) This creates retrocausality: events in the future can affect ones in the past, just as events in the past can affect ones in the future. In these theories, a single measurement cannot fully determine the state of a system (making them a type of hidden-variables theory), but given two measurements performed at different times, it is possible to calculate the exact state of the system at all intermediate times. The collapse of the wavefunction is therefore not a physical change to the system, just a change in our knowledge of it due to the second measurement. Similarly, they explain entanglement as not being a true physical state but just an illusion created by ignoring retrocausality. The point where two particles appear to "become entangled" is simply a point where each particle is being influenced by events that occur to the other particle in the future. Not all advocates of time-symmetric causality favor modifying the unitary dynamics of standard quantum mechanics. Thus a leading exponent of the two-state vector formalism, Lev Vaidman, states that the two-state vector formalism dovetails well with Hugh Everett's many-worlds interpretation.

Other interpretations

As well as the mainstream interpretations discussed above, a number of other interpretations have been proposed that have not made a significant scientific impact, for various reasons. These range from proposals by mainstream physicists to the more occult ideas of quantum mysticism.

Related concepts

Some ideas are discussed in the context of interpreting quantum mechanics but are not necessarily regarded as interpretations themselves.

Quantum Darwinism

Quantum Darwinism is a theory meant to explain the emergence of the classical world from the quantum world as due to a process of Darwinian natural selection induced by the environment interacting with the quantum system, whereby the many possible quantum states are selected against in favor of a stable pointer state. It was proposed in 2003 by Wojciech Zurek and a group of collaborators including Ollivier, Poulin, Paz and Blume-Kohout. The development of the theory is due to the integration of a number of Zurek's research topics pursued over the course of twenty-five years, including pointer states, einselection and decoherence.

Objective-collapse theories

Objective-collapse theories differ from the Copenhagen interpretation by regarding both the wave function and the process of collapse as ontologically objective (meaning these exist and occur independent of the observer). In objective theories, collapse occurs either randomly ("spontaneous localization") or when some physical threshold is reached, with observers having no special role. Thus, objective-collapse theories are realistic, indeterministic, no-hidden-variables theories. Standard quantum mechanics does not specify any mechanism of collapse; quantum mechanics would need to be extended if objective collapse is correct. The requirement for an extension means that objective-collapse theories are alternatives to quantum mechanics rather than interpretations of it.
Examples include:

the Ghirardi–Rimini–Weber theory
the continuous spontaneous localization model
the Penrose interpretation

Comparisons

The most common interpretations are summarized in the table below. The values shown in the cells of the table are not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, are themselves at the center of the controversy surrounding the given interpretation. For another table comparing interpretations of quantum theory, see reference.

No experimental evidence exists that distinguishes among these interpretations. To that extent, the physical theory stands, and is consistent with itself and with reality. Nevertheless, designing experiments that would test the various interpretations is the subject of active research. Most of these interpretations have variants. For example, it is difficult to get a precise definition of the Copenhagen interpretation, as it was developed and argued by many people.

The silent approach

Although interpretational opinions are openly and widely discussed today, that was not always the case. A notable exponent of this tendency toward silence was Paul Dirac, who once wrote: "The interpretation of quantum mechanics has been dealt with by many authors, and I do not want to discuss it here. I want to deal with more fundamental things." This position is not uncommon among practitioners of quantum mechanics. Similarly, Richard Feynman wrote many popularizations of quantum mechanics without ever publishing about interpretation issues like quantum measurement. Others, like Nico van Kampen and Willis Lamb, have openly criticized non-orthodox interpretations of quantum mechanics.

See also

Bohr–Einstein debates
Einstein's thought experiments
Glossary of quantum philosophy
Local hidden-variable theory
Philosophical interpretation of classical physics
Popper's experiment
Superdeterminism
Quantum foundations

References

Sources

Rudolf Carnap, 1939, "The interpretation of physics", in Foundations of Logic and Mathematics of the International Encyclopedia of Unified Science. Chicago, Illinois: University of Chicago Press.
Dickson, M., 1994, "Wavefunction tails in the modal interpretation", in Hull, D., Forbes, M., and Burian, R., eds., Proceedings of the PSA, 1: 366–376. East Lansing, Michigan: Philosophy of Science Association.
--------, and Clifton, R., 1998, "Lorentz-invariance in modal interpretations", in Dieks, D. and Vermaas, P., eds., The Modal Interpretation of Quantum Mechanics. Dordrecht: Kluwer Academic Publishers: 9–48.
Fuchs, Christopher, 2002, "Quantum Mechanics as Quantum Information (and only a little more)".
--------, and A. Peres, 2000, "Quantum theory needs no 'interpretation'", Physics Today.
Herbert, N., 1985. Quantum Reality: Beyond the New Physics. New York: Doubleday.
Hey, Anthony, and Walters, P., 2003. The New Quantum Universe, 2nd ed. Cambridge University Press.
Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw-Hill.
--------, 1974. The Philosophy of Quantum Mechanics. Wiley & Sons.
Jim Al-Khalili, 2003. Quantum: A Guide for the Perplexed. London: Weidenfeld & Nicolson.
de Muynck, W. M., 2002. Foundations of Quantum Mechanics, an Empiricist Approach. Dordrecht: Kluwer Academic Publishers.
Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton, New Jersey: Princeton University Press.
Karl Popper, 1963. Conjectures and Refutations. London: Routledge and Kegan Paul.
The chapter "Three views Concerning Human Knowledge" addresses, among other things, instrumentalism in the physical sciences. Hans Reichenbach, 1944. Philosophic Foundations of Quantum Mechanics. University of California Press. Bas van Fraassen, 1972, "A formal approach to the philosophy of science", in R. Colodny, ed., Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain. Univ. of Pittsburgh Press: 303–366. John A. Wheeler and Wojciech Hubert Zurek (eds), Quantum Theory and Measurement, Princeton, New Jersey: Princeton University Press, , LoC QC174.125.Q38 1983. Further reading Almost all authors below are professional physicists. David Z Albert, 1992. Quantum Mechanics and Experience. Cambridge, Massachusetts: Harvard University Press. . John S. Bell, 1987. Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press, . The 2004 edition () includes two additional papers and an introduction by Alain Aspect. Dmitrii Ivanovich Blokhintsev, 1968. The Philosophy of Quantum Mechanics. D. Reidel Publishing Company. . David Bohm, 1980. Wholeness and the Implicate Order. London: Routledge. . David Deutsch, 1997. The Fabric of Reality. London: Allen Lane. ; . Argues forcefully against instrumentalism. For general readers. Provides a pragmatic perspective on interpretations. For general readers. Bernard d'Espagnat, 1976. Conceptual Foundation of Quantum Mechanics, 2nd ed. Addison Wesley. . Bernard d'Espagnat, 1983. In Search of Reality. Springer. . Bernard d'Espagnat, 2003. Veiled Reality: An Analysis of Quantum Mechanical Concepts. Westview Press. Bernard d'Espagnat, 2006. On Physics and Philosophy. Princetone, New Jersey: Princeton University Press. Arthur Fine, 1986. The Shaky Game: Einstein Realism and the Quantum Theory. Science and its Conceptual Foundations. Chicago, Illinois: University of Chicago Press. . Ghirardi, Giancarlo, 2004. Sneaking a Look at God's Cards. Princeton, New Jersey: Princeton University Press. Gregg Jaeger (2009) Entanglement, Information, and the Interpretation of Quantum Mechanics. Springer. . N. David Mermin (1990) Boojums all the way through. Cambridge University Press. . Roland Omnès, 1994. The Interpretation of Quantum Mechanics. Princeton, New Jersey: Princeton University Press. . Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton, New Jersey: Princeton University Press. Roland Omnès, 1999. Quantum Philosophy: Understanding and Interpreting Contemporary Science. Princeton, New Jersey: Princeton University Press. Roger Penrose, 1989. The Emperor's New Mind. Oxford University Press. . Especially chapter 6. Roger Penrose, 1994. Shadows of the Mind. Oxford University Press. . Roger Penrose, 2004. The Road to Reality. New York: Alfred A. Knopf. Argues that quantum theory is incomplete. Lee Phillips, 2017. A brief history of quantum alternatives. Ars Technica. External links Stanford Encyclopedia of Philosophy: "Bohmian mechanics" by Sheldon Goldstein. "Collapse Theories." by Giancarlo Ghirardi. "Copenhagen Interpretation of Quantum Mechanics" by Jan Faye. "Everett's Relative State Formulation of Quantum Mechanics" by Jeffrey Barrett. "Many-Worlds Interpretation of Quantum Mechanics" by Lev Vaidman. "Modal Interpretation of Quantum Mechanics" by Michael Dickson and Dennis Dieks. "Philosophical Issues in Quantum Theory" by Wayne Myrvold. "Quantum-Bayesian and Pragmatist Views of Quantum Theory" by Richard Healey. "Quantum Entanglement and Information" by Jeffrey Bub. "Quantum mechanics" by Jenann Ismael. 
"Quantum Logic and Probability Theory" by Alexander Wilce. "Relational Quantum Mechanics" by Federico Laudisa and Carlo Rovelli. "The Role of Decoherence in Quantum Mechanics" by Guido Bacciagaluppi. Internet Encyclopedia of Philosophy: "Interpretations of Quantum Mechanics" by Peter J. Lewis. "Everettian Interpretations of Quantum Mechanics" by Christina Conroy. Epistemology Philosophy of physics Philosophical debates Reality
Interpretations of quantum mechanics
[ "Physics" ]
6,151
[ "Philosophy of physics", "Applied and interdisciplinary physics", "Quantum mechanics", "Quantum measurement", "Interpretations of quantum mechanics" ]
54,743
https://en.wikipedia.org/wiki/Inbreeding
Inbreeding is the production of offspring from the mating or breeding of individuals or organisms that are closely related genetically. By analogy, the term is used in human reproduction, but more commonly refers to the genetic disorders and other consequences that may arise from expression of deleterious recessive traits resulting from incestuous sexual relationships and consanguinity. Animals avoid inbreeding only rarely.

Inbreeding results in homozygosity, which can increase the chances of offspring being affected by recessive traits. In extreme cases, this usually leads to at least temporarily decreased biological fitness of a population (called inbreeding depression), that is, a reduced ability to survive and reproduce. An individual who inherits such deleterious traits is colloquially referred to as inbred. The avoidance of expression of such deleterious recessive alleles caused by inbreeding, via inbreeding avoidance mechanisms, is the main selective reason for outcrossing. Crossbreeding between populations sometimes has positive effects on fitness-related traits, but also sometimes leads to negative effects known as outbreeding depression. However, increased homozygosity increases the probability of fixing beneficial alleles and also slightly decreases the probability of fixing deleterious alleles in a population. Inbreeding can result in purging of deleterious alleles from a population through purifying selection.

Inbreeding is a technique used in selective breeding. For example, in livestock breeding, breeders may use inbreeding when trying to establish a new and desirable trait in the stock and for producing distinct families within a breed, but will need to watch for undesirable characteristics in offspring, which can then be eliminated through further selective breeding or culling. Inbreeding also helps to ascertain the type of gene action affecting a trait. Inbreeding is also used to reveal deleterious recessive alleles, which can then be eliminated through assortative breeding or through culling. In plant breeding, inbred lines are used as stocks for the creation of hybrid lines to make use of the effects of heterosis. Inbreeding in plants also occurs naturally in the form of self-pollination. Inbreeding can significantly influence gene expression, which can prevent inbreeding depression.

Overview

Offspring of biologically related persons are subject to the possible effects of inbreeding, such as congenital birth defects. The chances of such disorders are increased when the biological parents are more closely related. This is because such pairings have a 25% probability of producing homozygous zygotes, resulting in offspring with two recessive alleles, which can produce disorders when these alleles are deleterious. Because most recessive alleles are rare in populations, it is unlikely that two unrelated partners will both be carriers of the same deleterious allele; however, because close relatives share a large fraction of their alleles, the probability that any such deleterious allele is inherited from the common ancestor through both parents is increased dramatically. For each homozygous recessive individual formed there is an equal chance of producing a homozygous dominant individual — one completely devoid of the harmful allele.
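The 25% figure follows from simple allele counting in a carrier-by-carrier cross. The following minimal Python sketch (purely illustrative; the allele names are hypothetical) enumerates the four equally likely allele combinations:

from itertools import product

# Two carrier (Aa) parents; each transmits one allele at random.
parent1 = ["A", "a"]
parent2 = ["A", "a"]

counts = {}
for pair in product(parent1, parent2):
    genotype = "".join(sorted(pair))  # ("a", "A") and ("A", "a") are the same genotype
    counts[genotype] = counts.get(genotype, 0) + 1

total = sum(counts.values())
for genotype, n in sorted(counts.items()):
    print(f"{genotype}: {n}/{total} = {n / total:.0%}")

# Prints:
# AA: 1/4 = 25%   homozygous dominant, devoid of the harmful allele
# Aa: 2/4 = 50%   unaffected carriers
# aa: 1/4 = 25%   homozygous recessive, expresses the disorder

This also makes the text's final point concrete: the one-in-four chance of an affected (aa) child is mirrored by an equal one-in-four chance of a child completely free of the harmful allele (AA).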
Contrary to common belief, inbreeding does not in itself alter allele frequencies, but rather increases the relative proportion of homozygotes to heterozygotes; however, because the increased proportion of deleterious homozygotes exposes the allele to natural selection, in the long run its frequency decreases more rapidly in inbred populations. In the short term, incestuous reproduction is expected to increase the number of spontaneous abortions of zygotes, perinatal deaths, and postnatal offspring with birth defects. The advantages of inbreeding may be the result of a tendency to preserve the structures of alleles interacting at different loci that have been adapted together by a common selective history.

Malformations or harmful traits can stay within a population due to a high homozygosity rate, and this will cause a population to become fixed for certain traits, such as having too many bones in an area (as in the vertebral column of wolves on Isle Royale) or having cranial abnormalities (as in northern elephant seals, where cranial bone length in the lower mandibular tooth row has changed). A high homozygosity rate is problematic for a population because it unmasks recessive deleterious alleles generated by mutations, reduces heterozygote advantage, and is detrimental to the survival of small, endangered animal populations. When deleterious recessive alleles are unmasked due to the increased homozygosity generated by inbreeding, this can cause inbreeding depression. There may also be other deleterious effects besides those caused by recessive diseases. Thus, populations of individuals with similar immune systems may be more vulnerable to infectious diseases (see Major histocompatibility complex and sexual selection).

The inbreeding history of the population should also be considered when discussing the variation in the severity of inbreeding depression between and within species. With persistent inbreeding, there is evidence that inbreeding depression becomes less severe. This is associated with the unmasking and elimination of severely deleterious recessive alleles. However, inbreeding depression is not a temporary phenomenon, because this elimination of deleterious recessive alleles will never be complete. Eliminating slightly deleterious mutations through inbreeding under moderate selection is not as effective. Fixation of alleles most likely occurs through Muller's ratchet, when an asexual population's genome accumulates deleterious mutations that are irreversible.

Despite all its disadvantages, inbreeding can also have a variety of advantages, such as ensuring that a child produced from the mating contains, and will pass on, a higher percentage of its mother's or father's genetics, reducing the recombination load, and allowing the expression of recessive advantageous phenotypes. Some species with a haplodiploid mating system depend on the ability to produce sons to mate with as a means of ensuring that a mate can be found if no other male is available. It has been proposed that, under circumstances when the advantages of inbreeding outweigh the disadvantages, preferential breeding within small groups could be promoted, potentially leading to speciation.

Genetic disorders

Autosomal recessive disorders occur in individuals who have two copies of an allele for a particular recessive genetic mutation. Except in certain rare circumstances, such as new mutations or uniparental disomy, both parents of an individual with such a disorder will be carriers of the gene.
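The first sentence of the passage above has a standard quantitative form. For a biallelic locus with allele frequencies p and q = 1 − p, the genotype frequencies under an inbreeding coefficient F are (a textbook population-genetics result; F = 0 recovers the usual Hardy–Weinberg proportions):

\begin{align*}
  f(\mathrm{AA}) &= p^2 + Fpq \\
  f(\mathrm{Aa}) &= 2pq(1 - F) \\
  f(\mathrm{aa}) &= q^2 + Fpq
\end{align*}

The allele frequency itself is unchanged, since f(AA) + f(Aa)/2 = p^2 + Fpq + pq(1 − F) = p; only the homozygote/heterozygote balance shifts, exactly as the text states.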
These carriers do not display any signs of the mutation and may be unaware that they carry the mutated gene. Since relatives share a higher proportion of their genes than do unrelated people, it is more likely that related parents will both be carriers of the same recessive allele, and therefore their children are at a higher risk of inheriting an autosomal recessive genetic disorder. The extent to which the risk increases depends on the degree of genetic relationship between the parents; the risk is greater when the parents are close relatives and lower for relationships between more distant relatives, such as second cousins, though still greater than for the general population. Children of parent–child or sibling–sibling unions are at an increased risk compared to cousin–cousin unions. Inbreeding may result in a greater than expected phenotypic expression of deleterious recessive alleles within a population. As a result, first-generation inbred individuals are more likely to show physical and health defects.

The isolation of a small population for a period of time can lead to inbreeding within that population, resulting in increased genetic relatedness between breeding individuals. Inbreeding depression can also occur in a large population if individuals tend to mate with their relatives, instead of mating randomly.

Due to higher prenatal and postnatal mortality rates, some individuals in the first generation of inbreeding will not live on to reproduce. Over time, with isolation, such as a population bottleneck caused by purposeful (assortative) breeding or natural environmental factors, the deleterious inherited traits are culled.

Island species are often very inbred, as their isolation from the larger group on a mainland allows natural selection to work on their population. This type of isolation may result in the formation of a new race or even speciation, as the inbreeding first removes many deleterious genes and permits the expression of genes that allow a population to adapt to an ecosystem. As the adaptation becomes more pronounced, the new species or race radiates from its entrance into the new space, or dies out if it cannot adapt and, most importantly, reproduce.

Reduced genetic diversity, for example due to a bottleneck, will unavoidably increase inbreeding for the entire population. This may mean that a species may not be able to adapt to changes in environmental conditions. Each individual will have a similar immune system, as immune systems are genetically based. When a species becomes endangered, the population may fall below a minimum whereby the forced interbreeding between the remaining animals will result in extinction.

Natural breeding includes inbreeding by necessity, and most animals only migrate when necessary. In many cases, the closest available mate is a mother, sister, grandmother, father, brother, or grandfather. In all cases, the environment presents stresses to remove from the population those individuals who cannot survive because of illness. There was an assumption that wild populations do not inbreed; this is not what is observed in some cases in the wild. However, in species such as horses, animals in wild or feral conditions often drive off the young of both sexes, thought to be a mechanism by which the species instinctively avoids some of the genetic consequences of inbreeding. In general, many mammal species, including humanity's closest primate relatives, avoid close inbreeding, possibly due to its deleterious effects.
Examples

Although there are several examples of inbred populations of wild animals, the negative consequences of this inbreeding are poorly documented.

In the South American sea lion, there was concern that recent population crashes would reduce genetic diversity. Historical analysis indicated that a population expansion from just two matrilineal lines was responsible for most of the individuals within the population. Even so, the diversity within the lines allowed great variation in the gene pool that may help to protect the South American sea lion from extinction.

In lions, prides are often followed by related males in bachelor groups. When the dominant male is killed or driven off by one of these bachelors, a father may be replaced by his son. There is no mechanism for preventing inbreeding or ensuring outcrossing. In the prides, most lionesses are related to one another. If there is more than one dominant male, the group of alpha males are usually related. Two lines are then being "line bred". Also, in some populations, such as the Crater lions, it is known that a population bottleneck has occurred. Researchers found far greater genetic heterozygosity than expected. In fact, predators are known for low genetic variance, along with most of the top portion of the trophic levels of an ecosystem. Additionally, the alpha males of two neighboring prides can be from the same litter; one brother may come to acquire leadership over another's pride, and subsequently mate with his 'nieces' or cousins. However, killing another male's cubs, upon the takeover, allows the newly selected gene complement of the incoming alpha male to prevail over that of the previous male. There are genetic assays being scheduled for lions to determine their genetic diversity. The preliminary studies show results inconsistent with the outcrossing paradigm based on individual environments of the studied groups.

In Central California, sea otters were thought to have been driven to extinction due to overhunting, until a small colony was discovered in the Point Sur region in the 1930s. Since then, the population has grown and spread along the central Californian coast to around 2,000 individuals, a level that has remained stable for over a decade. Population growth is limited by the fact that all Californian sea otters are descended from the isolated colony, resulting in inbreeding.

Cheetahs are another example of inbreeding. Thousands of years ago, the cheetah went through a population bottleneck that reduced its population dramatically, so the animals that are alive today are all related to one another. Consequences of inbreeding for this species have included high juvenile mortality, low fecundity, and poor breeding success.

In a study of an island population of song sparrows, inbred individuals showed significantly lower survival rates than outbred individuals during a severe winter-weather-related population crash. These studies show that inbreeding depression and ecological factors both have an influence on survival.

The Florida panther population was reduced to about 30 animals, so inbreeding became a problem. Several females were imported from Texas, and the population is now in better genetic health.

Measures

A measure of inbreeding of an individual A is the probability F(A) that both alleles in one locus are derived from the same allele in an ancestor. These two identical alleles that are both derived from a common ancestor are said to be identical by descent.
This probability F(A) is called the "coefficient of inbreeding".

Another useful measure that describes the extent to which two individuals are related (say individuals A and B) is their coancestry coefficient f(A,B), which gives the probability that one randomly selected allele from A and another randomly selected allele from B are identical by descent. This is also denoted as the kinship coefficient between A and B. A particular case is the self-coancestry of individual A with itself, f(A,A), which is the probability that, taking one random allele from A and then, independently and with replacement, another random allele also from A, both are identical by descent. Since they can be identical by descent by sampling the same allele or by sampling both alleles that happen to be identical by descent, we have f(A,A) = 1/2 + F(A)/2.

Both the inbreeding and the coancestry coefficients can be defined for specific individuals or as average population values. They can be computed from genealogies or estimated from the population size and its breeding properties, but all methods assume no selection and are limited to neutral alleles. There are several methods to compute this percentage; the two main ways are the path method and the tabular method (a minimal sketch of the tabular recursion appears later in this section).

Typical coancestries between relatives are as follows:

Father/daughter or mother/son → 25% (1/4)
Brother/sister → 25% (1/4)
Grandfather/granddaughter or grandmother/grandson → 12.5% (1/8)
Half-brother/half-sister, double cousins → 12.5% (1/8)
Uncle/niece or aunt/nephew → 12.5% (1/8)
Great-grandfather/great-granddaughter or great-grandmother/great-grandson → 6.25% (1/16)
Half-uncle/niece or half-aunt/nephew → 6.25% (1/16)
First cousins → 6.25% (1/16)

Animals

Wild animals

Banded mongoose females regularly mate with their fathers and brothers.
Bed bugs: North Carolina State University found that bedbugs, in contrast to most other insects, tolerate incest and are able to genetically withstand the effects of inbreeding quite well.
Common fruit fly females prefer to mate with their own brothers over unrelated males.
Cottony cushion scales: 'It turns out that females in these hermaphrodite insects are not really fertilizing their eggs themselves, but instead are having this done by a parasitic tissue that infects them at birth,' says Laura Ross of Oxford University's Department of Zoology. 'It seems that this infectious tissue derives from left-over sperm from their father, who has found a sneaky way of having more children by mating with his daughters.'
Adactylidium: The single male offspring mite mates with all the daughters when they are still in the mother. The females, now impregnated, cut holes in their mother's body so that they can emerge. The male emerges as well, but does not look for food or new mates, and dies after a few hours. The females die at the age of 4 days, when their own offspring eat them alive from the inside.

Domestic animals

Breeding in domestic animals is primarily assortative breeding (see selective breeding). Without the sorting of individuals by trait, a breed could not be established, nor could poor genetic material be removed. Homozygosity is the case where similar or identical alleles combine to express a trait that is not otherwise expressed (recessiveness). Inbreeding exposes recessive alleles through increasing homozygosity. Breeders must avoid breeding from individuals that demonstrate either homozygosity or heterozygosity for disease-causing alleles.
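Stepping back to the Measures section above: the tabular method mentioned there can be sketched as a short recursive computation. This is a minimal illustration over a hypothetical pedigree; it assumes founders are unrelated and non-inbred and that individuals are listed in generation order, and it ignores selection, consistent with the caveats stated in that section.

from functools import lru_cache

# Hypothetical pedigree: each individual maps to (sire, dam);
# None marks a founder, assumed non-inbred and unrelated to all others.
# Individuals are listed in generation order (parents before offspring).
PEDIGREE = {
    "grandsire": (None, None),
    "granddam": (None, None),
    "sire": ("grandsire", "granddam"),
    "dam": ("grandsire", "granddam"),  # sire and dam are full siblings
    "offspring": ("sire", "dam"),
}
ORDER = {name: i for i, name in enumerate(PEDIGREE)}

@lru_cache(maxsize=None)
def kinship(a, b):
    """Coancestry f(a, b): probability that an allele drawn at random
    from a and one drawn at random from b are identical by descent."""
    if a is None or b is None:
        return 0.0
    if a == b:
        sire, dam = PEDIGREE[a]
        return 0.5 * (1.0 + kinship(sire, dam))  # f(A,A) = 1/2 + F(A)/2
    if ORDER[a] < ORDER[b]:
        a, b = b, a  # recurse through the parents of the later-born individual
    sire, dam = PEDIGREE[a]
    return 0.5 * (kinship(sire, b) + kinship(dam, b))

print(kinship("sire", "dam"))  # 0.25: brother/sister coancestry, as in the list above
# The offspring's inbreeding coefficient equals its parents' coancestry:
# F(offspring) = f(sire, dam) = 0.25, the classic full-sibling value.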
The goal of preventing the transfer of deleterious alleles may be achieved by reproductive isolation, sterilization, or, in the extreme case, culling. Culling is not strictly necessary if genetics are the only issue in hand. Small animals such as cats and dogs may be sterilized, but in the case of large agricultural animals, such as cattle, culling is usually the only economic option.

The issue of casual breeders who inbreed irresponsibly is discussed in the following quotation on cattle:

Meanwhile, milk production per cow per lactation increased from 17,444 lbs to 25,013 lbs from 1978 to 1998 for the Holstein breed. Mean breeding values for milk of Holstein cows increased by 4,829 lbs during this period. High producing cows are increasingly difficult to breed and are subject to higher health costs than cows of lower genetic merit for production (Cassell, 2001). Intensive selection for higher yield has increased relationships among animals within breed and increased the rate of casual inbreeding. Many of the traits that affect profitability in crosses of modern dairy breeds have not been studied in designed experiments. Indeed, all crossbreeding research involving North American breeds and strains is very dated (McAllister, 2001) if it exists at all.

As a result of long-term cooperation between the USDA and dairy farmers, which led to a revolution in dairy cattle productivity, the United States has since 1992 been the world's largest supplier of dairy bull semen. However, US genomic technology has resulted in the US dairy cattle population becoming "the most inbred it's ever been", and the rate of increase in US national milk yield has tapered off. Efforts are now being made to identify desirable genes in cattle breeds not yet optimized by US dairy breeders in order to apply hybrid vigor to the US dairy cattle population and thus propel US dairy technology to even higher levels of productivity.

The BBC produced two documentaries on dog inbreeding, titled Pedigree Dogs Exposed and Pedigree Dogs Exposed: Three Years On, that document the negative health consequences of excessive inbreeding.

Linebreeding

Linebreeding is a form of inbreeding. There is no clear distinction between the two terms, but linebreeding may encompass crosses between individuals and their descendants or two cousins. This method can be used to increase a particular animal's contribution to the population. While linebreeding is less likely to cause problems in the first generation than inbreeding is, over time linebreeding can reduce the genetic diversity of a population and cause problems related to a too-small gene pool that may include an increased prevalence of genetic disorders and inbreeding depression.

Outcrossing

Outcrossing is where two unrelated individuals are crossed to produce progeny. In outcrossing, unless there is verifiable genetic information, one may find that all individuals are distantly related to an ancient progenitor. If the trait carries throughout a population, all individuals can have this trait. This is called the founder effect. In well-established breeds that are commonly bred, a large gene pool is present. For example, in 2004, over 18,000 Persian cats were registered. A possibility exists for a complete outcross, if no barriers exist between the individuals to breed. However, this is not always the case, and a form of distant linebreeding occurs. Again, it is up to the assortative breeder to know what sorts of traits, both positive and negative, exist within the diversity of one breeding.
This diversity of genetic expression, within even close relatives, increases the variability and diversity of viable stock.

Laboratory animals

Systematic inbreeding and maintenance of inbred strains of laboratory mice and rats is of great importance for biomedical research. The inbreeding guarantees a consistent and uniform animal model for experimental purposes and enables genetic studies in congenic and knock-out animals. In order to achieve a mouse strain that is considered inbred, a minimum of 20 sequential generations of sibling matings must occur. With each successive generation of breeding, homozygosity in the entire genome increases, eliminating heterozygous loci. With 20 generations of sibling matings, homozygosity occurs at roughly 98.7% of all loci in the genome, allowing these offspring to serve as animal models for genetic studies. The use of inbred strains is also important for genetic studies in animal models, for example to distinguish genetic from environmental effects. The mice that are inbred typically show considerably lower survival rates.

Humans

Effects

Inbreeding increases homozygosity, which can increase the chances of the expression of deleterious or beneficial recessive alleles and therefore has the potential to either decrease or increase the fitness of the offspring. Depending on the rate of inbreeding, natural selection may still be able to eliminate deleterious alleles. With continuous inbreeding, genetic variation is lost and homozygosity is increased, enabling the expression of recessive deleterious alleles in homozygotes. The coefficient of inbreeding, or the degree of inbreeding in an individual, is an estimate of the percentage of homozygous alleles in the overall genome. The more biologically related the parents are, the greater the coefficient of inbreeding, since their genomes have many similarities already. This overall homozygosity becomes an issue when there are deleterious recessive alleles in the gene pool of the family. By pairing chromosomes of similar genomes, the chance for these recessive alleles to pair and become homozygous greatly increases, leading to offspring with autosomal recessive disorders. However, these deleterious effects are common for very close relatives but not for those related at the level of third cousins or beyond, who have been reported to exhibit increased fitness.

Inbreeding is especially problematic in small populations, where the genetic variation is already limited. By inbreeding, individuals are further decreasing genetic variation by increasing homozygosity in the genomes of their offspring. Thus, the likelihood of deleterious recessive alleles pairing is significantly higher in a small inbreeding population than in a larger inbreeding population.

The fitness consequences of consanguineous mating have been studied since their scientific recognition by Charles Darwin in 1839. Some of the most harmful effects known from such breeding include its effects on the mortality rate as well as on the general health of the offspring. Since the 1960s, there have been many studies to support such debilitating effects on the human organism. Specifically, inbreeding has been found to decrease fertility as a direct result of increasing homozygosity of deleterious recessive alleles. Fetuses produced by inbreeding also face a greater risk of spontaneous abortions due to inherent complications in development.
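The roughly 98.7% figure quoted for laboratory strains above can be checked numerically. The sketch below assumes Wright's classical recurrence for the inbreeding coefficient under repeated full-sibling mating, F_t = (1 + 2*F_{t-1} + F_{t-2}) / 4, starting from unrelated, non-inbred founders; the function name is ours, not from any particular library.

def fullsib_inbreeding(generations):
    """Inbreeding coefficient F after the given number of consecutive
    full-sibling matings, via F_t = (1 + 2*F_{t-1} + F_{t-2}) / 4."""
    f_prev2, f_prev1 = 0.0, 0.0  # founders: F_{-1} = F_0 = 0
    for _ in range(generations):
        f_prev2, f_prev1 = f_prev1, (1 + 2 * f_prev1 + f_prev2) / 4
    return f_prev1

print(fullsib_inbreeding(20))  # ~0.9863, i.e. roughly 98.6-98.7% of loci homozygous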
Among mothers who experience stillbirths and early infant deaths, those in consanguineous unions have a significantly higher chance of the same outcome with future offspring. Additionally, consanguineous parents possess a high risk of premature birth and of producing underweight and undersized infants. Viable inbred offspring are also likely to be afflicted with physical deformities and genetically inherited diseases. Studies have confirmed an increase in several genetic disorders due to inbreeding, such as blindness, hearing loss, neonatal diabetes, limb malformations, disorders of sex development, schizophrenia and several others. Moreover, there is an increased risk for congenital heart disease depending on the inbreeding coefficient (see coefficient of inbreeding) of the offspring, with significant risk accompanied by an F of 0.125 or higher.

Prevalence

The general negative outlook on and eschewal of inbreeding that is prevalent in the Western world today has roots from over 2,000 years ago. Specifically, written documents such as the Bible illustrate that there have been laws and social customs that have called for the abstention from inbreeding. Along with cultural taboos, parental education and awareness of inbreeding consequences have played large roles in minimizing inbreeding frequencies in areas like Europe. That being so, there are less urbanized and less populated regions across the world that have shown continuity in the practice of inbreeding.

The continuity of inbreeding is often either by choice or unavoidable due to the limitations of the geographical area. When by choice, the rate of consanguinity is highly dependent on religion and culture. In the Western world, some Anabaptist groups are highly inbred because they originate from small founder populations that have bred as a closed population. Of the practicing regions, Middle Eastern and North African territories show the greatest frequencies of consanguinity. Among these populations with high levels of inbreeding, researchers have found several disorders prevalent among inbred offspring. In Lebanon, Saudi Arabia, Egypt, and Israel, the offspring of consanguineous relationships have an increased risk of congenital malformations, congenital heart defects, congenital hydrocephalus and neural tube defects. Furthermore, among inbred children in Palestine and Lebanon, there is a positive association between consanguinity and reported cleft lip/palate cases. Historically, populations of Qatar have engaged in consanguineous relationships of all kinds, leading to high risk of inheriting genetic diseases. As of 2014, around 5% of the Qatari population suffered from hereditary hearing loss; most were descendants of a consanguineous relationship.

Royalty and nobility

Inter-nobility marriage was used as a method of forming political alliances among elites. These ties were often sealed only upon the birth of progeny within the arranged marriage. Thus marriage was seen as a union of lines of nobility and not as a contract between individuals. Royal intermarriage was often practiced among European royal families, usually for interests of state. Over time, due to the relatively limited number of potential consorts, the gene pool of many ruling families grew progressively smaller, until all European royalty was related. This also resulted in many being descended from a certain person through many lines of descent, such as the numerous European royalty and nobility descended from the British Queen Victoria or King Christian IX of Denmark.
The House of Habsburg was known for its intermarriages; the Habsburg lip is often cited as an ill effect. The closely related houses of Habsburg, Bourbon, Braganza and Wittelsbach also frequently engaged in first-cousin unions, as well as the occasional double-cousin and uncle–niece marriages.

In ancient Egypt, royal women were believed to carry the bloodlines, and so it was advantageous for a pharaoh to marry his sister or half-sister; in such cases a special combination of endogamy and polygamy is found. Normally, the old ruler's eldest son and daughter (who could be either siblings or half-siblings) became the new rulers. All rulers of the Ptolemaic dynasty from Ptolemy IV onward (Ptolemy II had married his sister, but they had no issue) married their brothers and sisters, so as to keep the Ptolemaic blood "pure" and to strengthen the line of succession. King Tutankhamun's mother is reported to have been the half-sister of his father. Cleopatra VII (also called Cleopatra VI) and Ptolemy XIII, who married and became co-rulers of ancient Egypt following their father's death, are the most widely known example.

See also

References

External links

Dale Vogt, Helen A. Swartz and John Massey, 1993. Inbreeding: Its Meaning, Uses and Effects on Farm Animals. University of Missouri, Extension.
Consanguineous marriages with global map

Population genetics
Breeding
Incest
Kinship and descent
Inbreeding
[ "Biology" ]
5,875
[ "Behavior", "Reproduction", "Breeding", "Human behavior", "Kinship and descent" ]
54,746
https://en.wikipedia.org/wiki/Incest%20taboo
An incest taboo is any cultural rule or norm that prohibits sexual relations between certain members of the same family, mainly between individuals related by blood. All known human cultures have norms that exclude certain close relatives from those considered suitable or permissible sexual or marriage partners, making such relationships taboo. However, different norms exist among cultures as to which blood relations are permissible as sexual partners and which are not. Sexual relations between related persons which are subject to the taboo are called incestuous relationships. Some cultures proscribe sexual relations between clan members even when no traceable biological relationship exists, while relations with members of other clans are permissible irrespective of the existence of a biological relationship. In many cultures, certain types of cousin relations are preferred as sexual and marital partners, whereas in others these are taboo. Some cultures permit sexual and marital relations between aunts/uncles and nephews/nieces. In some instances, brother–sister marriages have been practiced by elites with some regularity. Parent–child and sibling–sibling unions are almost universally taboo.

Origin

Debate about the origin of the incest taboo has often been framed as a question of whether it is based in nature or nurture. One explanation sees the incest taboo as a cultural implementation of a biologically evolved preference for sexual partners with whom one is unlikely to share genes, since inbreeding may have detrimental outcomes. The most widely held hypothesis proposes that the so-called Westermarck effect discourages adults from engaging in sexual relations with individuals with whom they grew up. The existence of the Westermarck effect has achieved some empirical support.

Another school argues that the incest prohibition is a cultural construct which arises as a side effect of a general human preference for group exogamy, which arises because intermarriage between groups constructs valuable alliances that improve the ability of both groups to thrive. According to this view, the incest taboo is not necessarily universal, but is likely to arise and become stricter under cultural circumstances that favor exogamy over endogamy, and likely to become more lax under circumstances that favor endogamy. This hypothesis has also achieved some empirical support.

Limits to biological evolution of taboo

While it is theoretically possible that natural selection may, under certain genetic circumstances, select for individuals that instinctively avoid mating with (close) relatives, incest will still exist in the gene pool: even genetically weakened, inbred individuals are better lookouts against predators than none at all, and weak individuals are useful to stronger group members as lookouts for predators while being unable to seriously compete with them. Additionally, protecting the health of closer relatives and their inbred offspring is more evolutionarily advantageous than punishing those relatives, especially in a context where predation and starvation are significant factors, as opposed to a rich welfare state.

Research

Modern anthropology developed at a time when a great many human societies were illiterate, and much of the research on incest taboos has taken place in societies without legal codes and, therefore, without written laws concerning marriage and incest.
Nevertheless, anthropologists have found that the institution of marriage, and rules concerning appropriate and inappropriate sexual behavior, exist in every society. The 1951 edition of Notes and Queries on Anthropology, a well-established field manual for ethnographic research, illustrates the scope of ethnographic investigation into the matter. In these theories, anthropologists are generally concerned solely with brother–sister incest, and are not claiming that all sexual relations among family members are taboo or even necessarily considered incestuous by that society. These theories are further complicated by the fact that in many societies people related to one another in different ways, and sometimes distantly, are classified together as siblings, while others who are just as closely related genetically are not considered family members. The definition restricts itself to sexual intercourse; this does not mean that other forms of sexual contact do not occur, or are proscribed, or prescribed. For example, in some Inuit societies in the Arctic, and traditionally in Bali, mothers would routinely stroke the penises of their infant sons; such behavior was considered no more sexual than breastfeeding. In these theories, anthropologists are primarily concerned with marriage rules and not actual sexual behavior. In short, anthropologists were not studying "incest" per se; they were asking informants what they meant by "incest", and what the consequences of "incest" were, in order to map out social relationships within the community.

This research also suggests that the relationship between sexual and marriage practices is complex, and that societies distinguish between different sorts of prohibitions. In other words, although an individual may be prohibited from marrying or having sexual relations with many people, different sexual relations may be prohibited for different reasons, and with different penalties. For example, Trobriand Islanders prohibit both sexual relations between a woman and her brother, and between a woman and her father, but they describe these prohibitions in very different ways: relations between a woman and her brother fall within the category of forbidden relations among members of the same clan; relations between a woman and her father do not. This is because the Trobrianders are matrilineal; children belong to the clan of their mother and not of their father. Thus, sexual relations between a man and his mother's sister (and mother's sister's daughter) are also considered incestuous, but relations between a man and his father's sister are not. A man and his father's sister will often have a flirtatious relationship and, far from being taboo, Trobriand society encourages a man and his father's sister or the daughter of his father's sister to have sexual relations or marry.

Instinctual and genetic explanations
An explanation for the taboo is that it is due to an instinctual, inborn aversion that would lower the adverse genetic effects of inbreeding, such as a higher incidence of congenital birth defects (see the article Inbreeding depression). Since the rise of modern genetics, belief in this theory has grown.

Birth defects and inbreeding
The increase in the frequency of birth defects often attributed to inbreeding results directly from an increase in the frequency of homozygous alleles inherited by the offspring of inbred couples, which raises homozygous allele frequencies within a population and has diverging effects. Should a child inherit two copies of an allele responsible for a birth defect from its parents, the defect will be expressed; on the other hand, should the child inherit two copies of the version not responsible for the defect, the proportion of the defect-causing allele in that population actually decreases. The overall consequences of these diverging effects depend in part on the size of the population. In small populations, as long as children born with inheritable birth defects die (or are killed) before they reproduce, the ultimate effect of inbreeding will be to decrease the frequency of defective genes in the population; over time, the gene pool will be healthier. However, in larger populations, it is more likely that large numbers of carriers will survive and mate, leading to more constant rates of birth defects. Besides recessive genes, there are also other reasons why inbreeding may be harmful, such as a narrow range of certain immune-system genes in a population increasing vulnerability to infectious diseases (see Major histocompatibility complex and sexual selection). The biological costs of incest also depend largely on the degree of genetic proximity between the two relatives engaging in incest. This fact may explain why the cultural taboo generally includes prohibitions against sex between close relatives but less often includes prohibitions against sex between more distant relatives. Children born of close relatives have decreased survival, and many mammal species, including humanity's closest primate relatives, avoid incest.
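The quantitative reasoning above can be made concrete with a standard population-genetics approximation (a minimal sketch under textbook assumptions, not a calculation from this article; the allele frequency chosen and the function names are illustrative). For a deleterious recessive allele at population frequency q, the probability that a child is homozygous, and therefore affected, rises from q² under random mating to q² + F·q·(1−q), where F is the inbreeding coefficient of the parents' mating (1/4 for full siblings, 1/16 for first cousins):

```python
# Hedged sketch: standard textbook formula, not data from the article.
# P(child homozygous for a recessive allele of frequency q) under a mating
# with inbreeding coefficient f is q^2 + f*q*(1-q).

def affected_probability(q: float, f: float) -> float:
    """Probability that a child is homozygous (affected) for a recessive
    allele of population frequency q, given inbreeding coefficient f."""
    return q * q + f * q * (1.0 - q)

q = 0.01  # an illustrative, fairly rare recessive allele
baseline = affected_probability(q, 0.0)
for label, f in [("random mating", 0.0),
                 ("first cousins", 1 / 16),
                 ("full siblings", 1 / 4)]:
    p = affected_probability(q, f)
    print(f"{label:13s}  F = {f:.4f}  P(affected) = {p:.5f}"
          f"  ({p / baseline:.1f}x baseline)")
```

With q = 1%, the sketch yields roughly a 7-fold increase in risk for first-cousin matings and a 26-fold increase for sibling matings; the relative jump grows as the allele becomes rarer, which is consistent with the point above that the biological cost falls off quickly with genetic distance.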
Westermarck effect
The Westermarck effect, first proposed by Edvard Westermarck in 1891, is the theory that children reared together, regardless of biological relationship, form a sentimental attachment that is by its nature non-erotic. Melford Spiro argued that his observations that unrelated children reared together on Israeli kibbutzim nevertheless avoided one another as sexual partners confirmed the Westermarck effect. In one study, Joseph Shepher examined the second generation in a kibbutz and found no marriages and no sexual activity between adolescents of the same peer group; this avoidance was not enforced but voluntary. Looking at the second-generation adults in all kibbutzim, out of a total of 2769 marriages, none were between those of the same peer group. However, according to John Hartung's review of Shepher's book, out of 2516 marriages documented in Israel, 200 were between couples reared in the same kibbutz. These marriages occurred after young adults reared on kibbutzim had served in the military and encountered tens of thousands of other potential mates, and 200 marriages is higher than would be expected by chance. Of these 200 marriages, five were between men and women who had been reared together for the first six years of their lives, which would argue against the Westermarck effect. A study in Taiwan of marriages in which the future bride was adopted into the groom's family as an infant or small child found that such marriages have higher rates of infidelity and divorce and lower fertility than ordinary marriages; it has been argued that this observation is consistent with the Westermarck effect.

Third parties' objections
Another approach looks at moral objections to incest between third parties. Such objections increase the longer a child has grown up together with another child of the opposite sex, and this occurs even if the other child is genetically unrelated.
Humans have been argued to have a special kin-detection system that, besides the incest taboo, also regulates a tendency towards altruism towards kin.

Counter arguments
One objection against an instinctive and genetic basis for the incest taboo is that incest does occur (Cicchetti and Carlson, eds., 1989, Child Maltreatment: Theory and Research on the Causes and Consequences of Child Abuse and Neglect, New York: Cambridge University Press). Anthropologists have also argued that the social construct "incest" (and the incest taboo) is not the same thing as the biological phenomenon of "inbreeding". For example, there is equal genetic relatedness between a man and the daughter of his father's sister and between a man and the daughter of his mother's sister, such that biologists would consider mating incestuous in both instances, but Trobrianders consider mating incestuous in one case and not in the other. Anthropologists have documented a great number of societies where marriages between some first cousins are prohibited as incestuous, while marriages between other first cousins are encouraged. Therefore, it is argued that the prohibition against incestuous relations in most societies is not based on or motivated by concerns over biological closeness. Other studies on cousin marriages have found support for a biological basis for the taboo. Also, current supporters of genetic influences on behavior do not argue that genes determine behavior absolutely, but that genes may create predispositions that are affected in various ways by the environment (including culture).

Steve Stewart-Williams argues against the view that the incest taboo is a Western phenomenon: while brother–sister marriage was reported in a diverse range of cultures, such as Egyptian, Incan, and Hawaiian cultures, it was not a culture-wide phenomenon, being largely restricted to the upper classes. Stewart-Williams argues that these marriages were largely political (their function being to keep power and wealth concentrated in the family), that there is no evidence the siblings were attracted to each other, and that there is in fact some evidence against it (for example, Cleopatra married two of her brothers but did not have children with them, only having children with unrelated lovers). Stewart-Williams suggests that this was therefore simply a case of social pressure overriding anti-incest instincts. Stewart-Williams also observes that anti-incest behaviour has been observed in other animals and even in many plant species (many plants could self-pollinate but have mechanisms that prevent them from doing so).

Sociological explanations
Psychoanalytic theory (in particular, the claimed existence of an Oedipus complex, which posits not an instinctual aversion to incest but an instinctual desire) has influenced many theorists seeking to explain the incest taboo using sociological theories.

Exogamy
The anthropologist Claude Lévi-Strauss developed a general argument for the universality of the incest taboo in human societies. His argument begins with the claim that the incest taboo is in effect a prohibition against endogamy, and that its effect is to encourage exogamy. Through exogamy, otherwise unrelated households or lineages will form relationships through marriage, thus strengthening social solidarity. That is, Lévi-Strauss views marriage as an exchange of women between two social groups.
This theory is based in part on Marcel Mauss's theory of The Gift, which argued that the exchange of gifts creates and sustains social bonds between groups. It is also based on Lévi-Strauss's analysis of data on different kinship systems and marriage practices documented by anthropologists and historians. Lévi-Strauss called attention specifically to data collected by Margaret Mead during her research among the Arapesh. When she asked if a man ever sleeps with his sister, the Arapesh replied: "No, we don't sleep with our sisters; we give our sisters to other men, and other men give us their sisters." Mead pressed the question repeatedly, asking what would happen if a brother and sister did have sex with one another, and Lévi-Strauss quotes the Arapesh response to that question. By applying Mauss's theory to data such as Mead's, Lévi-Strauss proposed what he called alliance theory. He argued that, in "primitive" societies (societies not based on agriculture, class hierarchies, or centralized government), marriage is not fundamentally a relationship between a man and a woman, but a transaction involving a woman that forges a relationship, an alliance, between two men. Some anthropologists argue that nuclear family incest avoidance can be explained in terms of the ecological, demographic, and economic benefits of exogamy.

While Lévi-Strauss generally discounted the relevance of alliance theory in Africa, a particularly strong concern for incest is a fundamental issue among the age systems of East Africa. Here, the avoidance between men of an age-set and their daughters is altogether more intense than any other sexual avoidance. Paraphrasing Lévi-Strauss's argument: without this avoidance, the rivalries for power between age-sets, coupled with the close bonds of sharing between age-mates, could lead to a sharing of daughters as spouses. Young men entering the age system would then find a dire shortage of marriageable girls, and extended families would be in danger of dying out. Thus, by parading this avoidance of their daughters, senior men make these girls available for younger age-sets, and their marriages form alliances that mitigate the rivalries for power.

Endogamy
Exogamy between households or descent groups is typically prescribed in classless societies. Societies that are stratified, that is, divided into unequal classes, often prescribe different degrees of endogamy. Endogamy is the opposite of exogamy; it refers to the practice of marriage between members of the same social group. An example is India's caste system, in which unequal castes are endogamous. Inequality between ethnic groups and races also correlates with endogamy. An extreme example of this principle, and an exception to the incest taboo, is found among members of the ruling class in certain ancient states, such as the Inca, Egypt, China, and Hawaii; brother–sister marriage (usually between half-siblings) was a means of maintaining wealth and political power within one family. Some scholars have argued that in Roman-governed Egypt this practice was also found among commoners, but others have argued that this was in fact not the norm (Huebner, Sabine R., "'Brother-Sister' Marriage in Roman Egypt: a Curiosity of Humankind or a Widespread Family Strategy?", The Journal of Roman Studies 97 (2007): 21–49).

See also
Baldwin effect
Heterosis
Homozygosity
Inbreeding avoidance

References

Bibliography
Claude Lévi-Strauss, 1969. The Elementary Structures of Kinship, revised edition, translated from the French by James Harle Bell and John Richard von Sturmer. Boston: Beacon Press.
George Homans and David M. Schneider, Marriage, Authority, and Final Causes: A Study of Unilateral Cross-Cousin Marriage.
Rodney Needham, Structure and Sentiment: A Test Case in Social Anthropology.
Arthur P. Wolf and William H. Durham (editors), Inbreeding, Incest, and the Incest Taboo: The State of Knowledge at the Turn of the Century.

Taboo
Sociobiology
Interpersonal relationships
Kinship and descent
Incest taboo
[ "Biology" ]
3,474
[ "Behavior", "Behavioural sciences", "Sociobiology", "Interpersonal relationships", "Human behavior", "Kinship and descent" ]
54,789
https://en.wikipedia.org/wiki/Recursively%20enumerable%20language
In mathematics, logic and computer science, a formal language is called recursively enumerable (also recognizable, partially decidable, semidecidable, Turing-acceptable or Turing-recognizable) if it is a recursively enumerable subset of the set of all possible words over the alphabet of the language, i.e., if there exists a Turing machine which will enumerate all valid strings of the language. Recursively enumerable languages are known as type-0 languages in the Chomsky hierarchy of formal languages. All regular, context-free, context-sensitive and recursive languages are recursively enumerable. The class of all recursively enumerable languages is called RE.

Definitions
There are three equivalent definitions of a recursively enumerable language:
A recursively enumerable language is a recursively enumerable subset of the set of all possible words over the alphabet of the language.
A recursively enumerable language is a formal language for which there exists a Turing machine (or other computable function) which will enumerate all valid strings of the language. Note that if the language is infinite, the enumerating algorithm can be chosen so that it avoids repetitions, since we can test whether the string produced for number n has already been produced for some number less than n; if it has, we use the output for input n+1 instead (recursively), again testing whether it is new.
A recursively enumerable language is a formal language for which there exists a Turing machine (or other computable function) that will halt and accept when presented with any string in the language as input, but may either halt and reject or loop forever when presented with a string not in the language. Contrast this with recursive languages, which require that the Turing machine halts in all cases.
All regular, context-free, context-sensitive and recursive languages are recursively enumerable. Post's theorem shows that RE, together with its complement co-RE, corresponds to the first level of the arithmetical hierarchy.

Example
The set of halting Turing machines is recursively enumerable but not recursive: one can simulate a given machine and accept if the simulation halts, so the set is recursively enumerable; on the other hand, the halting problem is undecidable, so the set is not recursive. Some other recursively enumerable languages that are not recursive include:
Post correspondence problem
Mortality (computability theory)
Entscheidungsproblem

Closure properties
Recursively enumerable languages (REL) are closed under the following operations. That is, if L and P are two recursively enumerable languages, then the following languages are recursively enumerable as well:
the Kleene star L* of L
the concatenation of L and P
the union L ∪ P
the intersection L ∩ P.
Recursively enumerable languages are not closed under set difference or complementation. The set difference L \ P is recursively enumerable if P is recursive. If L is recursively enumerable, then the complement of L is recursively enumerable if and only if L is also recursive.

See also
Computably enumerable set
Recursion

Sources
Kozen, D.C. (1997), Automata and Computability, Springer.

External links
Lecture slides

Formal languages
Theory of computation
Mathematics of computing
Alan Turing
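To make the recognizer and enumerator definitions above concrete, here is a minimal Python sketch (an illustration, not part of the article; the palindrome predicate, the two-letter alphabet, and all function names are assumptions chosen for brevity). A semi-decider is modelled as a generator that yields True if it ever accepts and otherwise keeps yielding forever, simulating divergence; dovetailing then turns any such recognizer into an enumerator of the same language, and running two recognizers in lockstep demonstrates closure under union:

```python
from itertools import islice

def semi_decider(word):
    # "Halt and accept, or possibly loop forever": yields True on acceptance,
    # otherwise yields None indefinitely to model divergence. The palindrome
    # test merely stands in for an arbitrary Turing-recognizable predicate.
    if word == word[::-1]:
        yield True               # halt and accept
    while True:
        yield None               # still computing; never explicitly rejects

def all_words(alphabet="ab"):
    # Every word over the alphabet, in length-lexicographic order.
    frontier = [""]
    while True:
        nxt = []
        for w in frontier:
            yield w
            nxt.extend(w + c for c in alphabet)
        frontier = nxt

def enumerate_language(how_many):
    # Dovetailing: at stage n, run the recognizer for n steps on each of the
    # first n candidate words. Every member is emitted once its position and
    # its accepting run are both covered by some stage; divergent runs on
    # non-members never block the enumeration.
    seen, found, n = set(), [], 1
    while len(found) < how_many:
        for w in islice(all_words(), n):
            if w not in seen and any(islice(semi_decider(w), n)):
                seen.add(w)
                found.append(w)
        n += 1
    return found

def union(sd1, sd2):
    # Closure under union: run both semi-deciders in lockstep and accept as
    # soon as either does, so neither divergent run can starve the other.
    def combined(word):
        for a, b in zip(sd1(word), sd2(word)):
            yield True if True in (a, b) else None
    return combined

print(enumerate_language(8))  # ['', 'a', 'b', 'aa', 'bb', 'aaa', 'aba', 'bab']
```

The same dovetailing idea, applied to an enumeration of all inputs run for successively larger step budgets, is how the equivalence of the enumerator and recognizer definitions is proved in textbooks; an intersection construction would instead wait until both recognizers have accepted.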
Recursively enumerable language
[ "Mathematics" ]
728
[ "Formal languages", "Mathematical logic" ]
54,808
https://en.wikipedia.org/wiki/Termite
Termites are a group of detritophagous eusocial insects which consume a variety of decaying plant material, generally in the form of wood, leaf litter, and soil humus. They are distinguished by their moniliform antennae and the soft-bodied and often unpigmented worker caste for which they have been commonly termed "white ants"; however, they are not ants, being more closely related to cockroaches. About 2,972 extant species are currently described, 2,105 of which are members of the family Termitidae. Termites comprise the infraorder Isoptera, or alternatively the epifamily Termitoidae, within the order Blattodea (along with cockroaches). Termites were once classified in a separate order from cockroaches, but recent phylogenetic studies indicate that they evolved from cockroaches, as they are deeply nested within the group, and the sister group to wood-eating cockroaches of the genus Cryptocercus. Previous estimates suggested the divergence took place during the Jurassic or Triassic. More recent estimates suggest that they have an origin during the Late Jurassic, with the first fossil records in the Early Cretaceous. Similarly to ants and some bees and wasps from the separate order Hymenoptera, most termites have an analogous "worker" and "soldier" caste system consisting of mostly sterile individuals which are physically and behaviorally distinct. Unlike ants, most colonies begin from sexually mature individuals known as the "king" and "queen" that together form a lifelong monogamous pair. Also unlike ants, which undergo a complete metamorphosis, termites undergo an incomplete metamorphosis that proceeds through egg, nymph, and adult stages. Termite colonies are commonly described as superorganisms due to the collective behaviors of the individuals which form a self-governing entity: the colony itself. Their colonies range in size from a few hundred individuals to enormous societies with several million individuals. Most species are rarely seen, having a cryptic life-history where they remain hidden within the galleries and tunnels of their nests for most of their lives. Termites' success as a group has led to them colonizing almost every global landmass, with the highest diversity occurring in the tropics where they are estimated to constitute 10% of the animal biomass, particularly in Africa which has the richest diversity with more than 1000 described species. They are important decomposers of decaying plant matter in the subtropical and tropical regions of the world, and their recycling of wood and plant matter is of considerable ecological importance. Many species are ecosystem engineers capable of altering soil characteristics such as hydrology, decomposition, nutrient cycling, vegetative growth, and consequently surrounding biodiversity through the large mounds constructed by certain species. Termites have several impacts on humans. They are a delicacy in the diet of some human cultures such as the Makiritare in the Alto Orinoco province of Venezuela, where they are commonly used as a spice. They are also used in traditional medicinal treatments of various diseases and ailments, such as influenza, asthma, bronchitis, etc. Termites are most famous for being structural pests; however, the vast majority of termite species are innocuous, with the regional numbers of economically significant species being: North America, 9; Australia, 16; Indian subcontinent, 26; tropical Africa, 24; Central America and the West Indies, 17. 
Of known pest species, 28 of the most invasive and structurally damaging belong to the genus Coptotermes. The distribution of most known pest species is expected to increase over time as a consequence of climate change. Increased urbanization and connectivity are also predicted to expand the range of some pest termites.

Etymology
The infraorder name Isoptera is derived from the Greek words iso (equal) and ptera (winged), which refers to the nearly equal size of the fore and hind wings. "Termite" derives from the Latin and Late Latin word termes ("woodworm, white ant"), altered by the influence of Latin terere ("to rub, wear, erode") from the earlier word tarmes. A termite nest is also known as a termitary or termitarium (plural termitaria or termitariums). The word was first used in English in 1781. Earlier attested designations were "wood ants" or "white ants", though these may never have been in wide use, as termites do not exist in the British Isles.

Taxonomy and evolution
Termites were formerly placed in the order Isoptera. As early as 1934, suggestions were made that they were closely related to wood-eating cockroaches (genus Cryptocercus, the woodroach) based on the similarity of their symbiotic gut flagellates. In the 1960s, additional evidence supporting that hypothesis emerged when F. A. McKittrick noted similar morphological characteristics between some termites and Cryptocercus nymphs. In 2008, DNA analysis from 16S rRNA sequences supported the position of termites being nested within the evolutionary tree containing the order Blattodea, which included the cockroaches. The cockroach genus Cryptocercus shares the strongest phylogenetic similarity with termites and is considered to be a sister group to termites. Termites and Cryptocercus share similar morphological and social features: for example, most cockroaches do not exhibit social characteristics, but Cryptocercus takes care of its young and exhibits other social behaviour such as trophallaxis and allogrooming. Termites are thought to be the descendants of the genus Cryptocercus. Some researchers have suggested the more conservative measure of retaining the termites as the Termitoidae, an epifamily within the cockroach order, which preserves the classification of termites at family level and below. Termites have long been accepted to be closely related to cockroaches and mantids, and they are classified in the same superorder (Dictyoptera).

The oldest unambiguous termite fossils date to the early Cretaceous, but given the diversity of Cretaceous termites and early fossil records showing mutualism between microorganisms and these insects, they possibly originated earlier, in the Jurassic or Triassic. Possible evidence of a Jurassic origin is the assumption that the extinct mammaliaform Fruitafossor from the Morrison Formation consumed termites, judging from its morphological similarity to modern termite-eating mammals. The Morrison Formation also yields social insect nest fossils close to those of termites. The oldest termite nest discovered is believed to be from the Upper Cretaceous in West Texas, where the oldest known faecal pellets were also discovered. Claims that termites emerged earlier have faced controversy. For example, F. M. Weesner indicated that the Mastotermitidae termites may go back to the Late Permian, 251 million years ago, and fossil wings that bear a close resemblance to the wings of Mastotermes of the Mastotermitidae, the most primitive living termite, have been discovered in the Permian layers of Kansas.
It is even possible that the first termites emerged during the Carboniferous. The folded wings of the fossil wood roach Pycnoblattina, arranged in a convex pattern between segments 1a and 2a, resemble those seen in Mastotermes, the only living insect with the same pattern. Kumar Krishna et al., though, consider that all of the Paleozoic and Triassic insects tentatively classified as termites are in fact unrelated to termites and should be excluded from the Isoptera. Other studies suggest that the origin of termites is more recent, having diverged from Cryptocercus sometime during the Early Cretaceous.

The primitive giant northern termite (Mastotermes darwiniensis) exhibits numerous cockroach-like characteristics that are not shared with other termites, such as laying its eggs in rafts and having anal lobes on the wings. It has been proposed that the Isoptera and Cryptocercidae be grouped in the clade "Xylophagodea". Termites are sometimes called "white ants", but their only resemblance to the ants is their sociality, which arose by convergent evolution; termites were the first social insects to evolve a caste system, more than 100 million years ago. Termite genomes are generally relatively large compared to those of other insects; the first fully sequenced termite genome, of Zootermopsis nevadensis, which was published in the journal Nature Communications, consists of roughly 500 Mb, while two subsequently published genomes, Macrotermes natalensis and Cryptotermes secundus, are considerably larger at around 1.3 Gb.

External phylogeny showing the relationship of termites with other insect groups (cladogram not reproduced here).
Internal phylogeny showing the relationships of the extant termite families (cladogram not reproduced here).

There are currently 3,173 living and fossil termite species recognised, classified in 12 families; reproductive and/or soldier castes are usually required for identification. The infraorder Isoptera is divided into the following clade and family groups, showing the subfamilies in their respective classification:

Early-diverging termite families
Infraorder Isoptera Brullé, 1832
 Family Cratomastotermitidae Engel, Grimaldi, & Krishna, 2009
 Family Mastotermitidae Desneux, 1904
 Parvorder Euisoptera Engel, Grimaldi, & Krishna, 2009
  Family Melqartitermitidae Engel, 2021
  Family Mylacrotermitidae Engel, 2021
  Family Krishnatermitidae Engel, 2021
  Family Termopsidae Holmgren, 1911
  Family Carinatermitidae Krishna & Grimaldi, 2000
  Minorder Teletisoptera Barden & Engel, 2021
   Family Archotermopsidae Engel, Grimaldi, & Krishna, 2009
   Family Hodotermitidae Desneux, 1904
   Family Hodotermopsidae Engel, 2021
    Subfamily Hodotermopsellinae Engel & Jouault, 2024
    Subfamily Hodotermopsinae Engel, 2021
   Family Arceotermitidae Engel, 2021
    Subfamily Arceotermitinae Engel, 2021
    Subfamily Cosmotermitinae Engel, 2021
   Family Stolotermitidae Holmgren, 1910
    Subfamily Stolotermitinae Holmgren, 1910
    Subfamily Porotermitinae Emerson, 1942
  Minorder Artisoptera Engel, 2021
   Family Tanytermitidae Engel, 2021
   Microrder Icoisoptera Engel, 2013
    Family Kalotermitidae Froggatt, 1897
    Nanorder Neoisoptera Engel, Grimaldi, & Krishna, 2009 (see below for families and subfamilies)

Neoisoptera
The Neoisoptera, literally meaning "newer termites" (in an evolutionary sense), are a recently coined clade that includes families such as the Heterotermitidae, Rhinotermitidae and Termitidae. Neoisopterans have a bifurcated caste development with true workers, and so notably lack pseudergates (except in some basal taxa such as Serritermitidae; see below).
All Neoisopterans have a fontanelle, which appears as a circular pore or series of pores in a depressed region in the middle of the head. The fontanelle connects to the frontal gland, a novel organ unique to neoisopteran termites that evolved to secrete an array of defensive chemicals, and which is therefore typically most developed in the soldier caste. Cellulose digestion in the family Termitidae has co-evolved with the bacterial gut microbiota, and many taxa have evolved additional symbiotic relationships, such as with the fungus Termitomyces; in contrast, basal Neoisopterans and all other Euisoptera have flagellates and prokaryotes in their hindguts. Extant families and subfamilies are organized as follows:

Early-Diverging Neoisoptera (Non-Geoisoptera)
 Family Archeorhinotermitidae Krishna & Grimaldi, 2003
 Family Stylotermitidae Holmgren & Holmgren, 1917
 Family Serritermitidae Holmgren, 1910
 Family Rhinotermitidae Froggatt, 1897
 Family Termitogetonidae Holmgren, 1910
 Family Psammotermitidae Holmgren, 1910
  Subfamily Prorhinotermitinae Quennedey & Deligne, 1975
  Subfamily Psammotermitinae Holmgren, 1910
Clade Geoisoptera Engel, Hellemans, & Bourguignon, 2024
 Family Heterotermitidae Froggatt, 1897 (=Coptotermitinae Holmgren, 1910)
 Family Termitidae Latreille, 1802
  Subfamily Sphaerotermitinae Engel & Krishna, 2004
  Subfamily Macrotermitinae Kemner, 1934, nomen protectum [ICZN 2003]
  Subfamily Foraminitermitinae Holmgren, 1912
  Subfamily Apicotermitinae Grassé & Noirot, 1954 [1955]
  Subfamily Microcerotermitinae Holmgren, 1910
  Subfamily Syntermitinae Engel & Krishna, 2004
  Subfamily Forficulitermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Engelitermitinae Romero Arias, Roisin, & Scheffrahn, 2024
  Subfamily Crepititermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Protohamitermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Cylindrotermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Neocapritermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Nasutitermitinae Hare, 1937
  Subfamily Promirotermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Mirocapritermitinae Kemner, 1934
  Subfamily Amitermitinae Kemner, 1934
  Subfamily Cubitermitinae Weidner, 1956
  Subfamily Termitinae Latreille, 1802

Distribution and diversity
Termites are found on all continents except Antarctica. The diversity of termite species is low in North America and Europe (10 species known in Europe and 50 in North America), but is high in South America, where over 400 species are known. Of the 2,972 extant termite species currently classified, 1,000 are found in Africa, where mounds are extremely abundant in certain regions. Approximately 1.1 million active termite mounds can be found in the northern Kruger National Park alone. In Asia, there are 435 species of termites, which are mainly distributed in China. Within China, termite species are restricted to mild tropical and subtropical habitats south of the Yangtze River. In Australia, all ecological groups of termites (dampwood, drywood, subterranean) are endemic to the country, with over 360 classified species.

Because termites are highly social and abundant, they represent a disproportionate amount of the world's insect biomass. Termites and ants comprise about 1% of insect species, but represent more than 50% of insect biomass. Due to their soft cuticles, termites do not inhabit cool or cold habitats. There are three ecological groups of termites: dampwood, drywood and subterranean.
Dampwood termites are found only in coniferous forests, and drywood termites are found in hardwood forests; subterranean termites live in widely diverse areas. One species in the drywood group is the West Indian drywood termite (Cryptotermes brevis), which is an invasive species in Australia.

Description
Termites are usually small, measuring between 4 and 15 millimetres (0.16 and 0.59 in) in length. The largest of all extant termites are the queens of the species Macrotermes bellicosus, measuring over 10 centimetres (4 in) in length. Another giant termite, the extinct Gyatermes styriensis, flourished in Austria during the Miocene and was notably large in both wingspan and body length.

Most worker and soldier termites are completely blind, as they do not have a pair of eyes. However, some species, such as Hodotermes mossambicus, have compound eyes which they use for orientation and to distinguish sunlight from moonlight. The alates (winged males and females) have eyes along with lateral ocelli. Lateral ocelli, however, are not found in all termites, being absent in the families Hodotermitidae, Termopsidae, and Archotermopsidae. Like other insects, termites have a small tongue-shaped labrum and a clypeus; the clypeus is divided into a postclypeus and anteclypeus. Termite antennae have a number of functions, such as the sensing of touch, taste, odours (including pheromones), heat and vibration. The three basic segments of a termite antenna are the scape, the pedicel (typically shorter than the scape), and the flagellum (all segments beyond the scape and pedicel). The mouthparts consist of a pair of maxillae, a labium, and a set of mandibles. The maxillae and labium bear palps that help termites sense and handle food. The cuticle of most castes is soft and flexible due to a lack of sclerotization, particularly in the abdomen, which often appears translucent. Pigmentation and sclerotization of the cuticle correlate with life history: species that spend more time on the surface in the open tend to have a more sclerotized and pigmented exoskeleton.

As in all insects, the termite thorax consists of three segments: the prothorax, the mesothorax and the metathorax. Each segment bears a pair of legs. On alates, the wings are located on the mesothorax and metathorax, as in all four-winged insects. The mesothorax and metathorax have well-developed exoskeletal plates; the prothorax has smaller plates. Termites have a ten-segmented abdomen with two sets of plates, the tergites and the sternites. The tenth abdominal segment has a pair of short cerci. There are ten tergites, of which nine are wide and one is elongated. The reproductive organs are similar to those in cockroaches but are simpler. For example, the intromittent organ is not present in male alates, and the sperm is either immotile or aflagellate. However, Mastotermitidae termites have multiflagellate sperm with limited motility. The genitals in females are also simplified. Unlike in other termites, Mastotermitidae females have an ovipositor, a feature strikingly similar to that in female cockroaches.

The non-reproductive castes of termites are wingless and rely exclusively on their six legs for locomotion. The alates fly only for a brief amount of time, so they also rely on their legs. The appearance of the legs is similar in each caste, but the soldiers have larger and heavier legs. The structure of the legs is consistent with that of other insects: the parts of a leg include a coxa, trochanter, femur, tibia and the tarsus.
The number of tibial spurs on an individual's leg varies. Some species of termite have an arolium, located between the claws, which is present in species that climb on smooth surfaces but is absent in most termites. Unlike in ants, the hind wings and forewings are of equal length. Most of the time, the alates are poor flyers; their technique is to launch themselves in the air and fly in a random direction. Studies show that, in comparison to larger termites, smaller termites cannot fly long distances. When a termite is in flight, its wings remain at a right angle, and when the termite is at rest, its wings remain parallel to the body.

Caste system
Because termites are hemimetabolous insects, whose young go through multiple gradual adultoid molts before becoming adults, the advent of eusociality has significantly altered the developmental patterns of this group of insects, which, although similar, are not homologous to those of the eusocial Hymenoptera. Unlike ants, bees, and wasps, which undergo a complete metamorphosis and as a result exhibit developmental plasticity only at the immobile larval stage, the mobile adultoid instars of termites remain developmentally flexible throughout all life stages up to the final molt, which has uniquely allowed for the evolution of distinct yet flexible castes among the immatures. As a result, the caste system of termites consists mostly of neotenous or juvenile individuals that undertake most of the labor in the colony, in contrast to the eusocial Hymenoptera, where work is strictly undertaken by the adults.

The developmental plasticity of termites can be described similarly to cell potency, with each molt offering a varying level of phenotypic potency. Early instars typically exhibit the highest phenotypic potency and can be described as totipotent (able to molt into all alternative phenotypes), whereas following instars range from pluripotent (able to molt into reproductives and non-reproductives, but no longer into at least one phenotype), to multipotent (able to molt into either reproductive or non-reproductive phenotypes), to unipotent (able to molt only into developmentally close phenotypes), and finally to committed (no longer able to change phenotype; functionally an adult). In most termites, phenotypic potency decreases with every successive molt. Notable exceptions are basal taxa such as the Archotermopsidae, which are able to retain high developmental plasticity even up to the late instars. In these basal taxa, the immatures are able to go through progressive (nymph-to-imago), regressive (winged-to-wingless) and stationary (size increase, remains wingless) molts, which typically indicate the developmental trajectory an individual follows.

There is significant variation in developmental patterns across termites, even among closely related taxa, but they can typically be generalized into the following two patterns. The first is the linear developmental pathway, in which all immatures are capable of developing into winged adults (alates), exhibit high phenotypic potency, and in which there exists no true sterile caste other than the soldier. The second is the bifurcated developmental pathway, in which immatures diverge into two distinct developmental lineages known as the nymphal (winged) and apterous (wingless) lines. The bifurcation occurs early, either at the egg or the first two instars, and represents an irreversible and committed development towards either the reproductive or non-reproductive lifestyle.
As such, the apterous lineage consists mostly of wingless and truly altruistic sterile individuals (true workers, soldiers), whereas the nymphal lineage consists mainly of fertile individuals destined to become winged reproductives. The bifurcated developmental pathway is found mainly in the derived taxa (i.e. the Neoisoptera), and is believed to have evolved in tandem with the sterile worker caste as species moved to foraging for food beyond their nests, as opposed to the nest itself also being the food (as in obligate wood-dwellers). There are three main castes, which are discussed below.

Worker termites undertake most of the labor within the colony, being responsible for foraging, food storage, and brood and nest maintenance. Workers are tasked with the digestion of cellulose in food and are thus the caste most likely to be found in infested wood. The process by which worker termites feed other nestmates is known as trophallaxis. Trophallaxis is an effective nutritional tactic for converting and recycling nitrogenous components. It frees the parents from feeding all but the first generation of offspring, allowing the group to grow much larger and ensuring that the necessary gut symbionts are transferred from one generation to another. Workers are believed to have evolved from older wingless immatures (larvae) that evolved cooperative behaviors; indeed, in some basal taxa the late-instar larvae are known to undertake the role of workers without differentiating into a true separate caste. Workers can be either male or female, although in some species with polymorphic workers either sex may be restricted to a certain developmental path. Workers may also be fertile or sterile; however, the term "worker" is normally reserved for the latter, having evolved in taxa that exhibit a bifurcated developmental pathway. As a result, sterile workers such as those in the family Termitidae are termed true workers and are the most derived, while those that are undifferentiated and fertile, as in the wood-nesting Archotermopsidae, are termed pseudergates, which are the most basal.

True workers are individuals which irreversibly develop from the apterous lineage and have completely forgone development into a winged adult. They display altruistic behaviors and either have terminal molts or exhibit a low level of phenotypic potency. True workers across different termite taxa (Mastotermitidae, Hodotermitidae, Rhinotermitidae and Termitidae) can vary widely in their level of developmental plasticity, even between closely related taxa, with many species having true workers that can molt into the other apterous castes such as ergatoids (worker reproductives; apterous neotenics), soldiers, or the other worker castes.

Pseudergates sensu stricto are individuals which arise from the linear developmental pathway, have regressively molted, and have lost their wing buds; they are regarded as totipotent immatures. They are capable of performing work but are overall less involved in labor, and are considered more cooperative than truly altruistic. Pseudergates sensu lato, otherwise known as false workers, are most represented in basal lineages (Kalotermitidae, Archotermopsidae, Hodotermopsidae, Serritermitidae) and closely resemble true workers in that they also perform most of the work and are similarly altruistic; however, they differ in developing from the linear developmental pathway, in which they exist in a stationary molt, i.e. they have halted development before the growth of wing buds, and they are regarded as pluripotent immatures.
The soldier caste is the most anatomically and behaviorally specialized, and its sole purpose is to defend the colony. Many soldiers have large heads with highly modified, powerful jaws, so enlarged that they cannot feed themselves. Instead, like juveniles, they are fed by workers. Fontanelles, simple holes in the forehead that lead to a gland which exudes defensive secretions, are a feature of the clade Neoisoptera and are present in all its extant taxa, such as the Rhinotermitidae. The majority of termite species have mandibulate soldiers, which are easily identified by the disproportionately large, sclerotized head and mandibles. Among certain termites, such as Cryptotermes, the soldier caste has evolved globular (phragmotic) heads to block their narrow tunnels. Among mandibulate soldiers, the mandibles have been adapted for a variety of defensive strategies: biting/crushing (Incisitermes), slashing (Cubitermes), slashing/snapping (Dentispicotermes), symmetrical snapping (Termes), asymmetrical snapping (Neocapritermes), and piercing (Armitermes). In the more derived termite taxa, the soldier caste can be polymorphic and include minor and major forms. Other morphologically specialized soldiers include the nasutes, which have a horn-like nozzle projection (nasus) on the head. These unique soldiers are able to spray noxious, sticky secretions containing diterpenes at their enemies. Nitrogen fixation plays an important role in nasute nutrition. Soldiers are normally a committed sterile caste and so do not molt into anything else, but in certain basal taxa such as the Archotermopsidae they are known, rarely, to molt into neotenic forms that develop functional sexual organs. In species with the linear developmental pathway, soldiers develop from apterous immatures and constitute the only true sterile caste in these taxa.

The primary reproductive caste of a colony consists of the fertile adult (imago) female and male individuals, colloquially known as the queen and king. The queen of the colony is responsible for the colony's egg production. Unlike in ants, the male and female reproductives form lifelong pairs, and the king will continue to mate with the queen throughout their lives. In some species, the abdomen of the queen swells up dramatically to increase fecundity, a characteristic known as physogastrism. Depending on the species, the queen starts producing reproductive alates at a certain time of the year, and huge swarms emerge from the colony when nuptial flight begins. These swarms attract a wide variety of predators. The queens can be particularly long-lived for insects, with some reportedly living as long as 30 or 50 years. In both the linear and bifurcated developmental pathways, the primary reproductives develop only from winged immatures (nymphs). These winged immatures are capable of regressively molting into a form known as brachypterous neotenics (nymphoids), which retain both juvenile and adult characteristics. Brachypterous neotenics can be found in both derived and basal termite taxa, and generally serve as supplementary reproductives.

Life cycle
Termites are often compared with the social Hymenoptera (ants and various species of bees and wasps), but their differing evolutionary origins result in major differences in life cycle. In the eusocial Hymenoptera, the workers are exclusively female. Males (drones) are haploid and develop from unfertilised eggs, while females (both workers and the queen) are diploid and develop from fertilised eggs.
In contrast, worker termites, which constitute the majority in a colony, are diploid individuals of both sexes and develop from fertilised eggs. Depending on the species, male and female workers may have different roles in a termite colony.

The life cycle of a termite begins with an egg, but differs from that of a bee or ant in that it proceeds by incomplete metamorphosis, going through multiple gradual, developmentally plastic pre-adult molts before reaching adulthood. Unlike in other hemimetabolous insects, nymphs are defined more strictly in termites, as immature young with visible wing buds, which go through a series of molts to become winged adults. Larvae, defined in termites as early instars lacking wing buds, exhibit the highest developmental potentiality and are able to molt into alates, soldiers, neotenics, or workers. Workers are believed to have evolved from larvae, sharing similarities to the extent that workers can be regarded as "larval": both lack wings, eyes, and functional reproductive organs while maintaining varying levels of developmental flexibility, although usually to a much lesser extent in workers. The main distinction is that, while larvae are wholly dependent on other nestmates to survive, workers are independent and are able to feed themselves and contribute to the colony. Workers remain wingless and, across many taxa, become developmentally arrested, appearing not to change into any other caste until death. In some basal taxa there is no such distinction, with the "workers" (pseudergates) essentially being late-instar larvae that retain the ability to change into all other castes.

The development of larvae into adults can take months; the time period depends on food availability and nutrition, temperature, and the size of the colony. Since larvae and nymphs are unable to feed themselves, workers must feed them, but workers also take part in the social life of the colony and have certain other tasks to accomplish, such as foraging, building or maintaining the nest, or tending to the queen. Pheromones regulate the caste system in termite colonies, preventing all but a very few of the termites from becoming fertile queens.

Queens of the eusocial termite Reticulitermes speratus are capable of a long lifespan without sacrificing fecundity. These long-lived queens have a significantly lower level of oxidative damage, including oxidative DNA damage, than workers, soldiers and nymphs. The lower levels of damage appear to be due to increased catalase, an enzyme that protects against oxidative stress.

Reproduction
Termite alates (winged virgin queens and kings) leave the colony only when a nuptial flight takes place. Alate males and females pair up together and then land in search of a suitable place for a colony. A termite king and queen do not mate until they find such a spot. When they do, they excavate a chamber big enough for both, close up the entrance and proceed to mate. After mating, the pair may never surface again, spending the rest of their lives in the nest. Nuptial flight time varies in each species. For example, alates in certain species emerge during the day in summer, while others emerge during the winter. The nuptial flight may also begin at dusk, when the alates swarm around areas with many lights. The time when nuptial flight begins depends on the environmental conditions, the time of day, moisture, wind speed and precipitation.
The number of termites in a colony also varies, with the larger species typically having 100–1,000 individuals. However, some termite colonies, including those with many individuals, can number in the millions. The queen lays only 10–20 eggs in the very early stages of the colony, but as many as 1,000 a day when the colony is several years old. At maturity, a primary queen has a great capacity to lay eggs. In some species, the mature queen has a greatly distended abdomen and may produce 40,000 eggs a day. The two mature ovaries may have some 2,000 ovarioles each. The abdomen increases the queen's body length to several times more than before mating and reduces her ability to move freely; attendant workers provide assistance.

The king grows only slightly larger after initial mating and continues to mate with the queen for life (a termite queen can live between 30 and 50 years); this is very different from ant colonies, in which a queen mates once with the males and stores the gametes for life, as the male ants die shortly after mating. If a queen is absent, a termite king produces pheromones which encourage the development of replacement termite queens. As the queen and king are monogamous, sperm competition does not occur.

Termites going through incomplete metamorphosis on the path to becoming alates form a subcaste in certain species of termite, functioning as potential supplementary reproductives. These supplementary reproductives mature into primary reproductives only upon the death of a king or queen, or when the primary reproductives are separated from the colony. Supplementaries have the ability to replace a dead primary reproductive, and there may also be more than a single supplementary within a colony. Some queens have the ability to switch from sexual reproduction to asexual reproduction. Studies show that while termite queens mate with the king to produce colony workers, the queens reproduce their replacements (neotenic queens) parthenogenetically. The neotropical termite Embiratermes neotenicus and several other related species produce colonies that contain a primary king accompanied by a primary queen or by up to 200 neotenic queens that originated through thelytokous parthenogenesis of a founding primary queen. The form of parthenogenesis likely employed maintains heterozygosity in the passage of the genome from mother to daughter, thus avoiding inbreeding depression.

Behaviour and ecology

Diet
Termites are primarily detritivores, consuming dead plants at any level of decomposition. They also play a vital role in the ecosystem by recycling waste material such as dead wood, faeces and plants. Many species eat cellulose, having a specialised midgut that breaks down the fibre. Termites are considered to be a major source (11%) of atmospheric methane, one of the prime greenhouse gases, produced from the breakdown of cellulose. Termites rely primarily upon a symbiotic microbial community that includes bacteria and flagellate protists such as metamonads and hypermastigids. This community provides the enzymes that digest the cellulose, allowing the insects to absorb the end products for their own use. The microbial ecosystem present in the termite gut contains many species found nowhere else on Earth. Termites hatch without these symbionts present in their guts, and develop them after being fed a culture by other termites. Gut protozoa, such as Trichonympha, in turn rely on symbiotic bacteria embedded on their surfaces to produce some of the necessary digestive enzymes.
Most higher termites, especially in the family Termitidae, can produce their own cellulase enzymes, but they rely primarily upon the bacteria; the flagellates have been lost in Termitidae. Researchers have found species of spirochetes living in termite guts capable of fixing atmospheric nitrogen to a form usable by the insect. Scientists' understanding of the relationship between the termite digestive tract and its microbial endosymbionts is still rudimentary; what is true in all termite species, however, is that the workers feed the other members of the colony with substances derived from the digestion of plant material, either from the mouth or anus. Judging from closely related bacterial species, it is strongly presumed that the gut microbiota of termites and cockroaches derives from their dictyopteran ancestors. Despite primarily consuming decaying plant material as a group, many termite species have been observed to opportunistically feed on dead animals to supplement their dietary needs.

Termites are also known to harbor bacteriophages in their gut. Some of these bacteriophages likely infect the symbiotic bacteria, which play a key role in termite biology. The exact role and function of bacteriophages in the termite gut microbiome is not clearly understood. Termite gut bacteriophages also show similarity to bacteriophages (crAssphage) found in the human gut.

Certain species, such as Gnathamitermes tubiformans, have seasonal food habits. For example, they may preferentially consume red three-awn (Aristida longiseta) during the summer, buffalograss (Buchloe dactyloides) from May to August, and blue grama (Bouteloua gracilis) during spring, summer and autumn. Colonies of G. tubiformans consume less food in spring than they do during autumn, when their feeding activity is high. Various woods differ in their susceptibility to termite attack; the differences are attributed to such factors as moisture content, hardness, and resin and lignin content. In one study, the drywood termite Cryptotermes brevis strongly preferred poplar and maple woods to other woods, which were generally rejected by the termite colony. These preferences may in part have represented conditioned or learned behaviour.

Some species of termite practice fungiculture. They maintain a "garden" of specialised fungi of the genus Termitomyces, which are nourished by the excrement of the insects. When the fungi are eaten, their spores pass undamaged through the intestines of the termites to complete the cycle by germinating in the fresh faecal pellets. Molecular evidence suggests that the family Macrotermitinae developed agriculture about 31 million years ago. It is assumed that more than 90 per cent of the dry wood in the semiarid savannah ecosystems of Africa and Asia is reprocessed by these termites. Originally living in the rainforest, fungus farming allowed these termites to colonise the African savannah and other new environments, eventually expanding into Asia.

Depending on their feeding habits, termites are placed into two groups: the lower termites and the higher termites. The lower termites predominantly feed on wood. As wood is difficult to digest, termites prefer to consume fungus-infected wood because it is easier to digest and the fungi are high in protein. Meanwhile, the higher termites consume a wide variety of materials, including faeces, humus, grass, leaves and roots.
The gut of the lower termites contains many species of bacteria, along with protozoa such as Holomastigotoides, while the higher termites have only a few species of bacteria and no protozoa.

Predators
Termites are consumed by a wide variety of predators. One termite species alone, Hodotermes mossambicus, was reported (1990) in the stomach contents of 65 birds and 19 mammals. Arthropods such as ants, centipedes, cockroaches, crickets, dragonflies, scorpions and spiders, reptiles such as lizards, and amphibians such as frogs and toads consume termites, with two spiders in the family Ammoxenidae being specialist termite predators. Other predators include aardvarks, aardwolves, anteaters, bats, bears, bilbies, many birds, echidnas, foxes, galagos, numbats, mice and pangolins. The aardwolf is an insectivorous mammal that primarily feeds on termites; it locates its food by sound and also by detecting the scent secreted by the soldiers, and a single aardwolf is capable of consuming thousands of termites in a single night using its long, sticky tongue. Sloth bears break open mounds to consume the nestmates, while chimpanzees have developed tools to "fish" termites from their nest. Wear-pattern analysis of bone tools used by the early hominin Paranthropus robustus suggests that they used these tools to dig into termite mounds.

Among all predators, ants are the greatest enemy of termites. Some ant genera are specialist predators of termites. For example, Megaponera is a strictly termite-eating (termitophagous) genus that performs raiding activities, some lasting several hours. Paltothyreus tarsatus is another termite-raiding species, with each individual stacking as many termites as possible in its mandibles before returning home, all the while recruiting additional nestmates to the raiding site through chemical trails. The Malaysian basicerotine ant Eurhopalothrix heliscata uses a different strategy of termite hunting, pressing itself into tight spaces as it hunts through rotting wood housing termite colonies. Once inside, the ants seize their prey using their short but sharp mandibles. Tetramorium uelense is a specialised predator species that feeds on small termites: a scout recruits 10–30 workers to an area where termites are present, killing the termites by immobilising them with their stingers. Centromyrmex and Iridomyrmex colonies sometimes nest in termite mounds, and so the termites are preyed on by these ants; no evidence for any kind of relationship (other than a predatory one) is known. Other ants, including Acanthostichus, Camponotus, Crematogaster, Cylindromyrmex, Leptogenys, Odontomachus, Ophthalmopone, Pachycondyla, Rhytidoponera, Solenopsis and Wasmannia, also prey on termites. Specialized subterranean species of army ants, such as those in the genus Dorylus, are known to commonly prey on young Macrotermes colonies. Ants are not the only invertebrates that perform raids: many sphecoid wasps, and several genera including Polybia and Angiopolybia, are known to raid termite mounds during the termites' nuptial flight.

Parasites, pathogens and viruses
Termites are less likely to be attacked by parasites than bees, wasps and ants, as they are usually well protected in their mounds. Nevertheless, termites are infected by a variety of parasites, including dipteran flies, Pyemotes mites, and a large number of nematodes. Most parasitic nematodes are in the order Rhabditida; others belong to the genus Mermis or to species such as Diplogaster aerivora and Harteria gallinarum.
Under imminent threat of an attack by parasites, a colony may migrate to a new location. Certain fungal pathogens such as Aspergillus nomius and Metarhizium anisopliae are, however, major threats to a termite colony, as they are not host-specific and may infect large portions of the colony; transmission usually occurs via direct physical contact. M. anisopliae is known to weaken the termite immune system. Infection with A. nomius only occurs when a colony is under great stress. Over 34 fungal species are known to live as parasites on the exoskeleton of termites, with many being host-specific and only causing indirect harm to their host. Termites are also infected by viruses, including Entomopoxvirinae and nuclear polyhedrosis viruses. Locomotion and foraging Because the worker and soldier castes lack wings and thus never fly, and the reproductives use their wings for just a brief amount of time, termites predominantly rely upon their legs to move about. Foraging behaviour depends on the type of termite. For example, certain species feed on the wood structures they inhabit, and others harvest food that is near the nest. Most workers are rarely found out in the open, and do not forage unprotected; they rely on sheeting and runways to protect them from predators. Subterranean termites construct tunnels and galleries to look for food, and workers who manage to find food sources recruit additional nestmates by depositing a phagostimulant pheromone that attracts workers. Foraging workers use semiochemicals to communicate with each other, and workers who begin to forage outside of their nest release trail pheromones from their sternal glands. In one species, Nasutitermes costalis, there are three phases in a foraging expedition: first, soldiers scout an area. When they find a food source, they communicate to other soldiers and a small force of workers starts to emerge. In the second phase, workers appear in large numbers at the site. The third phase is marked by a decrease in the number of soldiers present and an increase in the number of workers. Isolated termite workers may engage in Lévy flight behaviour as an optimised strategy for finding their nestmates or foraging for food (a minimal simulation of such a walk is sketched below). Competition Competition between two colonies can result in agonistic behaviour towards each other, leading to fights. These fights can cause mortality on both sides and, in some cases, the gain or loss of territory. "Cemetery pits" may be present, where the bodies of dead termites are buried. Studies show that when termites encounter each other in foraging areas, some of the termites deliberately block passages to prevent other termites from entering. Dead termites from other colonies found in exploratory tunnels lead to the isolation of the area and thus the need to construct new tunnels. Conflict between two competitors does not always occur. For example, though they might block each other's passages, colonies of Macrotermes bellicosus and Macrotermes subhyalinus are not always aggressive towards each other. Suicide cramming is known in Coptotermes formosanus. Since C. formosanus colonies may get into physical conflict, some termites squeeze tightly into foraging tunnels and die, successfully blocking the tunnel and ending all agonistic activities. Among the reproductive caste, neotenic queens may compete with each other to become the dominant queen when there are no primary reproductives. This struggle among the queens leads to the elimination of all but a single queen, which, with the king, takes over the colony. 
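The Lévy flight mentioned above is an algorithmic idea that can be illustrated with a short simulation. The following Python sketch is illustrative only: the power-law (Pareto) step-length distribution with exponent mu = 2, the minimum step length, and the uniformly random heading are assumptions chosen for demonstration, not parameters measured in termites.

import math
import random

def levy_walk(steps=1000, l_min=1.0, mu=2.0, seed=42):
    """Simulate a 2-D Levy walk: heavy-tailed step lengths, random headings.

    Step lengths follow P(l) ~ l**(-mu) for l >= l_min, sampled by
    inverse transform: l = l_min * u**(-1 / (mu - 1)) for u in (0, 1].
    """
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        u = 1.0 - rng.random()                     # uniform in (0, 1]
        length = l_min * u ** (-1.0 / (mu - 1.0))  # occasional very long steps
        angle = rng.uniform(0.0, 2.0 * math.pi)    # isotropic heading
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        path.append((x, y))
    return path

print("final displacement:", math.hypot(*levy_walk()[-1]))

The rare long steps interspersed with many short ones are what make such walks efficient for finding sparse targets compared with ordinary Brownian motion.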
Ants and termites may compete with each other for nesting space. In particular, ants that prey on termites usually have a negative impact on arboreal nesting species. Communication Most termites are blind, so communication primarily occurs through chemical, mechanical and pheromonal cues. These methods of communication are used in a variety of activities, including foraging, locating reproductives, construction of nests, recognition of nestmates, nuptial flight, locating and fighting enemies, and defending the nests. The most common way of communicating is through antennation. A number of pheromones are known, including contact pheromones (which are transmitted when workers are engaged in trophallaxis or grooming) and alarm, trail and sex pheromones. The alarm pheromone and other defensive chemicals are secreted from the frontal gland. Trail pheromones are secreted from the sternal gland, and sex pheromones derive from two glandular sources: the sternal and tergal glands. When termites go out to look for food, they forage in columns along the ground through vegetation. A trail can be identified by the faecal deposits or runways that are covered by objects. Workers leave pheromones on these trails, which are detected by other nestmates through olfactory receptors. Termites can also communicate through mechanical cues, vibrations, and physical contact. These signals are frequently used for alarm communication or for evaluating a food source. When termites construct their nests, they use predominantly indirect communication. No single termite is in charge of any particular construction project. Individual termites react rather than think, but at a group level they exhibit a sort of collective cognition. Specific structures or other objects such as pellets of soil or pillars cause termites to start building. The termite adds these objects onto existing structures, and such behaviour encourages building behaviour in other workers. The result is a self-organised process whereby the information that directs termite activity results from changes in the environment rather than from direct contact among individuals. Termites can distinguish nestmates from non-nestmates through chemical communication and gut symbionts: chemicals consisting of hydrocarbons released from the cuticle allow the recognition of alien termite species. Each colony has its own distinct odour. This odour is a result of genetic and environmental factors such as the termites' diet and the composition of the bacteria within the termites' intestines. Defence Termites rely on alarm communication to defend a colony. Alarm pheromones can be released when the nest has been breached or is being attacked by enemies or potential pathogens. Termites avoid nestmates infected with Metarhizium anisopliae spores, alerted by vibrational signals released by infected nestmates. Other methods of defence include headbanging, secretion of fluids from the frontal gland and depositing faeces containing alarm pheromones. In some species, some soldiers block tunnels to prevent their enemies from entering the nest, and they may deliberately rupture themselves as an act of defence. In cases where the intrusion is coming from a breach that is larger than the soldier's head, soldiers form a phalanx-like formation around the breach and bite at intruders. If an invasion carried out by Megaponera analis is successful, an entire colony may be destroyed, although this scenario is rare. 
To termites, any breach of their tunnels or nests is a cause for alarm. When termites detect a potential breach, the soldiers usually bang their heads, apparently to attract other soldiers for defence and to recruit additional workers to repair any breach. Additionally, an alarmed termite bumps into other termites, which causes them to be alarmed and to leave pheromone trails to the disturbed area, which is also a way to recruit extra workers. The pantropical subfamily Nasutitermitinae has a specialised caste of soldiers, known as nasutes, that have the ability to exude noxious liquids through a horn-like frontal projection that they use for defence. Nasutes have lost their mandibles through the course of evolution and must be fed by workers. A wide variety of monoterpene hydrocarbon solvents have been identified in the liquids that nasutes secrete. Similarly, Formosan subterranean termites have been known to secrete naphthalene to protect their nests. Soldiers of the species Globitermes sulphureus commit suicide by autothysis – rupturing a large gland just beneath the surface of their cuticles. The thick, yellow fluid in the gland becomes very sticky on contact with the air, entangling ants or other insects that are trying to invade the nest. Another termite, Neocapritermes taracua, also engages in suicidal defence. Workers physically unable to use their mandibles in a fight form a pouch full of chemicals, then deliberately rupture themselves, releasing toxic chemicals that paralyse and kill their enemies. The soldiers of the neotropical termite family Serritermitidae have a defence strategy which involves frontal gland autothysis, with the body rupturing between the head and abdomen. When soldiers guarding nest entrances are attacked by intruders, they engage in autothysis, creating a block that denies entry to any attacker. Workers use several different strategies to deal with their dead, including burying, cannibalism, and avoiding a corpse altogether. To avoid pathogens, termites occasionally engage in necrophoresis, in which a nestmate carries away a corpse from the colony to dispose of it elsewhere. Which strategy is used depends on the nature of the corpse a worker is dealing with (i.e. the age of the carcass). Relationship with other organisms A species of fungus is known to mimic termite eggs, successfully avoiding its natural predators. These small brown balls, known as "termite balls", rarely kill the eggs, and in some cases the workers tend to them. This fungus mimics the eggs by producing cellulose-digesting enzymes known as glucosidases. A unique mimicking behaviour exists between various species of Trichopsenius beetles and certain termite species within Reticulitermes. The beetles share the same cuticle hydrocarbons as the termites and even biosynthesise them. This chemical mimicry allows the beetles to integrate themselves within the termite colonies. The developed appendages on the physogastric abdomen of Austrospirachtha mimetes allow the beetle to mimic a termite worker. Some species of ant are known to capture termites to use as a fresh food source later on, rather than killing them. For example, Formica nigra captures termites, and those that try to escape are immediately seized and driven underground. Certain species of ants in the subfamily Ponerinae conduct these raids, although other ant species go in alone to steal the eggs or nymphs. Ants such as Megaponera analis attack the outside of mounds, and Dorylinae ants attack underground. 
Despite this, some termites and ants can coexist peacefully. Some species of termite, including Nasutitermes corniger, form associations with certain ant species to keep away predatory ant species. The earliest known association between Azteca ants and Nasutitermes termites dates back to the Oligocene to Miocene period. Fifty-four species of ants are known to inhabit Nasutitermes mounds, both occupied and abandoned ones. One reason many ants live in Nasutitermes mounds is the termites' frequent occurrence in their geographical range; another is to protect themselves from floods. Iridomyrmex also inhabits termite mounds, although no evidence for any kind of relationship (other than a predatory one) is known. In rare cases, certain species of termites live inside active ant colonies. Some invertebrate organisms such as beetles, caterpillars, flies and millipedes are termitophiles and dwell inside termite colonies (they are unable to survive independently). As a result, certain beetles and flies have co-evolved with their hosts. They have developed a gland that secretes a substance which attracts the workers, who lick it. Mounds may also provide shelter and warmth to birds, lizards, snakes and scorpions. Termites are known to carry pollen and regularly visit flowers, so they are regarded as potential pollinators for a number of flowering plants. One flower in particular, Rhizanthella gardneri, is regularly pollinated by foraging workers, and it is perhaps the only orchid in the world to be pollinated by termites. Many plants have developed effective defences against termites. However, seedlings are vulnerable to termite attacks and need additional protection, as their defence mechanisms only develop when they have passed the seedling stage. Defence is typically achieved by secreting antifeedant chemicals into the woody cell walls. This reduces the ability of termites to efficiently digest the cellulose. A commercial product, "Blockaid", has been developed in Australia that uses a range of plant extracts to create a paint-on nontoxic termite barrier for buildings. An extract of a species of Australian figwort, Eremophila, has been shown to repel termites; tests have shown that termites are strongly repelled by the toxic material to the extent that they will starve rather than consume the food. When kept close to the extract, they become disoriented and eventually die. Relationship with the environment Termite populations can be substantially impacted by environmental changes, including those caused by human intervention. A Brazilian study investigated the termite assemblages of three Caatinga sites under different levels of anthropogenic disturbance in the semi-arid region of northeastern Brazil, sampled using 65 × 2 m transects. A total of 26 species of termites were present in the three sites, and 196 encounters were recorded in the transects. The termite assemblages were considerably different among sites, with a conspicuous reduction in both diversity and abundance with increased disturbance, related to the reduction of tree density and soil cover, and to the intensity of trampling by cattle and goats. The wood-feeders were the most severely affected feeding group. Nests A termite nest can be considered as being composed of two parts, the inanimate and the animate. The animate is all of the termites living inside the colony, and the inanimate part is the structure itself, which is constructed by the termites. 
Nests can be broadly separated into three main categories: hypogeal, i.e. subterranean (completely below ground); epigeal (protruding above the soil surface); and arboreal (built above ground, but always connected to the ground via shelter tubes). Epigeal nests (mounds) protrude from the earth with ground contact and are made out of earth and mud. A nest has many functions, such as providing a protected living space and shelter against predators. Most termites construct underground colonies rather than multifunctional nests and mounds. Primitive termites of today nest in wooden structures such as logs, stumps and the dead parts of trees, as did termites millions of years ago. To build their nests, termites use a variety of resources, such as faeces, which have many desirable properties as a construction material. Other building materials include partly digested plant material, used in carton nests (arboreal nests built from faecal elements and wood), and soil, used in subterranean nest and mound construction. Not all nests are visible, as many nests in tropical forests are located underground. Species in the subfamily Apicotermitinae are good examples of subterranean nest builders, as they dwell only inside tunnels. Other termites live in wood, and tunnels are constructed as they feed on the wood. Nests and mounds protect the termites' soft bodies against desiccation, light, pathogens and parasites, as well as providing a fortification against predators. Nests made out of carton are particularly weak, and so the inhabitants use counter-attack strategies against invading predators. Arboreal carton nests of mangrove swamp-dwelling Nasutitermes are enriched in lignin and depleted in cellulose and xylans. This change is caused by bacterial decay in the gut of the termites: they use their faeces as a carton building material. Arboreal termite nests can account for as much as 2% of above-ground carbon storage in Puerto Rican mangrove swamps. These Nasutitermes nests are mainly composed of partially biodegraded wood material from the stems and branches of mangrove trees, namely Rhizophora mangle (red mangrove), Avicennia germinans (black mangrove) and Laguncularia racemosa (white mangrove). Some species build complex nests called polycalic nests; this nesting habit is called polycalism. Polycalic species of termites form multiple nests, or calies, connected by subterranean chambers. The termite genera Apicotermes and Trinervitermes are known to have polycalic species. Polycalic nests appear to be less frequent in mound-building species, although polycalic arboreal nests have been observed in a few species of Nasutitermes. Mounds Nests are considered mounds if they protrude from the earth's surface. A mound provides termites the same protection as a nest but is stronger. Mounds located in areas with torrential and continuous rainfall are at risk of mound erosion due to their clay-rich construction. Those made from carton can provide protection from the rain, and in fact can withstand high precipitation. Certain areas in mounds are used as strong points in case of a breach. For example, Cubitermes colonies build narrow tunnels used as strong points, as the diameter of the tunnels is small enough for soldiers to block. A highly protected chamber, known as the "queen's cell", houses the queen and king and is used as a last line of defence. Species in the genus Macrotermes arguably build the most complex structures in the insect world, constructing enormous mounds. 
These mounds are among the largest in the world, reaching a height of 8 to 9 metres (26 to 30 feet), and consist of chimneys, pinnacles and ridges. Another termite species, Amitermes meridionalis, can build nests 3 to 4 metres (10 to 13 feet) high and 2.5 metres (8 feet) wide. The tallest mound ever recorded, 12.8 metres (42 ft) high, was found in the Democratic Republic of the Congo. The sculptured mounds sometimes have elaborate and distinctive forms, such as those of the compass termite (Amitermes meridionalis and A. laurensis), which builds tall, wedge-shaped mounds with the long axis oriented approximately north–south, which gives them their common name. This orientation has been experimentally shown to assist thermoregulation. The north–south orientation causes the internal temperature of a mound to increase rapidly during the morning while avoiding overheating from the midday sun. The temperature then remains at a plateau for the rest of the day until the evening. Shelter tubes Termites construct shelter tubes, also known as earthen tubes or mud tubes, that start from the ground. These shelter tubes can be found on walls and other structures. Constructed by termites during the night, a time of higher humidity, these tubes provide protection to termites from potential predators, especially ants. Shelter tubes also provide high humidity and darkness and allow workers to collect food sources that cannot be accessed in any other way. These passageways are made from soil and faeces and are normally brown in colour. The size of these shelter tubes depends on the number of food sources that are available. They range from less than 1 cm to several cm in width, but may be dozens of metres in length. 
Cryptotermes brevis, the most widely introduced invasive termite species in the world, has been introduced to all the islands in the West Indies and to Australia. In addition to causing damage to buildings, termites can also damage food crops. Termites may attack trees whose resistance to damage is low, but generally ignore fast-growing plants. Most attacks occur at harvest time; crops and trees are attacked during the dry season. In Australia, termites cause more damage to houses each year than fire, floods and storms combined. In Malaysia, it is estimated that termites caused about RM400 million in damage to properties and buildings. The damage caused by termites costs the southwestern United States approximately $1.5 billion each year in wood structure damage, but the true cost of damage worldwide cannot be determined. Drywood termites are responsible for a large proportion of the damage caused by termites. The goal of termite control is to keep structures and susceptible ornamental plants free from termites. Structures may be homes or businesses, or elements such as wooden fence posts and telephone poles. Regular and thorough inspections by a trained professional may be necessary to detect termite activity in the absence of more obvious signs, like termite swarmers or alates inside or adjacent to a structure. Termite monitors made of wood or cellulose adjacent to a structure may also provide an indication of termite foraging activity where it will be in conflict with humans. Termites can be controlled by application of Bordeaux mixture or other substances that contain copper, such as chromated copper arsenate. In the United States, application of a soil termiticide with the active ingredient fipronil, such as Termidor SC or Taurus SC, by a licensed professional is a common remedy approved by the Environmental Protection Agency for economically significant subterranean termites. A growing demand for alternative, green, and "more natural" extermination methods has increased demand for mechanical and biological control methods such as orange oil. To better control the population of termites, various methods have been developed to track termite movements. One early method involved distributing termite bait laced with immunoglobulin G (IgG) marker proteins from rabbits or chickens. Termites collected from the field could be tested for the rabbit-IgG markers using a rabbit-IgG-specific assay. More recently developed, less expensive alternatives include tracking the termites using egg white, cow milk, or soy milk proteins, which can be sprayed on termites in the field. Termites bearing these proteins can be traced using a protein-specific ELISA test. RNAi insecticides specific to termites are in development. One factor reducing investment in their research and development is concern about the high potential for resistance evolution. In 1994, termites of the species Reticulitermes grassei were identified in two bungalows in Saunton, Devon. Anecdotal evidence suggests the infestation could date back 70 years before the official identification. There are reports that gardeners had seen white ants and that a greenhouse had had to be replaced in the past. The Saunton infestation was the first and only colony ever recorded in the UK. In 1998, the Termite Eradication Programme was set up, with the intention of containing and eradicating the colony. The TEP was managed by the Ministry of Housing, Communities & Local Government (now the Department for Levelling Up, Housing and Communities). 
The TEP used "insect growth regulators" to prevent the termites from reaching maturity and reproducing. In 2021, the UK's Termite Eradication Programme announced the eradication of the colony, the first time a country has eradicated termites. As food Forty-three termite species are used as food by humans or are fed to livestock. These insects are particularly important in impoverished countries where malnutrition is common, as the protein from termites can help improve the human diet. Termites are consumed in many regions globally, but this practice has only become popular in developed nations in recent years. Termites are consumed by people in many different cultures around the world. In many parts of Africa, the alates are an important factor in the diets of native populations. Groups have different ways of collecting or cultivating the insects, sometimes collecting soldiers from several species. Though harder to acquire, queens are regarded as a delicacy. Termite alates are high in nutrition, with adequate levels of fat and protein. They are regarded as pleasant in taste, having a nut-like flavour after they are cooked. Alates are collected when the rainy season begins. During a nuptial flight, they are typically seen around lights to which they are attracted, and so nets are set up on lamps and captured alates are later collected. The wings are removed through a technique that is similar to winnowing. The best result comes when they are lightly roasted on a hot plate or fried until crisp. Oil is not required, as their bodies usually contain sufficient amounts of oil. Termites are typically eaten when livestock is lean and tribal crops have not yet developed or produced any food, or if food stocks from a previous growing season are limited. In addition to Africa, termites are consumed in local or tribal areas in Asia and North and South America. In Australia, Indigenous Australians are aware that termites are edible but do not consume them even in times of scarcity; there are few explanations as to why. Termite mounds are the main sources of soil consumption (geophagy) in many countries, including Kenya, Tanzania, Zambia, Zimbabwe and South Africa. Researchers have suggested that termites are suitable candidates for human consumption and space agriculture, as they are high in protein and can be used to convert inedible waste to consumable products for humans. In agriculture Termites can be major agricultural pests, particularly in East Africa and North Asia, where crop losses can be severe (3–100% crop loss in Africa). Counterbalancing this is the greatly improved water infiltration where termite tunnels in the soil allow rainwater to soak in deeply, which helps reduce runoff and consequent soil erosion through bioturbation. In South America, cultivated plants such as eucalyptus, upland rice and sugarcane can be severely damaged by termite infestations, with attacks on leaves, roots and woody tissue. Termites can also attack other plants, including cassava, coffee, cotton, fruit trees, maize, peanuts, soybeans and vegetables. Mounds can disrupt farming activities, making it difficult for farmers to operate farming machinery; however, despite farmers' dislike of the mounds, it is often the case that no net loss of production occurs. Termites can be beneficial to agriculture, such as by boosting crop yields and enriching the soil. Termites and ants can re-colonise untilled land that contains crop stubble, which colonies use for nourishment when they establish their nests. 
The presence of nests in fields enables larger amounts of rainwater to soak into the ground and increases the amount of nitrogen in the soil, both essential for the growth of crops. In science and technology The termite gut has inspired various research efforts aimed at replacing fossil fuels with cleaner, renewable energy sources. Termites are efficient bioreactors, theoretically capable of producing two litres of hydrogen from a single sheet of paper. Approximately 200 species of microbes live inside the termite hindgut, releasing the hydrogen that was trapped inside the wood and plants that they digest. Through the action of unidentified enzymes in the termite gut, lignocellulose polymers are broken down into sugars and are transformed into hydrogen. The bacteria within the gut turn the sugars and hydrogen into acetate, a short-chain fatty acid on which termites rely for energy. Community DNA sequencing of the microbes in the termite hindgut has been employed to provide a better understanding of the metabolic pathway. Genetic engineering may enable hydrogen to be generated in bioreactors from woody biomass. The development of autonomous robots capable of constructing intricate structures without human assistance has been inspired by the complex mounds that termites build. These robots work independently and can move by themselves on a tracked grid, capable of climbing and lifting up bricks. Such robots may be useful for future projects on Mars, or for building levees to prevent flooding. Termites use sophisticated means to control the temperatures of their mounds. As discussed above, the shape and orientation of the mounds of the Australian compass termite stabilise their internal temperatures during the day. As the towers heat up, the solar chimney effect (stack effect) creates an updraft of air within the mound. Wind blowing across the tops of the towers enhances the circulation of air through the mounds, which also include side vents in their construction. The solar chimney effect has been in use for centuries in the Middle East and Near East for passive cooling, as well as in Europe by the Romans. It is only relatively recently, however, that climate-responsive construction techniques have become incorporated into modern architecture. Especially in Africa, the stack effect has become a popular means of achieving natural ventilation and passive cooling in modern buildings. In culture The Eastgate Centre is a shopping centre and office block in central Harare, Zimbabwe, whose architect, Mick Pearce, used passive cooling inspired by that used by the local termites. It was the first major building exploiting termite-inspired cooling techniques to attract international attention. Other such buildings include the Learning Resource Center at the Catholic University of Eastern Africa and the Council House 2 building in Melbourne, Australia. Few zoos hold termites, due to the difficulty in keeping them captive and the reluctance of authorities to permit potential pests. One of the few that do, the Zoo Basel in Switzerland, has two thriving Macrotermes bellicosus populations – resulting in an event very rare in captivity: the mass migration of young flying termites. This happened in September 2008, when thousands of male termites left their mound each night, died, and covered the floors and water pits of the house holding their exhibit. African tribes in several countries have termites as totems, and for this reason tribe members are forbidden to eat the reproductive alates. 
Termites are widely used in traditional popular medicine; they are used as treatments for diseases and other conditions such as asthma, bronchitis, hoarseness, influenza, sinusitis, tonsillitis and whooping cough. In Nigeria, Macrotermes nigeriensis is used for spiritual protection and to treat wounds and sick pregnant women. In Southeast Asia, termites are used in ritual practices. In Malaysia, Singapore and Thailand, termite mounds are commonly worshipped among the populace. Abandoned mounds are viewed as structures created by spirits, and a local guardian is believed to dwell within the mound; this is known as Keramat and Datok Kong. In urban areas, local residents construct red-painted shrines over mounds that have been abandoned, where they pray for good health, protection and luck. See also Mound-building termites Stigmergy Termite shield Xylophagy Notes References Cited literature External links Isoptera: termites at CSIRO Australia Entomology Jared Leadbetter seminar: Termites and Their Symbiotic Gut Microbes Articles containing video clips Building defects Household pest insects Insects in culture Extant Early Cretaceous first appearances Symbiosis Superorganisms
Termite
[ "Materials_science", "Biology" ]
15,952
[ "Superorganisms", "Behavior", "Symbiosis", "Biological interactions", "Building defects", "Mechanical failure" ]
54,813
https://en.wikipedia.org/wiki/Shellac
Shellac is a resin secreted by the female lac bug on trees in the forests of India and Thailand. Chemically, it is mainly composed of aleuritic acid, jalaric acid, shellolic acid, and other natural waxes. It is processed and sold as dry flakes and dissolved in alcohol to make liquid shellac, which is used as a brush-on colorant, food glaze and wood finish. Shellac functions as a tough natural primer, sanding sealant, tannin-blocker, odour-blocker, stain, and high-gloss varnish. Shellac was once used in electrical applications, as it possesses good insulation qualities and seals out moisture. Phonograph and 78 rpm gramophone records were made of shellac until they were gradually replaced by vinyl, which was introduced in 1948. From the time shellac replaced oil and wax finishes in the 19th century, it was one of the dominant wood finishes in the western world until it was largely replaced by nitrocellulose lacquer in the 1920s and 1930s. Besides wood finishing, shellac is used as an ingredient in food, medication and candy as confectioner's glaze, as well as a means of preserving harvested citrus fruit. Etymology Shellac comes from shell and lac, a partial calque of the French term meaning 'lac in thin pieces', later 'gum lac'. Most European languages (except the Romance languages and Greek) have borrowed the word for the substance from English or from its German equivalent. Production Shellac is scraped from the bark of the trees where the female lac bug, Kerria lacca (order Hemiptera, family Kerriidae, also known as Laccifer lacca), secretes it to form a tunnel-like tube as it traverses the branches of the tree. Though these tunnels are sometimes referred to as "cocoons", they are not cocoons in the entomological sense. This insect is in the same superfamily as the insect from which cochineal is obtained. The insects suck the sap of the tree and excrete "sticklac" almost constantly. The least-coloured shellac is produced when the insects feed on the kusum tree (Schleichera). Estimates of the number of lac bugs required to produce a given quantity of shellac vary widely. The root word lakh is a unit of 100,000 in the Indian numbering system and presumably refers to the huge numbers of insects that swarm on host trees. The raw shellac, which contains bark shavings and lac bugs removed during scraping, is placed in canvas tubes (much like long socks) and heated over a fire. This causes the shellac to liquefy, and it seeps out of the canvas, leaving the bark and bugs behind. The thick, sticky shellac is then dried into a flat sheet and broken into flakes, or dried into "buttons" (pucks/cakes), then bagged and sold. The end-user then crushes it into a fine powder and mixes it with ethyl alcohol before use, to dissolve the flakes and make liquid shellac. Liquid shellac has a limited shelf life (about 1 year), so it is sold in dry form for dissolution before use. Liquid shellac sold in hardware stores is often marked with the production (mixing) date, so the consumer can know whether the shellac inside is still good. Some manufacturers (e.g., Zinsser) have ceased labeling shellac with the production date, but the production date may be discernible from the production lot code. Alternatively, old shellac may be tested to see if it is still usable: a few drops on glass should dry to a hard surface in roughly 15 minutes. Shellac that remains tacky for a long time is no longer usable. Storage life depends on peak temperature, so refrigeration extends shelf life. 
The thickness (concentration) of shellac is measured by the unit "pound cut", referring to the amount (in pounds) of shellac flakes dissolved in a gallon of denatured alcohol. For example, a 1-lb. cut of shellac is the strength obtained by dissolving one pound of shellac flakes in a gallon of alcohol (roughly 120 grams per litre). Most pre-mixed commercial preparations come at a 3-lb. cut; a worked example of the dilution arithmetic is sketched below. Multiple thin layers of shellac produce a significantly better end result than a few thick layers. Thick layers of shellac do not adhere well to the substrate or to each other, and thus can peel off with relative ease; in addition, thick shellac will obscure fine details in carved designs in wood and other substrates. Shellac naturally dries to a high-gloss sheen. For applications where a flatter (less shiny) sheen is desired, products containing amorphous silica, such as "Shellac Flat", may be added to the dissolved shellac. Shellac naturally contains a small amount of wax (3%–5% by volume), which comes from the lac bug. In some preparations, this wax is removed (the resulting product being called "dewaxed shellac"). This is done for applications where the shellac will be coated with something else (such as paint or varnish), so that the topcoat will adhere. Waxy (non-dewaxed) shellac appears milky in liquid form, but dries clear. Colours and availability Shellac comes in many warm colours, ranging from a very light blonde ("platina") to a very dark brown ("garnet"), with many varieties of brown, yellow, orange and red in between. The colour is influenced by the sap of the tree the lac bug is living on and by the time of harvest. Historically, the most commonly sold shellac is called "orange shellac", and it was used extensively as a combination stain and protectant for wood panelling and cabinetry in the 20th century. Shellac was once very common anywhere paints or varnishes were sold (such as hardware stores). However, cheaper and more abrasion- and chemical-resistant finishes, such as polyurethane, have almost completely replaced it in decorative residential wood finishing such as hardwood floors, wooden wainscoting plank panelling, and kitchen cabinets. These alternative products, however, must be applied over a stain if the user wants the wood to be coloured; clear or blonde shellac may be applied over a stain without affecting the colour of the finished piece, as a protective topcoat. "Wax over shellac" (an application of buffed-on paste wax over several coats of shellac) is often regarded as a beautiful, if fragile, finish for hardwood floors. Luthiers still use shellac to French polish fine acoustic stringed instruments, but it has been replaced by synthetic plastic lacquers and varnishes in many workshops, especially high-volume production environments. Shellac dissolved in alcohol, typically more dilute than as used in French polish, is now commonly sold as "sanding sealer" by several companies. It is used to seal wooden surfaces, often as preparation for a final, more durable finish; it reduces the amount of final coating required by reducing its absorption into the wood. Properties Shellac is a natural bioadhesive polymer and is chemically similar to synthetic polymers. It can thus be considered a natural form of plastic. With a relatively low melting point, it can be classed as a thermoplastic; when used to bind wood flour, the mixture can be moulded with heat and pressure. 
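The "pound cut" measure described above lends itself to simple arithmetic. The Python sketch below estimates how much alcohol to add to thin a mixture from one cut to a weaker one; it assumes volumes are additive and that the dissolved flakes contribute negligible volume, so it is an approximation for illustration rather than a finishing-trade formula.

def dilute_cut(volume_gal, current_cut_lb, target_cut_lb):
    """Gallons of alcohol to add to weaken a shellac cut.

    A "cut" is pounds of shellac flakes per gallon of alcohol.
    Assumes additive volumes and negligible flake volume.
    """
    if target_cut_lb >= current_cut_lb:
        raise ValueError("target cut must be weaker than the current cut")
    flakes_lb = volume_gal * current_cut_lb       # flakes already in solution
    final_volume_gal = flakes_lb / target_cut_lb  # volume giving the target cut
    return final_volume_gal - volume_gal          # alcohol to add

# Example: thin 1 gallon of the common 3-lb cut down to a 1-lb cut.
print(dilute_cut(1.0, 3.0, 1.0))  # -> 2.0 gallons of alcohol to add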
Shellac scratches more easily than most lacquers and varnishes, and application is more labour-intensive, which is why it has been replaced by plastic in most areas. Shellac is much softer than Urushi lacquer, for instance, which is far superior with regard to both chemical and mechanical resistance. But damaged shellac can easily be touched up with another coat of shellac (unlike polyurethane, which chemically cures to a solid), because the new coat merges with and bonds to the existing coat(s). Shellac is soluble in alkaline solutions of ammonia, sodium borate, sodium carbonate, and sodium hydroxide, and also in various organic solvents. When dissolved in alcohol (typically denatured ethanol) for application, shellac yields a coating of good durability and hardness. Upon mild hydrolysis, shellac gives a complex mix of aliphatic and alicyclic hydroxy acids and their polymers that varies in exact composition depending upon the source of the shellac and the season of collection. The major component of the aliphatic fraction is aleuritic acid, whereas the main alicyclic component is shellolic acid. Shellac is UV-resistant, and does not darken as it ages (though the wood under it may do so, as in the case of pine). History The earliest written evidence of shellac goes back thousands of years, but shellac is known to have been used earlier. According to the ancient Indian epic poem the Mahabharata, an entire palace was coated with dried shellac. Shellac was used as a dyestuff for as long as there was trade with the East Indies. According to Merrifield, shellac was first used as a binding agent in artists' pigments in Spain in the year 1220. The use of overall paint or varnish decoration on large pieces of furniture was first popularised in Venice (and later throughout Italy). There are a number of 13th-century references to painted or varnished cassone, often dowry cassone that were made deliberately impressive as part of dynastic marriages. The definition of varnish is not always clear, but it seems to have been a spirit varnish based on gum benjamin or mastic, both traded around the Mediterranean. At some point, shellac began to be used as well. An article from the Journal of the American Institute of Conservation describes using infrared spectroscopy to identify a shellac coating on a 16th-century cassone. This is also the period in history when "varnisher" was identified as a distinct trade, separate from both carpenter and artist. Another use for shellac is sealing wax. The widespread use of shellac seals in Europe dates back to the 17th century, thanks to the increasing trade with India. Uses Historical In the early- and mid-twentieth century, orange shellac was used as a one-product finish (combination stain and varnish-like topcoat) on decorative wood panelling used on walls and ceilings in homes, particularly in the US. In the American South, use of knotty pine plank panelling covered with orange shellac was once as common in new construction as drywall is today. It was also often used on kitchen cabinets and hardwood floors, prior to the advent of polyurethane. Until the advent of vinyl, most gramophone records were pressed from shellac compounds. From 1921 to 1928, large quantities of shellac were used to create 260 million records for Europe. In the 1930s, it was estimated that half of all shellac was used for gramophone records. Use of shellac for records was common until the 1950s and continued into the 1970s in some non-Western countries, as well as for some children's records. 
Until recent advances in technology, shellac (French polish) was the only glue used in the making of ballet dancers' pointe shoes, to stiffen the box (toe area) to support the dancer en pointe. Many manufacturers of pointe shoes still use the traditional techniques, and many dancers use shellac to revive a softening pair of shoes. Shellac was historically used as a protective coating on paintings. Sheets of Braille were coated with shellac to help protect them from wear due to being read by hand. Shellac was used from the mid-nineteenth century to produce small moulded goods such as picture frames, boxes, toilet articles, jewelry, inkwells and even dentures. Advances in plastics have rendered shellac obsolete as a moulding compound. Shellac (both orange and white varieties) was used both in the field and laboratory to glue and stabilise dinosaur bones until about the mid-1960s. While effective at the time, the long-term negative effects of shellac (being organic in nature) on dinosaur bones and other fossils is debated, and shellac is very rarely used by professional conservators and fossil preparators today. Shellac was used for fixing inductor, motor, generator and transformer windings. It was applied directly to single-layer windings in an alcohol solution. For multi-layer windings, the whole coil was submerged in shellac solution, then drained and placed in a warm location to allow the alcohol to evaporate. The shellac locked the wire turns in place, provided extra insulation, prevented movement and vibration and reduced buzz and hum. In motors and generators it also helps transfer force generated by magnetic attraction and repulsion from the windings to the rotor or armature. In more recent times, shellac has been replaced in these applications by synthetic resins such as polyester resin. Some applications use shellac mixed with other natural or synthetic resins, such as pine resin or phenol-formaldehyde resin, of which Bakelite is the best known, for electrical use. Mixed with other resins, barium sulfate, calcium carbonate, zinc sulfide, aluminium oxide and/or cuprous carbonate (malachite), shellac forms a component of heat-cured capping cement used to fasten the caps or bases to the bulbs of electric lamps. Current uses It is the central element of the traditional "French polish" method of finishing furniture, fine string instruments, and pianos. Shellac, being edible, is used as a glazing agent on pills (see excipient) and sweets, in the form of pharmaceutical glaze (or, "confectioner's glaze"). Because of its acidic properties (resisting stomach acids), shellac-coated pills may be used for a timed enteric or colonic release. Shellac is used as a 'wax' coating on citrus fruit to prolong its shelf/storage life. It is also used to replace the natural wax of the apple, which is removed during the cleaning process. When used for this purpose, it has the food additive E number E904. Shellac is an odour and stain blocker and so is often used as the base of "all-purpose" primers. Although its durability against abrasives and many common solvents is not very good, shellac provides an excellent barrier against water vapour penetration. Shellac-based primers are an effective sealant to control odours associated with fire damage. Shellac has traditionally been used as a dye for cotton and, especially, silk cloth in Thailand, particularly in the north-eastern region. It yields a range of warm colours from pale yellow through to dark orange-reds and dark ochre. 
Naturally dyed silk cloth, including that using shellac, is widely available in the rural northeast, especially in Ban Khwao District, Chaiyaphum province. The Thai name for the insect and the substance is "khrang" (Thai: ครั่ง). Wood finish Wood finishing is one of the most traditional and still popular uses of shellac mixed with solvents or alcohol. This dissolved shellac liquid, applied to a piece of wood, is an evaporative finish: the alcohol of the shellac mixture evaporates, leaving behind a protective film. Shellac as a wood finish is natural and non-toxic in its pure form. A finish made of shellac is UV-resistant. For water-resistance and durability, it does not keep up with synthetic finishing products. Because it is compatible with most other finishes, shellac is also used as a barrier or primer coat on wood to prevent the bleeding of resin or pigments into the final finish, or to prevent wood stain from blotching. Other Shellac is used: in the tying of artificial flies for trout and salmon, where it is used to seal all trimmed materials at the head of the fly. in combination with wax for preserving and imparting a shine to citrus fruits, such as lemons and oranges. in dental technology, where it is occasionally used in the production of custom impression trays and temporary denture baseplates. as a binder in India ink. for bicycles, as a protective and decorative coating for bicycle handlebar tape, and as a hard-drying adhesive for tubular tyres, particularly for track racing. for re-attaching ink sacs when restoring vintage fountain pens, preferably the orange variety. applied as a coating with either a standard or modified Huon-Stuehrer nozzle, to economically micro-spray various smooth candies, such as chocolate-coated peanuts. Irregularities on the surface of the product being sprayed may result in the formation of unsightly aggregates ("lac-aggs"), which precludes the use of this technique on foods such as walnuts or raisins. for fixing pads to the key-cups of woodwind instruments. for lutherie applications, to bind down wood fibres and prevent tear-out on soft spruce soundboards. to stiffen and impart water-resistance to felt hats, for wood finishing, and as a constituent of gossamer (or goss for short), a cheesecloth fabric coated in shellac and ammonia solution used in the shell of traditional silk top and riding hats. for mounting insects, in the form of a gel adhesive mixture composed of 75% ethyl alcohol. as a binder in the fabrication of abrasive wheels, imparting flexibility and smoothness not found in vitrified (ceramic-bond) wheels. 'Elastic' bonded wheels typically contain plaster of paris, which yields a stronger bond when mixed with shellac; the mixture of dry plaster powder, abrasive (e.g. corundum/aluminium oxide, Al2O3) and shellac is heated, and the mixture is pressed in a mould. in fireworks pyrotechnic compositions as a low-temperature fuel, where it allows the creation of pure 'greens' and 'blues' – colours difficult to achieve with other fuel mixes. in jewellery; shellac is often applied to the top of a 'shellac stick' in order to hold small, complex objects. By melting the shellac, the jeweller can press the object (such as a stone setting mount) into it. The shellac, once cool, can firmly hold the object, allowing it to be manipulated with tools. 
in watchmaking, due to its low melting temperature, shellac is used in most mechanical movements to adjust and adhere pallet stones to the pallet fork and to secure the roller jewel to the roller table of the balance wheel. It is also used for securing small parts to a 'wax chuck' (faceplate) in a watchmaker's lathe. in the early twentieth century, it was used to protect some military rifle stocks. in Jelly Belly jelly beans, in combination with beeswax, to give them their final buff and polish. in modern traditional archery, shellac is one of the hot-melt glue/resin products used to attach arrowheads to wooden or bamboo arrow shafts. in alcohol solution as sanding sealer, widely sold to seal sanded surfaces, typically wooden surfaces, before a final coat of a more durable finish; this is similar to French polish but more dilute. as a topcoat in nail polish (although not all nail polish sold as "shellac" contains shellac, and some nail polish not labelled in this way does). in sculpture, to seal plaster and, in conjunction with wax or oil-soaps, to act as a barrier during mould-making processes. as a dilute solution in the sealing of harpsichord soundboards, protecting them from dust and buffering humidity changes while maintaining a bare-wood appearance. as a waterproofing agent for leather (e.g., for the soles of figure skate boots). as a way for ballet dancers to harden their pointe shoes, making them last longer. Gallery See also Polymers Rosin References External links Shellac.net US shellac vendor – properties and uses of dewaxed and non-dewaxed shellac The Story of Shellac (history) DIYinfo.org's Shellac Wiki, practical information on everything to do with shellac Reactive Pyrolysis-Gas Chromatography of Shellac Shellac A short introduction to the origin of shellac, the history of Japanning and French polishing, and how to conserve and repair these finishes sympathetically Shellac Application By Smith & Rodger Wood finishing materials Food additives Insect products Polymers Resins Waxes Excipients Forestry in India Non-timber forest products E-number additives
Shellac
[ "Physics", "Chemistry", "Materials_science" ]
4,311
[ "Resins", "Unsolved problems in physics", "Materials", "Polymer chemistry", "Polymers", "Amorphous solids", "Matter", "Waxes" ]
54,838
https://en.wikipedia.org/wiki/Biogas
Biogas is a gaseous renewable energy source produced from raw materials such as agricultural waste, manure, municipal waste, plant material, sewage, green waste, wastewater, and food waste. Biogas is produced by anaerobic digestion with anaerobic organisms or methanogens inside an anaerobic digester, biodigester or bioreactor. The gas composition is primarily methane (CH4) and carbon dioxide (CO2), and it may have small amounts of hydrogen sulfide (H2S), moisture and siloxanes. The methane can be combusted or oxidized with oxygen. This energy release allows biogas to be used as a fuel; it can be used in fuel cells and for heating purposes, such as cooking. It can also be used in a gas engine to convert the energy in the gas into electricity and heat. After removal of carbon dioxide and hydrogen sulfide, it can be compressed in the same way as natural gas and used to power motor vehicles. In the United Kingdom, for example, biogas is estimated to have the potential to replace around 17% of vehicle fuel. It qualifies for renewable energy subsidies in some parts of the world. Biogas can be cleaned and upgraded to natural gas standards, at which point it is called bio-methane. Biogas is considered to be a renewable resource because its production-and-use cycle is continuous, and it generates no net carbon dioxide. From a carbon perspective, as much carbon dioxide is absorbed from the atmosphere in the growth of the primary bio-resource as is released when the material is ultimately converted to energy. Production Biogas is produced by microorganisms, such as methanogens and sulfate-reducing bacteria, performing anaerobic respiration. Biogas can refer to gas produced naturally and industrially. Natural In soil, methane is produced in anaerobic environments by methanogens, but is mostly consumed in aerobic zones by methanotrophs. Methane emissions result when the balance favors methanogens. Wetland soils are the main natural source of methane. Other sources include oceans, forest soils, termites, and wild ruminants. Industrial The purpose of industrial biogas production is the collection of biomethane, usually for fuel. Industrial biogas is produced either as landfill gas (LFG), formed by the decomposition of biodegradable waste inside a landfill through chemical reactions and microbial action, or as digester gas, produced inside an anaerobic digester. Biogas plants A biogas plant is the name often given to an anaerobic digester that treats farm wastes or energy crops. Biogas can be produced using anaerobic digesters (air-tight tanks with different configurations). These plants can be fed with energy crops such as maize silage or with biodegradable wastes including sewage sludge and food waste. During the process, the micro-organisms transform biomass waste into biogas (mainly methane and carbon dioxide) and digestate. Higher quantities of biogas can be produced when the wastewater is co-digested with other residuals from the dairy, sugar or brewing industries. For example, when 90% wastewater from a beer factory was mixed with 10% cow whey, the production of biogas increased 2.5 times compared with the biogas produced from the brewery wastewater alone. Manufacturing of biogas from intentionally planted maize has been described as unsustainable and harmful because of the very concentrated, intensive and soil-eroding character of these plantations. Key processes There are two key processes, mesophilic and thermophilic digestion, which differ in operating temperature. 
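As a rough illustration of the temperature dependence just mentioned, the short Python sketch below classifies a digester's operating regime. The cut-off temperatures are typical textbook ranges assumed here for illustration; they are not values taken from this article.

def digestion_regime(temp_c):
    """Classify an anaerobic digester by operating temperature (deg C).

    The boundaries are approximate, commonly cited ranges.
    """
    if temp_c < 20:
        return "psychrophilic"  # low-temperature digestion, slow gas production
    elif temp_c <= 45:
        return "mesophilic"     # optimum commonly around 35-40 deg C
    else:
        return "thermophilic"   # faster digestion, commonly 45-60 deg C

for t in (5, 37, 55):
    print(t, "deg C ->", digestion_regime(t))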
In experimental work at the University of Alaska Fairbanks, a 1000-litre digester using psychrophiles harvested from "mud from a frozen lake in Alaska" has produced 200–300 litres of methane per day, about 20–30% of the output from digesters in warmer climates. Dangers The air pollution produced by biogas is similar to that of natural gas: when the methane (a major constituent of biogas) is burned as an energy source, carbon dioxide, a greenhouse gas, is produced (CH4 + 2 O2 → CO2 + 2 H2O). The content of toxic hydrogen sulfide presents additional risks and has been responsible for serious accidents. Leaks of unburned methane are an additional risk, because methane is a potent greenhouse gas; a facility may leak 2% of its methane. Biogas can be explosive when mixed in the ratio of one part biogas to 8–20 parts air (see the worked check below). Special safety precautions have to be taken for entering an empty biogas digester for maintenance work. It is important that a biogas system never has negative pressure, as this could cause an explosion. Negative gas pressure can occur if too much gas is removed or leaked; because of this, biogas should not be used at pressures below one column inch of water, measured by a pressure gauge. Frequent smell checks must be performed on a biogas system. If biogas is smelled anywhere, windows and doors should be opened immediately. If there is a fire, the gas should be shut off at the gate valve of the biogas system. Landfill gas Landfill gas is produced by wet organic waste decomposing under anaerobic conditions in a similar way to biogas. The waste is covered and mechanically compressed by the weight of the material that is deposited above. This material prevents oxygen exposure, thus allowing anaerobic microbes to thrive. Biogas builds up and is slowly released into the atmosphere if the site has not been engineered to capture the gas. Landfill gas released in an uncontrolled way can be hazardous, since it can become explosive when it escapes from the landfill and mixes with oxygen. The lower explosive limit is 5% methane and the upper is 15% methane. The methane in biogas is 28 times more potent a greenhouse gas than carbon dioxide. Therefore, uncontained landfill gas which escapes into the atmosphere may significantly contribute to the effects of global warming. In addition, volatile organic compounds (VOCs) in landfill gas contribute to the formation of photochemical smog. Technical Biochemical oxygen demand (BOD) is a measure of the amount of oxygen required by aerobic micro-organisms to decompose the organic matter in a sample of material. Comparing the BOD of the material fed into the biodigester with the BOD of the liquid discharge allows the daily energy output of a biodigester to be calculated. Another term related to biodigesters is effluent dirtiness, which tells how much organic material there is per unit of biogas source. Typical units for this measure are mg BOD/litre. As an example, effluent dirtiness can range between 800 and 1200 mg BOD/litre in Panama. From 1 kg of discarded kitchen bio-waste, 0.45 m3 of biogas can be obtained. The price for collecting biological waste from households is approximately €70 per ton. Composition The composition of biogas varies depending upon the substrate composition, as well as the conditions within the anaerobic reactor (temperature, pH, and substrate concentration). Landfill gas typically has methane concentrations around 50%. 
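The explosive limits quoted above can be combined into a small calculation. The Python sketch below computes, for an assumed methane content of the gas (such as the roughly 50–60% values typical of this section), the range of biogas-to-air mixing ratios whose methane fraction falls inside the 5–15% window. It is a simplified volume-fraction model, and it illustrates that the explosive window depends strongly on the methane content; the 1:8–20 figure quoted above corresponds to methane-rich gas.

def explosive_air_range(ch4_fraction, lel=0.05, uel=0.15):
    """Parts of air per part of biogas for which the mixture is explosive.

    The mixture's methane fraction is ch4_fraction / (1 + air_parts);
    solving lel <= fraction <= uel for air_parts gives the window.
    """
    most_dilute = ch4_fraction / lel - 1.0  # at the lower explosive limit
    richest = ch4_fraction / uel - 1.0      # at the upper explosive limit
    return max(richest, 0.0), most_dilute

for ch4 in (0.5, 0.6, 0.9):
    lo, hi = explosive_air_range(ch4)
    print(f"{ch4:.0%} methane: explosive from 1:{lo:.1f} to 1:{hi:.1f} (biogas:air)")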
Advanced waste treatment technologies can produce biogas with 55–75% methane, which for reactors with free liquids can be increased to 80–90% methane using in-situ gas purification techniques. As produced, biogas contains water vapor. The fractional volume of water vapor is a function of biogas temperature; correcting the measured gas volume for water vapour content and thermal expansion is straightforward and yields the standardized volume of dry biogas. For 1000 kg (wet weight) of input to a typical biodigester, total solids may be 30% of the wet weight, while volatile suspended solids may be 90% of the total solids. Protein would be 20% of the volatile solids, carbohydrates 70%, and fats 10%. Contaminants Sulfur compounds Toxic, corrosive and foul-smelling hydrogen sulfide (H2S) is the most common contaminant in biogas. If not separated, combustion will produce sulfur dioxide (SO2) and sulfuric acid (H2SO4), which are corrosive and environmentally hazardous. Other sulfur-containing compounds, such as thiols, may also be present. Ammonia Ammonia (NH3) is produced from organic compounds containing nitrogen, such as the amino acids in proteins. If not separated from the biogas, combustion results in nitrogen oxide (NOx) emissions. Siloxanes In some cases, biogas contains siloxanes. They are formed from the anaerobic decomposition of materials commonly found in soaps and detergents. During combustion of biogas containing siloxanes, silicon is released and can combine with free oxygen or other elements in the combustion gas. Deposits are formed containing mostly silica (SiO2) or silicates, and can contain calcium, sulfur, zinc and phosphorus. Such white mineral deposits accumulate to a surface thickness of several millimeters and must be removed by chemical or mechanical means. Practical and cost-effective technologies to remove siloxanes and other biogas contaminants are available. Benefits of manure derived biogas High levels of methane are produced when manure is stored under anaerobic conditions. During storage, and when manure has been applied to the land, nitrous oxide is also produced as a byproduct of the denitrification process. Nitrous oxide (N2O) is 320 times more potent a greenhouse gas than carbon dioxide, and methane 25 times more potent. By converting cow manure into methane biogas via anaerobic digestion, the millions of cattle in the United States would be able to produce 100 billion kilowatt hours of electricity, enough to power millions of homes across the United States. One cow can produce enough manure in one day to generate 3 kilowatt hours of electricity. Furthermore, by converting cattle manure into methane biogas instead of letting it decompose, global warming gases could be reduced by 99 million metric tons, or 4%. Applications Biogas can be used for electricity production on sewage works, in a CHP gas engine, where the waste heat from the engine is conveniently used for heating the digester; for cooking; for space heating; for water heating; and for process heating. If compressed, it can replace compressed natural gas for use in vehicles, where it can fuel an internal combustion engine or fuel cells, and is a much more effective displacer of carbon dioxide than the normal use in on-site CHP plants. Biogas upgrading Raw biogas produced from digestion is roughly 60% methane and 39% carbon dioxide, with trace amounts of hydrogen sulfide: inadequate for use in machinery. The corrosive nature of hydrogen sulfide alone is enough to destroy the mechanisms.
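As a concrete illustration of the water-vapour and temperature correction mentioned under Composition above, the sketch below standardizes a measured wet-gas volume to dry gas at 0 °C and 1013.25 hPa. The Magnus constants for saturation vapour pressure are a standard approximation assumed here; the article does not specify which formula to use.

```python
import math

# Sketch of the dry-gas standardization alluded to in the Composition
# section: correct a measured (wet, warm) biogas volume for water
# vapour and thermal expansion using the ideal-gas law.

P_STD = 1013.25   # standard pressure, hPa
T_STD = 273.15    # standard temperature, K

def saturation_vapour_pressure_hpa(t_celsius: float) -> float:
    """Magnus approximation for water's saturation vapour pressure."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def dry_standard_volume(v_measured_m3: float, t_celsius: float,
                        p_hpa: float = P_STD) -> float:
    """Volume of dry biogas at 0 degC / 1013.25 hPa, assuming the gas
    is saturated with water vapour at the measurement temperature."""
    p_w = saturation_vapour_pressure_hpa(t_celsius)
    t_kelvin = t_celsius + T_STD
    return v_measured_m3 * ((p_hpa - p_w) / P_STD) * (T_STD / t_kelvin)

# Example: 100 m3 of saturated biogas measured at 35 degC shrinks to
# roughly 84 m3 of dry gas at standard conditions.
print(f"{dry_standard_volume(100.0, 35.0):.1f} m3 dry at STP")
```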
Methane in biogas can be concentrated via a biogas upgrader to the same standards as fossil natural gas (which itself has to go through a cleaning process) and becomes biomethane. If the local gas network allows, the producer of the biogas may use its distribution network. Gas must be very clean to reach pipeline quality and must be of the correct composition for the distribution network to accept it. Carbon dioxide, water, hydrogen sulfide, and particulates must be removed if present. There are four main methods of upgrading: water washing, pressure swing adsorption, Selexol absorption, and amine gas treating. In addition to these, the use of membrane separation technology for biogas upgrading is increasing, and there are already several plants operating in Europe and the US. The most prevalent method is water washing, where high-pressure gas flows into a column in which the carbon dioxide and other trace elements are scrubbed by cascading water running counter-flow to the gas. This arrangement can deliver 98% methane, with manufacturers guaranteeing a maximum 2% methane loss in the system. It takes roughly between 3% and 6% of the total energy output in gas to run a biogas upgrading system. Biogas gas-grid injection Gas-grid injection is the injection of biogas into the methane grid (natural gas grid). Until the breakthrough of micro combined heat and power, two-thirds of all the energy produced by biogas power plants was lost as heat. Using the grid to transport the gas to consumers, the energy can be used for on-site generation, resulting in a reduction of losses in the transportation of energy. Typical energy losses in natural gas transmission systems range from 1% to 2%; in electricity transmission they range from 5% to 8%. Before being injected into the gas grid, biogas passes through a cleaning process, during which it is upgraded to natural gas quality. During the cleaning process, trace components harmful to the gas grid and the final users are removed. Biogas in transport If concentrated and compressed, biogas can be used in vehicle transportation. Compressed biogas is becoming widely used in Sweden, Switzerland, and Germany. A biogas-powered train, named Biogaståget Amanda (The Biogas Train Amanda), has been in service in Sweden since 2005. Biogas also powers automobiles. In 1974, a British documentary film titled Sweet as a Nut detailed the biogas production process from pig manure and showed how it fueled a custom-adapted combustion engine. In 2007, an estimated 12,000 vehicles were being fueled with upgraded biogas worldwide, mostly in Europe. Biogas is part of the wet gas and condensing gas (or air) category that includes mist or fog in the gas stream. The mist or fog is predominately water vapor that condenses on the sides of pipes or stacks throughout the gas flow. Biogas environments include wastewater digesters, landfills, and animal feeding operations (covered livestock lagoons). Ultrasonic flow meters are one of the few devices capable of measuring in a biogas atmosphere. Most thermal flow meters are unable to provide reliable data because the moisture causes steady high flow readings and continuous flow spiking, although there are single-point insertion thermal mass flow meters capable of accurately monitoring biogas flows with minimal pressure drop. They can handle moisture variations that occur in the flow stream because of daily and seasonal temperature fluctuations, and account for the moisture in the flow stream to produce a dry gas value.
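The headline water-washing figures quoted above (98% product purity, at most 2% methane slip, 3–6% parasitic energy) support a quick back-of-envelope balance. This is a sketch under stated assumptions: the 60% methane raw-gas composition comes from the upgrading passage earlier in the article, and the 5% parasitic load is simply the midpoint of the quoted range.

```python
# Back-of-envelope mass/energy balance for the water-washing upgrade
# step, using only the headline figures quoted in the text.

def upgrade(raw_m3: float, ch4_frac: float = 0.60,
            slip: float = 0.02, parasitic: float = 0.05):
    ch4_in = raw_m3 * ch4_frac          # methane entering the scrubber
    ch4_out = ch4_in * (1.0 - slip)     # methane recovered in product
    product_m3 = ch4_out / 0.98         # product gas is 98% methane
    net_energy_frac = (1.0 - slip) * (1.0 - parasitic)
    return product_m3, ch4_out, net_energy_frac

product, ch4, net = upgrade(1000.0)
print(f"product gas: {product:.0f} m3 at 98% CH4 ({ch4:.0f} m3 CH4)")
print(f"net energy retained after slip + parasitic load: {net:.1%}")
```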
Biogas generated heat/electricity Biogas can be used in different types of internal combustion engines, such as the Jenbacher or Caterpillar gas engines. Other prime movers, such as gas turbines, are also suitable for the conversion of biogas into both electricity and heat. The digestate is the remaining material that was not transformed into biogas; it can be used as an agricultural fertiliser. Biogas can be used as the fuel in a system that produces biogas from agricultural wastes and co-generates heat and electricity in a combined heat and power (CHP) plant. Unlike other green energy sources such as wind and solar, biogas can be accessed quickly on demand. The global warming potential can also be greatly reduced when using biogas as the fuel instead of fossil fuel. However, the acidification and eutrophication potentials of biogas are 25 and 12 times higher, respectively, than those of fossil fuel alternatives. This impact can be reduced by using the correct combination of feedstocks, covered storage for digesters and improved techniques for retrieving escaped material. Overall, the results still suggest that using biogas can lead to a significant reduction in most impacts compared with fossil fuel alternatives. The balance between environmental damage and greenhouse gas emissions should still be considered when implementing such a system. Technological advancements Projects such as NANOCLEAN are developing new ways to produce biogas more efficiently, using iron oxide nanoparticles in the processes of organic waste treatment. This process can triple the production of biogas. Biogas and Sanitation Faecal sludge is a product of onsite sanitation systems. After collection and transportation, faecal sludge can be treated together with sewage in a conventional treatment plant, or it can be treated independently in a faecal sludge treatment plant. Faecal sludge can also be co-treated with organic solid waste in composting or in an anaerobic digestion system. Biogas can be generated through anaerobic digestion in the treatment of faecal sludge. The appropriate management of excreta, and its valorisation through the production of biogas from faecal sludge, helps mitigate the effects of poorly managed excreta, such as waterborne diseases and water and environmental pollution. The Resource Recovery and Reuse (RRR) subprogram of the CGIAR Research Program on Water, Land and Ecosystems (WLE) is dedicated to applied research on the safe recovery of water, nutrients and energy from domestic and agro-industrial waste streams. The program holds that using waste as energy is financially beneficial while also tackling sanitation, health and environmental issues. Legislation European Union The European Union has legislation regarding waste management and landfill sites called the Landfill Directive. Countries such as the United Kingdom and Germany now have legislation in force that provides farmers with long-term revenue and energy security. The EU mandates that internal combustion engines running on biogas have ample gas pressure to optimize combustion, and within the European Union ATEX centrifugal fan units built in accordance with the European directive 2014/34/EU (previously 94/9/EG) are obligatory. These centrifugal fan units, for example from Combimac, Meidinger AG or Witt & Sohn AG, are suitable for use in Zones 1 and 2. United States The United States regulates landfill gas because it contains VOCs.
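Returning to the CHP application described at the start of this section, the sketch below converts a daily biogas flow into electricity and recoverable heat. The methane heating value and the 40%/45% electrical/thermal efficiencies are typical round numbers assumed for illustration; the article itself quotes no such figures.

```python
# Illustrative CHP sizing sketch. All efficiency and heating-value
# figures below are assumptions, not values from the article.

CH4_LHV_KWH_PER_M3 = 9.97   # lower heating value of methane, approx.

def chp_output(biogas_m3_per_day: float, ch4_frac: float = 0.60,
               eta_el: float = 0.40, eta_th: float = 0.45):
    """Split daily fuel energy into electricity and usable heat."""
    fuel_kwh = biogas_m3_per_day * ch4_frac * CH4_LHV_KWH_PER_M3
    return fuel_kwh * eta_el, fuel_kwh * eta_th

el, heat = chp_output(2000.0)
print(f"electricity: {el:.0f} kWh/day, usable heat: {heat:.0f} kWh/day")
# Part of the heat is typically fed back to keep the digester at its
# operating temperature, as the Applications passage describes.
```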
The United States Clean Air Act and Title 40 of the Code of Federal Regulations (CFR) require landfill owners to estimate the quantity of non-methane organic compounds (NMOCs) emitted. If the estimated NMOC emissions exceed 50 tonnes per year, the landfill owner is required to collect the gas and treat it to remove the entrained NMOCs, which usually means burning it. Because of the remoteness of landfill sites, it is sometimes not economically feasible to produce electricity from the gas. A variety of grants and loans support the development of anaerobic digester systems. The Rural Energy for America Program provides loan financing and grant funding for biogas systems, as do the Environmental Quality Incentives Program, the Conservation Stewardship Program, and the Conservation Loan Program. Global developments United States With its many benefits, biogas is becoming a popular source of energy and is starting to be used more in the United States. In 2003, energy from landfill gas accounted for about 0.6% of total U.S. natural gas consumption. Methane biogas derived from cow manure is being tested in the U.S. According to a 2008 study reported in Science and Children magazine, methane biogas from cow manure would be sufficient to produce 100 billion kilowatt hours, enough to power millions of homes across America. Furthermore, methane biogas has been tested to show that it can reduce greenhouse gas emissions by 99 million metric tons, or about 4% of the greenhouse gases produced by the United States. The number of farm-based digesters increased by 21% in 2021, according to the American Biogas Council. In Vermont, biogas generated on dairy farms was included in the CVPS Cow Power program. The program was originally offered by Central Vermont Public Service Corporation as a voluntary tariff; following a merger with Green Mountain Power it is now the GMP Cow Power Program. Customers can elect to pay a premium on their electric bill, and that premium is passed directly to the farms in the program. In Sheldon, Vermont, Green Mountain Dairy has provided renewable energy as part of the Cow Power program. It started when the brothers who own the farm, Bill and Brian Rowell, wanted to address some of the manure management challenges faced by dairy farms, including manure odor and nutrient availability for the crops they need to grow to feed the animals. They installed an anaerobic digester to process the cow and milking center waste from their 950 cows to produce renewable energy, bedding to replace sawdust, and a plant-friendly fertilizer. The energy and environmental attributes are sold to the GMP Cow Power program. On average, the system run by the Rowells produces enough electricity to power 300 to 350 other homes. The generator capacity is about 300 kilowatts. In Hereford, Texas, cow manure is being used to power an ethanol power plant. By switching to methane biogas, the ethanol power plant has saved 1000 barrels of oil a day. Overall, the power plant has reduced transportation costs and is expected to create many more jobs at future power plants that rely on biogas. In Oakley, Kansas, an ethanol plant considered to be one of the largest biogas facilities in North America is using an integrated manure utilization system (IMUS) to produce heat for its boilers by utilizing feedlot manure, municipal organics and ethanol plant waste.
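The Clean Air Act screening rule described at the start of this section reduces to a single threshold comparison, sketched below. The rule itself (50 tonnes/year of NMOCs) comes from the text; the function and field names are illustrative.

```python
# Sketch of the Clean Air Act screening rule described above: landfills
# estimating more than 50 tonnes/year of non-methane organic compounds
# (NMOCs) must collect and treat the gas, usually by burning it.

NMOC_THRESHOLD_TONNES_PER_YEAR = 50.0

def must_collect_and_treat(estimated_nmoc_tonnes_per_year: float) -> bool:
    """The text says 'exceeds', so the comparison is strictly greater."""
    return estimated_nmoc_tonnes_per_year > NMOC_THRESHOLD_TONNES_PER_YEAR

for estimate in (12.0, 50.0, 87.5):
    action = ("collect and treat" if must_collect_and_treat(estimate)
              else "no control required")
    print(f"{estimate:6.1f} t/yr NMOC -> {action}")
```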
At full capacity the plant is expected to replace 90% of the fossil fuel used in the manufacturing process of ethanol and methanol. In California, the Southern California Gas Company has advocated for mixing biogas into existing natural gas pipelines. However, California state officials have taken the position that biogas is "better used in hard-to-electrify sectors of the economy, like aviation, heavy industry and long-haul trucking". Europe The level of development varies greatly across Europe. While countries such as Germany, Austria, Sweden and Italy are fairly advanced in their use of biogas, there is a vast potential for this renewable energy source in the rest of the continent, especially in Eastern Europe. Different legal frameworks, education schemes and the availability of technology are among the prime reasons behind this untapped potential. Another challenge for the further progression of biogas has been negative public perception. In February 2009, the European Biogas Association (EBA) was founded in Brussels as a non-profit organisation to promote the deployment of sustainable biogas production and use in Europe. EBA's strategy defines three priorities: establish biogas as an important part of Europe's energy mix, promote source separation of household waste to increase the gas potential, and support the production of biomethane as vehicle fuel. In July 2013, it had 60 members from 24 countries across Europe. UK There are about 130 non-sewage biogas plants in the UK. Most are on-farm, and some larger facilities exist off-farm, taking food and consumer wastes. On 5 October 2010, biogas was injected into the UK gas grid for the first time. Sewage from over 30,000 Oxfordshire homes is sent to Didcot sewage treatment works, where it is treated in an anaerobic digester to produce biogas, which is then cleaned to provide gas for approximately 200 homes. In 2015 the green-energy company Ecotricity announced plans to build three grid-injecting digesters. Italy In Italy the biogas industry started in 2008, thanks to the introduction of advantageous feed-in tariffs. These were later replaced by feed-in premiums, with preference given to by-products and farming waste, leading to stagnation in biogas production and derived heat and electricity since 2012. Italy now has more than 200 biogas plants with a total capacity of about 1.2 GW. Germany Germany is Europe's biggest biogas producer and the market leader in biogas technology. In 2010 there were 5,905 biogas plants operating throughout the country: Lower Saxony, Bavaria, and the eastern federal states are the main regions. Most of these plants are employed as power plants. Usually the biogas plants are directly connected with a CHP unit which produces electric power by burning the biomethane. The electrical power is then fed into the public power grid. In 2010, the total installed electrical capacity of these power plants was 2,291 MW. The electricity supply was approximately 12.8 TWh, which is 12.6% of the total generated renewable electricity. Biogas in Germany is primarily extracted by the co-fermentation of energy crops (called "NawaRo", an abbreviation of nachwachsende Rohstoffe, German for renewable resources) mixed with manure. The main crop used is corn. Organic waste and industrial and agricultural residues, such as waste from the food industry, are also used for biogas generation.
In this respect, biogas production in Germany differs significantly from that in the UK, where biogas generated from landfill sites is most common. Biogas production in Germany has developed rapidly over the last 20 years, mainly because of legally created frameworks. Government support of renewable energy started in 1991 with the Electricity Feed-in Act (StrEG), which guaranteed producers of energy from renewable sources feed-in to the public power grid; power companies were thus obliged to take all energy produced by independent private producers of green energy. In 2000 the Electricity Feed-in Act was replaced by the Renewable Energy Sources Act (EEG), which guaranteed a fixed compensation for the electric power produced over 20 years. The amount of around 8¢/kWh gave farmers the opportunity to become energy suppliers and gain a further source of income. German agricultural biogas production was given a further push in 2004 by the so-called NawaRo-Bonus, a special payment for the use of renewable resources, that is, energy crops. In 2007 the German government stressed its intention to invest further effort and support in improving the renewable energy supply, as an answer to growing climate challenges and increasing oil prices, through the "Integrated Climate and Energy Programme". This continual promotion of renewable energy creates a number of challenges for the management and organisation of the renewable energy supply, several of which affect biogas production. The first is the large land area consumed by biogas electric power production: in 2011, energy crops for biogas production occupied about 800,000 ha in Germany. This high demand for agricultural land creates new competition with food production that did not exist before. Moreover, new industries and markets have been created in predominantly rural regions, bringing in new players with economic, political and civil backgrounds. Their influence and actions have to be governed to realize the full advantages this new source of energy offers. Biogas will continue to play an important role in the German renewable energy supply if good governance is maintained. Developing countries Domestic biogas plants convert livestock manure and night soil into biogas and slurry, the fermented manure. This technology is feasible for smallholders with livestock producing 50 kg of manure per day, the equivalent of about 6 pigs or 3 cows. The manure has to be collectable so it can be mixed with water and fed into the plant; toilets can also be connected. Another precondition is temperature, which affects the fermentation process: with an optimum at 36 °C, the technology applies especially to those living in a (sub)tropical climate, which often makes it suitable for smallholders in developing countries. Depending on size and location, a typical brick-built fixed-dome biogas plant can be installed in the yard of a rural household for an investment of between US$300 and $500 in Asian countries, and up to $1400 in the African context. A high-quality biogas plant needs minimal maintenance and can produce gas for at least 15–20 years without major problems or re-investment. For the user, biogas provides clean cooking energy, reduces indoor air pollution, and reduces the time needed for traditional biomass collection, especially for women and children.
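The household-plant figures quoted above (about 50 kg of manure per day from roughly 3 cows, a US$300–500 investment in Asia, a 15–20 year service life) allow a quick cost-per-unit-gas estimate. This is a sketch under stated assumptions: the gas yield per kilogram of manure is an illustrative order-of-magnitude figure not given in the article.

```python
# Back-of-envelope economics for a domestic fixed-dome plant, combining
# only the figures quoted in the text with one assumed yield number.

GAS_YIELD_M3_PER_KG_MANURE = 0.04   # assumed, order of magnitude only

def household_plant_summary(manure_kg_day: float = 50.0,
                            capex_usd: float = 400.0,
                            life_years: int = 15):
    """Daily gas output and lifetime capital cost per m3 of biogas."""
    gas_m3_day = manure_kg_day * GAS_YIELD_M3_PER_KG_MANURE
    lifetime_gas_m3 = gas_m3_day * 365 * life_years
    return gas_m3_day, capex_usd / lifetime_gas_m3

gas, cost = household_plant_summary()
print(f"~{gas:.1f} m3 biogas/day; capital cost ~${cost:.3f} per m3 of gas")
```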
The slurry is a clean organic fertilizer that potentially increases agricultural productivity. In developing countries, the use of biogas has been found to reduce GHG emissions by 20% compared with emissions from firewood use. Moreover, GHG emissions of 384.1 kg CO2-eq per animal per year could be prevented. Energy is an important part of modern society and can serve as one of the most important indicators of socio-economic development. Despite advances in technology, some three billion people, primarily in the rural areas of developing countries, continue to meet their cooking energy needs by traditional means, burning biomass resources like firewood, crop residues and animal dung in crude traditional stoves. Domestic biogas technology is a proven and established technology in many parts of the world, especially Asia. Several countries in this region have embarked on large-scale programmes on domestic biogas, such as China and India. The Netherlands Development Organisation, SNV, supports national programmes on domestic biogas that aim to establish commercially viable domestic biogas sectors in which local companies market, install and service biogas plants for households. In Asia, SNV is working in Nepal, Vietnam, Bangladesh, Bhutan, Cambodia, Lao PDR, Pakistan and Indonesia, and in Africa in Rwanda, Senegal, Burkina Faso, Ethiopia, Tanzania, Uganda, Kenya, Benin and Cameroon. In South Africa, prebuilt biogas systems are manufactured and sold; because the digester tank is premade plastic, installation is quicker and requires less skill. India Biogas in India has traditionally been based on dairy manure as feedstock, and these "gobar" gas plants have been in operation for a long time, especially in rural India. In the last two to three decades, research organisations with a focus on rural energy security have enhanced the design of these systems, resulting in newer, efficient, low-cost designs such as the Deenabandhu model. The Deenabandhu model is a biogas-production model popular in India. (Deenabandhu means "friend of the helpless".) The unit usually has a capacity of 2 to 3 cubic metres. It is constructed using bricks or a ferrocement mixture. In India, the brick model costs slightly more than the ferrocement model; however, India's Ministry of New and Renewable Energy offers a subsidy per model constructed. Biogas, which is mainly methane, can also be used to generate protein-rich cattle, poultry and fish feed in villages economically by cultivating a Methylococcus capsulatus bacterial culture, with a tiny land and water footprint. The carbon dioxide produced as a byproduct of these plants can be put to use in the cheaper production of algae oil or spirulina from algaculture, particularly in tropical countries like India, which could displace the prime position of crude oil in the near future. The union government of India is implementing many schemes to productively utilise agricultural waste and biomass in rural areas to uplift the rural economy and job potential. With these plants, non-edible biomass or the waste of edible biomass is converted into high-value products without any water pollution or greenhouse gas (GHG) emissions. LPG (liquefied petroleum gas) is a key source of cooking fuel in urban India, and its price has been increasing along with global fuel prices.
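The per-animal figure quoted at the start of this passage (roughly 384.1 kg CO2-eq avoided per animal per year) aggregates straightforwardly over a herd and a plant lifetime. The sketch below combines it with the 3-cow minimum herd and the 15-year plant life mentioned earlier; the herd sizes are hypothetical examples.

```python
# Simple aggregation of the emission figure quoted above: roughly
# 384.1 kg CO2-eq per animal per year avoided when manure is digested.

AVOIDED_KG_CO2EQ_PER_ANIMAL_YEAR = 384.1

def avoided_tonnes(animals: int, years: int = 1) -> float:
    """Total avoided emissions, in tonnes CO2-eq."""
    return animals * AVOIDED_KG_CO2EQ_PER_ANIMAL_YEAR * years / 1000.0

# A 3-cow household (the minimum feasible size mentioned earlier) over
# the 15-year low end of a fixed-dome plant's service life:
print(f"{avoided_tonnes(3, 15):.1f} t CO2-eq avoided over 15 years")
```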
The heavy subsidies provided by successive governments in promoting LPG as a domestic cooking fuel have also become a financial burden, renewing the focus on biogas as a cooking fuel alternative in urban establishments. This has led to the development of prefabricated digesters for modular deployment, as compared to RCC and cement structures, which take longer to construct. Renewed focus on process technology, such as the Biourja process model, has enhanced the stature of medium and large scale anaerobic digesters in India as a potential alternative to LPG as a primary cooking fuel. In India, Nepal, Pakistan and Bangladesh, biogas produced from the anaerobic digestion of manure in small-scale digestion facilities is called gobar gas; it is estimated that such facilities exist in over 2 million households in India, 50,000 in Bangladesh and thousands in Pakistan, particularly North Punjab, owing to the thriving livestock population. The digester is an airtight circular pit made of concrete with a pipe connection. The manure is directed to the pit, usually straight from the cattle shed. The pit is filled with the required quantity of wastewater. The gas pipe is connected to the kitchen fireplace through control valves. The combustion of this biogas has very little odour or smoke. Owing to its simplicity in implementation and use of cheap raw materials in villages, it is one of the most environmentally sound energy sources for rural needs. One such system is the Sintex digester. Some designs use vermiculture to further enhance the slurry produced by the biogas plant for use as compost. In Pakistan, the Rural Support Programmes Network is running the Pakistan Domestic Biogas Programme, which has installed 5,360 biogas plants, has trained more than 200 masons in the technology, and aims to develop the biogas sector in Pakistan. In Nepal, the government provides subsidies to build biogas plants at home. China As of at least 2023, China is both the world's largest producer and largest consumer of household biogas. The Chinese have experimented with the applications of biogas since 1958. Around 1970, China had installed 6,000,000 digesters in an effort to make agriculture more efficient, which appears to be among the earliest efforts to generate biogas from agricultural waste. In recent years the technology has seen high growth rates, and rural biogas construction in China has shown an increasing development trend. The rapid growth in energy demand caused by economic development, together with severe haze conditions in China, has made biogas an attractive eco-friendly energy source for rural areas. In Qing county, Hebei Province, the technology of using crop straw as the main material to generate biogas is currently being developed. China had 26.5 million biogas plants, with an output of 10.5 billion cubic meters of biogas, by 2007. The annual biogas output increased to 248 billion cubic meters in 2010. The Chinese government has supported and funded rural biogas projects. As of 2023, more than 30 million rural Chinese households use biogas digesters. During the winter, biogas production in the northern regions of China is lower. This is caused by the lack of heat-control technology for digesters, so the co-digestion of different feedstocks fails to complete in the cold environment. Zambia Lusaka, the capital city of Zambia, has two million inhabitants, with over half of the population residing in peri-urban areas.
The majority of this population uses pit latrines as toilets, generating approximately 22,680 tons of faecal sludge per annum. This sludge is inadequately managed: over 60% of the generated faecal sludge remains within the residential environment, compromising both the environment and public health. Although research work and implementation of biogas started as early as the 1980s, Zambia lags behind in the adoption and use of biogas within sub-Saharan Africa. Animal manure and crop residues are available for the provision of energy for cooking and lighting. Inadequate funding; the absence of policy, regulatory frameworks and strategies on biogas; unfavorable investor monetary policy; inadequate expertise; lack of awareness of the benefits of biogas technology among leaders, financial institutions and locals; resistance to change due to the culture and traditions of the locals; high installation and maintenance costs of biogas digesters; inadequate research and development; improper management and lack of monitoring of installed digesters; the complexity of the carbon market; and a lack of incentives and social equity are among the challenges that have impeded the acquisition and sustainable implementation of domestic biogas production in Zambia. Associations World Biogas Association (https://www.worldbiogasassociation.org/) American Biogas Council (https://americanbiogascouncil.org/) Canadian Biogas Association (https://www.biogasassociation.ca/) European Biogas Association German Biogas Association Indian Biogas Association Society and culture In the 1985 Australian film Mad Max Beyond Thunderdome, the post-apocalyptic settlement Bartertown is powered by a central biogas system based upon a piggery. As well as providing electricity, methane is used to power Bartertown's vehicles. "Cow Town", written in the early 1940s, discusses the travails of a city vastly built on cow manure and the hardships brought upon by the resulting methane biogas. Carter McCormick, an engineer from a town outside the city, is sent in to figure out a way to utilize this gas to help power, rather than suffocate, the city. Contemporary biogas production provides new opportunities for skilled employment, drawing on the development of new technologies. See also (municipal solid waste and landfill gas) References Further reading Updated Guidebook on Biogas Development. United Nations, New York, (1984) Energy Resources Development Series No. 27. p. 178, 30 cm. Book: Biogas from Waste and Renewable Resources. WILEY-VCH Verlag GmbH & Co. KGaA, (2008) Dieter Deublein and Angelika Steinhauser A Comparison between Shale Gas in China and Unconventional Fuel Development in the United States: Health, Water and Environmental Risks by Paolo Farah and Riccardo Tremolada. This is a paper presented at the Colloquium on Environmental Scholarship 2013 hosted by Vermont Law School (11 October 2013) Woodhead Publishing Series. (2013). The Biogas Handbook: Science, Production and Applications. Lazenby, Ruthie (15 August 2022). "Rethinking Manure Biogas: Policy Considerations to Promote Equity and Protect the Climate and Environment" (PDF). Retrieved 19 October 2022. External links European Biogas Association Biogas Portal on Energypedia American Biogas Council An Introduction to Biogas, University of Adelaide Anaerobic digestion Biofuels Biodegradation Biogas technology Biomass Biotechnology products Biodegradable waste management Fuel gas Methane Renewable energy Sustainable energy Waste management
Biogas
[ "Chemistry", "Engineering", "Biology" ]
7,794
[ "Biofuels technology", "Methane", "Biotechnology products", "Biodegradable waste management", "Biodegradation", "Anaerobic digestion", "Environmental engineering", "Water technology", "Greenhouse gases", "Biogas technology" ]
54,840
https://en.wikipedia.org/wiki/Eutrophication
Eutrophication is a general term describing a process in which nutrients accumulate in a body of water, resulting in an increased growth of organisms that may deplete the oxygen in the water. Eutrophication may occur naturally or as a result of human actions. Manmade, or cultural, eutrophication occurs when sewage, industrial wastewater, fertilizer runoff, and other nutrient sources are released into the environment. Such nutrient pollution usually causes algal blooms and bacterial growth, resulting in the depletion of dissolved oxygen in water and causing substantial environmental degradation. Approaches for prevention and reversal of eutrophication include minimizing point source pollution from sewage and agriculture as well as other nonpoint pollution sources. Additionally, the introduction of bacteria- and algae-inhibiting organisms such as shellfish and seaweed can also help reduce nitrogen pollution, which in turn controls the growth of cyanobacteria, the main source of harmful algal blooms. History and terminology The term "eutrophication" comes from the Greek eutrophos, meaning "well-nourished". Water bodies with very low nutrient levels are termed oligotrophic and those with moderate nutrient levels are termed mesotrophic. Advanced eutrophication may also be referred to as dystrophic and hypertrophic conditions. Thus, eutrophication has been defined as "degradation of water quality owing to enrichment by nutrients which results in excessive plant (principally algae) growth and decay." Eutrophication was recognized as a water pollution problem in European and North American lakes and reservoirs in the mid-20th century. Breakthrough research carried out at the Experimental Lakes Area (ELA) in Ontario, Canada, in the 1970s provided the evidence that freshwater bodies are phosphorus-limited. ELA uses a whole-ecosystem approach and long-term, whole-lake investigations of freshwater, focusing on cultural eutrophication. Causes Eutrophication is caused by excessive concentrations of nutrients, most commonly phosphates and nitrates, although this varies with location. Prior to being phased out in the 1970s, phosphate-containing detergents contributed to eutrophication. Since then, sewage and agriculture have emerged as the dominant phosphate sources. The main sources of nitrogen pollution are agricultural runoff containing fertilizers and animal wastes, sewage, and atmospheric deposition of nitrogen originating from combustion or animal waste. The limitation of productivity in any aquatic system varies with the rate of supply (from external sources) and removal (flushing out) of nutrients from the body of water. This means that some nutrients are more prevalent in certain areas than others, and different ecosystems and environments have different limiting factors. Phosphorus is the limiting factor for plant growth in most freshwater ecosystems; because phosphate adheres tightly to soil particles and sinks in areas such as wetlands and lakes, more and more phosphorus is accumulating in freshwater bodies. In marine ecosystems, nitrogen is the primary limiting nutrient; nitrogen oxides created by the combustion of fossil fuels and deposited into the water from the atmosphere have led to an increase in nitrogen levels and heightened eutrophication in the ocean. Cultural eutrophication Cultural or anthropogenic eutrophication is the process that causes eutrophication because of human activity.
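The limiting-nutrient logic in the Causes passage above (phosphorus limits most freshwater systems, nitrogen limits most marine systems) is often operationalized with a rule of thumb not stated in this article: comparing a water sample's molar N:P ratio with the Redfield ratio of roughly 16:1. The sketch below is hedged accordingly; the thresholds are illustrative, and real assessments use nutrient-enrichment bioassays.

```python
# Hedged sketch: guess the likely limiting nutrient from a molar N:P
# ratio, using the Redfield ratio (~16:1) as the reference point.
# Thresholds are illustrative choices, not values from the article.

REDFIELD_N_TO_P = 16.0

def likely_limiting_nutrient(n_umol_per_l: float,
                             p_umol_per_l: float) -> str:
    ratio = n_umol_per_l / p_umol_per_l
    if ratio > REDFIELD_N_TO_P * 1.5:
        return "phosphorus-limited (typical of fresh water)"
    if ratio < REDFIELD_N_TO_P / 1.5:
        return "nitrogen-limited (typical of marine water)"
    return "co-limited / indeterminate"

print(likely_limiting_nutrient(80.0, 1.0))   # high N:P -> P-limited
print(likely_limiting_nutrient(10.0, 1.5))   # low N:P -> N-limited
```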
The problem became more apparent following the introduction of chemical fertilizers in agriculture (the green revolution of the mid-1900s). Phosphorus and nitrogen are the two main nutrients that cause cultural eutrophication, as they enrich the water and allow some aquatic plants, especially algae, to grow rapidly and bloom in high densities. Algal blooms can shade out benthic plants, thereby altering the overall plant community. When algae die off, their degradation by bacteria removes oxygen, potentially generating anoxic conditions. This anoxic environment kills off aerobic organisms (e.g. fish and invertebrates) in the water body. This also affects terrestrial animals, restricting their access to affected water (e.g. as drinking sources). Selection for algal and aquatic plant species that can thrive in nutrient-rich conditions can cause structural and functional disruption to entire aquatic ecosystems and their food webs, resulting in loss of habitat and species biodiversity. There are several sources of excessive nutrients from human activity, including run-off from fertilized fields, lawns, and golf courses, untreated sewage and wastewater, and internal combustion of fuels creating nitrogen pollution. Cultural eutrophication can occur in fresh water and salt water bodies, with shallow waters being the most susceptible. Along shorelines and in shallow lakes, sediments are frequently resuspended by wind and waves, which can result in nutrient release from sediments into the overlying water, enhancing eutrophication. The deterioration of water quality caused by cultural eutrophication can therefore negatively impact human uses, including potable supply for consumption, industrial uses and recreation. Natural eutrophication Eutrophication can be a natural process that occurs through the gradual accumulation of sediment and nutrients. Natural eutrophication is usually caused by the accumulation of nutrients from dissolved phosphate minerals and dead plant matter in water, and has been well characterized in lakes. Paleolimnologists now recognise that climate change, geology, and other external influences are also critical in regulating the natural productivity of lakes. A few artificial lakes demonstrate the reverse process (meiotrophication), becoming less nutrient-rich with time as nutrient-poor inputs slowly elute the nutrient-richer water mass of the lake. This process may be seen in artificial lakes and reservoirs, which tend to be highly eutrophic on first filling but may become more oligotrophic with time. The main difference between natural and anthropogenic eutrophication is that the natural process is very slow, occurring on geological time scales. Effects Ecological effects Eutrophication can have the following ecological effects: increased biomass of phytoplankton, changes in macrophyte species composition and biomass, dissolved oxygen depletion, increased incidence of fish kills, and loss of desirable fish species. Decreased biodiversity When an ecosystem experiences an increase in nutrients, primary producers reap the benefits first. In aquatic ecosystems, species such as algae experience a population increase (called an algal bloom). Algal blooms limit the sunlight available to bottom-dwelling organisms and cause wide swings in the amount of dissolved oxygen in the water. Oxygen is required by all aerobically respiring plants and animals, and it is replenished in daylight by photosynthesizing plants and algae.
Under eutrophic conditions, dissolved oxygen greatly increases during the day but is greatly reduced after dark by the respiring algae and by microorganisms that feed on the increasing mass of dead algae. When dissolved oxygen levels decline to hypoxic levels, fish and other marine animals suffocate. As a result, creatures such as fish, shrimp, and especially immobile bottom dwellers die off. In extreme cases, anaerobic conditions ensue, promoting the growth of bacteria. Zones where this occurs are known as dead zones. New species invasion Eutrophication may cause competitive release by making abundant a normally limiting nutrient. This process causes shifts in the species composition of ecosystems. For instance, an increase in nitrogen might allow new, competitive species to invade and out-compete original inhabitant species. This has been shown to occur in New England salt marshes. In Europe and Asia, the common carp frequently lives in naturally eutrophic or hypereutrophic areas and is adapted to living in such conditions. The eutrophication of areas outside its natural range partially explains the fish's success in colonizing these areas after being introduced. Toxicity Some harmful algal blooms resulting from eutrophication are toxic to plants and animals. Freshwater algal blooms can pose a threat to livestock. When the algae die or are eaten, neuro- and hepatotoxins are released which can kill animals and may pose a threat to humans. An example of algal toxins working their way into humans is the case of shellfish poisoning. Biotoxins created during algal blooms are taken up by shellfish (mussels, oysters), leading to these human foods acquiring the toxicity and poisoning humans. Examples include paralytic, neurotoxic, and diarrhoetic shellfish poisoning. Other marine animals can be vectors for such toxins, as in the case of ciguatera, where it is typically a predator fish that accumulates the toxin and then poisons humans. Economic effects Eutrophication and harmful algal blooms can have economic impacts due to increasing water treatment costs, commercial fishing and shellfish losses, recreational fishing losses (reductions in harvestable fish and shellfish), and reduced tourism income (decreases in the perceived aesthetic value of the water body). Water treatment costs can be increased due to decreases in water transparency (increased turbidity). There can also be issues with color and smell during drinking water treatment. Health impacts Human health effects of eutrophication derive from two main issues: excess nitrate in drinking water and exposure to toxic algae. Nitrates in drinking water can cause blue baby syndrome in infants and can react with chemicals used to treat water to create disinfection by-products. Direct contact with toxic algae through swimming or drinking can cause rashes, stomach or liver illness, and respiratory or neurological problems. Causes and effects for different types of water bodies Freshwater systems One response to added amounts of nutrients in aquatic ecosystems is the rapid growth of microscopic algae, creating an algal bloom. In freshwater ecosystems, floating algal blooms are commonly formed by nitrogen-fixing cyanobacteria (blue-green algae). This outcome is favored when soluble nitrogen becomes limiting and phosphorus inputs remain significant. Nutrient pollution is a major cause of algal blooms and excess growth of other aquatic plants, leading to overcrowding and competition for sunlight, space, and oxygen.
Increased competition for the added nutrients can cause potential disruption to entire ecosystems and food webs, as well as a loss of habitat and species biodiversity. When overproduced macrophytes and algae die in eutrophic water, their decomposition further consumes dissolved oxygen. The depleted oxygen levels in turn may lead to fish kills and a range of other effects reducing biodiversity. Nutrients may become concentrated in an anoxic zone, often in deeper waters cut off by stratification of the water column, and may only be made available again during autumn turn-over in temperate areas or in conditions of turbulent flow. The dead algae and organic load carried by the water inflows into a lake settle to the bottom and undergo anaerobic digestion, releasing greenhouse gases such as methane and CO2. Some of the methane gas may be oxidised by methane-oxidising bacteria such as Methylococcus capsulatus, which in turn may provide a food source for zooplankton. Thus a self-sustaining biological process can take place to generate a primary food source for the phytoplankton and zooplankton, depending on the availability of adequate dissolved oxygen in the water body. Enhanced growth of aquatic vegetation, phytoplankton and algal blooms disrupts normal functioning of the ecosystem, causing a variety of problems such as a lack of the oxygen needed for fish and shellfish to survive. The growth of dense algae in surface waters can shade the deeper water and reduce the viability of benthic shelter plants, with resultant impacts on the wider ecosystem. Eutrophication also decreases the recreational and aesthetic value of rivers and lakes. Health problems can occur where eutrophic conditions interfere with drinking water treatment. Phosphorus is often regarded as the main culprit in cases of eutrophication in lakes subjected to "point source" pollution from sewage pipes. The concentration of algae and the trophic state of lakes correspond well to phosphorus levels in water. Studies conducted in the Experimental Lakes Area in Ontario have shown a relationship between the addition of phosphorus and the rate of eutrophication. Later stages of eutrophication lead to blooms of nitrogen-fixing cyanobacteria limited solely by the phosphorus concentration. Phosphorus-based eutrophication in freshwater lakes has been addressed in several cases. Coastal waters Eutrophication is a common phenomenon in coastal waters, where nitrogenous sources are the main culprit. In coastal waters, nitrogen is commonly the key limiting nutrient of marine waters (unlike the freshwater systems where phosphorus is often the limiting nutrient). Therefore, nitrogen levels are more important than phosphorus levels for understanding and controlling eutrophication problems in salt water. Estuaries, as the interface between freshwater and saltwater, can be both phosphorus- and nitrogen-limited and commonly exhibit symptoms of eutrophication. Eutrophication in estuaries often results in bottom-water hypoxia or anoxia, leading to fish kills and habitat degradation. Upwelling in coastal systems also promotes increased productivity by conveying deep, nutrient-rich waters to the surface, where the nutrients can be assimilated by algae. Examples of anthropogenic sources of nitrogen-rich pollution to coastal waters include sea cage fish farming and discharges of ammonia from the production of coke from coal.
In addition to runoff from land, wastes from fish farming and industrial ammonia discharges, atmospheric fixed nitrogen can be an important nutrient source in the open ocean. This could account for around one third of the ocean's external (non-recycled) nitrogen supply, and up to 3% of the annual new marine biological production. Coastal waters embrace a wide range of marine habitats from enclosed estuaries to the open waters of the continental shelf. Phytoplankton productivity in coastal waters depends on both nutrient and light supply, with the latter an important limiting factor in waters near to shore where sediment resuspension often limits light penetration. Nutrients are supplied to coastal waters from land via river and groundwater and also via the atmosphere. There is also an important source from the open ocean, via mixing of relatively nutrient rich deep ocean waters. Nutrient inputs from the ocean are little changed by human activity, although climate change may alter the water flows across the shelf break. By contrast, inputs from land to coastal zones of the nutrients nitrogen and phosphorus have been increased by human activity globally. The extent of increases varies greatly from place to place depending on human activities in the catchments. A third key nutrient, dissolved silicon, is derived primarily from sediment weathering to rivers and from offshore and is therefore much less affected by human activity. Effects of coastal eutrophication These increasing nitrogen and phosphorus nutrient inputs exert eutrophication pressures on coastal zones. These pressures vary geographically depending on the catchment activities and associated nutrient load. The geographical setting of the coastal zone is another important factor as it controls dilution of the nutrient load and oxygen exchange with the atmosphere. The effects of these eutrophication pressures can be seen in several different ways: There is evidence from satellite monitoring that the amounts of chlorophyll as a measure of overall phytoplankton activity are increasing in many coastal areas worldwide due to increased nutrient inputs. The phytoplankton species composition may change due to increased nutrient loadings and changes in the proportions of key nutrients. In particular the increases in nitrogen and phosphorus inputs, along with much smaller changes in silicon inputs, create changes in the ratio of nitrogen and phosphorus to silicon. These changing nutrient ratios drive changes in phytoplankton species composition, particularly disadvantaging silica rich phytoplankton species like diatoms compared to other species. This process leads to the development of nuisance algal blooms in areas such as the North Sea (see also OSPAR Convention) and the Black Sea. In some cases nutrient enrichment can lead to harmful algal blooms (HABs). Such blooms can occur naturally, but there is good evidence that these are increasing as a result of nutrient enrichment, although the causal linkage between nutrient enrichment and HABs is not straightforward. Oxygen depletion has existed in some coastal seas such as the Baltic for thousands of years. In such areas the density structure of the water column severely restricts water column mixing and associated oxygenation of deep water. However, increases in the inputs of bacterially degradable organic matter to such isolated deep waters can exacerbate such oxygen depletion in oceans. These areas of lower dissolved oxygen have increased globally in recent decades. 
They are usually connected with nutrient enrichment and the resulting algal blooms. Climate change will generally tend to increase water column stratification and so exacerbate this oxygen depletion problem. An example of such coastal oxygen depletion is in the Gulf of Mexico, where a seasonally anoxic area of more than 5000 square miles has developed since the 1950s. The increased primary production driving this anoxia is fueled by nutrients supplied by the Mississippi river. A similar process has been documented in the Black Sea. Hypolimnetic oxygen depletion can lead to summer "kills". During summer stratification, inputs of organic matter and sedimentation of primary producers can increase rates of respiration in the hypolimnion. If oxygen depletion becomes extreme, aerobic organisms (such as fish) may die, resulting in what is known as a "summer kill". Extent of the problem Surveys showed that 54% of lakes in Asia are eutrophic; in Europe, 53%; in North America, 48%; in South America, 41%; and in Africa, 28%. In South Africa, a study by the CSIR using remote sensing has shown that more than 60% of the reservoirs surveyed were eutrophic. The World Resources Institute has identified 375 hypoxic coastal zones in the world, concentrated in coastal areas in Western Europe, the Eastern and Southern coasts of the US, and East Asia, particularly Japan. Prevention Certain steps can be taken to minimize eutrophication, thereby reducing its harmful effects on humans and other living organisms, some of which are as follows: Minimizing pollution from sewage There are multiple ways to address cultural eutrophication, with raw sewage being a point source of pollution. For example, sewage treatment plants can be upgraded for biological nutrient removal so that they discharge much less nitrogen and phosphorus to the receiving water body. However, even with good secondary treatment, most final effluents from sewage treatment works contain substantial concentrations of nitrogen as nitrate, nitrite or ammonia. Removal of these nutrients is an expensive and often difficult process. Laws regulating the discharge and treatment of sewage have led to dramatic nutrient reductions to surrounding ecosystems. As untreated domestic sewage is a major contributor to the nutrient loading of water bodies, it is necessary to provide treatment facilities to highly urbanized areas, particularly in developing countries, where treatment of domestic wastewater is scarce. The technology to safely and efficiently reuse wastewater, both from domestic and industrial sources, should be a primary concern for policy regarding eutrophication. Minimizing nutrient pollution by agriculture There are many ways to help fix cultural eutrophication caused by agriculture. Some recommendations issued by the U.S. Department of Agriculture include: Nutrient management techniques - Anyone using fertilizers should apply fertilizer in the correct amount, at the right time of year, with the right method and placement. Organically fertilized fields can "significantly reduce harmful nitrate leaching" compared to conventionally fertilized fields. Eutrophication impacts are in some cases higher from organic production than from conventional production. In Japan, the amount of nitrogen produced by livestock is adequate to serve the fertilizer needs of the agriculture industry.
Year-round ground cover - A cover crop will prevent periods of bare ground, thus reducing erosion and the runoff of nutrients even after the growing season has passed. Planting field buffers - Planting trees, shrubs and grasses along the edges of fields can help catch runoff and absorb some nutrients before the water reaches a nearby water body. Riparian buffer zones are interfaces between a flowing body of water and land, and have been created near waterways in an attempt to filter pollutants; sediment and nutrients are deposited here instead of in the water. Creating buffer zones near farms and roads is another possible way to prevent nutrients from traveling too far. Conservation tillage - By reducing the frequency and intensity of tilling, the land will enhance the chance of nutrients absorbing into the ground. Policy The United Nations framework for Sustainable Development Goals recognizes the damaging effects of eutrophication on marine environments. It has established a timeline for creating an Index of Coastal Eutrophication and Floating Plastic Debris Density (ICEP) within Sustainable Development Goal 14 (life below water). SDG 14 specifically has a target to: "by 2025, prevent and significantly reduce marine pollution of all kinds, in particular from land-based activities, including marine debris and nutrient pollution". Policy and regulations are a set of tools to minimize the causes of eutrophication. Nonpoint sources of pollution are the primary contributors to eutrophication, and their effects can be minimized through common agricultural practices. Reducing the amount of pollutants that reach a watershed can be achieved through the protection of its forest cover, reducing the amount of erosion leaching into the watershed. Also, through the efficient, controlled use of land using sustainable agricultural practices to minimize land degradation, the amount of soil runoff and nitrogen-based fertilizers reaching a watershed can be reduced. Waste disposal technology constitutes another factor in eutrophication prevention. Because a body of water can affect a range of people reaching far beyond the watershed, cooperation between different organizations is necessary to prevent the intrusion of contaminants that can lead to eutrophication. Agencies ranging from state governments to water resource management bodies and non-governmental organizations, down to the local population, are responsible for preventing the eutrophication of water bodies. In the United States, the best-known interstate effort to prevent eutrophication concerns the Chesapeake Bay. Reversal and remediation Reducing nutrient inputs is a crucial precondition for restoration. Still, there are two caveats: firstly, it can take a long time, mainly because of the storage of nutrients in sediments; secondly, restoration may need more than a simple reversal of inputs, since there are sometimes several stable but very different ecological states. Recovery of eutrophicated lakes is slow, often requiring several decades. In environmental remediation, nutrient removal technologies include biofiltration, which uses living material to capture and biologically degrade pollutants. Examples include green belts, riparian areas, natural and constructed wetlands, and treatment ponds. Algae bloom forecasting The National Oceanic and Atmospheric Administration (NOAA) in the United States has created a forecasting tool for regions such as the Great Lakes, the Gulf of Maine, and the Gulf of Mexico.
Shorter-term predictions can help to show the intensity, location, and trajectory of blooms in order to warn the communities most directly affected. Longer-term tests in specific regions and water bodies help to predict larger-scale factors, such as the scale of future blooms and the factors that could lead to more adverse effects. Nutrient bioextraction Nutrient bioextraction is bioremediation involving cultured plants and animals. Nutrient bioextraction or bioharvesting is the practice of farming and harvesting shellfish and seaweed to remove nitrogen and other nutrients from natural water bodies. Shellfish in estuaries It has been suggested that nitrogen removal by oyster reefs could generate net benefits for sources facing nitrogen emission restrictions, similar to other nutrient trading scenarios. Specifically, if oysters maintain nitrogen levels in estuaries below thresholds, then oysters effectively stave off an enforcement response and the compliance costs that parties responsible for nitrogen emissions would otherwise incur. Several studies have shown that oysters and mussels can dramatically impact nitrogen levels in estuaries. Filter-feeding activity is considered beneficial to water quality by controlling phytoplankton density and sequestering nutrients, which can be removed from the system through shellfish harvest, buried in the sediments, or lost through denitrification. Foundational work toward the idea of improving marine water quality through shellfish cultivation was conducted by Odd Lindahl et al., using mussels in Sweden. In the United States, shellfish restoration projects have been conducted on the East, West and Gulf coasts. Seaweed farming Studies have demonstrated seaweed's potential to improve nitrogen levels. Seaweed aquaculture offers an opportunity to mitigate, and adapt to, climate change. Seaweed, such as kelp, also absorbs phosphorus and nitrogen and is thus helpful for removing excessive nutrients from polluted parts of the sea. Some cultivated seaweeds have very high productivity and could absorb large quantities of N, P and CO2, producing large amounts of O2, and thus have an excellent effect on decreasing eutrophication. It is believed that large-scale seaweed cultivation could be a good solution to the eutrophication problem in coastal waters. Geo-engineering Another technique for combating hypoxia/eutrophication in localized situations is direct injection of compressed air, a technique used in the restoration of the Salford Docks area of the Manchester Ship Canal in England. For smaller-scale waters such as aquaculture ponds, pump aeration is standard. Chemical removal of phosphorus Removing phosphorus can remediate eutrophication. Many materials have been investigated; of the several phosphate sorbents, alum (aluminium sulfate) is of particular practical interest. The phosphate sorbent is commonly applied at the surface of the water body; it sinks to the bottom of the lake, reducing phosphate. Such sorbents have been applied worldwide to manage eutrophication and algal blooms (for example under the commercial name Phoslock). In a large-scale study, 114 lakes were monitored for the effectiveness of alum at phosphorus reduction. Across all lakes, alum effectively reduced phosphorus for 11 years. While there was variation in longevity (21 years in deep lakes and 5.7 years in shallow lakes), the results demonstrate the effectiveness of alum at controlling phosphorus within lakes. Alum treatment is less effective in deep lakes, as well as lakes with substantial external phosphorus loading.
Finnish phosphorus removal measures started in the mid-1970s and have targeted rivers and lakes polluted by industrial and municipal discharges. These efforts have had a 90% removal efficiency. Still, some targeted point sources did not show a decrease in runoff despite reduction efforts. See also External links International Nitrogen Initiative References Nutrient pollution Water pollution Environmental chemistry Environmental issues with water Aquatic ecology
Eutrophication
[ "Chemistry", "Biology", "Environmental_science" ]
5,470
[ "Nutrient pollution", "Eutrophication", "Environmental chemistry", "Environmental soil science", "Water pollution", "Ecosystems", "nan", "Aquatic ecology" ]
54,888
https://en.wikipedia.org/wiki/Telomere
A telomere is a region of repetitive nucleotide sequences associated with specialized proteins at the ends of linear chromosomes (see Sequences). Telomeres are a widespread genetic feature most commonly found in eukaryotes. In most, if not all, species possessing them, they protect the terminal regions of chromosomal DNA from progressive degradation and ensure the integrity of linear chromosomes by preventing DNA repair systems from mistaking the very ends of the DNA strand for a double-strand break. Discovery The existence of a special structure at the ends of chromosomes was independently proposed in 1938 by Hermann Joseph Muller, studying the fruit fly Drosophila melanogaster, and in 1939 by Barbara McClintock, working with maize. Muller observed that the ends of irradiated fruit fly chromosomes did not present alterations such as deletions or inversions. He hypothesized the presence of a protective cap, which he coined "telomeres", from the Greek telos (end) and meros (part). In the early 1970s, Soviet theorist Alexey Olovnikov first recognized that chromosomes could not completely replicate their ends; this is known as the "end replication problem". Building on this, and accommodating Leonard Hayflick's idea of limited somatic cell division, Olovnikov suggested that DNA sequences are lost every time a cell replicates until the loss reaches a critical level, at which point cell division ends. According to his theory of marginotomy, DNA sequences at the ends of telomeres are represented by tandem repeats, which create a buffer that determines the number of divisions that a certain cell clone can undergo. Furthermore, it was predicted that a specialized DNA polymerase (originally called a tandem-DNA-polymerase) could extend telomeres in immortal tissues such as germ line, cancer cells and stem cells. It also followed from this hypothesis that organisms with circular genomes, such as bacteria, do not have the end replication problem and therefore do not age. Olovnikov suggested that in germline cells, cells of vegetatively propagated organisms, and immortal cell populations such as most cancer cell lines, an enzyme might be activated to prevent the shortening of DNA termini with each cell division. In 1975–1977, Elizabeth Blackburn, working as a postdoctoral fellow at Yale University with Joseph G. Gall, discovered the unusual nature of telomeres, with their simple repeated DNA sequences composing chromosome ends. Blackburn, Carol Greider, and Jack Szostak were awarded the 2009 Nobel Prize in Physiology or Medicine for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase. Structure and function End replication problem During DNA replication, DNA polymerase cannot replicate the sequences present at the 3' ends of the parent strands. This is a consequence of its unidirectional mode of DNA synthesis: it can only attach new nucleotides to an existing 3'-end (that is, synthesis progresses 5'-3') and thus it requires a primer to initiate replication. On the leading strand (oriented 5'-3' within the replication fork), DNA-polymerase continuously replicates from the point of initiation all the way to the strand's end, with the primer (made of RNA) then being excised and substituted by DNA. 
The lagging strand, however, is oriented 3'-5' with respect to the replication fork so continuous replication by DNA-polymerase is impossible, which necessitates discontinuous replication involving the repeated synthesis of primers further 5' of the site of initiation (see lagging strand replication). The last primer to be involved in lagging-strand replication sits near the 3'-end of the template (corresponding to the potential 5'-end of the lagging-strand). Originally it was believed that the last primer would sit at the very end of the template, thus, once removed, the DNA-polymerase that substitutes primers with DNA (DNA-Pol δ in eukaryotes) would be unable to synthesize the "replacement DNA" from the 5'-end of the lagging strand so that the template nucleotides previously paired to the last primer would not be replicated. It has since been questioned whether the last lagging strand primer is placed exactly at the 3'-end of the template, and it was demonstrated that it is rather synthesized at a distance of about 70–100 nucleotides, which is consistent with the finding that DNA in cultured human cells is shortened by 50–100 base pairs per cell division. If coding sequences are degraded in this process, potentially vital genetic code would be lost. Telomeres are non-coding, repetitive sequences located at the termini of linear chromosomes that act as buffers for the coding sequences further behind. They "cap" the end-sequences and are progressively degraded in the process of DNA replication. The "end replication problem" is exclusive to linear chromosomes, as circular chromosomes do not have ends lying out of reach of DNA-polymerases. Most prokaryotes, relying on circular chromosomes, accordingly do not possess telomeres. A small fraction of bacterial chromosomes (such as those in Streptomyces, Agrobacterium, and Borrelia), however, are linear and possess telomeres, which are very different from those of the eukaryotic chromosomes in structure and function. The known structures of bacterial telomeres take the form of proteins bound to the ends of linear chromosomes, or hairpin loops of single-stranded DNA at the ends of the linear chromosomes. Telomere ends and shelterin At the very 3'-end of the telomere there is a 300 base pair overhang which can invade the double-stranded portion of the telomere forming a structure known as a T-loop. This loop is analogous to a knot, which stabilizes the telomere, and prevents the telomere ends from being recognized as breakpoints by the DNA repair machinery. Should non-homologous end joining occur at the telomeric ends, chromosomal fusion would result. The T-loop is maintained by several proteins, collectively referred to as the shelterin complex. In humans, the shelterin complex consists of six proteins identified as TRF1, TRF2, TIN2, POT1, TPP1, and RAP1. In many species, the sequence repeats are enriched in guanine, e.g. TTAGGG in vertebrates, which allows the formation of G-quadruplexes, a special conformation of DNA involving non-Watson-Crick base pairing. There are different subtypes depending on the involvement of single- or double-stranded DNA, among other things. There is evidence for the 3'-overhang in ciliates (that possess telomere repeats similar to those found in vertebrates) to form such G-quadruplexes that accommodate it, rather than a T-loop. G-quadruplexes present an obstacle for enzymes such as DNA-polymerases and are thus thought to be involved in the regulation of replication and transcription. 
Telomerase Many organisms have a ribonucleoprotein enzyme called telomerase, which carries out the task of adding repetitive nucleotide sequences to the ends of the DNA. Telomerase "replenishes" the telomere "cap" and requires no ATP. In most multicellular eukaryotic organisms, telomerase is active only in germ cells, some types of stem cells such as embryonic stem cells, and certain white blood cells. Telomerase can be reactivated and telomeres reset back to an embryonic state by somatic cell nuclear transfer. The steady shortening of telomeres with each replication in somatic (body) cells may have a role in senescence and in the prevention of cancer. This is because the telomeres act as a sort of time-delay "fuse", eventually running out after a certain number of cell divisions and resulting in the eventual loss of vital genetic information from the cell's chromosome with future divisions. Length Telomere length varies greatly between species, from approximately 300 base pairs in yeast to many kilobases in humans, and usually is composed of arrays of guanine-rich, six- to eight-base-pair-long repeats. Eukaryotic telomeres normally terminate with a 3′ single-stranded DNA overhang ranging from 75 to 300 bases, which is essential for telomere maintenance and capping. Multiple proteins binding single- and double-stranded telomere DNA have been identified. These function in both telomere maintenance and capping. Telomeres form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle, stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop. Shortening Oxidative damage Apart from the end replication problem, in vitro studies have shown that telomeres accumulate damage due to oxidative stress and that oxidative stress-mediated DNA damage has a major influence on telomere shortening in vivo. There is a multitude of ways in which oxidative stress, mediated by reactive oxygen species (ROS), can lead to DNA damage; however, it is as yet unclear whether the elevated rate in telomeres is brought about by their inherent susceptibility or a diminished activity of DNA repair systems in these regions. Despite broad agreement on the findings, flaws regarding measurement and sampling have been pointed out; for example, a suspected species and tissue dependency of oxidative damage to telomeres is said to be insufficiently accounted for. Population-based studies have indicated an interaction between anti-oxidant intake and telomere length. In the Long Island Breast Cancer Study Project (LIBCSP), authors found a moderate increase in breast cancer risk among women with the shortest telomeres and lower dietary intake of beta carotene, vitamin C or E. These results suggest that cancer risk due to telomere shortening may interact with other mechanisms of DNA damage, specifically oxidative stress. Association with aging Although telomeres shorten during the lifetime of an individual, it is telomere shortening-rate rather than telomere length that is associated with the lifespan of a species. Critically short telomeres trigger a DNA damage response and cellular senescence. 
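Because attrition proceeds at a roughly constant number of base pairs per division until a critical threshold triggers senescence, the dynamic can be illustrated with a short simulation. The following Python sketch is illustrative only: the starting length and senescence threshold are assumed round numbers of the right order of magnitude, not measured values, while the per-division loss echoes the 50–100 bp range cited above.

import random

# Illustrative simulation of replicative telomere shortening.
INITIAL_LENGTH_BP = 10_000     # assumed starting telomere length
CRITICAL_LENGTH_BP = 4_000     # assumed senescence threshold
LOSS_PER_DIVISION = (50, 100)  # bp lost per division (range cited above)

def divisions_until_senescence(seed=None):
    """Count cell divisions before the telomere reaches the critical length."""
    rng = random.Random(seed)
    length = INITIAL_LENGTH_BP
    divisions = 0
    while length > CRITICAL_LENGTH_BP:
        length -= rng.randint(*LOSS_PER_DIVISION)  # end-replication loss
        divisions += 1
    return divisions

print(divisions_until_senescence(seed=1))  # roughly 60-120 divisions with these numbers

With these assumed parameters the buffer is exhausted after several dozen divisions, which is the qualitative behaviour behind the "time-delay fuse" metaphor.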
Mice have much longer telomeres, but a greatly accelerated telomere shortening-rate and greatly reduced lifespan compared to humans and elephants. Telomere shortening is associated with aging, mortality, and aging-related diseases in experimental animals. Although many factors can affect human lifespan, such as smoking, diet, and exercise, as persons approach the upper limit of human life expectancy, longer telomeres may be associated with lifespan. Potential effect of psychological stress Meta-analyses found that increased perceived psychological stress was associated with a small decrease in telomere length, but that these associations attenuate to no significant association when accounting for publication bias. The literature concerning telomeres as integrative biomarkers of exposure to stress and adversity is dominated by cross-sectional and correlational studies, which makes causal interpretation problematic. A 2020 review argued that the relationship between psychosocial stress and telomere length appears strongest for stress experienced in utero or early life. Lengthening The phenomenon of limited cellular division was first observed by Leonard Hayflick, and is now referred to as the Hayflick limit. Significant discoveries were subsequently made by a group of scientists organized at Geron Corporation by Geron's founder Michael D. West that tied telomere shortening to the Hayflick limit. The cloning of the catalytic component of telomerase enabled experiments to test whether the expression of telomerase at levels sufficient to prevent telomere shortening was capable of immortalizing human cells. Telomerase was demonstrated in a 1998 publication in Science to be capable of extending cell lifespan, and is now well-recognized as capable of immortalizing human somatic cells. Two studies on long-lived seabirds demonstrate that the role of telomeres is far from being understood. In 2003, scientists observed that the telomeres of Leach's storm-petrel (Oceanodroma leucorhoa) seem to lengthen with chronological age, the first observed instance of such behaviour of telomeres. A study reported that telomere length of different mammalian species correlates inversely rather than directly with lifespan, and concluded that the contribution of telomere length to lifespan remains controversial. There is little evidence that, in humans, telomere length is a significant biomarker of normal aging with respect to important cognitive and physical abilities. Sequences Experimentally verified and predicted telomere sequence motifs from more than 9000 species are collected in the research-community-curated database TeloBase. Some of the experimentally verified telomere nucleotide sequences are also listed on the Telomerase Database website (see nucleic acid notation for letter representations). Research on disease risk Preliminary research indicates that disease risk in aging may be associated with telomere shortening, senescent cells, or SASP (senescence-associated secretory phenotype). Measurement Several techniques are currently employed to assess average telomere length in eukaryotic cells. One method is the Terminal Restriction Fragment (TRF) Southern blot. There is a Web-based Analyser of the Length of Telomeres (WALTER), a software tool for processing TRF images. A Real-Time PCR assay for telomere length involves determining the Telomere-to-Single Copy Gene (T/S) ratio, which has been demonstrated to be proportional to the average telomere length in a cell. 
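The T/S ratio mentioned above is computed from qPCR cycle-threshold (Ct) values for a telomere amplicon and a single-copy gene, usually normalized to a reference DNA sample. The Python sketch below shows the common 2^-ddCt arithmetic; the Ct values are invented for illustration, and exact normalization schemes vary between laboratories.

def t_s_ratio(ct_telomere, ct_single_copy):
    """Relative telomere signal: 2^-(Ct_telomere - Ct_single_copy)."""
    return 2.0 ** -(ct_telomere - ct_single_copy)

def relative_t_s(sample, reference):
    """Sample T/S normalized to a reference DNA, i.e. 2^-ddCt."""
    return t_s_ratio(*sample) / t_s_ratio(*reference)

sample_ct = (14.2, 16.9)     # (telomere Ct, single-copy gene Ct), invented values
reference_ct = (15.0, 16.5)  # the same pair measured on the reference DNA

print(relative_t_s(sample_ct, reference_ct))  # > 1 suggests longer average telomeres than the reference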
Tools have also been developed to estimate telomere length from whole genome sequencing (WGS) experiments. Amongst these are TelSeq, Telomerecat and telomereHunter. Length estimation from WGS typically works by identifying telomeric sequencing reads and then inferring the telomere length that would have produced that number of reads. These methods have been shown to correlate with preexisting methods of estimation such as PCR and TRF. Flow-FISH is used to quantify the length of telomeres in human white blood cells. A semi-automated method for measuring the average length of telomeres with Flow FISH was published in Nature Protocols in 2006. While multiple companies offer telomere length measurement services, the utility of these measurements for widespread clinical or personal use has been questioned. Nobel Prize winner Elizabeth Blackburn, who was co-founder of one company, promoted the clinical utility of telomere length measures. In wildlife During the last two decades, eco-evolutionary studies have investigated the relevance of life-history traits and environmental conditions on telomeres of wildlife. Most of these studies have been conducted in endotherms, i.e. birds and mammals. They have provided evidence for the inheritance of telomere length; however, heritability estimates vary greatly within and among species. Age and telomere length often negatively correlate in vertebrates, but this decline is variable among taxa and linked to the method used for estimating telomere length. In contrast, the available information shows no sex differences in telomere length across vertebrates. Phylogeny and life history traits such as body size or the pace of life can also affect telomere dynamics; such effects have been described, for example, across species of birds and mammals. In 2019, a meta-analysis confirmed that the exposure to stressors (e.g. pathogen infection, competition, reproductive effort and high activity level) was associated with shorter telomeres across different animal taxa. Studies on ectotherms, and other non-mammalian organisms, show that there is no single universal model of telomere erosion; rather, there is wide variation in relevant dynamics across Metazoa, and even within smaller taxonomic groups these patterns appear diverse. See also Epigenetic clock Centromere DNA damage theory of aging Immortality Maximum life span Rejuvenation (aging) Senescence, biological aging Tankyrase Telomere-binding protein G-quartet Immortal DNA strand hypothesis Notes References External links Telomeres and Telomerase: The Means to the End Nobel Lecture by Elizabeth Blackburn, which includes a reference to the impact of stress, and pessimism on telomere length Telomerase and the Consequences of Telomere Dysfunction Nobel Lecture by Carol Greider DNA Ends: Just the Beginning Nobel Lecture by Jack Szostak Chromosomes Molecular biology Repetitive DNA sequences Non-coding DNA
Telomere
[ "Chemistry", "Biology" ]
3,518
[ "Biochemistry", "Senescence", "Molecular genetics", "Repetitive DNA sequences", "Molecular biology", "Telomeres" ]
54,910
https://en.wikipedia.org/wiki/Chlorofluorocarbon
Chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) are fully or partly halogenated hydrocarbons that contain carbon (C), hydrogen (H), chlorine (Cl), and fluorine (F), produced as volatile derivatives of methane, ethane, and propane. The most common example is dichlorodifluoromethane (R-12). R-12 is also commonly called Freon and was used as a refrigerant. Many CFCs have been widely used as refrigerants, propellants (in aerosol applications), gaseous fire suppression systems, and solvents. As a result of CFCs contributing to ozone depletion in the upper atmosphere, the manufacture of such compounds has been phased out under the Montreal Protocol, and they are being replaced with other products such as hydrofluorocarbons (HFCs) and hydrofluoroolefins (HFOs) including R-410A, R-134a and R-1234yf. Structure, properties and production As in simpler alkanes, the carbon atoms in CFCs bond with tetrahedral symmetry. Because the fluorine and chlorine atoms differ greatly in size and effective charge from hydrogen and from each other, the methane-derived CFCs deviate from perfect tetrahedral symmetry. The physical properties of CFCs and HCFCs are tunable by changes in the number and identity of the halogen atoms. In general, they are volatile but less so than their parent alkanes. The decreased volatility is attributed to the molecular polarity induced by the halides, which induces intermolecular interactions. Thus, methane boils at −161 °C whereas the fluoromethanes boil between −51.7 (CF2H2) and −128 °C (CF4). The CFCs have still higher boiling points because the chloride is even more polarizable than fluoride. Because of their polarity, the CFCs are useful solvents, and their boiling points make them suitable as refrigerants. The CFCs are far less flammable than methane, in part because they contain fewer C-H bonds and in part because, in the case of the chlorides and bromides, the released halides quench the free radicals that sustain flames. The densities of CFCs are higher than those of their corresponding alkanes. In general, the density of these compounds correlates with the number of chlorides. CFCs and HCFCs are usually produced by halogen exchange starting from chlorinated methanes and ethanes. Illustrative is the synthesis of chlorodifluoromethane from chloroform: HCCl3 + 2 HF → HCF2Cl + 2 HCl Brominated derivatives are generated by free-radical reactions of hydrochlorofluorocarbons, replacing C-H bonds with C-Br bonds. The production of the anesthetic 2-bromo-2-chloro-1,1,1-trifluoroethane ("halothane") is illustrative: CF3CH2Cl + Br2 → CF3CHBrCl + HBr Applications CFCs and HCFCs are used in various applications because of their low toxicity, reactivity and flammability. Every permutation of fluorine, chlorine and hydrogen based on methane and ethane has been examined and most have been commercialized. Furthermore, many examples are known for higher numbers of carbon as well as related compounds containing bromine. Uses include refrigerants, blowing agents, aerosol propellants in medicinal applications, and degreasing solvents. Billions of kilograms of chlorodifluoromethane are produced annually as a precursor to tetrafluoroethylene, the monomer that is converted into Teflon. Classes of compounds and numbering system Chlorofluorocarbons (CFCs): when derived from methane and ethane these compounds have the formulae CClmF4−m and C2ClmF6−m, where m is nonzero. 
Hydro-chlorofluorocarbons (HCFCs): when derived from methane and ethane these compounds have the formulae CClmFnH4−m−n and C2ClxFyH6−x−y, where m, n, x, and y are nonzero. Bromofluorocarbons (halons) have formulae similar to the CFCs and HCFCs but also include bromine. Hydrofluorocarbons (HFCs): when derived from methane, ethane, propane, and butane, these compounds have the respective formulae CFmH4−m, C2FmH6−m, C3FmH8−m, and C4FmH10−m, where m is nonzero. Numbering system A special numbering system is used for fluorinated alkanes, prefixed with Freon-, R-, CFC- and HCFC-, where the rightmost value indicates the number of fluorine atoms, the next value to the left is the number of hydrogen atoms plus 1, the next value to the left is the number of carbon atoms less one (zeroes are not stated), and the remaining atoms are chlorine. Freon-12, for example, indicates a methane derivative (only two numbers) containing two fluorine atoms (the second 2) and no hydrogen (1-1=0). It is therefore CCl2F2. Another equation that can be applied to get the correct molecular formula of the CFC/R/Freon class compounds is to take the numbering and add 90 to it. The resulting value gives the number of carbons as the first numeral, the second numeral gives the number of hydrogen atoms, and the third numeral gives the number of fluorine atoms. The rest of the unaccounted carbon bonds are occupied by chlorine atoms. The value of this equation is always a three-figure number. An easy example is that of CFC-12, which gives: 90+12=102 -> 1 carbon, 0 hydrogens, 2 fluorine atoms, and hence 2 chlorine atoms, resulting in CCl2F2. The main advantage of this method of deducing the molecular composition, in comparison with the method described in the paragraph above, is that it gives the number of carbon atoms of the molecule. Freons containing bromine are signified by four numbers. Isomers, which are common for ethane and propane derivatives, are indicated by letters following the numbers.
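The add-90 decoding rule described above is mechanical enough to capture in a few lines of code. The Python sketch below turns an R-number into atom counts; it is a simplified illustration that covers saturated methane- and ethane-series compounds and ignores isomer suffix letters and the four-digit bromine designations.

def decode_refrigerant(number):
    """Decode a CFC/HCFC/HFC number via the add-90 rule (saturated compounds only)."""
    code = number + 90            # e.g. R-12 -> 102
    carbons = code // 100
    hydrogens = (code // 10) % 10
    fluorines = code % 10
    # Remaining bonds on a saturated carbon skeleton are filled by chlorine:
    chlorines = 2 * carbons + 2 - hydrogens - fluorines
    return {"C": carbons, "H": hydrogens, "F": fluorines, "Cl": chlorines}

print(decode_refrigerant(12))   # {'C': 1, 'H': 0, 'F': 2, 'Cl': 2} -> CCl2F2
print(decode_refrigerant(22))   # {'C': 1, 'H': 1, 'F': 2, 'Cl': 1} -> CHClF2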
Reactions The reaction of the CFCs that is responsible for the depletion of ozone is the photo-induced scission of a C-Cl bond: CCl3F → CCl2F. + Cl. The chlorine atom, written often as Cl., behaves very differently from the chlorine molecule (Cl2). The radical Cl. is long-lived in the upper atmosphere, where it catalyzes the conversion of ozone into O2. Ozone absorbs UV-B radiation, so its depletion allows more of this high energy radiation to reach the Earth's surface. Bromine atoms are even more efficient catalysts; hence brominated CFCs are also regulated. Impact as greenhouse gases CFCs were phased out via the Montreal Protocol due to their part in ozone depletion. However, the atmospheric impacts of CFCs are not limited to their role as ozone-depleting chemicals. Their infrared absorption bands prevent heat at those wavelengths from escaping Earth's atmosphere. CFCs have their strongest absorption bands from C-F and C-Cl bonds in the spectral region of 7.8–15.3 μm, referred to as the "atmospheric window" due to the relative transparency of the atmosphere within this region. The strength of CFC absorption bands and the unique susceptibility of the atmosphere at wavelengths where CFCs (indeed all covalent fluorine compounds) absorb radiation creates a "super" greenhouse effect from CFCs and other unreactive fluorine-containing gases such as perfluorocarbons, HFCs, HCFCs, bromofluorocarbons, SF6, and NF3. This "atmospheric window" absorption is intensified by the low concentration of each individual CFC. Because CO2 is close to saturation, with high concentrations and few infrared absorption bands, the radiation budget and hence the greenhouse effect has low sensitivity to changes in CO2 concentration; the increase in temperature is roughly logarithmic. Conversely, the low concentration of CFCs allows their effects to increase linearly with mass, so that chlorofluorocarbons are greenhouse gases with a much higher potential to enhance the greenhouse effect than CO2. Groups are actively disposing of legacy CFCs to reduce their impact on the atmosphere. According to NASA in 2018, the hole in the ozone layer has begun to recover as a result of CFC bans. However, research released in 2019 reported an alarming increase in CFCs, pointing to unregulated use in China. History Prior to and during the 1920s, refrigerators used toxic gases as refrigerants, including ammonia, sulphur dioxide, and chloromethane. Later in the 1920s, after a series of fatal accidents involving the leaking of chloromethane from refrigerators, a major collaborative effort began between the American corporations Frigidaire, General Motors, and DuPont to develop a safer, non-toxic alternative. Thomas Midgley Jr. of General Motors is credited with synthesizing the first chlorofluorocarbons. The Frigidaire corporation was issued the first patent, number 1,886,339, for the formula for CFCs on December 31, 1928. In a 1930 demonstration for the American Chemical Society, Midgley flamboyantly demonstrated all these properties by inhaling a breath of the gas and using it to blow out a candle. By 1930, General Motors and Du Pont had formed the Kinetic Chemical Company to produce Freon, and by 1935, over 8 million refrigerators utilizing R-12 were sold by Frigidaire and its competitors. In 1932, Carrier began using R-11 in the world's first self-contained home air conditioning unit, known as the "atmospheric cabinet". As a result of CFCs being largely non-toxic, they quickly became the coolant of choice in large air-conditioning systems. Public health codes in cities were revised to designate chlorofluorocarbons as the only gases that could be used as refrigerants in public buildings. Growth in CFCs continued over the following decades, leading to peak annual sales of over 1 billion USD with greater than 1 million metric tonnes being produced annually. It was not until 1974 that two University of California chemists, Professor F. Sherwood Rowland and Dr. Mario Molina, discovered that the use of chlorofluorocarbons was causing a significant depletion in atmospheric ozone concentrations. This initiated the environmental effort which eventually resulted in the enactment of the Montreal Protocol. Commercial development and use in fire extinguishing During World War II, various chloroalkanes were in standard use in military aircraft, although these early halons suffered from excessive toxicity. Nevertheless, after the war they slowly became more common in civil aviation as well. In the 1960s, fluoroalkanes and bromofluoroalkanes became available and were quickly recognized as being highly effective fire-fighting materials. Much early research with Halon 1301 was conducted under the auspices of the US Armed Forces, while Halon 1211 was, initially, mainly developed in the UK. By the late 1960s they were standard in many applications where water and dry-powder extinguishers posed a threat of damage to the protected property, including computer rooms, telecommunications switches, laboratories, museums and art collections. 
Beginning with warships, in the 1970s, bromofluoroalkanes also progressively came to be associated with rapid knockdown of severe fires in confined spaces with minimal risk to personnel. By the early 1980s, bromofluoroalkanes were in common use on aircraft, ships, and large vehicles as well as in computer facilities and galleries. However, concern was beginning to be expressed about the impact of chloroalkanes and bromoalkanes on the ozone layer. The Vienna Convention for the Protection of the Ozone Layer did not cover bromofluoroalkanes under the same restrictions; instead, the consumption of bromofluoroalkanes was frozen at 1986 levels. This was because the emergency discharge of extinguishing systems was thought to be too small in volume to produce a significant impact, and too important to human safety to restrict. Regulation Since the late 1970s, the use of CFCs has been heavily regulated because of their destructive effects on the ozone layer. After the development of his electron capture detector, James Lovelock was the first to detect the widespread presence of CFCs in the air, finding a mole fraction of 60 ppt of CFC-11 over Ireland. In a self-funded research expedition ending in 1973, Lovelock went on to measure CFC-11 in both the Arctic and Antarctic, finding the presence of the gas in each of 50 air samples collected, and concluding that CFCs are not hazardous to the environment. The experiment did however provide the first useful data on the presence of CFCs in the atmosphere. The damage caused by CFCs was discovered by Sherry Rowland and Mario Molina who, after hearing a lecture on the subject of Lovelock's work, embarked on research resulting in the first publication suggesting the connection in 1974. It turns out that one of CFCs' most attractive features, their low reactivity, is key to their most destructive effects. CFCs' lack of reactivity gives them a lifespan that can exceed 100 years, giving them time to diffuse into the upper stratosphere. Once in the stratosphere, the sun's ultraviolet radiation is strong enough to cause the homolytic cleavage of the C-Cl bond. In 1976, under the Toxic Substances Control Act, the EPA banned the commercial manufacture and use of CFCs as aerosol propellants. This was later superseded by the 1990 amendments to the Clean Air Act to address stratospheric ozone depletion. By 1987, in response to a dramatic seasonal depletion of the ozone layer over Antarctica, diplomats in Montreal forged a treaty, the Montreal Protocol, which called for drastic reductions in the production of CFCs. On 2 March 1989, 12 European Community nations agreed to ban the production of all CFCs by the end of the century. In 1990, diplomats met in London and voted to significantly strengthen the Montreal Protocol by calling for a complete elimination of CFCs by 2000. By 2010, CFCs should have been completely eliminated from developing countries as well. Because the only CFCs available to countries adhering to the treaty come from recycling, their prices have increased considerably. A worldwide end to production should also terminate the smuggling of this material. However, there are current CFC smuggling issues, as recognized by the United Nations Environment Programme (UNEP) in a 2006 report titled "Illegal Trade in Ozone Depleting Substances". UNEP estimates that between 16,000 and 38,000 tonnes of CFCs passed through the black market in the mid-1990s. 
The report estimated that between 7,000 and 14,000 tonnes of CFCs are smuggled annually into developing countries. Asian countries are those with the most smuggling; as of 2007, China, India and South Korea were found to account for around 70% of global CFC production; South Korea went on to ban CFC production in 2010. Possible reasons for continued CFC smuggling were also examined: the report noted that many of the refrigeration systems that were designed to be operated utilizing the banned CFC products have long lifespans and continue to operate. For such equipment, continuing to use smuggled CFCs is sometimes cheaper than replacing it with a more ozone-friendly appliance. Additionally, CFC smuggling is not considered a significant issue, so the perceived penalties for smuggling are low. In 2018, public attention was drawn to the issue when it emerged that, at an unknown location in East Asia, an estimated 13,000 metric tons of CFCs had been produced annually since about 2012 in violation of the protocol. While the eventual phaseout of CFCs is likely, efforts are being made to stem these current non-compliance problems. By the time of the Montreal Protocol, it was realised that deliberate and accidental discharges during system tests and maintenance accounted for substantially larger volumes than emergency discharges, and consequently halons were brought into the treaty, albeit with many exceptions. Regulatory gap While the production and consumption of CFCs are regulated under the Montreal Protocol, emissions from existing banks of CFCs are not regulated under the agreement. In 2002, there were an estimated 5,791 kilotons of CFCs in existing products such as refrigerators, air conditioners, aerosol cans and others. Approximately one-third of these CFCs are projected to be emitted over the next decade if action is not taken, posing a threat to both the ozone layer and the climate. A proportion of these CFCs can be safely captured and destroyed by means of high-temperature, controlled incineration, which destroys the CFC molecule. Regulation and DuPont In 1978 the United States banned the use of CFCs such as Freon in aerosol cans, the beginning of a long series of regulatory actions against their use. The critical DuPont manufacturing patent for Freon ("Process for Fluorinating Halohydrocarbons", U.S. Patent #3258500) was set to expire in 1979. In conjunction with other industry peers, DuPont formed a lobbying group, the "Alliance for Responsible CFC Policy", to combat regulations of ozone-depleting compounds. In 1986 DuPont, with new patents in hand, reversed its previous stance and publicly condemned CFCs. DuPont representatives appeared before the Montreal Protocol negotiators, urging that CFCs be banned worldwide, and stated that their new HCFCs would meet the worldwide demand for refrigerants. Phasing-out of CFCs Use of certain chloroalkanes as solvents for large-scale applications, such as dry cleaning, has been phased out, for example, by the IPPC directive on greenhouse gases in 1994 and by the volatile organic compounds (VOC) directive of the EU in 1997. Permitted chlorofluoroalkane uses are medicinal only. Bromofluoroalkanes have been largely phased out and the possession of equipment for their use is prohibited in some countries like the Netherlands and Belgium, from 1 January 2004, based on the Montreal Protocol and guidelines of the European Union. Production of new stocks ceased in most (probably all) countries in 1994. 
However, many countries still require aircraft to be fitted with halon fire suppression systems because no safe and completely satisfactory alternative has been discovered for this application. There are also a few other, highly specialized uses. These programs recycle halon through "halon banks" coordinated by the Halon Recycling Corporation to ensure that discharge to the atmosphere occurs only in a genuine emergency and to conserve remaining stocks. The interim replacements for CFCs are hydrochlorofluorocarbons (HCFCs), which deplete stratospheric ozone, but to a much lesser extent than CFCs. Ultimately, hydrofluorocarbons (HFCs) will replace HCFCs. Unlike CFCs and HCFCs, HFCs have an ozone depletion potential (ODP) of 0. DuPont began producing hydrofluorocarbons as alternatives to Freon in the 1980s. These included Suva refrigerants and Dymel propellants. Natural refrigerants are climate-friendly solutions that are enjoying increasing support from large companies and governments interested in reducing global warming emissions from refrigeration and air conditioning. Phasing-out of HFCs and HCFCs Hydrofluorocarbons are included in the Kyoto Protocol and are regulated under the Kigali Amendment to the Montreal Protocol due to their very high Global Warming Potential (GWP) and the recognition of halocarbon contributions to climate change. On September 21, 2007, approximately 200 countries agreed to accelerate the elimination of hydrochlorofluorocarbons entirely by 2020 in a United Nations-sponsored Montreal summit. Developing nations were given until 2030. Many nations, such as the United States and China, who had previously resisted such efforts, agreed with the accelerated phase-out schedule. India successfully achieved the complete phase-out of HCFC-141b in 2020. It was reported that levels of HCFCs in the atmosphere had started to fall in 2021 due to their phase-out under the Montreal Protocol. Properly collecting, controlling, and destroying CFCs and HCFCs While new production of these refrigerants has been banned, large volumes still exist in older systems and have been said to pose an immediate threat to the environment. Preventing the release of these harmful refrigerants has been ranked as one of the single most effective actions we can take to mitigate catastrophic climate change. 
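Since refrigerant climate impact is compared through Global Warming Potential, the CO2-equivalent of a released charge is simply its mass multiplied by its GWP. The Python sketch below illustrates the arithmetic; the GWP values are approximate 100-year figures of the kind published in IPCC assessment tables and should be checked against a current source before any real use.

GWP_100YR = {            # approximate 100-year GWP values; verify before use
    "CFC-12": 10900,
    "HCFC-22": 1810,
    "HFC-134a": 1430,
}

def co2_equivalent_kg(refrigerant, mass_kg):
    """CO2-equivalent mass for a given refrigerant release."""
    return mass_kg * GWP_100YR[refrigerant]

print(co2_equivalent_kg("CFC-12", 1.0))  # a 1 kg leak is roughly 10.9 tonnes CO2-equivalent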
According to the Montreal Protocol, HCFC-141b was supposed to be phased out completely and replaced with zero-ODP substances such as cyclopentane, HFOs, and HFC-345a before January 2020. Among the natural refrigerants (along with ammonia and carbon dioxide), hydrocarbons have negligible environmental impacts and are also used worldwide in domestic and commercial refrigeration applications, and are becoming available in new split system air conditioners. Various other solvents and methods have replaced the use of CFCs in laboratory analytics. In metered-dose inhalers (MDIs), a substitute propellant that does not affect the ozone layer was developed, known as "hydrofluoroalkane". Development of hydrofluoroolefins as alternatives to CFCs and HCFCs The development of hydrofluoroolefins (HFOs) as replacements for hydrochlorofluorocarbons and hydrofluorocarbons began after the Kigali Amendment to the Montreal Protocol in 2016, which called for the phase-out of high global warming potential (GWP) refrigerants and their replacement with refrigerants of lower GWP, closer to that of carbon dioxide. HFOs have an ozone depletion potential of 0.0, compared to the 1.0 of the reference compound CFC-11, and a low GWP, which makes them environmentally safer alternatives to CFCs, HCFCs and HFCs. Hydrofluoroolefins serve as functional replacements for applications where high-GWP hydrofluorocarbons were once used. In April 2022, the EPA signed a pre-published final rule, Listing of HFO-1234yf under the Significant New Alternatives Policy (SNAP) Program for Motor Vehicle Air Conditioning in Nonroad Vehicles and Servicing Fittings for Small Refrigerant Cans. This ruling allows HFO-1234yf to take over in applications where ozone-depleting CFCs such as R-12 and high-GWP HFCs such as R-134a were once used. The phaseout and replacement of CFCs and HFCs in the automotive industry will ultimately reduce the release of these gases to the atmosphere and in turn contribute positively to the mitigation of climate change. Tracer of ocean circulation Since the time history of CFC concentrations in the atmosphere is relatively well known, they have provided an important constraint on ocean circulation. CFCs dissolve in seawater at the ocean surface and are subsequently transported into the ocean interior. Because CFCs are inert, their concentration in the ocean interior reflects simply the convolution of their atmospheric time evolution and ocean circulation and mixing. CFC and SF6 tracer-derived age of ocean water Chlorofluorocarbons (CFCs) are anthropogenic compounds that have been released into the atmosphere since the 1930s in various applications such as air-conditioning, refrigeration, blowing agents in foams, insulations and packing materials, propellants in aerosol cans, and solvents. The entry of CFCs into the ocean makes them extremely useful as transient tracers to estimate rates and pathways of ocean circulation and mixing processes. However, due to production restrictions on CFCs in the 1980s, atmospheric concentrations of CFC-11 and CFC-12 have stopped increasing, and the CFC-11 to CFC-12 ratio in the atmosphere has been steadily decreasing, making dating of water masses more problematic. Meanwhile, production and release of sulfur hexafluoride (SF6) have rapidly increased in the atmosphere since the 1970s. Similar to CFCs, SF6 is also an inert gas and is not affected by oceanic chemical or biological activities. 
Thus, using CFCs in concert with SF6 as a tracer resolves the water dating issues due to decreased CFC concentrations. Using CFCs or SF6 as a tracer of ocean circulation allows for the derivation of rates for ocean processes due to the time-dependent source function. The elapsed time since a subsurface water mass was last in contact with the atmosphere is the tracer-derived age. Estimates of age can be derived based on the partial pressure of an individual compound and the ratio of the partial pressures of CFCs to each other (or to SF6). Partial pressure and ratio dating techniques The age of a water parcel can be estimated by the CFC partial pressure (pCFC) age or the SF6 partial pressure (pSF6) age. The pCFC of a water sample is defined as pCFC = [CFC] / F(T, S), where [CFC] is the measured CFC concentration (pmol kg−1) and F is the solubility of the CFC gas in seawater as a function of temperature and salinity. The CFC partial pressure is expressed in units of 10−12 atmospheres, or parts-per-trillion (ppt). The solubilities of CFC-11 and CFC-12 were previously measured by Warner and Weiss. Additionally, the solubility of CFC-113 was measured by Bu and Warner, and that of SF6 by Wanninkhof et al. and Bullister et al. These authors have expressed the solubility (F) at a total pressure of 1 atm in the form ln F = a1 + a2 (100/T) + a3 ln(T/100) + S [b1 + b2 (T/100) + b3 (T/100)^2], where F = solubility expressed in either mol l−1 atm−1 or mol kg−1 atm−1, T = absolute temperature, S = salinity in parts per thousand (ppt), and a1, a2, a3, b1, b2, and b3 are constants to be determined from the least squares fit to the solubility measurements. This equation is derived from the integrated Van 't Hoff equation and the logarithmic Setchenow salinity dependence. It can be noted that the solubility of CFCs increases with decreasing temperature at approximately 1% per degree Celsius. Once the partial pressure of the CFC (or SF6) is derived, it is then compared to the atmospheric time histories for CFC-11, CFC-12, or SF6, in which the pCFC directly corresponds to the year with the same atmospheric value. The difference between the corresponding date and the collection date of the seawater sample is the average age for the water parcel. The age of a parcel of water can also be calculated using the ratio of two CFC partial pressures or the ratio of the SF6 partial pressure to a CFC partial pressure.
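The partial-pressure dating procedure above reduces to: divide the measured concentration by the temperature- and salinity-dependent solubility to obtain pCFC, then find the year whose atmospheric value matches. The Python sketch below illustrates this; the solubility coefficients and the atmospheric history are placeholders rather than the published Warner and Weiss constants, and unit conversions are omitted, so it shows the procedure rather than producing real ages.

import math

# PLACEHOLDER coefficients, not the published Warner & Weiss values:
A1, A2, A3 = -130.0, 200.0, 90.0   # van 't Hoff terms
B1, B2, B3 = -0.15, 0.10, -0.015   # Setchenow salinity terms

def solubility(T_kelvin, salinity_ppt):
    """Warner & Weiss functional form:
    ln F = a1 + a2(100/T) + a3 ln(T/100) + S[b1 + b2(T/100) + b3(T/100)^2]."""
    t = T_kelvin / 100.0
    ln_f = A1 + A2 / t + A3 * math.log(t) + salinity_ppt * (B1 + B2 * t + B3 * t * t)
    return math.exp(ln_f)

# Placeholder atmospheric history (year -> atmospheric pCFC, ppt):
ATM_HISTORY = {1960: 30.0, 1970: 120.0, 1980: 300.0, 1990: 480.0, 2000: 540.0}

def pcfc_age(concentration, T_kelvin, salinity_ppt, collection_year):
    """Derive pCFC = [CFC]/F and match it against the atmospheric history."""
    pcfc = concentration / solubility(T_kelvin, salinity_ppt)
    year = min(ATM_HISTORY, key=lambda y: abs(ATM_HISTORY[y] - pcfc))
    return collection_year - year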
Safety According to their material safety data sheets, CFCs and HCFCs are colorless, volatile, non-toxic liquids and gases with a faintly sweet ethereal odor. Overexposure at concentrations of 11% or more may cause dizziness, loss of concentration, central nervous system depression or cardiac arrhythmia. Vapors displace air and can cause asphyxiation in confined spaces. Dermal absorption of chlorofluorocarbons is possible, but low. Pulmonary uptake of inhaled chlorofluorocarbons occurs quickly, with peak blood concentrations occurring in as little as 15 seconds and steady concentrations leveling out after 20 minutes. Absorption of orally ingested chlorofluorocarbons is 35 to 48 times lower compared to inhalation. Although non-flammable, their combustion products include hydrofluoric acid and related species. Normal occupational exposure is rated at 0.07% and does not pose any serious health risks. References External links Gas conversion table Nomenclature FAQ Class I Ozone-Depleting Substances Class II Ozone-Depleting Substances (HCFCs) History of halon-use by the US Navy Process using pyrolysis in an ultra high temperature plasma arc, for the elimination of CFCs Freon in car A/C Phasing out halons in extinguishers Aerosol propellants DuPont Firefighting Greenhouse gases Halogenated solvents Halomethanes Heating, ventilation, and air conditioning Ozone depletion Refrigerants Belgian inventions Environmental controversies Pollution Air pollution
Chlorofluorocarbon
[ "Chemistry", "Environmental_science" ]
6,342
[ "Greenhouse gases", "Environmental chemistry" ]
54,912
https://en.wikipedia.org/wiki/Dew%20point
The dew point of a given body of air is the temperature to which it must be cooled to become saturated with water vapor. This temperature depends on the pressure and water content of the air. When the air is cooled below the dew point, its moisture capacity is reduced and airborne water vapor will condense to form liquid water known as dew. When this occurs through the air's contact with a colder surface, dew will form on that surface. The dew point is affected by the air's humidity. The more moisture the air contains, the higher its dew point. When the temperature is below the freezing point of water, the dew point is called the frost point, as frost is formed via deposition rather than condensation. In liquids, the analog to the dew point is the cloud point. Humidity If all the other factors influencing humidity remain constant, at ground level the relative humidity rises as the temperature falls; this is because less vapor is needed to saturate the air. In normal conditions, the dew point temperature will not be greater than the air temperature, since relative humidity typically does not exceed 100%. In technical terms, the dew point is the temperature at which the water vapor in a sample of air at constant barometric pressure condenses into liquid water at the same rate at which it evaporates. At temperatures below the dew point, the rate of condensation will be greater than that of evaporation, forming more liquid water. The condensed water is called dew when it forms on a solid surface, or frost if it freezes. In the air, the condensed water is called either fog or a cloud, depending on its altitude when it forms. If the temperature is below the dew point, and no dew or fog forms, the vapor is called supersaturated. This can happen if there are not enough particles in the air to act as condensation nuclei. The dew point depends on how much water vapor the air contains. If the air is very dry and has few water molecules, the dew point is low and surfaces must be much cooler than the air for condensation to occur. If the air is very humid and contains many water molecules, the dew point is high and condensation can occur on surfaces that are only a few degrees cooler than the air. A high relative humidity implies that the dew point is close to the current air temperature. A relative humidity of 100% indicates the dew point is equal to the current temperature and that the air is maximally saturated with water. When the moisture content remains constant and temperature increases, relative humidity decreases, but the dew point remains constant. General aviation pilots use dew point data to calculate the likelihood of carburetor icing and fog, and to estimate the height of a cumuliform cloud base. Increasing the barometric pressure raises the dew point. This means that, if the pressure increases, the mass of water vapor per volume unit of air must be reduced in order to maintain the same dew point. For example, consider New York City and Denver. Because Denver is at a higher elevation than New York, it will tend to have a lower barometric pressure. This means that if the dew point and temperature in both cities are the same, the amount of water vapor in the air will be greater in Denver. Relationship to human comfort When the air temperature is high, the human body uses the evaporation of perspiration to cool down, with the cooling effect directly related to how fast the perspiration evaporates. 
The rate at which perspiration can evaporate depends on how much moisture is in the air and how much moisture the air can hold. If the air is already saturated with moisture (humid), perspiration will not evaporate. The body's thermoregulation will produce perspiration in an effort to keep the body at its normal temperature even when the rate at which it is producing sweat exceeds the evaporation rate, so one can become coated with sweat on humid days even without generating additional body heat (such as by exercising). As the air surrounding one's body is warmed by body heat, it will rise and be replaced with other air. If air is moved away from one's body with a natural breeze or a fan, sweat will evaporate faster, making perspiration more effective at cooling the body, thereby increasing comfort. By contrast, comfort decreases as unevaporated perspiration increases. A wet bulb thermometer also uses evaporative cooling, so it provides a good measure for use in evaluating comfort level. Discomfort also exists when the dew point is very low. The drier air can cause skin to crack and become irritated more easily. It will also dry out the airways. The US Occupational Safety and Health Administration recommends indoor air be maintained at a 20–60% relative humidity. Lower dew points correlate with lower ambient temperatures and cause the body to require less cooling. A lower dew point can go along with a high temperature only at extremely low relative humidity, allowing for relatively effective cooling. People inhabiting tropical and subtropical climates acclimatize somewhat to higher dew points. Thus, a resident of Singapore or Miami, for example, might have a higher threshold for discomfort than a resident of a temperate climate like London or Chicago. People accustomed to temperate climates often begin to feel uncomfortable as the dew point climbs, while people acclimatized to more humid regions may find the same dew points comfortable. Most inhabitants of temperate areas will consider high dew points oppressive and tropical-like, while inhabitants of hot and humid areas may not find this uncomfortable. Thermal comfort depends not just on physical environmental factors, but also on psychological factors. Dew point weather records Highest dew point temperature: A dew point of 35 °C (95 °F), while the temperature was 42 °C (108 °F), was observed at Dhahran, Saudi Arabia, at 3:00 p.m. on 8 July 2003. Measurement Devices called hygrometers are used to measure dew point over a wide range of temperatures. These devices consist of a polished metal mirror which is cooled as air is passed over it. The dew point is revealed by observing the loss of clarity in the reflection cast by the mirror. Manual devices of this sort can be used to calibrate other types of humidity sensors, and automatic sensors may be used in a control loop with a humidifier or dehumidifier to control the dew point of the air in a building or in a smaller space for a manufacturing process. Calculating the dew point A well-known empirical approximation used to calculate the dew point, Td, given just the actual ("dry bulb") air temperature, T (in degrees Celsius) and relative humidity (in percent), RH, is the Magnus formula: Td = c γ(T, RH) / (b − γ(T, RH)), where γ(T, RH) = ln(RH/100) + bT/(c + T), b = 17.625, and c = 243.04 °C. The values of b and c were selected by minimizing the maximum deviation over the range −40 °C to +50 °C. 
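The Magnus approximation is straightforward to evaluate directly. The Python sketch below implements it with the b and c constants quoted above, together with the simple rule of thumb discussed later in this section, so the two can be compared; temperatures are in degrees Celsius and RH in percent.

import math

B = 17.625
C = 243.04  # degrees Celsius

def dew_point_magnus(temp_c, rh_percent):
    """Magnus formula with the constants quoted above."""
    gamma = math.log(rh_percent / 100.0) + B * temp_c / (C + temp_c)
    return C * gamma / (B - gamma)

def dew_point_simple(temp_c, rh_percent):
    """Rule of thumb: dew point drops about 1 degree C per 5% RH below 100%.
    Reasonable only for RH above roughly 50%."""
    return temp_c - (100.0 - rh_percent) / 5.0

print(dew_point_magnus(25.0, 60.0))  # about 16.7
print(dew_point_simple(25.0, 60.0))  # 17.0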
The more complete formulation and origin of this approximation involves the interrelated saturated water vapor pressure (in units of millibars, also called hectopascals) at T, Ps(T), and the actual vapor pressure (also in units of millibars), Pa(T): Ps(T) = a exp(bT/(c + T)) and Pa(T) = (RH/100) Ps(T) = a exp(γ(T, RH)), so that Td = c ln(Pa/a) / (b − ln(Pa/a)). The actual vapor pressure can either be found from RH or approximated from the barometric pressure (in millibars), BPmbar, and the "wet-bulb" temperature, Tw, as Pa ≈ Ps(Tw) − BPmbar 0.00066 (1 + 0.00115 Tw)(T − Tw) (unless declared otherwise, all temperatures are expressed in degrees Celsius). For greater accuracy, Ps(T) (and therefore γ(T, RH)) can be enhanced, using part of the Bögel modification, also known as the Arden Buck equation, which adds a fourth constant d: Ps(T) = a exp[(b − T/d)(T/(c + T))], where a = 6.1121 mbar, b = 18.678, c = 257.14 °C, d = 234.5 °C. There are several different constant sets in use. The ones used in NOAA's presentation are taken from a 1980 paper by David Bolton in the Monthly Weather Review: a = 6.112 mbar, b = 17.67, c = 243.5 °C. These valuations provide a maximum error of 0.1% for −30 °C ≤ T ≤ +35 °C and 1% < RH < 100%. Also noteworthy is the Sonntag (1990) set, a = 6.112 mbar, b = 17.62, c = 243.12 °C, for −45 °C ≤ T ≤ +60 °C (error ±0.35 °C). Another common set of values originates from the 1974 Psychrometry and Psychrometric Charts: a = 6.105 mbar, b = 17.27, c = 237.7 °C, for its stated temperature range (error ±0.4 °C). Also, in the Journal of Applied Meteorology and Climatology, Arden Buck presents several different valuation sets, with different maximum errors for different temperature ranges. Two particular sets together cover a range of −40 °C to +50 °C, with even lower maximum error within the indicated range than all the sets above: a = 6.1121 mbar, b = 17.368, c = 238.88 °C, for 0 °C ≤ T ≤ +50 °C (error ≤ 0.05%); and a = 6.1121 mbar, b = 17.966, c = 247.15 °C, for −40 °C ≤ T ≤ 0 °C (error ≤ 0.06%). Simple approximation There is also a very simple approximation that allows conversion between the dew point, temperature, and relative humidity: Td ≈ T − (100 − RH)/5. This approach is accurate to within about ±1 °C as long as the relative humidity is above 50%. This can be expressed as a simple rule of thumb: For every 1 °C difference in the dew point and dry bulb temperatures, the relative humidity decreases by 5%, starting with RH = 100% when the dew point equals the dry bulb temperature. The derivation of this approach, a discussion of its accuracy, comparisons to other approximations, and more information on the history and applications of the dew point can be found in an article published in the Bulletin of the American Meteorological Society. For temperatures in degrees Fahrenheit, these approximations work out to Td ≈ T − (100 − RH) 3/10, that is, the dew point drops 3 °F for every 10 percent that the relative humidity falls below 100%. For example, a relative humidity of 100% means the dew point is the same as the air temperature. For 90% RH, the dew point is 3 °F lower than the air temperature. For every 10 percent lower, the dew point drops 3 °F. Frost point The frost point is similar to the dew point in that it is the temperature to which a given parcel of humid air must be cooled, at constant atmospheric pressure, for water vapor to be deposited on a surface as ice crystals without undergoing the liquid phase (compare with sublimation). The frost point for a given parcel of air is always higher than the dew point, as breaking the stronger bonding between water molecules on the surface of ice compared to the surface of (supercooled) liquid water requires a higher temperature. 
See also Bubble point Carburetor heat Hydrocarbon dew point Psychrometrics Thermodynamic diagrams References External links Often Needed Answers about Temp, Humidity & Dew Point from the sci.geo.meteorology Atmospheric thermodynamics Gases Humidity and hygrometry Meteorological quantities Psychrometrics Temperature Threshold temperatures sv:Luftfuktighet#Daggpunkt
Dew point
[ "Physics", "Chemistry", "Mathematics" ]
2,330
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Gases", "Physical quantities", "Physical phenomena", "Phase transitions", "SI base quantities", "Intensive quantities", "Phases of matter", "Threshold temperatures", "Meteorological quantities", "Quantity", "Thermody...
54,952
https://en.wikipedia.org/wiki/Technical%20drawing
Technical drawing, drafting or drawing, is the act and discipline of composing drawings that visually communicate how something functions or is constructed. Technical drawing is essential for communicating ideas in industry and engineering. To make the drawings easier to understand, people use familiar symbols, perspectives, units of measurement, notation systems, visual styles, and page layout. Together, such conventions constitute a visual language and help to ensure that the drawing is unambiguous and relatively easy to understand. Many of the symbols and principles of technical drawing are codified in an international standard called ISO 128. The need for precise communication in the preparation of a functional document distinguishes technical drawing from the expressive drawing of the visual arts. Artistic drawings are subjectively interpreted; their meanings are multiply determined. Technical drawings are understood to have one intended meaning. A draftsman is a person who makes a drawing (technical or expressive). A professional drafter who makes technical drawings is sometimes called a drafting technician. Methods Sketching A sketch is a quickly executed, freehand drawing that is usually not intended as a finished work. In general, sketching is a quick way to record an idea for later use. Architect's sketches primarily serve as a way to try out different ideas and establish a composition before a more finished work, especially when the finished work is expensive and time-consuming. Architectural sketches, for example, are a kind of diagram. These sketches, like metaphors, are used by architects as a means of communication in aiding design collaboration. This tool helps architects to abstract attributes of hypothetical provisional design solutions and summarize their complex patterns, thereby enhancing the design process. Manual or by instrument The basic drafting procedure is to place a piece of paper (or other material) on a smooth surface with right-angle corners and straight sides—typically a drawing board. A sliding straightedge known as a T-square is then placed on one of the sides, allowing it to be slid across the side of the table, and over the surface of the paper. "Parallel lines" can be drawn by moving the T-square and running a pencil or technical pen along the T-square's edge. The T-square is used to hold other devices such as set squares or triangles. In this case, the drafter places one or more triangles of known angles on the T-square — which is itself at right angles to the edge of the table — and can then draw lines at any chosen angle to others on the page. Modern drafting tables are equipped with a drafting machine that is supported on both sides of the table to slide over a large piece of paper. Because it is secured on both sides, lines drawn along the edge are guaranteed to be parallel. The drafter uses several technical drawing tools to draw curves and circles. Primary among these are the compasses, used for drawing arcs and circles, and the French curve, for drawing curves. A spline is a rubber coated articulated metal that can be manually bent to most curves. Drafting templates assist the drafter with creating recurring objects in a drawing without having to reproduce the object from scratch every time. This is especially useful when using common symbols; i.e. in the context of stagecraft, a lighting designer will draw from the USITT standard library of lighting fixture symbols to indicate the position of a common fixture across multiple positions. 
Templates are sold commercially by a number of vendors, usually customized to a specific task, but it is also not uncommon for drafters to create their own templates.

This basic drafting system requires an accurate table and constant attention to the positioning of the tools. A common error is to allow the triangles to push the top of the T-square down slightly, thereby throwing off all angles. Even tasks as simple as drawing two angled lines meeting at a point require a number of moves of the T-square and triangles, and in general, drafting can be a time-consuming process.

A solution to these problems was the introduction of the mechanical "drafting machine", an application of the pantograph (sometimes referred to incorrectly as a "pentagraph" in these situations), which allowed the drafter to have an accurate right angle at any point on the page quickly. These machines often included the ability to change the angle, hence removing the need for the triangles.

In addition to the mastery of the mechanics of drawing lines, arcs and circles (and text) onto a piece of paper—with respect to the detailing of physical objects—the drafting effort requires a thorough understanding of geometry, trigonometry and spatial comprehension, and in all cases demands precision, accuracy, and a high order of attention to detail. Although drafting is sometimes accomplished by a project engineer, architect, or shop personnel (such as a machinist), skilled drafters (and/or designers) usually accomplish the task, and are always in demand to some degree.

Computer aided design

Today, the mechanics of the drafting task have largely been automated and accelerated through the use of computer-aided design (CAD) systems. There are two types of computer-aided design systems used for the production of technical drawings: two-dimensional ("2D") and three-dimensional ("3D").

2D CAD systems such as AutoCAD or MicroStation replace the paper drawing discipline. The lines, circles, arcs, and curves are created within the software. It is down to the technical drawing skill of the user to produce the drawing. There is still much scope for error in the drawing when producing first- and third-angle orthographic projections, auxiliary projections and cross-section views. A 2D CAD system is merely an electronic drawing board. Its greatest strength over direct-to-paper technical drawing is in the making of revisions. Whereas in a conventional hand-drawn technical drawing, if a mistake is found or a modification is required, a new drawing must be made from scratch, a 2D CAD system allows a copy of the original to be modified, saving considerable time. 2D CAD systems can be used to create plans for large projects such as buildings and aircraft but provide no way to check that the various components will fit together.

A 3D CAD system (such as KeyCreator, Autodesk Inventor, or SolidWorks) first produces the geometry of the part; the technical drawing comes from user-defined views of that geometry. Any orthographic, projected or sectioned view is created by the software. There is no scope for error in the production of these views. The main scope for error comes in setting the parameter of first- or third-angle projection and displaying the relevant symbol on the technical drawing. 3D CAD allows individual parts to be assembled together to represent the final product. Buildings, aircraft, ships, and cars are modelled, assembled, and checked in 3D before technical drawings are released for manufacture.
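Because every view in a 3D CAD system is derived from the same model, view generation reduces to projecting the part's geometry onto a chosen plane. The following minimal sketch is not taken from any CAD package; the block and its dimensions are made up for illustration, but it shows the idea for the three principal orthographic views:

```python
# Deriving the three principal orthographic views of a part from its 3D
# vertex list, as a 3D CAD system does internally. The "part" is a
# hypothetical 40 x 20 x 10 mm rectangular block.

vertices = [
    (0, 0, 0), (40, 0, 0), (40, 20, 0), (0, 20, 0),      # bottom face
    (0, 0, 10), (40, 0, 10), (40, 20, 10), (0, 20, 10),  # top face
]

def front_view(pts):
    """Project onto the XZ plane: discard the depth coordinate Y."""
    return sorted({(x, z) for x, y, z in pts})

def top_view(pts):
    """Project onto the XY plane: discard the height coordinate Z."""
    return sorted({(x, y) for x, y, z in pts})

def side_view(pts):
    """Project onto the YZ plane: discard the width coordinate X."""
    return sorted({(y, z) for x, y, z in pts})

print("front:", front_view(vertices))
print("top:  ", top_view(vertices))
print("side: ", side_view(vertices))
```

Each view is simply the model with one coordinate discarded, so agreement between views is automatic; this is why, as noted above, the views themselves leave little scope for error and only the choice of projection convention remains.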
Both 2D and 3D CAD systems can be used to produce technical drawings for any discipline. The various disciplines (electrical, electronic, pneumatic, hydraulic, etc.) have industry-recognized symbols to represent common components. BS and ISO produce standards to show recommended practices, but it is up to individuals to produce the drawings to a standard. There is no definitive standard for layout or style. The only standard across engineering workshop drawings is in the creation of orthographic projections and cross-section views. In representing complex, three-dimensional objects in two-dimensional drawings, an object can be described by at least one view plus a material-thickness note, or by two, three, or as many views and sections as are required to show all of its features.

Applications

Architecture

The art and design that goes into making buildings is known as architecture. To communicate all aspects of the shape or design, detail drawings are used. In this field, the term plan is often used when referring to the full section view of these drawings as viewed from three feet above finished floor to show the locations of doorways, windows, stairwells, etc. Architectural drawings describe and document an architect's design.

Engineering

Engineering can be a very broad term. It stems from the Latin ingenerare, meaning "to create". Because this could apply to everything that humans create, it is given a narrower definition in the context of technical drawing. Engineering drawings generally deal with mechanical engineered items, such as manufactured parts and equipment. Engineering drawings are usually created in accordance with standardized conventions for layout, nomenclature, interpretation, appearance (such as typefaces and line styles), size, etc. An engineering drawing's purpose is to accurately and unambiguously capture all the geometric features of a product or component. The end goal is to convey all the required information that will allow a manufacturer to produce that component.

Software engineering

Software engineering practitioners make use of diagrams for designing software. Formal standards and modelling languages such as the Unified Modeling Language (UML) exist, but most diagramming happens using informal, ad hoc diagrams that illustrate a conceptual model. Practitioners report that diagramming helps with requirements analysis, design, refactoring, documentation, onboarding, and communication with stakeholders. Diagrams are often transient or redrawn as required. Redrawn diagrams can act as a form of shared understanding in a team.

Related fields

Technical illustration

Technical illustration is the use of illustration to visually communicate information of a technical nature. Technical illustrations can be technical drawings or diagrams. The aim of technical illustration is "to generate expressive images that effectively convey certain information via the visual channel to the human observer". The main purpose of technical illustration is to describe or explain these items to a more or less nontechnical audience. The visual image should be accurate in terms of dimensions and proportions, and should provide "an overall impression of what an object is or does, to enhance the viewer's interest and understanding". According to Viola (2005), "illustrative techniques are often designed in a way that even a person with no technical understanding clearly understands the piece of art.
The use of varying line widths to emphasize mass, proximity, and scale helped to make a simple line drawing more understandable to the lay person. Cross hatching, stippling, and other low abstraction techniques gave greater depth and dimension to the subject matter".

Cutaway drawing

A cutaway drawing is a technical illustration in which part of the surface of a three-dimensional model is removed in order to show some of the model's interior in relation to its exterior. The purpose of a cutaway drawing is to "allow the viewer to have a look into an otherwise solid opaque object. Instead of letting the inner object shine through the surrounding surface, parts of the outside object are simply removed. This produces a visual appearance as if someone had cut out a piece of the object or sliced it into parts. Cutaway illustrations avoid ambiguities with respect to spatial ordering, provide a sharp contrast between foreground and background objects, and facilitate a good understanding of spatial ordering".

Technical drawings

Types

The two types of technical drawings are based on graphical projection, which is used to create an image of a three-dimensional object on a two-dimensional surface.

Two-dimensional representation

Two-dimensional representation uses orthographic projection to create an image where only two of the three dimensions of the object are seen.

Three-dimensional representation

In a three-dimensional representation, also referred to as a pictorial, all three dimensions of an object are visible.

Views

Multiview

Multiview is a type of orthographic projection. There are two conventions for using multiview: first-angle and third-angle. In both cases, the front or main side of the object is the same. In first-angle projection, each side of the object is drawn where its projection lands: looking at the front side and rotating the object 90 degrees to the right, what is seen is drawn to the right of the front view. In third-angle projection, each view is drawn on the side where that feature actually lies: the same rotation reveals the left side of the object, so that view is drawn to the left of the front view.

Section

While multiview relates to external surfaces of an object, section views show an imaginary plane cut through an object. This is often useful to show voids in an object.

Auxiliary

Auxiliary views use an additional projection plane other than the common planes in a multiview. Since the features of an object need to show their true shape and size, the projection plane must be parallel to the object surface. Therefore, any surface that is not in line with the three major axes needs its own projection plane to show its features correctly.

Pattern

Patterns, sometimes called developments, show the size and shape of a flat piece of material needed for later bending or folding into a three-dimensional shape.

Exploded

An exploded-view drawing is a technical drawing of an object that shows the relationship or order of assembly of the various parts. It shows the components of an object slightly separated by distance, or suspended in surrounding space in the case of a three-dimensional exploded diagram. An object is represented as if there had been a small controlled explosion emanating from the middle of the object, causing the object's parts to be separated relative distances away from their original locations. An exploded-view drawing (EVD) can show the intended assembly of mechanical or other parts.
In mechanical systems, the component closest to the center is usually assembled first, or is the main part inside which the other parts are assembled. The EVD can also help to represent the disassembly of parts, where those on the outside are normally removed first.

Standards and conventions

Basic drafting paper sizes

There have been many standard sizes of paper at different times and in different countries, but today most of the world uses the international standard (A4 and its siblings). North America uses its own sizes.

Patent drawing

The applicant for a patent will be required by law to furnish a drawing of the invention if or when the nature of the case requires a drawing to understand the invention. This drawing must be filed with the application. This includes practically all inventions except compositions of matter or processes, but a drawing may also be useful in the case of many processes.

The drawing must show every feature of the invention specified in the claims and is required by the patent office rules to be in a particular form. The Office specifies the size of the sheet on which the drawing is made, the type of paper, the margins, and other details relating to the making of the drawing. The reason for specifying the standards in detail is that the drawings are printed and published in a uniform style when the patent issues, and the drawings must also be such that they can be readily understood by persons using the patent descriptions.

Sets of technical drawings

Working drawings for production

Working drawings are the set of technical drawings used during the manufacturing phase of a product. In architecture, these include civil drawings, architectural drawings, structural drawings, mechanical systems drawings, electrical drawings, and plumbing drawings.

Assembly drawings

Assembly drawings show how different parts go together, identify those parts by number, and have a parts list, often referred to as a bill of materials. In a technical service manual, this type of drawing may be referred to as an exploded-view drawing or diagram. Such drawings are widely used in engineering.

As-fitted drawings

Also called as-built or as-made drawings, as-fitted drawings represent a record of the completed works, literally 'as fitted'. These are based upon the working drawings and updated to reflect any changes or alterations undertaken during construction or manufacture.

See also

Circuit diagram
Linear scale
Reprography
Schematic diagram
Shop drawing
Technical communication
Technical geography
Technical lettering
Specification (technical standard)
Geometric drawing

References

Further reading

Peter J. Booker (1963). A History of Engineering Drawing. London: Northgate.
Franz Maria Feldhaus (1963). The History of Technical Drawing.
Wolfgang Lefèvre, ed. (2004). Picturing Machines 1400–1700: How Technical Drawings Shaped Early Engineering Practice. MIT Press.

External links

Historical technical diagrams and drawings on NASA.gov
A history of CAD
Drafting Standards

Architecture occupations Engineering occupations Infographics
Technical drawing
[ "Engineering" ]
3,244
[ "Design engineering", "Civil engineering", "Architecture occupations", "Technical drawing", "Architecture" ]
54,962
https://en.wikipedia.org/wiki/Geophysics
Geophysics is a subject of natural science concerned with the physical processes and physical properties of the Earth and its surrounding space environment, and the use of quantitative methods for their analysis. Geophysicists, who usually study geophysics, physics, or one of the Earth sciences at the graduate level, complete investigations across a wide range of scientific disciplines. The term geophysics classically refers to solid earth applications: Earth's shape; its gravitational, magnetic, and electromagnetic fields; its internal structure and composition; its dynamics and their surface expression in plate tectonics, the generation of magmas, volcanism and rock formation. However, modern geophysics organizations and pure scientists use a broader definition that includes the water cycle, including snow and ice; fluid dynamics of the oceans and the atmosphere; electricity and magnetism in the ionosphere and magnetosphere and solar-terrestrial physics; and analogous problems associated with the Moon and other planets.

Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, while more modern magnetic compasses played an important role in the history of navigation. The first seismic instrument was built in 132 AD. Isaac Newton applied his theory of mechanics to the tides and the precession of the equinoxes, and instruments were developed to measure the Earth's shape, density and gravity field, as well as the components of the water cycle. In the 20th century, geophysical methods were developed for remote exploration of the solid Earth and the ocean, and geophysics played an essential role in the development of the theory of plate tectonics.

Geophysics is applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. In exploration geophysics, geophysical survey data are used to analyze potential petroleum reservoirs and mineral deposits, locate groundwater, find archaeological relics, determine the thickness of glaciers and soils, and assess sites for environmental remediation.

Physical phenomena

Geophysics is a highly interdisciplinary subject, and geophysicists contribute to every area of the Earth sciences, while some geophysicists conduct research in the planetary sciences. To provide a clearer idea of what constitutes geophysics, this section describes phenomena that are studied in physics and how they relate to the Earth and its surroundings. Geophysicists also investigate the physical processes and properties of the Earth, its fluid layers, and magnetic field, along with the near-Earth environment in the Solar System, which includes other planetary bodies.

Gravity

The gravitational pull of the Moon and Sun gives rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes. Therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide.
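The 24-hour-50-minute lunar day can be checked with a short calculation (a sketch, using the approximate synodic month of 29.53 days). The Moon drifts eastward relative to the Sun by $360^\circ / 29.53 \approx 12.2^\circ$ per day, so the rotating Earth, turning at $15^\circ$ per hour, needs roughly $12.2 / 15 \approx 0.8$ extra hours to bring the Moon back over the same meridian:

$$
T_{\text{lunar day}} \approx 24\,\mathrm{h}\left(1 + \frac{1}{29.53 - 1}\right) \approx 24.84\,\mathrm{h} \approx 24\,\mathrm{h}\,50\,\mathrm{min},
$$

and half of this, about 12 h 25 min, separates successive high (or low) tides.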
Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential at the Earth's surface and above it can be used to look for mineral deposits (see gravity anomaly and gravimetry). The surface gravitational field provides information on the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth. The geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents (such as with very narrow canals).

Heat flow

The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo and plate tectonics through mantle convection. The main sources of heat are primordial heat due to Earth's cooling and radioactivity in the planet's upper crust. There are also some contributions from phase transitions. Heat is mostly carried to the surface by thermal convection, although there are two thermal boundary layers – the core–mantle boundary and the lithosphere – in which heat is transported by conduction. Some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about 47 terawatts in total (roughly 90 mW/m² on average), and it is a potential source of geothermal energy.

Vibrations

Seismic waves are vibrations that travel through the Earth's interior or along its surface. The entire Earth can also oscillate in forms that are called normal modes or free oscillations of the Earth. Ground motions from waves or normal modes are measured using seismographs. If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection. Recording of seismic waves from controlled sources provides information on the region that the waves travel through. If the density or composition of the rock changes, waves are reflected. Reflections recorded using reflection seismology can provide a wealth of information on the structure of the Earth up to several kilometers deep and are used to increase our understanding of the geology, as well as to explore for oil and gas. Changes in the travel direction, called refraction, can be used to infer the deep structure of the Earth.

Earthquakes pose a risk to humans. Understanding their mechanisms, which depend on the type of earthquake (e.g., intraplate or deep focus), can lead to better estimates of earthquake risk and improvements in earthquake engineering.

Electricity

Although we mainly notice electricity during thunderstorms, there is always a downward electric field near the surface that averages 120 volts per meter. The atmosphere is ionized by penetrating galactic cosmic rays, and relative to the solid Earth it carries a net positive charge. A current of about 1800 amperes flows in the global circuit. It flows downward from the ionosphere over most of the Earth and back upwards through thunderstorms. The flow is manifested by lightning below the clouds and sprites above.

A variety of electric methods are used in geophysical survey. Some measure spontaneous potential, a potential that arises in the ground because of human-made or natural disturbances. Telluric currents flow in the Earth and the oceans. They have two causes: electromagnetic induction by the time-varying, external-origin geomagnetic field, and motion of conducting bodies (such as seawater) across the Earth's permanent magnetic field. The distribution of telluric current density can be used to detect variations in electrical resistivity of underground structures. Geophysicists can also provide the electric current themselves (see induced polarization and electrical resistivity tomography).

Electromagnetic waves

Electromagnetic waves occur in the ionosphere and magnetosphere as well as in Earth's outer core.
Dawn chorus is believed to be caused by high-energy electrons that get caught in the Van Allen radiation belt. Whistlers are produced by lightning strikes. Hiss may be generated by both. Electromagnetic waves may also be generated by earthquakes (see seismo-electromagnetics).

In the highly conductive liquid iron of the outer core, magnetic fields are generated by electric currents through electromagnetic induction. Alfvén waves are magnetohydrodynamic waves in the magnetosphere or the Earth's core. In the core, they probably have little observable effect on the Earth's magnetic field, but slower waves such as magnetic Rossby waves may be one source of geomagnetic secular variation.

Electromagnetic methods that are used for geophysical survey include transient electromagnetics, magnetotellurics, surface nuclear magnetic resonance and electromagnetic seabed logging.

Magnetism

The Earth's magnetic field protects the Earth from the solar wind and has long been used for navigation. It originates in the fluid motions of the outer core. The magnetic field in the upper atmosphere gives rise to the auroras. The Earth's field is roughly like a tilted dipole, but it changes over time (a phenomenon called geomagnetic secular variation). Mostly the geomagnetic pole stays near the geographic pole, but at random intervals averaging 440,000 to a million years or so, the polarity of the Earth's field reverses. These geomagnetic reversals, analyzed within a Geomagnetic Polarity Time Scale, contain 184 polarity intervals in the last 83 million years, with the frequency of reversals changing over time; the most recent brief complete reversal, the Laschamp event, occurred 41,000 years ago during the last glacial period. Geologists observe geomagnetic reversals recorded in volcanic rocks through magnetostratigraphic correlation (see natural remanent magnetization), and their signature can be seen as parallel linear magnetic anomaly stripes on the seafloor. These stripes provide quantitative information on seafloor spreading, a part of plate tectonics. They are the basis of magnetostratigraphy, which correlates magnetic reversals with other stratigraphies to construct geologic time scales. In addition, the magnetization in rocks can be used to measure the motion of continents.

Radioactivity

Radioactive decay accounts for about 80% of the Earth's internal heat, powering the geodynamo and plate tectonics. The main heat-producing isotopes are potassium-40, uranium-238, uranium-235, and thorium-232. Radioactive elements are used for radiometric dating, the primary method for establishing an absolute time scale in geochronology. Unstable isotopes decay at predictable rates, and the decay rates of different isotopes cover several orders of magnitude, so radioactive decay can be used to accurately date both recent events and events in past geologic eras. Radiometric mapping using ground and airborne gamma spectrometry can be used to map the concentration and distribution of radioisotopes near the Earth's surface, which is useful for mapping lithology and alteration.
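The dating principle just described reduces to the exponential decay law $N(t) = N_0 e^{-\lambda t}$. A minimal sketch of the age calculation, assuming a closed system that started with no daughter isotope (the measured ratio below is made up for illustration):

```python
import math

def age_from_ratio(daughter_parent_ratio, half_life_years):
    """Age of a closed system from the measured daughter/parent ratio,
    assuming no daughter atoms were present initially:
    t = (1 / lambda) * ln(1 + D/P), with lambda = ln(2) / half-life."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_constant

# Example: a uranium-238 -> lead-206 system (half-life 4.468 billion years)
# with 0.5 atoms of radiogenic lead measured per atom of remaining uranium.
age = age_from_ratio(0.5, 4.468e9)
print(f"{age / 1e9:.2f} billion years")  # about 2.61 billion years
```

The logarithm is why isotopes with widely different half-lives are needed in practice: each isotope dates events well only over timescales comparable to its own half-life.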
Fluid dynamics

Fluid motions occur in the magnetosphere, atmosphere, ocean, mantle and core. Even the mantle, though it has an enormous viscosity, flows like a fluid over long time intervals. This flow is reflected in phenomena such as isostasy, post-glacial rebound and mantle plumes. The mantle flow drives plate tectonics, and the flow in the Earth's core drives the geodynamo.

Geophysical fluid dynamics is a primary tool in physical oceanography and meteorology. The rotation of the Earth has profound effects on the Earth's fluid dynamics, often due to the Coriolis effect. In the atmosphere, it gives rise to large-scale patterns like Rossby waves and determines the basic circulation patterns of storms. In the ocean, it drives large-scale circulation patterns as well as Kelvin waves and Ekman spirals at the ocean surface. In the Earth's core, the circulation of the molten iron is structured by Taylor columns. Waves and other phenomena in the magnetosphere can be modeled using magnetohydrodynamics.

Mineral physics

The physical properties of minerals must be understood to infer the composition of the Earth's interior from seismology, the geothermal gradient and other sources of information. Mineral physicists study the elastic properties of minerals; their high-pressure phase diagrams, melting points and equations of state at high pressure; and the rheological properties of rocks, or their ability to flow. Deformation of rocks by creep makes flow possible, although over short times the rocks are brittle. The viscosity of rocks is affected by temperature and pressure, and in turn determines the rates at which tectonic plates move.

Water is a very complex substance and its unique properties are essential for life. Its physical properties shape the hydrosphere and are an essential part of the water cycle and climate. Its thermodynamic properties determine evaporation and the thermal gradient in the atmosphere. The many types of precipitation involve a complex mixture of processes such as coalescence, supercooling and supersaturation. Some precipitated water becomes groundwater, and groundwater flow includes phenomena such as percolation, while the conductivity of water makes electrical and electromagnetic methods useful for tracking groundwater flow. Physical properties of water such as salinity have a large effect on its motion in the oceans. The many phases of ice form the cryosphere and come in forms like ice sheets, glaciers, sea ice, freshwater ice, snow, and frozen ground (or permafrost).

Regions of the Earth

Size and form of the Earth

Contrary to popular belief, the Earth is not entirely spherical but instead generally exhibits an ellipsoid shape, a result of the centrifugal effect of the planet's rotation. This effect causes the planet's diameter to bulge towards the Equator, producing the ellipsoid shape. Earth's shape is constantly changing, and different factors, including glacial isostatic rebound (the crust rebounding as large ice sheets melt and release their pressure), geological features such as mountains or ocean trenches, tectonic plate dynamics, and natural disasters, can further distort the planet's shape.

Structure of the interior

Evidence from seismology, heat flow at the surface, and mineral physics is combined with the Earth's mass and moment of inertia to infer models of the Earth's interior – its composition, density, temperature and pressure. For example, the Earth's mean specific gravity (5.5) is far higher than the typical specific gravity of rocks at the surface (2.7–3.3), implying that the deeper material is denser. This is also implied by its low moment of inertia (0.33 MR², compared to 0.4 MR² for a sphere of constant density). However, some of the density increase is compression under the enormous pressures inside the Earth. The effect of pressure can be calculated using the Adams–Williamson equation.
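For reference, the Adams–Williamson equation expresses the density gradient of a chemically homogeneous, adiabatically self-compressing layer in terms of seismically measurable quantities:

$$
\frac{d\rho}{dr} = -\frac{\rho(r)\, g(r)}{\Phi(r)}, \qquad \Phi = \frac{K_S}{\rho} = V_P^2 - \tfrac{4}{3}V_S^2,
$$

where $\rho$ is density, $r$ is radius, $g$ is gravitational acceleration, $K_S$ is the adiabatic bulk modulus, and $V_P$ and $V_S$ are the P- and S-wave speeds. Integrating inward with observed seismic velocities gives the density increase expected from compression alone, which is the comparison drawn in the next paragraph.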
The conclusion is that pressure alone cannot account for the increase in density. Instead, we know that the Earth's core is composed of an alloy of iron and other elements. Reconstructions of seismic waves in the deep interior of the Earth show that there are no S-waves in the outer core. This indicates that the outer core is liquid, because liquids cannot support shear, and the motion of this highly conductive fluid generates the Earth's field. Earth's inner core, however, is solid because of the enormous pressure.

Reconstruction of seismic reflections in the deep interior indicates some major discontinuities in seismic velocities that demarcate the major zones of the Earth: inner core, outer core, mantle, lithosphere and crust. The mantle itself is divided into the upper mantle, transition zone, lower mantle and D′′ layer. Between the crust and the mantle is the Mohorovičić discontinuity.

The seismic model of the Earth does not by itself determine the composition of the layers. For a complete model of the Earth, mineral physics is needed to interpret seismic velocities in terms of composition. The mineral properties are temperature-dependent, so the geotherm must also be determined. This requires physical theory for thermal conduction and convection and the heat contribution of radioactive elements. The main model for the radial structure of the interior of the Earth is the Preliminary Reference Earth Model (PREM). Some parts of this model have been updated by recent findings in mineral physics (see post-perovskite) and supplemented by seismic tomography. The mantle is mainly composed of silicates, and the boundaries between layers of the mantle are consistent with phase transitions. The mantle acts as a solid for seismic waves, but under high pressures and temperatures, it deforms so that over millions of years it acts like a liquid. This makes plate tectonics possible.

Magnetosphere

If a planet's magnetic field is strong enough, its interaction with the solar wind forms a magnetosphere. Early space probes mapped out the gross dimensions of the Earth's magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues behind the magnetic tail, hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles called the Van Allen radiation belts.

Methods

Geodesy

Geophysical measurements are generally at a particular time and place. Accurate measurements of position, along with earth deformation and gravity, are the province of geodesy. While geodesy and geophysics are separate fields, the two are so closely connected that many scientific organizations, such as the American Geophysical Union, the Canadian Geophysical Union and the International Union of Geodesy and Geophysics, encompass both.

Absolute positions are most frequently determined using the Global Positioning System (GPS). A three-dimensional position is calculated using messages from four or more visible satellites and referred to the 1980 Geodetic Reference System.
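The reason four satellites are needed is that the receiver solves for four unknowns: its three coordinates plus its own clock error, which shifts every measured range by the same amount. A minimal sketch of that solution follows; the satellite geometry, ranges, and clock bias are invented for illustration, and real receivers apply many further corrections:

```python
import numpy as np

def solve_position(sats, pseudoranges, iterations=10):
    """Gauss-Newton fit of |x - s_i| + b = rho_i for the receiver
    position x and the clock-bias distance b, given the positions s_i
    of four or more satellites (all lengths in metres)."""
    x, b = np.zeros(3), 0.0  # initial guess: Earth's centre, zero bias
    for _ in range(iterations):
        ranges = np.linalg.norm(sats - x, axis=1)
        residuals = pseudoranges - (ranges + b)
        # Jacobian of the modelled pseudorange with respect to (x, b)
        J = np.hstack([(x - sats) / ranges[:, None],
                       np.ones((len(sats), 1))])
        step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x, b = x + step[:3], b + step[3]
    return x, b

# Invented constellation at a GPS-like altitude, and the pseudoranges that
# a receiver on the surface with a 300 m clock-bias error would report.
sats = np.array([[26_600e3, 0, 0], [0, 26_600e3, 0],
                 [0, 0, 26_600e3], [15_400e3, 15_400e3, 15_400e3]])
true_position, bias = np.array([6_371e3, 0.0, 0.0]), 300.0
rho = np.linalg.norm(sats - true_position, axis=1) + bias
print(solve_position(sats, rho))  # recovers ~(6,371 km, 0, 0) and ~300 m
```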
An alternative, optical astronomy, combines astronomical coordinates and the local gravity vector to get geodetic coordinates. This method only provides the position in two coordinates and is more difficult to use than GPS. However, it is useful for measuring motions of the Earth such as nutation and Chandler wobble. Relative positions of two or more points can be determined using very-long-baseline interferometry.

Gravity measurements became part of geodesy because they were needed to relate measurements at the surface of the Earth to the reference coordinate system. Gravity measurements on land can be made using gravimeters deployed either on the surface or in helicopter flyovers. Since the 1960s, the Earth's gravity field has been measured by analyzing the motion of satellites. Sea level can also be measured by satellites using radar altimetry, contributing to a more accurate geoid. In 2002, NASA launched the Gravity Recovery and Climate Experiment (GRACE), wherein twin satellites map variations in Earth's gravity field by measuring the distance between the two satellites using GPS and a microwave ranging system. Gravity variations detected by GRACE include those caused by changes in ocean currents, runoff and groundwater depletion, and melting ice sheets and glaciers.

Satellites and space probes

Satellites in space have made it possible to collect data not only in the visible-light region, but also in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields (gravity and magnetic fields), which are studied through geophysics and space physics. Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins.

Global positioning systems (GPS) and geographical information systems (GIS)

Since geophysics is concerned with the shape of the Earth, and by extension the mapping of features around and in the planet, geophysical measurements include high-accuracy GPS measurements. These measurements are processed to increase their accuracy through differential GPS processing. Once the geophysical measurements have been processed and inverted, the interpreted results are plotted using GIS. Programs such as ArcGIS and Geosoft were built to meet these needs and include many built-in geophysical functions, such as upward continuation and the calculation of measurement derivatives such as the first vertical derivative. Many geophysics companies have designed in-house geophysics programs that pre-date ArcGIS and Geosoft in order to meet the visualization requirements of a geophysical dataset.

Remote sensing

Exploration geophysics is a branch of applied geophysics that involves the development and use of seismic and electromagnetic methods with the aim of investigating energy, mineral and water resources. This is done with various remote sensing platforms, such as satellites, aircraft, boats, drones, borehole sensing equipment and seismic receivers. This equipment is often used in conjunction with magnetic, gravimetric, electromagnetic, radiometric and barometric methods in order to gather the data. The remote sensing platforms used in exploration geophysics are not perfect and need corrections to accurately account for the effects that the platform itself may have on the collected data.
For example, when gathering aeromagnetic data (aircraft-gathered magnetic data) using a conventional fixed-wing aircraft, the platform has to be adjusted to account for the electromagnetic currents that it may generate as it passes through Earth's magnetic field. There are also corrections related to changes in measured potential field intensity as the Earth rotates, as the Earth orbits the Sun, and as the Moon orbits the Earth.

Signal processing

Geophysical measurements are often recorded as time series with GPS location. Signal processing involves the correction of time-series data for unwanted noise or errors introduced by the measurement platform, such as aircraft vibrations in gravity data. It also involves the reduction of sources of noise, such as diurnal corrections in magnetic data. In seismic, electromagnetic, and gravity data, processing continues after error corrections to include computational geophysics, which results in the final geological interpretation of the geophysical measurements.

History

Geophysics emerged as a separate discipline only in the 19th century, from the intersection of physical geography, geology, astronomy, meteorology, and physics. The first known use of the word geophysics was in German ("Geophysik") by Julius Fröbel in 1834. However, many geophysical phenomena – such as the Earth's magnetic field and earthquakes – have been investigated since the ancient era.

Ancient and classical eras

The magnetic compass existed in China back as far as the fourth century BC. It was used as much for feng shui as for navigation on land. It was not until good steel needles could be forged that compasses were used for navigation at sea; before that, they could not retain their magnetism long enough to be useful. The first mention of a compass in Europe was in 1190 AD.

Around 240 BC, Eratosthenes of Cyrene deduced that the Earth was round and measured its circumference with great precision. He developed a system of latitude and longitude.

Perhaps the earliest contribution to seismology was the invention of a seismoscope by the prolific inventor Zhang Heng in 132 AD. This instrument was designed to drop a bronze ball from the mouth of a dragon into the mouth of a toad. By looking at which of eight toads had the ball, one could determine the direction of the earthquake. It was 1571 years before the first design for a seismoscope was published in Europe, by Jean de la Hautefeuille. It was never built.

Beginnings of modern science

The 17th century saw major milestones that marked the beginning of modern science. In 1600, William Gilbert published De Magnete, in which he described a series of experiments on both natural magnets ('lodestones') and artificially magnetized iron. His experiments led to observations involving a small compass needle (versorium), which replicated the Earth's magnetic behaviour when subjected to a spherical magnet, including 'magnetic dip' when pivoted on a horizontal axis. His findings led to the deduction that compasses point north because the Earth itself is a giant magnet.

In 1687 Isaac Newton published his Principia, which was pivotal in the development of modern scientific fields such as astronomy and physics.
In it, Newton both laid the foundations for classical mechanics and gravitation and explained geophysical phenomena such as the precession of the equinoxes (the slow rotation of the whole pattern of stars about the pole of the ecliptic). Newton's theory of gravity was so successful that it changed the main objective of physics in that era to the unravelling of nature's fundamental forces and their characterization in laws.

The first seismometer, an instrument capable of keeping a continuous record of seismic activity, was built by James Forbes in 1844.

See also

International Union of Geodesy and Geophysics (IUGG)
Sociedade Brasileira de Geofísica
Geological engineering
Physics
Space physics
Geosciences
Geodesy

Notes

References

External links

A reference manual for near-surface geophysics techniques and applications
Commission on Geophysical Risk and Sustainability (GeoRisk), International Union of Geodesy and Geophysics (IUGG)
Study of the Earth's Deep Interior, a Committee of IUGG
Union Commissions (IUGG)
USGS Geomagnetism Program
Career crate: Seismic processor
Society of Exploration Geophysicists

Earth sciences Subfields of geology Applied and interdisciplinary physics
Geophysics
[ "Physics", "Mathematics" ]
5,006
[ "Applied and interdisciplinary physics", "Applied mathematics", "nan", "Geophysics", "Geodesy" ]
54,969
https://en.wikipedia.org/wiki/Snail
A snail is a shelled gastropod. The name is most often applied to land snails, terrestrial pulmonate gastropod molluscs. However, the common name snail is also used for most of the members of the molluscan class Gastropoda that have a coiled shell that is large enough for the animal to retract completely into. When the word "snail" is used in this most general sense, it includes not just land snails but also numerous species of sea snails and freshwater snails. Gastropods that naturally lack a shell, or have only an internal shell, are mostly called slugs, and land snails that have only a very small shell (that they cannot retract into) are often called semi-slugs.

Snails have considerable human relevance, including as food items, as pests, and as vectors of disease, and their shells are used as decorative objects and are incorporated into jewellery. The snail has also had some cultural significance, tending to be associated with lethargy. The snail has also been used as a figure of speech in reference to slow-moving things.

Overview

Snails that respire using a lung belong to the group Pulmonata. As traditionally defined, the Pulmonata were found to be polyphyletic in a 2010 molecular study by Jörger et al. But snails with gills also form a polyphyletic group; in other words, snails with lungs and snails with gills form a number of taxonomic groups that are not necessarily more closely related to each other than they are related to some other groups. Both snails that have lungs and snails that have gills have diversified so widely over geological time that a few species with gills can be found on land and numerous species with lungs can be found in freshwater. Even a few marine species have lungs.

Snails can be found in a very wide range of environments, including ditches, deserts, and the abyssal depths of the sea. Although land snails may be more familiar to laymen, marine snails constitute the majority of snail species, and have much greater diversity and a greater biomass. Numerous kinds of snail can also be found in fresh water.

Most snails have thousands of microscopic tooth-like structures located on a banded, ribbon-like tongue called a radula. The radula works like a file, ripping food into small pieces. Many snails are herbivorous, eating plants or rasping algae from surfaces with their radulae, though a few land species and many marine species are omnivores or predatory carnivores. Snails cannot absorb colored pigments when eating paper or cardboard, so their feces take on the same color.

Several species of the genus Achatina and related genera are known as giant African land snails; some grow to 38 cm (15 in) from snout to tail, and weigh 1 kg (2 lb). The largest living species of sea snail is Syrinx aruanus; its shell can measure up to 91 cm (36 in) in length, and the whole animal with the shell can weigh up to 18 kg (40 lb). The smallest land snail, Angustopila psammion, was discovered in 2022 and measures 0.6 mm in diameter. The largest known land gastropod is the African giant snail Achatina achatina, the largest recorded specimen of which measured 39.3 cm (15.5 in) from snout to tail when fully extended, with a shell length of 27.3 cm (10.75 in), in December 1978. It weighed exactly 900 g (about 2 lb). Named Gee Geronimo, this snail was owned by Christopher Hudson (1955–79) of Hove, East Sussex, UK, and was collected in Sierra Leone in June 1976.

Snails are protostomes, meaning that during development, in the gastrulation phase, the blastopore forms the mouth first. Cleavage in snails follows a spiral holoblastic pattern.
In spiral holoblastic cleavage, the cleavage plane rotates with each division, and the cell divisions are complete. Snails do not undergo metamorphosis after hatching; they hatch in the form of small adults. The only additional development they undergo is to consume calcium to strengthen their shell. Snails can be male, female, hermaphroditic, or parthenogenetic, so there are many different systems of sexual determination.

Anatomy

Snails have complex organ systems and anatomies that differ greatly from most animals. Snails and most other Mollusca share three anatomical features: the foot, the mantle, and the radula.

Foot: The foot is a muscular organ used by gastropods for locomotion. Gastropods' stomachs are located within their foot. Both land and sea snails travel by contracting foot muscles to deform the mucus layer beneath the foot into different wave-like patterns.

Mantle: The mantle is the organ that produces shells for most species of Mollusca. In snails, the mantle secretes the shell along the shell opening, continuously growing and producing the shell for the entirety of the snail's life. The mantle creates a compartment known as the mantle cavity, which is used by many molluscs as the surface where gas exchange occurs. Snails that use the mantle cavity as a lung are known as pulmonate snails. Other snails may only have a gill. Snails in caenogastropod families like Ampullariidae have both a gill and a lung.

Shell: Snail shells are mainly composed of a mixture of proteins, called conchin, and calcium carbonate. Conchin is the main component in the outer layer of the shell, known as the periostracum. The inner layers of the shell are composed of a network of calcium carbonate, conchin, and different mineral salts. The mantle produces the shell through addition around a central axis called the columella, causing a spiraling pattern. The spiraling patterns on a snail's shell are known as coils or whorls. Whorl size generally increases as the snail ages. Differences in shell size are believed to be influenced mainly by genetic and environmental components. Moister conditions often correlate with larger snails. In larger populations, adult snails attain smaller shell sizes due to the effects of pheromones on growth rate.
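Growth by accretion around the columella is classically described with a logarithmic spiral, in which the shell expands by a fixed factor per turn. The sketch below follows D. Raup's well-known coiling model rather than anything specific to this article, and the parameter values are illustrative:

```python
import math

def shell_outline(turns, expansion_rate=2.0, r0=1.0, steps_per_turn=16):
    """Points on a logarithmic spiral: the distance from the coiling axis
    grows by the factor `expansion_rate` (the whorl expansion rate W in
    Raup's model) for every full turn about the columella; r0 is the
    starting radius. Both parameter values here are assumptions."""
    points = []
    for i in range(turns * steps_per_turn + 1):
        theta = 2 * math.pi * i / steps_per_turn
        r = r0 * expansion_rate ** (theta / (2 * math.pi))
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Whorl size increases with age, as described above: with W = 2 the
# radius doubles on every turn.
x, y = shell_outline(turns=3)[-1]
print(f"radius after 3 whorls: {math.hypot(x, y):.1f}")  # 8.0, i.e. 2**3
```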
Radula: The radula is an anatomical structure used by most species of Mollusca for feeding. Gastropods are morphologically highly variable and have diverse feeding strategies. Snails can be herbivores, detritivores, scavengers, parasites, ciliary feeders, or highly specialized predators. Nearly all snails use a feeding apparatus including the oral structures of one or more jaws and the radula. The radula comprises a chitinous ribbon with teeth arranged in transverse and longitudinal rows. The radula continually renews itself during the entire lifespan of a mollusc. The teeth and membrane are continuously synthesized in the radular sac and then shifted forward towards the working zone of the radula. The teeth harden and mineralize during their travel to the working zone. The radula is common throughout most snail species, but often differs in many characteristics, such as the shape, size, and number of odontoblasts that form a tooth.

Diet

The average snail's diet varies greatly depending on the species, with feeding styles ranging from herbivory to highly specialized feeding and parasitism. Some snails, like Euglandina rosea, the rosy wolfsnail, are carnivorous and prey on other snails. However, most land snails are herbivores or omnivores. Among land snails, there is also a large variation in preference for specific food. For example, Cepaea nemoralis, the grove snail, prefers dead plant material over fresh herbs or grasses. Age may also affect food preference, with adult grove snails showing a significantly larger preference for dead plant material than juvenile grove snails. Other snails, like the generalist herbivore Arianta arbustorum, the copse snail, choose their meals based on availability, consuming a mix of arthropods, wilted flowers, fresh and decayed plant material, and soil.

Generally, land snails are most active at night due to the damp weather. The humid nighttime air minimizes water evaporation, which benefits land snails because their movement requires mucus, which is mostly composed of water. In addition to aiding movement, mucus plays a vital role in transporting food from the gill to the mouth, cleansing the mantle cavity, and trapping food before ingestion.

Types of snails by habitat

Slugs

Gastropods that lack a conspicuous shell are commonly called slugs rather than snails. Some species of slug have a maroon-brown shell, some have only an internal vestige that serves mainly as a calcium repository, and others have little to no shell at all. Other than that, there is little morphological difference between slugs and snails. There are, however, important differences in habitats and behavior.

A shell-less animal is much more maneuverable and compressible, so even quite large land slugs can take advantage of habitats or retreats with very little space, retreats that would be inaccessible to a similar-sized snail. Slugs squeeze themselves into confined spaces such as under loose bark on trees or under stone slabs, logs or wooden boards lying on the ground. In such retreats they are in less danger from either predators or desiccation, and these are often suitable places for laying their eggs.

Slugs as a group are far from monophyletic; scientifically speaking, "slug" is a term of convenience with little taxonomic significance. The reduction or loss of the shell has evolved many times independently within several very different lineages of gastropods. The various taxa of land and sea gastropods with slug morphology occur within numerous higher taxonomic groups of shelled species; such independent slug taxa are not in general closely related to one another.

Parasitic diseases

Snails can also be associated with parasitic diseases such as schistosomiasis, angiostrongyliasis, fasciolopsiasis, opisthorchiasis, fascioliasis, paragonimiasis and clonorchiasis, which can be transmitted to humans.

Human relevance

Land snails are known as agricultural and garden pests, but some species are an edible delicacy and occasionally household pets. In addition, their mucus can also be used for skin care products.

In agriculture

There is a variety of snail-control measures that gardeners and farmers use in an attempt to reduce damage to valuable plants. Traditional pesticides are still used, as are many less toxic control options such as concentrated garlic or wormwood solutions. Copper metal is also a snail repellent, and thus a copper band around the trunk of a tree will prevent snails from climbing up and reaching the foliage and fruit. A layer of a dry, finely ground, and scratchy substance such as diatomaceous earth can also deter snails.
The decollate snail (Rumina decollata) will capture and eat garden snails, and because of this it has sometimes been introduced as a biological pest control agent. However, this is not without problems, as the decollate snail is just as likely to attack and devour other gastropods that may represent a valuable part of the native fauna of the region.

Textiles

Certain varieties of snails, notably the family Muricidae, produce a secretion that is a color-fast natural dye. The ancient Tyrian purple was made in this way, as were other purple and blue dyes. The extreme expense of extracting this secretion in sufficient quantities limited its use to the very wealthy. It is such dyes as these that led to certain shades of purple and blue being associated with royalty and wealth.

As pets

Throughout history, snails have been kept as pets. There are many famous snails, such as Lefty (born Jeremy) and, within fiction, Gary and Brian the snail.

Culinary use

In French cuisine, edible snails are served, for instance, in Escargot à la Bourguignonne. The practice of rearing snails for food is known as heliciculture. For purposes of cultivation, the snails are kept in a dark place in a wired cage with dry straw or dry wood. Coppiced wine-grape vines are often used for this purpose. During the rainy period, the snails come out of hibernation and release most of their mucus onto the dry wood/straw. The snails are then prepared for cooking. Their texture when cooked is slightly chewy and tender.

As well as being eaten as gourmet food, several species of land snails provide an easily harvested source of protein to many people in poor communities around the world. Many land snails are valuable because they can feed on a wide range of agricultural wastes, such as shed leaves in banana plantations. In some countries, giant African land snails are produced commercially for food.

Land snails, freshwater snails and sea snails are all eaten in many countries. In certain parts of the world snails are fried. For example, in Indonesia, they are fried as satay, a dish known as sate kakul. The eggs of certain snail species are eaten in a fashion similar to the way caviar is eaten.

In Bulgaria, snails are traditionally cooked in an oven with rice or fried in a pan with vegetable oil and red paprika powder. Before they are used for those dishes, however, they are thoroughly boiled in hot water (for up to 90 minutes) and manually extracted from their shells. The two species most commonly used for food in the country are Helix lucorum and Helix pomatia.

Snail and slug species that are not normally eaten in certain areas have occasionally been used as famine food in historical times. A history of Scotland written in the 1800s recounts a description of various snails and their use as food items in times of plague.

Cultural depictions

Because of its slowness, the snail has traditionally been seen as a symbol of laziness. In Christian culture, it has been used as a symbol of the deadly sin of sloth. In Mayan mythology, the snail is associated with sexual desire, being personified by the god Uayeb.

Snails were widely noted and used in divination. The Greek poet Hesiod wrote that snails signified the time to harvest by climbing the stalks, while the Aztec moon god Tecciztecatl bore a snail shell on his back. This symbolised rebirth; the snail's penchant for appearing and disappearing was analogised with the moon.
Keong Emas (Javanese and Indonesian for Golden Snail) is a popular Javanese folktale about a princess magically transformed into, and contained in, a golden snail shell. The folktale is a part of the popular Javanese Panji cycle, which tells the stories of the prince Panji Asmoro Bangun (also known as Raden Inu Kertapati) and his consort, princess Dewi Sekartaji (also known as Dewi Chandra Kirana).

In contemporary speech, the expression "a snail's pace" is often used to describe a slow, inefficient process. The phrase "snail mail" is used to mean regular postal service delivery of paper messages, as opposed to the delivery of email, which can be virtually instantaneous.

See also

Pasilalinic-sympathetic compass

References

Gallery

External links

Introduction to Snails, Infoqis Publishing, Co.

Mollusc common names Paraphyletic groups
Snail
[ "Biology" ]
3,111
[ "Phylogenetics", "Paraphyletic groups" ]
54,995
https://en.wikipedia.org/wiki/Mantoux%20test
The Mantoux test or Mendel–Mantoux test (also known as the Mantoux screening test, tuberculin sensitivity test, Pirquet test, or PPD test, for purified protein derivative) is a tool for screening for tuberculosis (TB) and for tuberculosis diagnosis. It is one of the major tuberculin skin tests used around the world, largely replacing multiple-puncture tests such as the tine test. The Heaf test, a form of tine test, was used until 2005 in the UK, when it was replaced by the Mantoux test. The Mantoux test is endorsed by the American Thoracic Society and the Centers for Disease Control and Prevention. It was also used in the USSR and is now prevalent in most of the post-Soviet states, although the Soviet-produced tuberculin caused many false positives due to allergic reactions in children.

History

Tuberculin is a glycerol extract of the tubercle bacillus. Purified protein derivative (PPD) tuberculin is a precipitate of species-nonspecific molecules obtained from filtrates of sterilized, concentrated cultures. The tuberculin reaction was first described by Robert Koch in 1890. The test was first developed and described by the German physician Felix Mendel in 1908. It is named after Charles Mantoux, a French physician who built on the work of Koch and Clemens von Pirquet to create his test in 1907. However, the test was unreliable due to impurities in tuberculin, which tended to cause false results.

Esmond R. Long and Florence B. Seibert identified the active agent in tuberculin as a protein. Seibert then spent a number of years developing methods for separating and purifying the protein from Mycobacterium tuberculosis, obtaining purified protein derivative (PPD) and enabling the creation of a reliable test for tuberculosis. Her first publication on the purification of tuberculin appeared in 1934. By the 1940s, Seibert's PPD was the international standard for tuberculin tests. In 1939, the Russian scientist M. A. Linnikova created a modified version of PPD. In 1954, the Soviet Union started mass production of PPD-L, named after Linnikova.

Procedure

In the Mantoux test, a standard dose of 5 tuberculin units (TU) in 0.1 ml, according to the CDC, or 2 TU of Statens Serum Institut (SSI) tuberculin RT23 in 0.1 ml solution, according to the National Health Service, is injected intradermally (between the layers of the dermis) on the flexor surface of the left forearm, midway between elbow and wrist. The injection should be made with a tuberculin syringe, with the needle bevel facing upward. When placed correctly, the injection should produce a pale wheal of the skin, 6 to 10 mm in diameter. The result of the test is read after 48–96 hours, ideally at 72 hours (the third day). This procedure is termed the Mantoux technique.

A person who has been exposed to the bacteria is expected to mount an immune response in the area of skin containing the bacterial proteins. This response is a classic example of a delayed-type hypersensitivity (DTH) reaction, a type IV hypersensitivity. T cells and myeloid cells are attracted to the site of the reaction in 1–3 days and generate local inflammation. The reaction is read by measuring the diameter of induration (the palpable raised, hardened area) across the forearm (perpendicular to the long axis) in millimeters. If there is no induration, the result should be recorded as "0 mm". Erythema (redness) should not be measured. In the Pirquet version of the test, tuberculin is applied to the skin via scarification.

Classification of tuberculin reaction

The results of this test must be interpreted carefully.
The person's medical risk factors determine at which increment (5 mm, 10 mm, or 15 mm) of induration the result is considered positive. A positive result indicates TB exposure.

5 mm or more is positive in:
An HIV-positive person
Persons with recent contact with a TB patient
Persons with nodular or fibrotic changes on chest X-ray consistent with old healed TB
Patients with organ transplants, and other immunosuppressed patients

10 mm or more is positive in:
Recent arrivals (less than five years) from high-prevalence countries
Injection drug users
Residents and employees of high-risk congregate settings (e.g., prisons, nursing homes, hospitals, homeless shelters)
Mycobacteriology lab personnel
Persons with clinical conditions that place them at high risk (e.g., diabetes, prolonged corticosteroid therapy, leukemia, end-stage renal disease, chronic malabsorption syndromes, low body weight)
Children less than four years of age, or children and adolescents exposed to adults in high-risk categories

15 mm or more is positive in:
Persons with no known risk factors for TB

A tuberculin test conversion is defined as an increase of 10 mm or more within a two-year period, regardless of age. Alternative criteria include increases of 6, 12, 15 or 18 mm.

False positive result
A positive TST (tuberculin skin test) is defined by the size of induration, and the size considered positive depends on risk factors. For example, a low-risk patient must have a larger induration for a positive result than a high-risk patient. High-risk groups include recent contacts, those with HIV, those with chest radiographs showing fibrotic changes, organ transplant recipients, and those with immunosuppression.

A meta-analysis in 2014 found that the Bacillus Calmette–Guérin (BCG) vaccine reduced infections by 19–27% and reduced progression to active tuberculosis by 71%. The Ohio Department of Health states that it gives children 80% protection against tuberculous meningitis and miliary tuberculosis. Therefore, a positive TST/PPD in a person who has received BCG vaccine is interpreted as latent TB infection (LTBI). Due to the test's low specificity, most positive reactions in low-risk individuals are false positives. A false positive result may be caused by nontuberculous mycobacteria or by previous administration of BCG vaccine. Vaccination with BCG may result in a false-positive result for many years afterwards. False positives can also occur when the injected area is touched, causing swelling and itching. If the swelling is less than 5 mm, it is possibly due to error by the healthcare personnel causing inflammation to the area. Another source of false positive results is allergic reaction or hypersensitivity. Although rare (about 0.08 reported reactions per million doses of tuberculin), these reactions can be dangerous, and precautions should be taken by having epinephrine available.

False negative result
Reaction to the PPD or tuberculin test is suppressed by the following conditions:
Recent TB infection (less than 8–10 weeks)
Infectious mononucleosis
Live virus vaccine - the test should not be carried out within 3 weeks of live virus vaccination (e.g., MMR vaccine or Sabin vaccine)
Sarcoidosis
Hodgkin's disease
Corticosteroid therapy/steroid use
Malnutrition
Immunological compromise - those on immunosuppressive treatment or those with HIV and low CD4 T cell counts frequently show negative results from the PPD test.
This is because the immune system needs to be functional to mount a response to the protein derivative injected under the skin. A false negative result may also occur in a person who has been infected with TB so recently that the immune system has not yet reacted to the bacteria.
Upper respiratory virus infection

If a second tuberculin test is necessary, it should be carried out on the other arm to avoid hypersensitising the skin.

BCG vaccine and the Mantoux test
The role of Mantoux testing in people who have been vaccinated is disputed. The US recommends that tuberculin skin testing is not contraindicated for BCG-vaccinated persons, and that prior BCG vaccination should not influence the interpretation of the test. The UK recommends that interferon-γ testing should be used to help interpret positive Mantoux tests of over 5 mm, and that repeated tuberculin skin testing must not be done in people who have had BCG vaccinations. In general, the US recommendation may result in a larger number of people being falsely diagnosed with latent tuberculosis, while the UK approach has an increased chance of missing patients with latent tuberculosis who should be treated. According to the US guidelines, latent tuberculosis infection diagnosis and treatment is considered for any BCG-vaccinated person whose skin test is 10 mm or greater, if any of these circumstances are present:
Was in contact with another person with infectious TB
Was born in or has lived in a high TB prevalence country
Is continually exposed to populations where TB prevalence is high

Anergy testing
In cases of anergy, a lack of reaction by the body's defence mechanisms when it comes into contact with foreign substances, the tuberculin reaction will occur weakly, compromising the value of Mantoux testing. For example, anergy is present in AIDS, a disease which strongly depresses the immune system. Therefore, anergy testing is advised in cases where there is suspicion that anergy is present. However, routine anergy skin testing is not recommended.

Two-step testing
Some people who have been infected with TB may have a negative reaction when tested years after infection, as the immune system response may gradually wane. This initial skin test, though negative, may stimulate (boost) the body's ability to react to tuberculin in future tests. Thus, a positive reaction to a subsequent test may be misinterpreted as a new infection, when in fact it is the result of the boosted reaction to an old infection. Use of two-step testing is recommended for initial skin testing of adults who will be retested periodically (e.g., health care workers). This ensures any future positive tests can be interpreted as being caused by a new infection, rather than simply a reaction to an old infection.
The first test is read 48–72 hours after injection.
If the first test is positive, consider the person infected.
If the first test is negative, give a second test one to three weeks after the first injection.
The second test is read 48–72 hours after injection.
If the second test is positive, consider the person infected in the distant past.
If the second test is negative, consider the person uninfected.
A person who is diagnosed as "infected in the distant past" on two-step testing is called a "tuberculin reactor". The US recommendation that prior BCG vaccination be ignored results in almost universal false diagnosis of tuberculosis infection in people who have had BCG (mostly foreign nationals).
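The cutoff and two-step rules above are mechanical enough to express as a short program. The following Python sketch encodes the risk-stratified cutoffs and the two-step decision flow as described in this article; the risk-group labels and function names are hypothetical illustrations, not a clinical tool.

```python
# Illustrative sketch of the risk-stratified Mantoux cutoffs and two-step
# testing logic described above. Group labels are hypothetical; not a
# clinical tool.

CUTOFFS_MM = {
    "high": 5,      # e.g., HIV-positive, recent TB contact, immunosuppressed
    "moderate": 10, # e.g., recent arrivals, congregate settings, children < 4
    "low": 15,      # no known risk factors
}

def interpret_tst(induration_mm, risk_group):
    """Classify one tuberculin skin test by induration size."""
    return "positive" if induration_mm >= CUTOFFS_MM[risk_group] else "negative"

def two_step(first_mm, second_mm=None, risk_group="low"):
    """Two-step testing logic for serial screening (e.g., health care workers)."""
    if interpret_tst(first_mm, risk_group) == "positive":
        return "consider infected"
    if second_mm is None:
        return "give a second test one to three weeks after the first"
    if interpret_tst(second_mm, risk_group) == "positive":
        return "consider infected in the distant past (a 'tuberculin reactor')"
    return "consider uninfected"

print(interpret_tst(12, "moderate"))  # -> positive
print(two_step(3, 16))                # -> consider infected in the distant past
```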
The latest interpretation of Mantoux test results
According to guidelines published by the Centers for Disease Control and Prevention in 2005, results are categorized into three groups based on previous or baseline outcomes:
Baseline test: ≥10 mm is positive (either first or second step); 0 to 9 mm is negative
Serial testing without known exposure: increase of ≥10 mm is positive
Known exposure: ≥5 mm is positive in patients with a baseline of 0 mm; ≥10 mm is positive in patients with a negative baseline or previous screening result of >0 mm

Recent developments
In addition to tuberculin skin tests such as (principally) the Mantoux test, interferon gamma release assays (IGRAs) have become common in clinical use in the 2010s. In some contexts they are used instead of TSTs, whereas in other contexts TSTs and IGRAs both continue to be useful. The QuantiFERON-TB Gold blood test measures the patient's immune reactivity to the TB bacterium and is useful for initial and serial testing of persons with an increased risk of latent or active tuberculosis infection. Guidelines for its use were released by the CDC in December 2005. QuantiFERON-TB Gold is FDA-approved in the United States, has CE Mark approval in Europe, and has been approved by the MHLW in Japan. The interferon gamma release assay is the preferred method for patients who have had immunosuppression and are about to start biological therapies. T-SPOT.TB is another IGRA; it uses the ELISPOT method.

Heaf test
The Heaf tuberculin skin test was used in the United Kingdom, but was discontinued in 2005. The equivalent Mantoux test positive levels, done with 10 TU (0.1 ml at 100 TU/ml, 1:1000), are:
<5 mm induration (Heaf 0–1)
5–15 mm induration (Heaf 2)
>15 mm induration (Heaf 3–4)

See also
Latent tuberculosis
QuantiFERON
Geronimo (alpaca)
Shambo

References

Immunologic tests
Tuberculosis
Dermatologic procedures
https://en.wikipedia.org/wiki/Fusion%20power
Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as fusion reactors. Research into fusion reactors began in the 1940s, but as of 2024, no device has reached net power, although net positive reactions have been achieved.

Fusion processes require fuel and a confined environment with sufficient temperature, pressure, and confinement time to create a plasma in which fusion can occur. The combination of these figures that results in a power-producing system is known as the Lawson criterion. In stars the most common fuel is hydrogen, and gravity provides extremely long confinement times that reach the conditions needed for fusion energy production. Proposed fusion reactors generally use heavy hydrogen isotopes such as deuterium and tritium (and especially a mixture of the two), which react more easily than protium (the most common hydrogen isotope) and produce a helium nucleus and an energized neutron, allowing the Lawson criterion to be met under less extreme conditions. Most designs aim to heat their fuel to around 100 million kelvins, which presents a major challenge in producing a successful design. Tritium is extremely rare on Earth, having a half-life of only about 12.3 years. Consequently, during the operation of envisioned fusion reactors, breeding blankets such as helium cooled pebble beds (HCPBs) would be subjected to neutron fluxes to generate tritium and complete the fuel cycle.

As a source of power, nuclear fusion has a number of potential advantages compared to fission. These include reduced radioactivity in operation, little high-level nuclear waste, ample fuel supplies (assuming tritium breeding or some forms of aneutronic fuels), and increased safety. However, the necessary combination of temperature, pressure, and duration has proven to be difficult to produce in a practical and economical manner. A second issue that affects common reactions is managing the neutrons released during the reaction, which over time degrade many common materials used within the reaction chamber.

Fusion researchers have investigated various confinement concepts. The early emphasis was on three main systems: z-pinch, stellarator, and magnetic mirror. The current leading designs are the tokamak and inertial confinement (ICF) by laser. Both designs are under research at very large scales, most notably the ITER tokamak in France and the National Ignition Facility (NIF) laser in the United States. Researchers are also studying other designs that may offer less expensive approaches. Among these alternatives, there is increasing interest in magnetized target fusion and inertial electrostatic confinement, and in new variations of the stellarator.

Background

Mechanism
Fusion reactions occur when two or more atomic nuclei come close enough for long enough that the nuclear force pulling them together exceeds the electrostatic force pushing them apart, fusing them into heavier nuclei. For nuclei heavier than iron-56, the reaction is endothermic, requiring an input of energy: nuclei heavier than iron have many more protons, resulting in a greater repulsive force. For nuclei lighter than iron-56, the reaction is exothermic, releasing energy when they fuse.
Since hydrogen has a single proton in its nucleus, it requires the least effort to attain fusion, and yields the most net energy output. Also, since it has one electron, hydrogen is the easiest fuel to fully ionize.

The repulsive electrostatic interaction between nuclei operates across larger distances than the strong force, which has a range of roughly one femtometer—the diameter of a proton or neutron. The fuel atoms must be supplied enough kinetic energy to approach one another closely enough for the strong force to overcome the electrostatic repulsion in order to initiate fusion. The "Coulomb barrier" is the quantity of kinetic energy required to move the fuel atoms near enough. Atoms can be heated to extremely high temperatures or accelerated in a particle accelerator to produce this energy.

An atom loses its electrons once it is heated past its ionization energy. The resulting bare nucleus is called an ion. The result of this ionization is plasma, a heated cloud of ions and the free electrons that were formerly bound to them. Because the charges are separated, plasmas are electrically conducting and magnetically controllable. Several fusion devices exploit this to confine the hot particles.

Cross section
A reaction's cross section, denoted σ, measures the probability that a fusion reaction will happen. This depends on the relative velocity of the two nuclei. Higher relative velocities generally increase the probability, but the probability begins to decrease again at very high energies. In a plasma, particle velocity can be characterized using a probability distribution. If the plasma is thermalized, the distribution looks like a Gaussian curve, or Maxwell–Boltzmann distribution. In this case, it is useful to use the average of the cross section over the velocity distribution, denoted ⟨σv⟩. This enters the volumetric fusion rate:

P_fusion = n_A n_B ⟨σv⟩ E_fusion

where:
P_fusion is the energy made by fusion, per time and volume
n is the number density of species A or B, of the particles in the volume
⟨σv⟩ is the cross section of that reaction, averaged over all the relative velocities of the two species
E_fusion is the energy released by a single fusion reaction.

Lawson criterion
The Lawson criterion considers the energy balance between the energy produced in fusion reactions and the energy being lost to the environment. In order to generate usable energy, a system must produce more energy than it loses. Lawson assumed an energy balance, shown below:

P_net = η P_fusion − P_conduction − P_radiation

where:
P_net is the net power from fusion
η is the efficiency of capturing the output of the fusion
P_fusion is the rate of energy generated by the fusion reactions
P_conduction is the conduction losses, as energetic mass leaves the plasma
P_radiation is the radiation losses, as energy leaves as light.

The rate of fusion, and thus P_fusion, depends on the temperature and density of the plasma. The plasma loses energy through conduction and radiation. Conduction occurs when ions, electrons, or neutrals impact other substances, typically a surface of the device, and transfer a portion of their kinetic energy to the other atoms. The rate of conduction is also based on the temperature and density. Radiation is energy that leaves the cloud as light. Radiation also increases with temperature as well as with the mass of the ions. Fusion power systems must operate in a region where the rate of fusion is higher than the losses.
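To make the balance concrete, here is a minimal Python sketch of the two formulas above. The ⟨σv⟩ value is a commonly quoted approximation for D-T near 10 keV, and the loss terms are illustrative assumptions, not measured data.

```python
# Sketch of the volumetric fusion rate and Lawson-style power balance above.
# SIGMA_V is an approximate D-T <sigma v> near 10 keV (assumed, not measured).

E_FUSION_J = 17.6e6 * 1.602e-19   # energy per D-T reaction, joules
SIGMA_V = 1.1e-22                 # m^3/s, approximate D-T <sigma v> at ~10 keV

def p_fusion(n_d, n_t, sigma_v=SIGMA_V, e_fusion=E_FUSION_J):
    """Volumetric fusion power density: P = n_D * n_T * <sigma v> * E  [W/m^3]."""
    return n_d * n_t * sigma_v * e_fusion

def p_net(eta, p_fus, p_cond, p_rad):
    """Lawson-style balance: captured fusion power minus losses."""
    return eta * p_fus - p_cond - p_rad

p = p_fusion(5e19, 5e19)          # 50/50 D-T mix at n_e ~ 1e20 m^-3
print(f"P_fusion ~ {p / 1e6:.2f} MW/m^3")
print(f"P_net ~ {p_net(0.4, p, 0.1e6, 0.05e6) / 1e6:.2f} MW/m^3 (illustrative losses)")
```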
Triple product: density, temperature, time
The Lawson criterion argues that a machine holding a thermalized and quasi-neutral plasma has to generate enough energy to overcome its energy losses. The amount of energy released in a given volume is a function of the temperature (and thus the reaction rate on a per-particle basis), the density of particles within that volume, and finally the confinement time, the length of time that energy stays within the volume. This is known as the "triple product": the plasma density, temperature, and confinement time.

In magnetic confinement, the density is low, on the order of a "good vacuum". For instance, in the ITER device the fuel density is roughly one-millionth of atmospheric density. This means that the temperature and/or confinement time must increase. Fusion-relevant temperatures have been achieved using a variety of heating methods that were developed in the early 1970s. In modern machines, the major remaining issue is the confinement time. Plasmas in strong magnetic fields are subject to a number of inherent instabilities, which must be suppressed to reach useful durations. One way to do this is to simply make the reactor volume larger, which reduces the rate of leakage due to classical diffusion. This is why ITER is so large.

In contrast, inertial confinement systems approach useful triple product values via higher density, and have short confinement intervals. In NIF, the initial frozen hydrogen fuel load has a density less than water that is increased to about 100 times the density of lead. In these conditions, the rate of fusion is so high that the fuel fuses in the microseconds it takes for the heat generated by the reactions to blow the fuel apart. Although NIF is also large, this is a function of its "driver" design, not inherent to the fusion process.

Energy capture
Multiple approaches have been proposed to capture the energy that fusion produces. The simplest is to heat a fluid. The commonly targeted D-T reaction releases much of its energy as fast-moving neutrons. Electrically neutral, the neutron is unaffected by the confinement scheme. In most designs, it is captured in a thick "blanket" of lithium surrounding the reactor core. When struck by a high-energy neutron, the blanket heats up. It is then actively cooled with a working fluid that drives a turbine to produce power. Another design proposed to use the neutrons to breed fission fuel in a blanket of nuclear waste, a concept known as a fission-fusion hybrid. In these systems, the power output is enhanced by the fission events, and power is extracted using systems like those in conventional fission reactors.

Designs that use other fuels, notably the proton-boron aneutronic fusion reaction, release much more of their energy in the form of charged particles. In these cases, power extraction systems based on the movement of these charges are possible. Direct energy conversion was developed at Lawrence Livermore National Laboratory (LLNL) in the 1980s as a method to maintain a voltage directly using fusion reaction products. It has demonstrated an energy capture efficiency of 48 percent.

Plasma behavior
Plasma is an ionized gas that conducts electricity. In bulk, it is modeled using magnetohydrodynamics, a combination of the Navier–Stokes equations governing fluids and Maxwell's equations governing how magnetic and electric fields behave. Fusion exploits several plasma properties, including:
Self-organization: plasma conducts electric and magnetic fields, and its motions generate fields that can in turn contain it.
Diamagnetism: plasma can generate its own internal magnetic field, which can reject an externally applied magnetic field.
Magnetic mirroring: a magnetic field can reflect plasma when it moves from a low- to a high-density field region (see the sketch below).
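The mirror effect in the last item can be quantified. The sketch below uses the standard adiabatic-invariant loss-cone result, which is not stated in the text above; the mirror ratios are illustrative values.

```python
# Sketch of the magnetic mirror effect noted above: particles whose velocity
# pitch angle at the field minimum lies inside the "loss cone" escape.
# Standard result sin^2(theta_c) = B_min/B_max = 1/R; mirror ratios illustrative.

from math import asin, degrees, sqrt

def loss_cone_deg(mirror_ratio):
    """Loss-cone half-angle for a mirror ratio R = B_max / B_min."""
    return degrees(asin(sqrt(1.0 / mirror_ratio)))

for r in (2, 5, 10):
    print(f"mirror ratio {r}: particles within ~{loss_cone_deg(r):.1f} deg of the axis escape")
```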
Methods

Magnetic confinement
Tokamak: the most well-developed and well-funded approach. This method drives hot plasma around in a magnetically confined torus, with an internal current. When completed, ITER will become the world's largest tokamak. As of September 2018 an estimated 226 experimental tokamaks were either planned, decommissioned or operating (50) worldwide.
Spherical tokamak: also known as spherical torus. A variation on the tokamak with a spherical shape.
Stellarator: Twisted rings of hot plasma. The stellarator attempts to create a natural twisted plasma path, using external magnets. Stellarators were developed by Lyman Spitzer in 1950 and evolved into four designs: Torsatron, Heliotron, Heliac and Helias. One example is Wendelstein 7-X, a German device. It is the world's largest stellarator.
Internal rings: Stellarators create a twisted plasma using external magnets, while tokamaks do so using a current induced in the plasma. Several classes of designs provide this twist using conductors inside the plasma. Early calculations showed that collisions between the plasma and the supports for the conductors would remove energy faster than fusion reactions could replace it. Modern variations, including the Levitated Dipole Experiment (LDX), use a solid superconducting torus that is magnetically levitated inside the reactor chamber.
Magnetic mirror: Developed by Richard F. Post and teams at Lawrence Livermore National Laboratory (LLNL) in the 1960s. Magnetic mirrors reflect plasma back and forth in a line. Variations included the Tandem Mirror, magnetic bottle and the biconic cusp. A series of mirror machines were built by the US government in the 1970s and 1980s, principally at LLNL. However, calculations in the 1970s estimated it was unlikely these would ever be commercially useful.
Bumpy torus: A number of magnetic mirrors are arranged end-to-end in a toroidal ring. Any fuel ions that leak out of one are confined in a neighboring mirror, permitting the plasma pressure to be raised arbitrarily high without loss. An experimental facility, the ELMO Bumpy Torus or EBT, was built and tested at Oak Ridge National Laboratory (ORNL) in the 1970s.
Field-reversed configuration: This device traps plasma in a self-organized quasi-stable structure, where the particle motion makes an internal magnetic field which then traps itself.
Spheromak: Similar to a field-reversed configuration, a semi-stable plasma structure made by using the plasma's self-generated magnetic field. A spheromak has both toroidal and poloidal fields, while a field-reversed configuration has no toroidal field. Dynomak is a spheromak that is formed and sustained using continuous magnetic flux injection.
Reversed field pinch: Here the plasma moves inside a ring. It has an internal magnetic field. Moving out from the center of this ring, the magnetic field reverses direction.
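All of these magnetic approaches are constrained by how much plasma pressure a given field can hold, conventionally expressed as the ratio "beta". Beta is not defined in the text above; the following sketch uses its standard definition with loosely ITER-like, assumed inputs.

```python
# Sketch: plasma beta, the ratio of plasma pressure to magnetic pressure.
# A standard figure of merit for magnetic confinement; inputs are assumed,
# illustrative values, not figures from this article.

from math import pi

EV_TO_J = 1.602e-19      # joules per electronvolt
MU_0 = 4 * pi * 1e-7     # vacuum permeability, H/m

def beta(n_m3, t_ev, b_tesla):
    """beta = (electron + ion pressure) / magnetic pressure, assuming T_e = T_i."""
    plasma_pressure = 2 * n_m3 * t_ev * EV_TO_J
    magnetic_pressure = b_tesla ** 2 / (2 * MU_0)
    return plasma_pressure / magnetic_pressure

print(f"beta ~ {beta(1e20, 15e3, 5.3):.1%}")  # ~4% for these inputs
```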
Inertial confinement
Indirect drive: Lasers heat a structure known as a Hohlraum that becomes so hot it begins to radiate x-ray light. These x-rays heat a fuel pellet, causing it to collapse inward and compress the fuel. The largest system using this method is the National Ignition Facility, followed closely by Laser Mégajoule.
Direct drive: Lasers directly heat the fuel pellet. Notable direct drive experiments have been conducted at the Laboratory for Laser Energetics (LLE) and the GEKKO XII facilities. Good implosions require fuel pellets with close to a perfect shape in order to generate a symmetrical inward shock wave that produces the high-density plasma.
Fast ignition: This method uses two laser blasts. The first blast compresses the fusion fuel, while the second ignites it. This technique has lost favor for energy production.
Magneto-inertial fusion or Magnetized Liner Inertial Fusion: This combines a laser pulse with a magnetic pinch. The pinch community refers to it as magnetized liner inertial fusion while the ICF community refers to it as magneto-inertial fusion.
Ion beams: Ion beams replace laser beams to heat the fuel. The main difference is that the beam has momentum due to mass, whereas lasers do not. As of 2019 it appears unlikely that ion beams can be sufficiently focused spatially and in time.
Z-machine: Sends an electric current through thin tungsten wires, heating them sufficiently to generate x-rays. Like the indirect drive approach, these x-rays then compress a fuel capsule.

Magnetic or electric pinches
Z-pinch: A current travels in the z-direction through the plasma. The current generates a magnetic field that compresses the plasma. Pinches were the first method for human-made controlled fusion. The z-pinch has inherent instabilities that limit its compression and heating to values too low for practical fusion. The largest such machine, the UK's ZETA, was the last major experiment of the sort. The problems in z-pinch led to the tokamak design. The dense plasma focus is a possibly superior variation.
Theta-pinch: A current circles around the outside of a plasma column, in the theta direction. This induces a magnetic field running down the center of the plasma, as opposed to around it. The early theta-pinch device Scylla was the first to conclusively demonstrate fusion, but later work demonstrated it had inherent limits that made it uninteresting for power production.
Sheared Flow Stabilized Z-Pinch: Research at the University of Washington under Uri Shumlak investigated the use of sheared-flow stabilization to smooth out the instabilities of Z-pinch reactors. This involves accelerating neutral gas along the axis of the pinch. Experimental machines included the FuZE and Zap Flow Z-Pinch experimental reactors. In 2017, British technology investor and entrepreneur Benj Conway, together with physicists Brian Nelson and Uri Shumlak, co-founded Zap Energy to attempt to commercialize the technology for power production.
Screw pinch: This method combines a theta and z-pinch for improved stabilization.

Inertial electrostatic confinement
Fusor: An electric field heats ions to fusion conditions. The machine typically uses two spherical cages, a cathode inside the anode, inside a vacuum. These machines are not considered a viable approach to net power because of their high conduction and radiation losses. They are simple enough to build that amateurs have fused atoms using them.
Polywell: Attempts to combine magnetic confinement with electrostatic fields, to avoid the conduction losses generated by the cage.

Other
Magnetized target fusion: Confines hot plasma using a magnetic field and squeezes it using inertia. Examples include the LANL FRX-L machine, General Fusion (piston compression with liquid metal liner), and HyperJet Fusion (plasma jet compression with plasma liner).
Uncontrolled: Fusion has been initiated by man, using uncontrolled fission explosions to stimulate fusion. Early proposals for fusion power included using bombs to initiate reactions. See Project PACER.
Colliding beam fusion: A beam of high energy particles fired at another beam or target can initiate fusion. This was used in the 1970s and 1980s to study the cross sections of fusion reactions. However, beam systems cannot be used for power because keeping a beam coherent takes more energy than comes from fusion.
Muon-catalyzed fusion: This approach replaces electrons in diatomic molecules of isotopes of hydrogen with muons—more massive particles with the same electric charge. Their greater mass compresses the nuclei enough such that the strong interaction can cause fusion. As of 2007 producing muons required more energy than can be obtained from muon-catalyzed fusion.
Lattice confinement fusion: Lattice confinement fusion (LCF) is a type of nuclear fusion in which deuteron-saturated metals are exposed to gamma radiation or ion beams, such as in an IEC fusor, avoiding the confined high-temperature plasmas used in other methods of fusion.

Common tools
Many approaches, equipment, and mechanisms are employed across multiple projects to address fusion heating, measurement, and power production.

Machine learning
A deep reinforcement learning system has been used to control a tokamak-based reactor. The system was able to manipulate the magnetic coils to manage the plasma. The system was able to continuously adjust to maintain appropriate behavior (more complex than step-based systems). In 2014, Google began working with California-based fusion company TAE Technologies to control the Joint European Torus (JET) to predict plasma behavior. DeepMind has also developed a control scheme with TCV.

Heating
Electrostatic heating: an electric field can do work on charged ions or electrons, heating them.
Neutral beam injection: hydrogen is ionized and accelerated by an electric field to form a charged beam that is shone through a source of neutral hydrogen gas towards the plasma, which itself is ionized and contained by a magnetic field. Some of the intermediate hydrogen gas is accelerated towards the plasma by collisions with the charged beam while remaining neutral: this neutral beam is thus unaffected by the magnetic field and so reaches the plasma. Once inside the plasma the neutral beam transmits energy to the plasma by collisions, which ionize it and allow it to be contained by the magnetic field, thereby both heating and refueling the reactor in one operation. The remainder of the charged beam is diverted by magnetic fields onto cooled beam dumps.
Radio frequency heating: a radio wave causes the plasma to oscillate (as in a microwave oven). This is also known as electron cyclotron resonance heating, using for example gyrotrons, or dielectric heating.
Magnetic reconnection: when plasma gets dense, its electromagnetic properties can change, which can lead to magnetic reconnection. Reconnection helps fusion because it instantly dumps energy into a plasma, heating it quickly. Up to 45% of the magnetic field energy can heat the ions.
Magnetic oscillations: varying electric currents can be supplied to magnetic coils that heat plasma confined within a magnetic wall.
Antiproton annihilation: antiprotons injected into a mass of fusion fuel can induce thermonuclear reactions. This possibility as a method of spacecraft propulsion, known as antimatter-catalyzed nuclear pulse propulsion, was investigated at Pennsylvania State University in connection with the proposed AIMStar project.

Measurement
The diagnostics of a fusion scientific reactor are extremely complex and varied.
The diagnostics required for a fusion power reactor will be varied but less complicated than those of a scientific reactor, as by the time of commercialization many real-time feedback and control diagnostics will have been perfected. However, the operating environment of a commercial fusion reactor will be harsher for diagnostic systems than that of a scientific reactor, because continuous operations may involve higher plasma temperatures and higher levels of neutron irradiation. In many proposed approaches, commercialization will require the additional ability to measure and separate diverter gases, for example helium and impurities, and to monitor fuel breeding, for instance the state of a tritium breeding liquid lithium liner. The following are some basic techniques.

Flux loop: a loop of wire is inserted into the magnetic field. As the field passes through the loop, a current is made. The current measures the total magnetic flux through that loop. This has been used on the National Compact Stellarator Experiment, the polywell, and the LDX machines.
Langmuir probe: a metal object placed in a plasma. A potential is applied to it, giving it a voltage against the surrounding plasma. The metal collects charged particles, drawing a current. As the voltage changes, the current changes, tracing out an I-V curve. The I-V curve can be used to determine the local plasma density, potential and temperature.
Thomson scattering: light scattered from a plasma can be used to reconstruct plasma behavior, including density and temperature. It is common in inertial confinement fusion, tokamaks, and fusors. In ICF systems, firing a second beam into a gold foil adjacent to the target makes x-rays that traverse the plasma. In tokamaks, this can be done using mirrors and detectors to reflect light.
Neutron detectors: several types of neutron detectors can record the rate at which neutrons are produced.
X-ray detectors: visible, IR, UV, and X-rays are emitted anytime a particle changes velocity. If the reason is deflection by a magnetic field, the radiation is cyclotron radiation at low speeds and synchrotron radiation at high speeds. If the reason is deflection by another particle, the plasma radiates X-rays, known as Bremsstrahlung radiation.

Power production
Neutron blankets absorb neutrons, which heats the blanket. Power can be extracted from the blanket in various ways:
Steam turbines can be driven by heat transferred into a working fluid that turns into steam, driving electric generators.
Neutron blankets: these neutrons can regenerate spent fission fuel. Tritium can be produced using a breeder blanket of liquid lithium or a helium cooled pebble bed made of lithium-bearing ceramic pebbles.
Direct conversion: the kinetic energy of a particle can be converted into voltage. It was first suggested by Richard F. Post in conjunction with magnetic mirrors, in the late 1960s. It has been proposed for field-reversed configurations as well as dense plasma focus devices. The process converts a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. This method has demonstrated an experimental efficiency of 48 percent.
Traveling-wave tubes: charged helium atoms coming off the fusion reaction at several megavolts pass through a tube with a coil of wire around the outside. This passing charge at high voltage pulls electricity through the wire.
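The Langmuir probe technique described above lends itself to a short illustration. In the electron-retardation region the electron current grows exponentially with probe voltage, so the slope of ln(I) versus V gives the inverse electron temperature. The sketch below uses synthetic data under that idealized assumption.

```python
# Sketch: extracting electron temperature from a Langmuir probe I-V curve,
# as described above. Synthetic, idealized data; below the plasma potential
# the electron current grows as exp(V / T_e), with T_e expressed in volts.

import numpy as np

T_E_TRUE = 5.0                          # electron temperature, volts (~5 eV)
v = np.linspace(-20.0, -5.0, 40)        # probe bias below plasma potential
i = 1e-3 * np.exp(v / T_E_TRUE)         # idealized electron current, amps

slope, _ = np.polyfit(v, np.log(i), 1)  # linear fit to ln(I) vs V
print(f"fitted T_e ~ {1.0 / slope:.2f} V")  # recovers ~5 V
```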
Confinement
Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion. General principles:
Equilibrium: the forces acting on the plasma must be balanced. One exception is inertial confinement, where the fusion must occur faster than the dispersal time.
Stability: the plasma must be constructed so that disturbances will not lead to the plasma dispersing.
Transport or conduction: the loss of material must be sufficiently slow. The plasma carries energy off with it, so rapid loss of material will disrupt fusion. Material can be lost by transport into different regions or conduction through a solid or liquid.
To produce self-sustaining fusion, part of the energy released by the reaction must be used to heat new reactants and maintain the conditions for fusion.

Magnetic confinement
Magnetic mirror: if a particle follows a field line and enters a region of higher field strength, it can be reflected. Several devices apply this effect. The most famous were the magnetic mirror machines, a series of devices built at LLNL from the 1960s to the 1980s. Other examples include magnetic bottles and the biconic cusp. Because the mirror machines were straight, they had some advantages over ring-shaped designs. The mirrors were easier to construct and maintain, and direct conversion energy capture was easier to implement. Poor confinement has led this approach to be abandoned, except in the polywell design.
Magnetic loops: magnetic loops bend the field lines back on themselves, either in circles or more commonly in nested toroidal surfaces. The most highly developed systems of this type are the tokamak, the stellarator, and the reversed field pinch. Compact toroids, especially the field-reversed configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area.

Inertial confinement
Inertial confinement is the use of rapid implosion to heat and confine plasma. A shell surrounding the fuel is imploded using a direct laser blast (direct drive), a secondary x-ray blast (indirect drive), or heavy beams. The fuel must be compressed to about 30 times solid density with energetic beams. Direct drive can in principle be efficient, but insufficient uniformity has prevented success. Indirect drive uses beams to heat a shell, driving the shell to radiate x-rays, which then implode the pellet. The beams are commonly laser beams, but ion and electron beams have been investigated.

Electrostatic confinement
Electrostatic confinement fusion devices use electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Fusion rates in fusors are low because of competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, a magnetically shielded grid, a Penning trap, the polywell, and the F1 cathode driver concept.
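Putting the confinement discussion together with the Lawson criterion, the quality of a confinement scheme can be summarized by the triple product introduced earlier. The threshold value below (~3e21 keV·s/m³ for D-T ignition) is a commonly quoted approximation that is assumed here rather than taken from the text, and the inputs are illustrative.

```python
# Sketch: the "triple product" n * T * tau_E against a rough D-T threshold.
# Threshold (~3e21 keV*s/m^3) and inputs are illustrative assumptions.

IGNITION_NTT = 3e21   # keV * s / m^3, approximate D-T requirement

def triple_product(n_m3, t_kev, tau_s):
    return n_m3 * t_kev * tau_s

# Magnetic confinement regime: low density, seconds-long confinement
mag = triple_product(1e20, 15, 3.0)
# Inertial confinement regime: enormous density, very short confinement
icf = triple_product(1e31, 10, 1e-10)

for name, ntt in (("magnetic", mag), ("inertial", icf)):
    verdict = "above" if ntt >= IGNITION_NTT else "below"
    print(f"{name}: nT-tau ~ {ntt:.1e} keV*s/m^3, {verdict} ~{IGNITION_NTT:.0e}")
```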
Fuels
The fuels considered for fusion power have all been light elements like the isotopes of hydrogen—protium, deuterium, and tritium. The deuterium and helium-3 reaction requires helium-3, an isotope of helium so scarce on Earth that it would have to be mined extraterrestrially or produced by other nuclear reactions. Ultimately, researchers hope to adopt the protium–boron-11 reaction, because it does not directly produce neutrons, although side reactions can.

Deuterium, tritium
The easiest nuclear reaction, at the lowest energy, is D+T:

D + T → 4He (3.5 MeV) + n (14.1 MeV)

This reaction is common in research, industrial and military applications, usually as a neutron source. Deuterium is a naturally occurring isotope of hydrogen and is commonly available. The large mass ratio of the hydrogen isotopes makes their separation easy compared to the uranium enrichment process. Tritium is a natural isotope of hydrogen, but because it has a short half-life of 12.32 years, it is hard to find, store, and produce, and is expensive. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions:

6Li + n → T + 4He
7Li + n → T + 4He + n

The reactant neutron is supplied by the D-T fusion reaction shown above, the one that has the greatest energy yield. The reaction with 6Li is exothermic, providing a small energy gain for the reactor. The reaction with 7Li is endothermic, but does not consume the neutron. Neutron multiplication reactions are required to replace the neutrons lost to absorption by other elements. Leading candidate neutron multiplication materials are beryllium and lead, but the 7Li reaction helps to keep the neutron population high. Natural lithium is mainly 7Li, which has a low tritium production cross section compared to 6Li, so most reactor designs use breeding blankets with enriched 6Li.

Drawbacks commonly attributed to D-T fusion power include:
The supply of neutrons results in neutron activation of the reactor materials.
80% of the resultant energy is carried off by neutrons, which limits the use of direct energy conversion.
It requires the radioisotope tritium. Tritium may leak from reactors. Some estimates suggest that this would represent a substantial environmental radioactivity release.
The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of fission power reactors, posing problems for material design. After a series of D-T tests at JET, the vacuum vessel was sufficiently radioactive that it required remote handling for the year following the tests.

In a production setting, the neutrons would react with lithium in the breeding blanket composed of lithium ceramic pebbles or liquid lithium, yielding tritium. The energy of the neutrons ends up in the lithium, which would then be transferred to drive electrical production. The lithium blanket protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, use lithium inside the reactor core as a design element. The plasma interacts directly with the lithium, preventing a problem known as "recycling". The advantage of this design was demonstrated in the Lithium Tokamak Experiment.
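The 3.5/14.1 MeV split quoted in the D-T reaction above follows directly from momentum conservation: for two-body products born from nearly stationary reactants, each product carries a share of the energy inversely proportional to its mass. A minimal sketch, using mass numbers as approximate masses:

```python
# Sketch: deriving the D-T energy split quoted above. For two products from
# (nearly) stationary reactants, momentum conservation gives each an energy
# share inversely proportional to its mass. Mass numbers approximate masses.

E_TOTAL_MEV = 17.6
M_ALPHA, M_NEUTRON = 4.0, 1.0   # approximate masses in atomic mass units

e_neutron = E_TOTAL_MEV * M_ALPHA / (M_ALPHA + M_NEUTRON)
e_alpha = E_TOTAL_MEV * M_NEUTRON / (M_ALPHA + M_NEUTRON)

print(f"alpha: {e_alpha:.1f} MeV, neutron: {e_neutron:.1f} MeV")
# -> alpha: 3.5 MeV, neutron: 14.1 MeV, matching the reaction shown above
```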
Deuterium
Fusing two deuterium nuclei is the second easiest fusion reaction. The reaction has two branches that occur with nearly equal probability:

D + D → T + p
D + D → 3He + n

This reaction is also common in research. The optimum energy to initiate this reaction is 15 keV, only slightly higher than that for the D-T reaction. The first branch produces tritium, so that a D-D reactor is not tritium-free, even though it does not require an input of tritium or lithium. Unless the tritons are quickly removed, most of the tritium produced is burned in the reactor, which reduces the handling of tritium, with the disadvantage of producing more, and higher-energy, neutrons. The neutron from the second branch of the D-D reaction has an energy of only 2.45 MeV, while the neutron from the D-T reaction has an energy of 14.1 MeV, resulting in greater isotope production and material damage. When the tritons are removed quickly while allowing the 3He to react, the fuel cycle is called "tritium suppressed fusion". The removed tritium decays to 3He with a 12.32-year half-life. By recycling the 3He decay into the reactor, the fusion reactor does not require materials resistant to fast neutrons.

Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons would be only about 18%, so that the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from lithium resources and a somewhat softer neutron spectrum. The disadvantage of D-D compared to D-T is that the energy confinement time (at a given pressure) must be 30 times longer and the power produced (at a given pressure and volume) is 68 times less. Assuming complete removal of tritium and 3He recycling, only 6% of the fusion energy is carried by neutrons. The tritium-suppressed D-D fuel cycle requires an energy confinement that is 10 times longer compared to D-T and double the plasma temperature.

Deuterium, helium-3
A second-generation approach to controlled fusion power involves combining helium-3 (3He) and deuterium (2H):

D + 3He → 4He + p

This reaction produces 4He and a high-energy proton. As with the p-11B aneutronic fusion fuel cycle, most of the reaction energy is released as charged particles, reducing activation of the reactor housing and potentially allowing more efficient energy harvesting (via any of several pathways). In practice, D-D side reactions produce a significant number of neutrons, leaving p-11B as the preferred cycle for aneutronic fusion.

Proton, boron-11
Both material science problems and non-proliferation concerns are greatly diminished by aneutronic fusion. Theoretically, the most reactive aneutronic fuel is 3He. However, obtaining reasonable quantities of 3He implies large-scale extraterrestrial mining on the Moon or in the atmosphere of Uranus or Saturn. Therefore, the most promising candidate fuel for such fusion is fusing the readily available protium (i.e. a proton) and boron. Their fusion releases no neutrons, but produces energetic charged alpha (helium) particles whose energy can be directly converted to electrical power:

p + 11B → 3 4He (three alpha particles)

Side reactions are likely to yield neutrons that carry only about 0.1% of the power, which means that neutron scattering is not used for energy transfer and material activation is reduced several thousand-fold. The optimum temperature for this reaction, 123 keV, is nearly ten times higher than that for pure hydrogen reactions, and energy confinement must be 500 times better than that required for the D-T reaction. In addition, the power density is 2500 times lower than for D-T, although per unit mass of fuel, this is still considerably higher compared to fission reactors.
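The fuel cycles above differ most sharply in how much of their energy leaves as neutrons, which drives the material and energy-capture trade-offs discussed throughout this article. A small sketch summarizing the fractions quoted in the text:

```python
# Sketch: fraction of fusion energy carried by neutrons for the fuel cycles
# discussed above, using the figures quoted in this article.

NEUTRON_ENERGY_FRACTION = {
    "D-T": 0.80,                       # ~80% leaves as 14.1 MeV neutrons
    "D-D (tritium-suppressed)": 0.06,  # tritium removed and 3He recycled
    "p-11B": 0.001,                    # side reactions only, ~0.1% of power
}

for fuel, frac in NEUTRON_ENERGY_FRACTION.items():
    print(f"{fuel:<26} ~{frac:.1%} of energy in neutrons")
```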
Because the confinement properties of the tokamak and of laser pellet fusion are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts, such as the Polywell and the Dense Plasma Focus. In 2013, a research team led by Christine Labaune at École Polytechnique reported a new fusion rate record for proton-boron fusion, with an estimated 80 million fusion reactions during a 1.5 nanosecond laser fire, 100 times greater than reported in previous experiments.

Material selection
Structural material stability is a critical issue. Materials that can survive the high temperatures and neutron bombardment experienced in a fusion reactor are considered key to success. The principal issues are the conditions generated by the plasma, neutron degradation of wall surfaces, and the related issue of plasma-wall surface conditions. Reducing hydrogen permeability is seen as crucial to hydrogen recycling and control of the tritium inventory. Materials with the lowest bulk hydrogen solubility and diffusivity provide the optimal candidates for stable barriers. A few pure metals, including tungsten and beryllium, and compounds such as carbides, dense oxides, and nitrides have been investigated. Research has highlighted that coating techniques for preparing well-adhered and perfect barriers are of equivalent importance. The most attractive techniques are those in which an ad-layer is formed by oxidation alone. Alternative methods utilize specific gas environments with strong magnetic and electric fields. Assessment of barrier performance represents an additional challenge. Gas permeation through classically coated membranes continues to be the most reliable method to determine hydrogen permeation barrier (HPB) efficiency.

In 2021, in response to increasing numbers of designs for fusion power reactors for 2040, the United Kingdom Atomic Energy Authority published the UK Fusion Materials Roadmap 2021–2040, focusing on five priority areas, with a focus on tokamak family reactors:
Novel materials to minimize the amount of activation in the structure of the fusion power plant;
Compounds that can be used within the power plant to optimise breeding of tritium fuel to sustain the fusion process;
Magnets and insulators that are resistant to irradiation from fusion reactions—especially under cryogenic conditions;
Structural materials able to retain their strength under neutron bombardment at high operating temperatures (over 550 °C);
Engineering assurance for fusion materials—providing irradiated sample data and modelled predictions such that plant designers, operators and regulators have confidence that materials are suitable for use in future commercial power stations.

Superconducting materials
In a plasma that is embedded in a magnetic field (known as a magnetized plasma), the fusion rate scales as the magnetic field strength to the 4th power. For this reason, many fusion companies that rely on magnetic fields to control their plasma are trying to develop high-temperature superconducting devices. In 2021, SuperOx, a Russian and Japanese company, developed a new manufacturing process for making superconducting YBCO wire for fusion reactors. This new wire was shown to conduct between 700 and 2000 amperes per square millimeter. The company was able to produce 186 miles of wire in nine months.

Containment considerations
Even on smaller production scales, the containment apparatus is blasted with matter and energy.
Designs for plasma containment must consider: A heating and cooling cycle, up to a 10 MW/m2 thermal load. Neutron radiation, which over time leads to neutron activation and embrittlement. High energy ions leaving at tens to hundreds of electronvolts. Alpha particles leaving at millions of electronvolts. Electrons leaving at high energy. Light radiation (IR, visible, UV, X-ray). Depending on the approach, these effects may be higher or lower than fission reactors. One estimate put the radiation at 100 times that of a typical pressurized water reactor. Depending on the approach, other considerations such as electrical conductivity, magnetic permeability, and mechanical strength matter. Materials must also not end up as long-lived radioactive waste. Plasma-wall surface conditions For long term use, each atom in the wall is expected to be hit by a neutron and displaced about 100 times before the material is replaced. These high-energy neutron collisions with the atoms in the wall result in the absorption of the neutrons, forming unstable isotopes of the atoms. When the isotope decays, it may emit alpha particles, protons, or gamma rays. Alpha particles, once stabilized by capturing electrons, form helium atoms which accumulate at grain boundaries and may result in swelling, blistering, or embrittlement of the material. Selection of materials Tungsten is widely regarded as the optimal material for plasma-facing components in next-generation fusion devices due to its unique properties and potential for enhancements. Its low sputtering rates and high melting point make it particularly suitable for the high-stress environments of fusion reactors, allowing it to withstand intense conditions without rapid degradation. Additionally, tungsten's low tritium retention through co-deposition and implantation is essential in fusion contexts, as it helps to minimize the accumulation of this radioactive isotope. Liquid metals (lithium, gallium, tin) have been proposed, e.g., by injection of 1–5 mm thick streams flowing at 10 m/s on solid substrates. Graphite features a gross erosion rate due to physical and chemical sputtering amounting to many meters per year, requiring redeposition of the sputtered material. The redeposition site generally does not exactly match the sputter site, allowing net erosion that may be prohibitive. An even larger problem is that tritium is redeposited with the redeposited graphite. The tritium inventory in the wall and dust could build up to many kilograms, representing a waste of resources and a radiological hazard in case of an accident. Graphite found favor as material for short-lived experiments, but appears unlikely to become the primary plasma-facing material (PFM) in a commercial reactor. Ceramic materials such as silicon carbide (SiC) have similar issues like graphite. Tritium retention in silicon carbide plasma-facing components is approximately 1.5-2 times higher than in graphite, resulting in reduced fuel efficiency and heightened safety risks in fusion reactors. SiC tends to trap more tritium, limiting its availability for fusion and increasing the risk of hazardous accumulation, complicating tritium management. Furthermore, the chemical and physical sputtering of SiC remains significant, contributing to tritium buildup through co-deposition over time and with increasing particle fluence. As a result, carbon-based materials have been excluded from ITER, DEMO, and similar devices. 
Tungsten's sputtering rate is orders of magnitude smaller than carbon's, and tritium is much less incorporated into redeposited tungsten. However, tungsten plasma impurities are much more damaging than carbon impurities, and self-sputtering can be high, requiring that the plasma in contact with the tungsten not be too hot (a few tens of eV rather than hundreds of eV). Tungsten also has issues around eddy currents and melting in off-normal events, as well as some radiological issues.

Recent advances in materials for the containment apparatus have found that certain ceramics can actually improve the longevity of the containment material. Studies on MAX phases, such as titanium silicon carbide, show that under the high operating temperatures of nuclear fusion, the material undergoes a phase transformation from a hexagonal structure to a face-centered-cubic (FCC) structure, driven by helium bubble growth. Helium atoms preferentially accumulate in the Si layer of the hexagonal structure, as the Si atoms are more mobile than the Ti-C slabs. As more atoms are trapped, the Ti-C slab is peeled off, causing the Si atoms to become highly mobile interstitial atoms in the new FCC structure. Lattice strain induced by the He bubbles causes Si atoms to diffuse out of compressive areas, typically towards the surface of the material, forming a protective silicon dioxide layer.

Doping vessel materials with iron silicate has also emerged as a promising approach to enhance containment materials in fusion reactors. This method targets helium embrittlement at grain boundaries, a common issue that arises as helium atoms accumulate and form bubbles. Over time, these bubbles coalesce at grain boundaries, causing them to expand and degrade the material's structural integrity. By contrast, introducing iron silicate creates nucleation sites within the metal matrix that are more thermodynamically favorable for helium aggregation. This localized congregation around iron silicate nanoparticles induces matrix strain rather than weakening grain boundaries, preserving the material's strength and longevity.
In laser-driven inertial containment the larger size of the reaction chamber reduces the stress on materials. Although failure of the reaction chamber is possible, stopping fuel delivery prevents catastrophic failure. Most reactor designs rely on liquid hydrogen as a coolant and to convert stray neutrons into tritium, which is fed back into the reactor as fuel. Hydrogen is flammable, and it is possible that hydrogen stored on-site could ignite. In this case, the tritium fraction of the hydrogen would enter the atmosphere, posing a radiation risk. Calculations suggest that about of tritium and other radioactive gases in a typical power station would be present. The amount is small enough that it would dilute to legally acceptable limits by the time they reached the station's perimeter fence. The likelihood of small industrial accidents, including the local release of radioactivity and injury to staff, are estimated to be minor compared to fission. They would include accidental releases of lithium or tritium or mishandling of radioactive reactor components. Magnet quench A magnet quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil exits the superconducting state (becomes normal). This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two. More rarely a magnet defect can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal over several seconds, depending on the size of the superconducting coil. This is accompanied by a loud bang as the energy in the magnetic field is converted to heat, and the cryogenic fluid boils away. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces. In practice, magnets usually have safety devices to stop or limit the current when a quench is detected. If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air. A large section of the superconducting magnets in CERN's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, destroying multiple magnets. In order to prevent a recurrence, the LHC's superconducting magnets are equipped with fast-ramping heaters that are activated when a quench event is detected. The dipole bending magnets are connected in series. Each power circuit includes 154 individual magnets, and should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. This energy is transferred into massive blocks of metal that heat up to several hundred degrees Celsius—because of resistive heating—in seconds. A magnet quench is a "fairly routine event" during the operation of a particle accelerator. Effluents The natural product of the fusion reaction is a small amount of helium, which is harmless to life. Hazardous tritium is difficult to retain completely. 
Although tritium is volatile and biologically active, the health risk posed by a release is much lower than that of most radioactive contaminants, because of tritium's short half-life (12.32 years) and very low decay energy (~18.6 keV), and because it does not bioaccumulate (it cycles out of the body as water, with a biological half-life of 7 to 14 days). ITER incorporates total containment facilities for tritium. Radioactive waste Fusion reactors create far less radioactive material than fission reactors. Further, the material they create is less damaging biologically, and the radioactivity dissipates within a time period that is well within existing engineering capabilities for safe long-term waste storage. In specific terms, except in the case of aneutronic fusion, the neutron flux turns the structural materials radioactive. The amount of radioactive material at shut-down may be comparable to that of a fission reactor, with important differences. The half-lives of fusion and neutron-activation radioisotopes tend to be less than those from fission, so that the hazard decreases more rapidly. Whereas fission reactors produce waste that remains radioactive for thousands of years, the radioactive material in a fusion reactor (other than tritium) would be the reactor core itself, and most of this would be radioactive for about 50 years, with other low-level waste being radioactive for another 100 years or so thereafter. The fusion waste's short half-life eliminates the challenge of long-term storage. By 500 years, the material would have the same radiotoxicity as coal ash. Nonetheless, classification as intermediate-level waste rather than low-level waste may complicate safety discussions. The choice of materials is less constrained than in conventional fission, where many materials are required for their specific neutron cross-sections. Fusion reactors can be designed using "low activation" materials that do not easily become radioactive. Vanadium, for example, becomes much less radioactive than stainless steel. Carbon fiber materials are also low-activation, are strong and light, and are promising for laser-inertial reactors where a magnetic field is not required. Nuclear proliferation In some scenarios, fusion power technology could be adapted to produce materials for military purposes. A huge amount of tritium could be produced by a fusion power station; tritium is used in the trigger of hydrogen bombs and in modern boosted fission weapons, but it can be produced in other ways. The energetic neutrons from a fusion reactor could be used to breed weapons-grade plutonium or uranium for an atomic bomb (for example by transmutation of uranium-238 to plutonium-239, or thorium-232 to uranium-233). A study conducted in 2011 assessed three scenarios: Small-scale fusion station: As a result of much higher power consumption, heat dissipation and a more recognizable design compared to enrichment gas centrifuges, this choice would be much easier to detect and therefore implausible. Commercial facility: The production potential is significant. But no fertile or fissile substances necessary for the production of weapon-usable materials need to be present at a civil fusion system at all. If not shielded, detection of these materials can be done by their characteristic gamma radiation. The underlying redesign could be detected by regular design information verification.
In the (technically more feasible) case of solid breeder blanket modules, it would be necessary for incoming components to be inspected for the presence of fertile material, otherwise plutonium for several weapons could be produced each year. Prioritizing weapon-grade material regardless of secrecy: The fastest way to produce weapon-usable material was seen in modifying a civil fusion power station. No weapons-compatible material is required during civil use. Even without the need for covert action, such a modification would take about two months to start production and at least an additional week to generate a significant amount. This was considered to be enough time to detect a military use and to react with diplomatic or military means. To stop the production, a military destruction of parts of the facility while leaving out the reactor would be sufficient. Another study concluded "...large fusion reactors—even if not designed for fissile material breeding—could easily produce several hundred kg Pu per year with high weapon quality and very low source material requirements." It was emphasized that the implementation of features for intrinsic proliferation resistance might only be possible at an early phase of research and development. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with magnetic confinement fusion. Fuel reserves Fusion power commonly proposes the use of deuterium as fuel, and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global power output of about 100 EJ/yr (= 1 × 10^20 J/yr), and that this does not increase in the future (which is unlikely), known current lithium reserves would last 3,000 years. Lithium from seawater, however, would last 60 million years, and a more complicated fusion process using only deuterium would have fuel for 150 billion years. To put this in context, 150 billion years is close to 30 times the remaining lifespan of the Sun, and more than 10 times the estimated age of the universe. Economics The EU invested heavily in fusion research through the 1990s. ITER represents an investment of over twenty billion dollars, and possibly tens of billions more, including in-kind contributions. Under the European Union's Sixth Framework Programme, nuclear fusion research received funding (in addition to ITER funding) on a scale comparable to all sustainable energy research combined, putting research into fusion power well ahead of that of any single rival technology. The United States Department of Energy has allocated US$367M–US$671M every year since 2010, peaking in 2020, with plans to reduce investment to US$425M in its FY2021 Budget Request. About a quarter of this budget is directed to support ITER. The size of the investments and time lines meant that fusion research was traditionally almost exclusively publicly funded. However, starting in the 2010s, the promise of commercializing a paradigm-changing low-carbon energy source began to attract a raft of companies and investors. Over two dozen start-up companies attracted over one billion dollars from roughly 2000 to 2020, mostly after 2015, and a further three billion in funding and milestone-related commitments in 2021, with investors including Jeff Bezos, Peter Thiel and Bill Gates, as well as institutional investors including Legal & General, and energy companies including Equinor, Eni, Chevron, and the Chinese ENN Group.
In 2021, Commonwealth Fusion Systems (CFS) obtained $1.8 billion in scale-up funding, and Helion Energy obtained a half-billion dollars with an additional $1.7 billion contingent on meeting milestones. Scenarios developed in the 2000s and early 2010s discussed the effects of the commercialization of fusion power on the future of human civilization. Using nuclear fission as a guide, these saw ITER and later DEMO as bringing online the first commercial reactors around 2050 and a rapid expansion after mid-century. Some scenarios emphasized "fusion nuclear science facilities" as a step beyond ITER. However, the economic obstacles to tokamak-based fusion power remain immense, requiring investment to fund prototype tokamak reactors and development of new supply chains, a problem which will affect any kind of fusion reactor. Tokamak designs appear to be labour-intensive, while the commercialization risk of alternatives like inertial fusion energy is high due to the lack of government resources. Scenarios since 2010 note computing and material science advances enabling multi-phase national or cost-sharing "Fusion Pilot Plants" (FPPs) along various technology pathways, such as the UK Spherical Tokamak for Energy Production, within the 2030–2040 time frame. Notably, in June 2021, General Fusion announced it would accept the UK government's offer to host the world's first substantial public-private partnership fusion demonstration plant, at Culham Centre for Fusion Energy. The plant will be constructed from 2022 to 2025 and is intended to lead the way for commercial pilot plants in the late 2020s. The plant will be 70% of full scale and is expected to attain a stable plasma of 150 million degrees. In the United States, cost-sharing public-private partnership FPPs appear likely, and in 2022 the DOE announced a new Milestone-Based Fusion Development Program as the centerpiece of its Bold Decadal Vision for Commercial Fusion Energy, which envisages private sector-led teams delivering FPP pre-conceptual designs, defining technology roadmaps, and pursuing the R&D necessary to resolve critical-path scientific and technical issues towards an FPP design. Compact reactor technology based on such demonstration plants may enable commercialization via a fleet approach from the 2030s if early markets can be located. The widespread adoption of non-nuclear renewable energy has transformed the energy landscape. Such renewables are projected to supply 74% of global energy by 2050. The steady fall of renewable energy prices challenges the economic competitiveness of fusion power. Some economists suggest fusion power is unlikely to match other renewable energy costs. Fusion plants are expected to face large start-up and capital costs. Moreover, operation and maintenance are likely to be costly. While the costs of the China Fusion Engineering Test Reactor are not well known, an EU DEMO fusion concept was projected to feature a levelized cost of energy (LCOE) of $121/MWh. Fuel costs are low, but economists suggest that the energy cost for a one-gigawatt plant would increase by $16.5 per MWh for every $1 billion increase in the capital investment in construction. There is also the risk that easily obtained lithium will be used up making batteries. Obtaining it from seawater would be very costly and might require more energy than the energy that would be generated. In contrast, renewable levelized cost of energy estimates are substantially lower.
For instance, the 2019 levelized cost of energy of solar energy was estimated to be $40–$46/MWh, onshore wind was estimated at $29–$56/MWh, and offshore wind was approximately $92/MWh. However, fusion power may still have a role filling energy gaps left by renewables, depending on how administration priorities for energy and environmental justice influence the market. In the 2020s, socioeconomic studies of fusion began to consider these factors, and in 2022 EUROfusion launched its Socio-Economic Studies and Prospective Research and Development strands to investigate how such factors might affect commercialization pathways and timetables. Similarly, in April 2023 Japan announced a national strategy to industrialise fusion. Thus, fusion power may work in tandem with other renewable energy sources rather than becoming the primary energy source. In some applications, fusion power could provide the base load, especially if including integrated thermal storage and cogeneration and considering the potential for retrofitting coal plants. Regulation As fusion pilot plants move within reach, legal and regulatory issues must be addressed. In September 2020, the United States National Academy of Sciences consulted with private fusion companies to consider a national pilot plant. The following month, the United States Department of Energy, the Nuclear Regulatory Commission (NRC) and the Fusion Industry Association co-hosted a public forum to begin the process. In November 2020, the International Atomic Energy Agency (IAEA) began working with various nations to create safety standards such as dose regulations and radioactive waste handling. In January and March 2021, NRC hosted two public meetings on regulatory frameworks. A public-private cost-sharing approach was endorsed in the 27 December H.R.133 Consolidated Appropriations Act, 2021, which authorized $325 million over five years for a partnership program to build fusion demonstration facilities, with a 100% match from private industry. Subsequently, the UK Regulatory Horizons Council published a report calling for a fusion regulatory framework by early 2022 in order to position the UK as a global leader in commercializing fusion power. This call was met by the UK government publishing in October 2021 both its Fusion Green Paper and its Fusion Strategy, to regulate and commercialize fusion, respectively. In April 2023, in a decision likely to influence other nuclear regulators, the NRC announced in a unanimous vote that fusion energy would be regulated not as fission but under the same regulatory regime as particle accelerators. In October 2023, the UK government, in enacting the Energy Act 2023, made the UK the first country to legislate for fusion separately from fission, to support planning and investment, including STEP, the UK's prototype fusion power plant planned for 2040; the UK is working with Canada and Japan in this regard. Meanwhile, in February 2024 the US House of Representatives passed the Atomic Energy Advancement Act, which includes the Fusion Energy Act, establishing a regulatory framework for fusion energy systems. Geopolitics Given the potential of fusion to transform the world's energy industry and mitigate climate change, fusion science has traditionally been seen as an integral part of peace-building science diplomacy.
However, technological developments and private sector involvement have raised concerns over intellectual property, regulatory administration, global leadership, equity, and potential weaponization. These developments challenge ITER's peace-building role and have led to calls for a global commission. Fusion power contributing significantly to climate change mitigation by 2050 seems unlikely without substantial breakthroughs and a space race mentality emerging, but a contribution by 2100 appears possible, with the extent depending on the type and particularly cost of technology pathways. Developments from late 2020 onwards have led to talk of a "new space race" with multiple entrants, pitting the US against China and the UK's STEP FPP, with China now outspending the US and threatening to leapfrog US technology. On 24 September 2020, the United States House of Representatives approved a research and commercialization program. The Fusion Energy Research section incorporated a milestone-based, cost-sharing, public-private partnership program modeled on NASA's COTS program, which launched the commercial space industry. In February 2021, the National Academies published Bringing Fusion to the U.S. Grid, recommending a market-driven, cost-sharing plant for 2035–2040, and the launch of the Congressional Bipartisan Fusion Caucus followed. In December 2020, an independent expert panel reviewed EUROfusion's design and R&D work on DEMO, and EUROfusion confirmed it was proceeding with its Roadmap to Fusion Energy, beginning the conceptual design of DEMO in partnership with the European fusion community, suggesting an EU-backed machine had entered the race. In October 2023, the UK-oriented Agile Nations group announced a fusion working group. One month later, the UK and the US announced a bilateral partnership to accelerate fusion energy. In December 2023, at COP28, the US announced a global strategy to commercialize fusion energy. In April 2024, Japan and the US announced a similar partnership, and in May of the same year, the G7 announced a G7 Working Group on Fusion Energy to promote international collaborations to accelerate the development of commercial fusion energy and promote R&D between countries, as well as rationalize fusion regulation. Later the same year, the US partnered with the IAEA to launch the Fusion Energy Solutions Taskforce, to collaboratively crowdsource ideas to accelerate commercial fusion energy, in line with the US COP28 statement. Specifically to resolve the tritium supply problem, in February 2024, the UK (UKAEA) and Canada (Canadian Nuclear Laboratories) announced an agreement by which Canada could refurbish its tritium-generating CANDU (Canada deuterium-uranium) heavy-water nuclear plants and even build new ones, guaranteeing a supply of tritium into the 2070s, while the UKAEA would test breeder materials and simulate how tritium could be captured, purified, and injected back into the fusion reaction. In 2024, both South Korea and Japan announced major initiatives to accelerate their national fusion strategies by building electricity-generating public-private fusion plants in the 2030s, aiming to begin operations in the 2040s and 2030s, respectively. Advantages Fusion power promises to provide more energy for a given weight of fuel than any fuel-consuming energy source currently in use. The fuel (primarily deuterium) exists abundantly in the ocean: about 1 in 6500 hydrogen atoms in seawater is deuterium.
Although this is only about 0.015%, seawater is plentiful and easy to access, implying that fusion could supply the world's energy needs for millions of years. First generation fusion plants are expected to use the deuterium-tritium fuel cycle. This will require the use of lithium for breeding of the tritium. It is not known for how long global lithium supplies will suffice to supply this need as well as those of the battery and metallurgical industries. It is expected that second generation plants will move on to the more formidable deuterium-deuterium reaction. The deuterium-helium-3 reaction is also of interest, but the light helium isotope is practically non-existent on Earth. It is thought to exist in useful quantities in the lunar regolith, and is abundant in the atmospheres of the gas giant planets. Fusion power could be used for so-called "deep space" propulsion within the solar system and for interstellar space exploration where solar energy is not available, including via antimatter-fusion hybrid drives. Helium production Deuterium–tritium fusion produces helium as a by-product. Disadvantages Fusion power has a number of disadvantages. Because 80 percent of the energy in any reactor fueled by deuterium and tritium appears in the form of neutron streams, such reactors share many of the drawbacks of fission reactors. This includes the production of large quantities of radioactive waste and serious radiation damage to reactor components. Additionally, naturally occurring tritium is extremely rare. While the hope is that fusion reactors can breed their own tritium, tritium self-sufficiency is extremely challenging, not least because tritium is difficult to contain (tritium has leaked from 48 of 65 nuclear sites in the US). In any case, the reserve and start-up tritium inventory requirements are likely to be unacceptably large. If reactors can be made to operate using only deuterium fuel, then the tritium replenishment issue is eliminated and neutron radiation damage may be reduced. However, the probabilities of deuterium-deuterium reactions are about 20 times lower than for deuterium-tritium. Additionally, the temperature needed is about 3 times higher than for deuterium-tritium (see cross section). The higher temperatures and lower reaction rates thus significantly complicate the engineering challenges. In any case, other drawbacks remain; for instance, reactors requiring only deuterium fueling will have greatly enhanced nuclear weapons proliferation potential. History Early experiments The first machine to achieve controlled thermonuclear fusion was a pinch machine at Los Alamos National Laboratory called Scylla I at the start of 1958. The team that achieved it was led by a British scientist named James Tuck and included a young Marshall Rosenbluth. Tuck had been involved in the Manhattan Project, but had switched to working on fusion in the early 1950s. He applied for funding for the project as part of a White House-sponsored contest to develop a fusion reactor along with Lyman Spitzer. The previous year, 1957, the British had claimed that they had achieved thermonuclear fusion reactions on the ZETA pinch machine. However, it turned out that the neutrons they had detected were from beam-target interactions, not fusion, and they withdrew the claim. Scylla I was a classified machine at the time, so the achievement was hidden from the public.
A traditional Z-pinch passes a current down the center of a plasma, creating a magnetic field around it that squeezes the plasma to fusion conditions. Scylla I was a θ-pinch, which passed a current around the outside of a cylinder of deuterium, creating a magnetic field that compressed the plasma in the center. After the success of Scylla I, Los Alamos went on to build multiple pinch machines over the next few years. Spitzer continued his stellarator research at Princeton. While fusion was not immediately achieved, the effort led to the creation of the Princeton Plasma Physics Laboratory. First tokamak In the early 1950s, Soviet physicists I.E. Tamm and A.D. Sakharov developed the concept of the tokamak, combining a low-power pinch device with a low-power stellarator. Sakharov's group constructed the first tokamaks, achieving the first quasistationary fusion reaction. Over time, the "advanced tokamak" concept emerged, which included non-circular plasma, internal divertors and limiters, superconducting magnets, operation in the "H-mode" island of increased stability, and the compact tokamak, with the magnets on the inside of the vacuum chamber. First inertial confinement experiments Laser fusion was suggested in 1962 by scientists at Lawrence Livermore National Laboratory (LLNL), shortly after the invention of the laser in 1960. Inertial confinement fusion experiments using lasers began as early as 1965. Several laser systems were built at LLNL, including the Argus, the Cyclops, the Janus, the Long Path, the Shiva laser, and the Nova. Laser advances included frequency-tripling crystals that transformed infrared laser beams into ultraviolet beams and "chirping", which changed a single wavelength into a full spectrum that could be amplified and then reconstituted into one frequency. Laser research cost over one billion dollars in the 1980s. 1980s The Tore Supra, JET, T-15, and JT-60 tokamaks were built in the 1980s. In 1984, Martin Peng of ORNL proposed the spherical tokamak with a much smaller radius. It used a single large conductor in the center, with magnets as half-rings off this conductor. The aspect ratio fell to as low as 1.2. Peng's advocacy caught the interest of Derek Robinson, who built the Small Tight Aspect Ratio Tokamak (START). 1990s In 1991, the Preliminary Tritium Experiment at the Joint European Torus achieved the world's first controlled release of fusion power. In 1996, Tore Supra created a plasma for two minutes with a current of almost 1 million amperes, totaling 280 MJ of injected and extracted energy. In 1997, JET produced a peak of 16.1 MW of fusion power (65% of heat to plasma), with fusion power of over 10 MW sustained for over 0.5 sec. 2000s "Fast ignition" saved power and moved ICF into the race for energy production. In 2006, China's Experimental Advanced Superconducting Tokamak (EAST) test reactor was completed. It was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields. In March 2009, the laser-driven ICF NIF became operational. In the 2000s, privately backed fusion companies entered the race, including TAE Technologies, General Fusion, and Tokamak Energy. 2010s Private and public research accelerated in the 2010s. General Fusion developed plasma injector technology and Tri Alpha Energy tested its C-2U device. The French Laser Mégajoule began operation. NIF achieved net energy gain in 2013, defined in the very limited sense of the hot spot at the core of the collapsed target, rather than the whole target.
In 2014, Phoenix Nuclear Labs sold a high-yield neutron generator that could sustain 5×10^11 deuterium fusion reactions per second over a 24-hour period. In 2015, MIT announced a tokamak it named the ARC fusion reactor, using rare-earth barium-copper oxide (REBCO) superconducting tapes to produce high-magnetic field coils that it claimed could produce comparable magnetic field strength in a smaller configuration than other designs. In October, researchers at the Max Planck Institute of Plasma Physics in Greifswald, Germany, completed building the largest stellarator to date, the Wendelstein 7-X (W7-X). The W7-X stellarator began Operational phase 1 (OP1.1) on 10 December 2015, successfully producing helium plasma. The objective was to test vital systems and understand the machine's physics. By February 2016, hydrogen plasma was achieved, with temperatures reaching up to 100 million kelvin. The initial tests used five graphite limiters. After over 2,000 pulses and achieving significant milestones, OP1.1 concluded on 10 March 2016. An upgrade followed, and OP1.2 in 2017 aimed to test an uncooled divertor. By June 2018, record temperatures were reached. W7-X concluded its first campaigns with limiter and island divertor tests, achieving notable advancements by the end of 2018. It soon produced helium and hydrogen plasmas lasting up to 30 minutes. In 2017, Helion Energy's fifth-generation plasma machine went into operation. The UK's Tokamak Energy's ST40 generated "first plasma". The next year, Eni announced a $50 million investment in Commonwealth Fusion Systems, to attempt to commercialize MIT's ARC technology. 2020s In January 2021, SuperOx announced the commercialization of a new superconducting wire with more than 700 A/mm^2 current capability. TAE Technologies announced results for its Norman device, holding a temperature of about 60 MK for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices. In October, Oxford-based First Light Fusion revealed its projectile fusion project, which fires an aluminum disc at a fusion target, accelerated to very high speed by a 9 mega-amp electrical pulse. The resulting fusion generates neutrons whose energy is captured as heat. On November 8, in an invited talk to the 63rd Annual Meeting of the APS Division of Plasma Physics, the National Ignition Facility claimed to have triggered fusion ignition in the laboratory on August 8, 2021, for the first time in the 60+ year history of the ICF program. The shot yielded 1.3 MJ of fusion energy, more than an eightfold improvement over tests done in the spring of 2021. NIF estimates that 230 kJ of energy reached the fuel capsule, which resulted in an almost sixfold energy output from the capsule. A researcher from Imperial College London stated that the majority of the field agreed that ignition had been demonstrated. In November 2021, Helion Energy reported receiving $500 million in Series E funding for its seventh-generation Polaris device, designed to demonstrate net electricity production, with an additional $1.7 billion of commitments tied to specific milestones, while Commonwealth Fusion Systems raised an additional $1.8 billion in Series B funding to construct and operate its SPARC tokamak, the single largest investment in any private fusion company. In April 2022, First Light announced that their hypersonic projectile fusion prototype had produced neutrons compatible with fusion. Their technique electromagnetically fires projectiles at Mach 19 at a caged fuel pellet.
The deuterium fuel is compressed at Mach 204, reaching pressure levels of 100 TPa. On December 13, 2022, the US Department of Energy reported that researchers at the National Ignition Facility had achieved a net energy gain from a fusion reaction. The reaction of hydrogen fuel at the facility produced about 3.15 MJ of energy while consuming 2.05 MJ of input. However, while the fusion reactions may have produced more than 3 megajoules of energy (more than was delivered to the target), NIF's 192 lasers consumed 322 MJ of grid energy in the conversion process. In May 2023, the United States Department of Energy (DOE) provided a grant of $46 million to eight companies across seven states to support fusion power plant design and research efforts. This funding, under the Milestone-Based Fusion Development Program, aligns with objectives to demonstrate pilot-scale fusion within a decade and to develop fusion as a carbon-neutral energy source by 2050. The granted companies are tasked with addressing the scientific and technical challenges to create viable fusion pilot plant designs in the next 5–10 years. The recipient firms include Commonwealth Fusion Systems, Focused Energy Inc., Princeton Stellarators Inc., Realta Fusion Inc., Tokamak Energy Inc., Type One Energy Group, Xcimer Energy Inc., and Zap Energy Inc. In December 2023, the largest and most advanced tokamak, JT-60SA, was inaugurated in Naka, Japan. The reactor is a joint project between Japan and the European Union. The reactor had achieved its first plasma in October 2023. Subsequently, South Korea's fusion reactor project, the Korea Superconducting Tokamak Advanced Research (KSTAR), successfully operated for 102 seconds in high-confinement mode (H-mode), sustaining ion temperatures of more than 100 million degrees, in plasma tests conducted from December 2023 to February 2024. In January 2025, the EAST fusion reactor in China was reported to have maintained steady-state high-confinement plasma operation for 1,066 seconds. Future development Claims of commercially viable fusion power being relatively imminent have often attracted ridicule within the scientific community. A common joke is that human-engineered fusion has always been promised as 30 years away since the concept was first discussed, or that it has been "20 years away for 50 years". In 2024, Commonwealth Fusion Systems announced plans to build the world's first grid-scale commercial nuclear fusion power plant at the James River Industrial Center in Chesterfield County, Virginia, which is part of the Greater Richmond Region; the plant is designed to produce about 400 MW of electric power, and is intended to come online in the early 2030s. Records Fusion records continue to advance. See also COLEX process, for production of Li-6 Fusion ignition High beta fusion reactor Inertial electrostatic confinement Levitated dipole List of fusion experiments Magnetic mirror Starship References Bibliography Nuttall, William J., Konishi, Satoshi, Takeda, Shutaro, and Webbe-Wood, David (2020). Commercialising Fusion Energy: How Small Businesses are Transforming Big Science. IOP Publishing. Further reading Oreskes, Naomi, "Fusion's False Promise: Despite a recent advance, nuclear fusion is not the solution to the climate crisis", Scientific American, vol. 328, no. 6 (June 2023), p. 86. External links Fusion Device Information System Fusion Energy Base Fusion Industry Association Princeton Satellite Systems News U.S. Fusion Energy Science Program Sustainable energy
Fusion power
[ "Physics", "Chemistry" ]
16,024
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
55,125
https://en.wikipedia.org/wiki/Long%20March%20%28rocket%20family%29
The Long March rockets are a family of expendable launch system rockets operated by the China Aerospace Science and Technology Corporation. The rockets are named after the Chinese Red Army's 1934–35 Long March military retreat during the Chinese Civil War. The Long March series has performed more than 500 launches, including missions to low Earth orbit, Sun-synchronous orbit, geostationary transfer orbit, and Earth-Moon transfer orbit. The new-generation carrier rockets, Long March 5, Long March 6, Long March 7, Long March 11, and Long March 8, have made their maiden flights. Among them, the Long March 5 has a low-Earth orbit carrying capacity of 25,000 kilograms, and a geosynchronous transfer orbit carrying capacity of 14,000 kilograms. History China used the Long March 1 rocket to launch its first satellite, Dong Fang Hong 1 (lit. "The East is Red 1"), into low Earth orbit on 24 April 1970, becoming the fifth nation to achieve independent launch capability. Early launches had an inconsistent record and focused on launching Chinese satellites. The Long March 1 was quickly replaced by the Long March 2 family of launchers. Entry into commercial launch market After the U.S. Space Shuttle Challenger was destroyed in 1986, a growing commercial backlog gave China the chance to enter the international launch market. In September 1988, U.S. President Ronald Reagan agreed to allow U.S. satellites to be launched on Chinese rockets. Reagan's satellite export policy would continue to 1998, through the Bush and Clinton administrations, with 20 or more approvals. AsiaSat 1, which had originally been launched by the Space Shuttle and retrieved by another Space Shuttle after a failure, was launched by a Long March 3 in 1990 as the first foreign payload on a Chinese rocket. However, major setbacks occurred in 1992–1996. The Long March 2E was designed with a defective payload fairing, which collapsed when faced with the rocket's excessive vibration. After just seven launches, the Long March 2E had destroyed the Optus B2 and Apstar 2 satellites and damaged AsiaSat 2. The Long March 3B also experienced a catastrophic failure in 1996, veering off course shortly after liftoff and crashing into a nearby village. At least 6 people were killed on the ground, and the Intelsat 708 satellite was also destroyed. A Long March 3 also experienced a partial failure in August 1996 during the launch of Chinasat-7. Six Long March rockets (Chang Zheng 2C/SD) launched 12 Iridium satellites, about a sixth of the original fleet. United States embargo on Chinese launches The involvement of United States companies in the Apstar 2 and Intelsat 708 investigations caused great controversy in the United States. In the Cox Report, the United States Congress accused Space Systems/Loral and Hughes Aircraft Company of transferring information that would improve the design of Chinese rockets and ballistic missiles. Although the Long March was allowed to launch its commercial backlog, the United States Department of State has not approved any satellite export licenses to China since 1998. ChinaSat 8, which had been scheduled for launch in April 1999 on a Long March 3B rocket, was placed in storage, sold to the Singapore company ProtoStar, and finally launched on a European Ariane 5 rocket in 2008. From 2005 to 2012, Long March rockets launched ITAR-free satellites made by the European company Thales Alenia Space.
However, Thales Alenia was forced to discontinue its ITAR-free satellite line in 2013 after the United States State Department fined a United States company for selling ITAR-controlled components. Thales Alenia Space had long complained that "every satellite nut and bolt" was being ITAR-restricted, and the European Space Agency (ESA) accused the United States of using ITAR to block exports to China instead of protecting technology. In 2016, an official at the United States Bureau of Industry and Security confirmed that "no U.S.-origin content, regardless of significance, regardless of whether it is incorporated into a foreign-made item, can go to China". The European aerospace industry is working on developing replacements for United States satellite components. Return to success After the failures of 1992–1996, the troublesome Long March 2E was withdrawn from the market. Design changes were made to improve the reliability of Long March rockets. From October 1996 to April 2009, the Long March rocket family delivered 75 consecutive successful launches, including several major milestones in space flight: On 15 October 2003, the Long March 2F rocket successfully launched the Shenzhou 5 spacecraft, carrying China's first astronaut into space. China became the third nation with independent human spaceflight capability, after the Soviet Union/Russia and the United States. On 1 June 2007, Long March rockets completed their 100th launch overall. On 24 October 2007, the Long March 3A successfully launched the Chang'e 1 lunar orbiter at 10:05 UTC from the Xichang Satellite Launch Center. The Long March rockets have subsequently maintained an excellent reliability record. Since 2010, Long March launches have made up 15–25% of all space launches globally. Growing domestic demand has maintained a healthy manifest. International deals have been secured through a package deal that bundles the launch with a Chinese satellite, circumventing the United States embargo. Payloads The Long March is China's primary expendable launch system family. The Shenzhou spacecraft and Chang'e lunar orbiters are also launched on the Long March rocket. The maximum payload for LEO is 25,000 kilograms (CZ-5B); the maximum payload for GTO is 14,000 kg (CZ-5). Variants of the next-generation Long March 5 will offer more payload in the future. Propellants Long March 1's first and second stages used nitric acid and unsymmetrical dimethylhydrazine (UDMH) propellants, and its upper stage used a spin-stabilized solid rocket motor. For Long March 2, Long March 3, and Long March 4, the main stages and associated liquid rocket boosters use dinitrogen tetroxide (N2O4) as the oxidizing agent and UDMH as the fuel. The upper stages (third stage) of Long March 3 rockets use YF-73 and YF-75 engines, using liquid hydrogen (LH2) as the fuel and liquid oxygen (LOX) as the oxidizer. The new generation of the Long March rocket family, Long March 5 and its derivations Long March 6, Long March 7, Long March 8, and Long March 10, use non-toxic LOX/kerosene and LOX/LH2 liquid propellants (except in some upper stages where UDMH/N2O4 continues to be used). Long March 9 is being developed as a LOX/CH4, or methalox, rocket. Long March 11 is a solid-fuel rocket.
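The practical effect of these propellant choices can be sketched with the ideal (Tsiolkovsky) rocket equation. The following is a minimal illustration, not a model of any actual Long March stage: the stage masses are arbitrary, and the specific-impulse values are generic ballpark figures for the two propellant classes.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, m0: float, mf: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(m0 / mf)

# Hypothetical stage: 100 t of propellant on top of 22 t of dry mass + payload.
m0, mf = 122.0, 22.0  # initial and final mass in tonnes (only the ratio matters)
for name, isp in [("N2O4/UDMH (~320 s vacuum)", 320.0),
                  ("LOX/LH2   (~440 s vacuum)", 440.0)]:
    print(f"{name}: delta-v = {delta_v(isp, m0, mf) / 1000:.2f} km/s")
```

With the same mass ratio, the higher-impulse cryogenic propellant yields about a third more delta-v (roughly 7.4 km/s versus 5.4 km/s here), which is why LOX/LH2 is favored for upper stages despite its handling difficulties.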
Variants The Long March rockets are organized into several series: Long March 1 Long March 2 Long March 3 Long March 4 Long March 5 Long March 6 Long March 7 Long March 8 Long March 9 Long March 10 Long March 11 Long March 12 The Long March 5, 6 and 7 are a newer generation of rockets sharing the new 1200 kN class YF-100 engines, which burn RP-1/LOX, unlike the earlier 2, 3 and 4 series, which use more expensive and dangerous N2O4/UDMH propellants. The 5 series is a heavy-lift launch vehicle, with a capacity of 25,000 kg to LEO, while the 6 series is a small-lift launch vehicle with a capacity of 1,500 kg to LEO, and the 7 series is a medium-lift launch vehicle, with a capacity of 14,000 kg to LEO. The Long March 10A is a partially reusable crew-rated rocket designed for LEO missions currently under development; the Long March 9 is initially designed to be partially reusable before becoming a fully reusable launcher. Long March 8 The Long March 8 is a new series of launch vehicles, which is geared towards Sun-synchronous orbit (SSO) launches. In early 2017, it was expected to be based on the Long March 7, have two solid fuel boosters, and first launch by the end of 2018. By 2019, it was intended to be partially reusable. The first stage will have legs and grid fins (like Falcon 9) and it may land with side boosters still attached. The first Long March 8 was rolled out for a test launch on or around 20 December 2020 and launched on 22 December 2020. The second flight, with no side boosters, occurred on 27 February 2022, sending a national record of 22 satellites into SSO. Future development Long March 9 The Long March 9 (LM-9, CZ-9, or Changzheng 9, Chinese: 长征九号) is a Chinese super-heavy carrier rocket concept proposed in 2018 that is currently in study. It is planned for a maximum payload capacity of 140,000 kg to low Earth orbit (LEO), 50,000 kg to trans-lunar injection or 44,000 kg to Mars. Its first flight is expected by 2028 or 2029 in preparation for a lunar landing sometime in the 2030s; a sample return mission from Mars has been proposed as its first major mission. It has been stated that around 70% of the hardware and components needed for a test flight are currently undergoing testing, with the first engine test to occur by the end of 2018. The design proposed in 2011 was a three-stage rocket, with the initial core having a diameter of 10 meters and using a cluster of four engines. Multiple variants of the rocket have been proposed: CZ-9 being the largest, with four liquid-fuel boosters and the aforementioned LEO payload capacity of 140,000 kg; CZ-9A having just two boosters and a LEO payload capacity of 100,000 kg; and finally CZ-9B having just the core stage and a LEO payload capacity of 50,000 kg. Approved in 2021, the Long March 9 is classified as a super heavy-lift launch vehicle. A very different design of LM-9 was announced in June 2021, with more engines and no external boosters. Payload capacities are 160 tonnes to LEO and 53 tonnes to TLI. Long March 10 The Long March 10, previously known as the "921 rocket", is under development for crewed lunar missions. The nickname "921" refers to the founding date of China's human spaceflight program. Like the Long March 5, it uses 5-meter (16.4 ft) diameter rocket bodies and YF-100K engines, although with seven engines on each of three cores. The launch weight is 2,187 tonnes, delivering 25 tonnes into trans-lunar injection.
The proposed crewed lunar mission uses two rockets; the crewed spacecraft and lunar landing stack launch separately and rendezvous in lunar orbit. Development was announced at the 2020 China Space Conference. As of 2022, the first flight of this triple-core rocket is targeted for 2027. Origins The Long March 1 rocket is derived from the earlier Chinese two-stage intermediate-range ballistic missile (IRBM) DF-4, or Dong Feng 4, and the Long March 2, Long March 3, and Long March 4 rocket families are derivatives of the Chinese two-stage intercontinental ballistic missile (ICBM) DF-5, or Dong Feng 5. However, as with its counterparts in the United States and Russia, the differing needs of space rockets and strategic missiles have caused their development to diverge. The main goal of a launch vehicle is to maximize payload, while for strategic missiles increased throw weight is much less important than the ability to launch quickly and to survive a first strike. This divergence has become clear in the next generation of Long March rockets, which use cryogenic propellants, in sharp contrast to the next generation of strategic missiles, which are mobile and solid-fuelled. The next generation of Long March rocket, the Long March 5 family, is a brand new design, while Long March 6 and Long March 7 can be seen as derivatives because they use the liquid rocket booster design of Long March 5 to build small-to-mid capacity launch vehicles. Launch sites There are four launch centers in China. They are: Jiuquan Satellite Launch Center Taiyuan Satellite Launch Center Wenchang Spacecraft Launch Site Xichang Satellite Launch Center Most of the commercial satellite launches of Long March vehicles have been from Xichang Satellite Launch Center, located in Xichang, Sichuan province. Wenchang Spacecraft Launch Site in Hainan province is under expansion and will be the main launch center for future commercial satellite launches. Long March launches also take place from the more military-oriented Jiuquan Satellite Launch Center in Gansu province, from which the crewed Shenzhou spacecraft also launches. Taiyuan Satellite Launch Center is located in Shanxi province and focuses on the launches of Sun-synchronous orbit (SSO) satellites. On 5 June 2019, China launched a Long March 11 rocket from a mobile launch platform in the Yellow Sea. Commercial launch services China markets launch services under the China Aerospace Science and Technology Corporation (China Great Wall Industry Corporation). Its efforts to launch communications satellites were dealt a blow in the mid-1990s after the United States stopped issuing export licenses to companies to allow them to launch on Chinese launch vehicles, out of fear that this would help China's military. In the face of this, Thales Alenia Space built the Chinasat-6B satellite with no components from the United States whatsoever. This allowed it to be launched on a Chinese launch vehicle without violating United States International Traffic in Arms Regulations (ITAR) restrictions. The launch, on a Long March 3B rocket, was successfully conducted on 5 July 2007. A Chinese Long March 2D launched Venezuela's VRSS-1 (Venezuelan Remote Sensing Satellite-1), "Francisco de Miranda", on 29 September 2012.
Notes See also China National Space Administration Shenzhou spacecraft Space program of China Tsien Hsue-shen Comparison of orbital launchers families Comparison of orbital launch systems Kaituozhe launcher Kuaizhou launcher References External links Extensive information on the Chinese space program China Great Wall Industry Corporation NASA links – substitute year for other years Long March Engines List Rocket and Space Technology Chinese Launch Vehicle Overview Rocket families Space launch vehicles of China 1970 in spaceflight 1970 in China 1970 in technology Projects established in 1970 Chinese brands Spacecraft that broke apart in space
Long March (rocket family)
[ "Technology" ]
2,985
[ "Space debris", "Spacecraft that broke apart in space" ]
55,172
https://en.wikipedia.org/wiki/Proteomics
Proteomics is the large-scale study of proteins. Proteins are vital macromolecules of all living organisms, with many functions such as the formation of structural fibers of muscle tissue, enzymatic digestion of food, or synthesis and replication of DNA. In addition, other kinds of proteins include antibodies that protect an organism from infection, and hormones that send important signals throughout the body. The proteome is the entire set of proteins produced or modified by an organism or system. Proteomics enables the identification of ever-increasing numbers of proteins. The proteome varies with time and with the distinct requirements, or stresses, that a cell or organism undergoes. Proteomics is an interdisciplinary domain that has benefited greatly from the genetic information of various genome projects, including the Human Genome Project. It covers the exploration of proteomes from the overall level of protein composition, structure, and activity, and is an important component of functional genomics. Proteomics generally denotes the large-scale experimental analysis of proteins and proteomes, but often refers specifically to protein purification and mass spectrometry. Indeed, mass spectrometry is the most powerful method for analysis of proteomes, both in large samples composed of millions of cells and in single cells. History and etymology The first studies of proteins that could be regarded as proteomics began in 1975, after the introduction of the two-dimensional gel and mapping of the proteins from the bacterium Escherichia coli. Proteome is a blend of the words "protein" and "genome". It was coined in 1994 by then-PhD student Marc Wilkins at Macquarie University, which founded the first dedicated proteomics laboratory in 1995. Complexity of the problem After genomics and transcriptomics, proteomics is the next step in the study of biological systems. It is more complicated than genomics because an organism's genome is more or less constant, whereas proteomes differ from cell to cell and from time to time. Distinct genes are expressed in different cell types, which means that even the basic set of proteins produced in a cell must be identified. In the past, this phenomenon was assessed by RNA analysis, which was found to lack correlation with protein content. It is now known that mRNA is not always translated into protein, and the amount of protein produced for a given amount of mRNA depends on the gene it is transcribed from and on the cell's physiological state. Proteomics confirms the presence of the protein and provides a direct measure of its quantity. Post-translational modifications Not only does the translation from mRNA cause differences, but many proteins also are subjected to a wide variety of chemical modifications after translation. The most common and widely studied post-translational modifications include phosphorylation and glycosylation. Many of these post-translational modifications are critical to the protein's function. Phosphorylation One such modification is phosphorylation, which happens to many enzymes and structural proteins in the process of cell signaling. The addition of a phosphate to particular amino acids (most commonly serine and threonine, mediated by serine-threonine kinases, or more rarely tyrosine, mediated by tyrosine kinases) causes a protein to become a target for binding or interacting with a distinct set of other proteins that recognize the phosphorylated domain.
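In mass spectrometry-based phosphoproteomics, each phosphorylation event is visible as a characteristic mass shift: the added HPO3 group increases a peptide's monoisotopic mass by a fixed amount, which can be computed from standard atomic masses:

```latex
\Delta m = m(\mathrm{HPO_3}) = 1.00783 + 30.97376 + 3 \times 15.99491 \approx 79.9663\ \mathrm{Da}
```

Database search engines typically identify phosphopeptides by allowing this +79.9663 Da shift as a variable modification on serine, threonine, and tyrosine residues.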
Because protein phosphorylation is one of the most studied protein modifications, many "proteomic" efforts are geared to determining the set of phosphorylated proteins in a particular cell or tissue-type under particular circumstances. This alerts the scientist to the signaling pathways that may be active in that instance. Ubiquitination Ubiquitin is a small protein that may be affixed to certain protein substrates by enzymes called E3 ubiquitin ligases. Determining which proteins are poly-ubiquitinated helps understand how protein pathways are regulated. This is, therefore, an additional legitimate "proteomic" study. Similarly, once a researcher determines which substrates are ubiquitinated by each ligase, determining the set of ligases expressed in a particular cell type is helpful. Additional modifications In addition to phosphorylation and ubiquitination, proteins may be subjected to (among others) methylation, acetylation, glycosylation, oxidation, and nitrosylation. Some proteins undergo all these modifications, often in time-dependent combinations. This illustrates the potential complexity of studying protein structure and function. Distinct proteins are made under distinct settings A cell may make different sets of proteins at different times or under different conditions, for example during development, cellular differentiation, cell cycle, or carcinogenesis. Further increasing proteome complexity, as mentioned, most proteins are able to undergo a wide range of post-translational modifications. Therefore, a "proteomics" study may become complex very quickly, even if the topic of study is restricted. In more ambitious settings, such as when a biomarker for a specific cancer subtype is sought, the proteomics scientist might elect to study multiple blood serum samples from multiple cancer patients to minimise confounding factors and account for experimental noise. Thus, complicated experimental designs are sometimes necessary to account for the dynamic complexity of the proteome. Limitations of genomics and proteomics studies Proteomics gives a different level of understanding than genomics for many reasons: the level of transcription of a gene gives only a rough estimate of its level of translation into a protein. An mRNA produced in abundance may be degraded rapidly or translated inefficiently, resulting in a small amount of protein. as mentioned above, many proteins experience post-translational modifications that profoundly affect their activities; for example, some proteins are not active until they become phosphorylated. Methods such as phosphoproteomics and glycoproteomics are used to study post-translational modifications. many transcripts give rise to more than one protein, through alternative splicing or alternative post-translational modifications. many proteins form complexes with other proteins or RNA molecules, and only function in the presence of these other molecules. protein degradation rate plays an important role in protein content. Reproducibility. One major factor affecting reproducibility in proteomics experiments is the simultaneous elution of many more peptides than mass spectrometers can measure. This causes stochastic differences between experiments due to data-dependent acquisition of tryptic peptides. 
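To make the "tryptic peptides" of a bottom-up workflow concrete, the sketch below performs a naive in-silico digestion (cleaving after lysine or arginine, except before proline, with no missed cleavages) and computes each peptide's monoisotopic mass from the standard residue masses. The input sequence is an arbitrary example, not a specific protein:

```python
import re

# Monoisotopic residue masses in daltons; a peptide's mass is the sum of its
# residues plus one water (the N-terminal H and C-terminal OH).
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
           "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
           "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
           "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER = 18.01056

def tryptic_peptides(seq: str) -> list[str]:
    """Cleave C-terminal to K or R, except when the next residue is proline."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", seq) if p]

def monoisotopic_mass(peptide: str) -> float:
    return sum(RESIDUE[aa] for aa in peptide) + WATER

for pep in tryptic_peptides("MKWVTFISLLFLFSSAYSR"):  # arbitrary test sequence
    print(f"{pep:>20s}  {monoisotopic_mass(pep):10.4f} Da")
```

Because the instrument samples only some of the many peptides co-eluting at any moment, two runs of the same digest can yield overlapping but non-identical peptide lists, which is the stochastic effect described above.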
Although early large-scale shotgun proteomics analyses showed considerable variability between laboratories, presumably due in part to technical and experimental differences among them, reproducibility has been improved in more recent mass spectrometry analysis, particularly on the protein level. Notably, targeted proteomics shows increased reproducibility and repeatability compared with shotgun methods, although at the expense of data density and effectiveness. Data quality. Proteomic analysis is highly amenable to automation and large data sets are created, which are processed by software algorithms. Filter parameters are used to reduce the number of false hits, but they cannot be completely eliminated. Scientists have expressed the need for awareness that proteomics experiments should adhere to the criteria of analytical chemistry (sufficient data quality, sanity check, validation). Methods of studying proteins In proteomics, there are multiple methods to study proteins. Generally, proteins may be detected by using either antibodies (immunoassays), electrophoretic separation or mass spectrometry. If a complex biological sample is analyzed, either a very specific antibody needs to be used in quantitative dot blot analysis (QDB), or biochemical separation then needs to be used before the detection step, as there are too many analytes in the sample to perform accurate detection and quantification. Protein detection with antibodies (immunoassays) Antibodies to particular proteins, or their modified forms, have been used in biochemistry and cell biology studies. These are among the most common tools used by molecular biologists today. There are several specific techniques and protocols that use antibodies for protein detection. The enzyme-linked immunosorbent assay (ELISA) has been used for decades to detect and quantitatively measure proteins in samples. The western blot may be used for detection and quantification of individual proteins, where, in an initial step, a complex protein mixture is separated using SDS-PAGE and then the protein of interest is identified using an antibody. Modified proteins may be studied by developing an antibody specific to that modification. For example, some antibodies only recognize certain proteins when they are tyrosine-phosphorylated; these are known as phospho-specific antibodies. Also, there are antibodies specific to other modifications. These may be used to determine the set of proteins that have undergone the modification of interest. Immunoassays can also be carried out using recombinantly generated immunoglobulin derivatives or synthetically designed protein scaffolds that are selected for high antigen specificity. Such binders include single domain antibody fragments (Nanobodies), designed ankyrin repeat proteins (DARPins) and aptamers. Disease detection at the molecular level is driving the emerging revolution of early diagnosis and treatment. A challenge facing the field is that protein biomarkers for early diagnosis may be present in very low abundance. The lower limit of detection with conventional immunoassay technology is the upper femtomolar range (10^-13 M). Digital immunoassay technology has improved detection sensitivity three logs, to the attomolar range (10^-16 M). This capability has the potential to open new advances in diagnostics and therapeutics, but such technologies have been relegated to manual procedures that are not well suited for efficient routine use.
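The gap between the femtomolar and attomolar regimes is easier to appreciate as molecule counts; the 100 µL sample volume below is an arbitrary illustrative choice:

```latex
N = C \, V \, N_A, \qquad
N = (10^{-16}\,\mathrm{mol/L}) \times (10^{-4}\,\mathrm{L}) \times (6.022 \times 10^{23}\,\mathrm{mol^{-1}}) \approx 6 \times 10^{3}
```

An attomolar-range assay must therefore register only a few thousand target molecules in the sample (versus a few million at the conventional femtomolar limit), against a vastly more abundant background of other proteins, which is why single-molecule "digital" counting is used at this limit.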
Antibody-free protein detection While protein detection with antibodies is still very common in molecular biology, other methods that do not rely on an antibody have been developed as well. These methods offer various advantages; for instance, they often are able to determine the sequence of a protein or peptide, they may have higher throughput than antibody-based methods, and they sometimes can identify and quantify proteins for which no antibody exists. Detection methods One of the earliest methods for protein analysis has been Edman degradation (introduced in 1967), in which a single peptide is subjected to multiple steps of chemical degradation to resolve its sequence. These early methods have mostly been supplanted by technologies that offer higher throughput. More recently implemented methods use mass spectrometry-based techniques, a development that was made possible by the "soft ionization" methods developed in the 1980s, such as matrix-assisted laser desorption/ionization (MALDI) and electrospray ionization (ESI). These methods gave rise to the top-down and the bottom-up proteomics workflows, where often additional separation is performed before analysis (see below). Separation methods For the analysis of complex biological samples, a reduction of sample complexity is required. This may be performed off-line by one-dimensional or two-dimensional separation. More recently, on-line methods have been developed where individual peptides (in bottom-up proteomics approaches) are separated using reversed-phase chromatography and then directly ionized using ESI; the direct coupling of separation and analysis explains the term "on-line" analysis. Hybrid technologies Several hybrid technologies use antibody-based purification of individual analytes and then perform mass spectrometric analysis for identification and quantification. Examples of these methods are the MSIA (mass spectrometric immunoassay), developed by Randall Nelson in 1995, and the SISCAPA (Stable Isotope Standard Capture with Anti-Peptide Antibodies) method, introduced by Leigh Anderson in 2004. Current research methodologies Fluorescence two-dimensional differential gel electrophoresis (2-D DIGE) may be used to quantify variation in the 2-D DIGE process and establish statistically valid thresholds for assigning quantitative changes between samples. Comparative proteomic analysis may reveal the role of proteins in complex biological systems, including reproduction. For example, treatment with the insecticide triazophos causes an increase in the content of brown planthopper (Nilaparvata lugens (Stål)) male accessory gland proteins (Acps) that may be transferred to females via mating, causing an increase in the fecundity (i.e. birth rate) of females. To identify changes in the types of accessory gland proteins (Acps) and reproductive proteins that mated female planthoppers received from male planthoppers, researchers conducted a comparative proteomic analysis of mated N. lugens females. The results indicated that these proteins participate in the reproductive process of N. lugens adult females and males. Proteome analysis of Arabidopsis peroxisomes has been established as the major unbiased approach for identifying new peroxisomal proteins on a large scale. There are many approaches to characterizing the human proteome, which is estimated to contain between 20,000 and 25,000 non-redundant proteins.
There are many approaches to characterizing the human proteome, which is estimated to contain between 20,000 and 25,000 non-redundant proteins. The number of unique protein species is likely to increase by between 50,000 and 500,000 due to RNA splicing and proteolysis events, and when post-translational modifications are also considered, the total number of unique human proteins is estimated to range in the low millions. In addition, the first promising attempts to decipher the proteome of animal tumors have recently been reported. Proteome analysis has also been used as a functional method in Macrobrachium rosenbergii protein profiling.

High-throughput proteomic technologies
Proteomics has steadily gained momentum over the past decade with the evolution of several approaches. A few of these are new, while others build on traditional methods. Mass spectrometry-based methods, affinity proteomics, and microarrays are the most common technologies for large-scale study of proteins.

Mass spectrometry and protein profiling
There are two mass spectrometry-based methods currently used for protein profiling. The more established and widespread method uses high-resolution, two-dimensional electrophoresis to separate proteins from different samples in parallel, followed by selection and staining of differentially expressed proteins to be identified by mass spectrometry. Despite the advances in 2-DE and its maturity, it has its limits as well. The central concern is the inability to resolve all the proteins within a sample, given their dramatic range in expression level and differing properties. The combination of pore size and protein charge, size, and shape can greatly determine migration rate, which leads to other complications. The second, quantitative approach uses stable isotope tags to differentially label proteins from two different complex mixtures. Here, the proteins within a complex mixture are labeled isotopically first and then digested to yield labeled peptides. The labeled mixtures are then combined, and the peptides are separated by multidimensional liquid chromatography and analyzed by tandem mass spectrometry. Isotope-coded affinity tag (ICAT) reagents are widely used isotope tags. In this method, the cysteine residues of proteins are covalently attached to the ICAT reagent, thereby reducing the complexity of the mixtures by omitting the non-cysteine residues. Quantitative proteomics using stable isotopic tagging is an increasingly useful tool. Firstly, chemical reactions have been used to introduce tags into specific sites or proteins for the purpose of probing specific protein functionalities. The isolation of phosphorylated peptides has been achieved using isotopic labeling and selective chemistries to capture that fraction of protein from the complex mixture. Secondly, the ICAT technology was used to differentiate between partially purified or purified macromolecular complexes such as the large RNA polymerase II pre-initiation complex and proteins complexed with a yeast transcription factor. Thirdly, ICAT labeling was recently combined with chromatin isolation to identify and quantify chromatin-associated proteins. Finally, ICAT reagents are useful for proteomic profiling of cellular organelles and specific cellular fractions.
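As a rough illustration of the isotope-tagging idea, the sketch below computes ICAT-style relative abundances: each peptide is observed as a light/heavy intensity pair, and each protein is summarized by the median of its peptide-level log2 ratios. The peptide sequences and intensities are invented, and real pipelines include normalization and error modeling that are omitted here.

```python
# A minimal sketch of ICAT-style relative quantification from
# light/heavy intensity pairs. All values are hypothetical.
import math
from collections import defaultdict
from statistics import median

# (protein, peptide, light_intensity, heavy_intensity)
observations = [
    ("P1", "LCYVALDFEQEMATAASSSSLEK", 1.0e6, 2.1e6),
    ("P1", "SYELPDGQVITIGNER",        0.9e6, 1.8e6),
    ("P2", "AVFPSIVGRPR",             2.0e6, 1.9e6),
]

ratios = defaultdict(list)
for protein, _peptide, light, heavy in observations:
    ratios[protein].append(math.log2(heavy / light))

for protein, values in sorted(ratios.items()):
    print(f"{protein}: median log2(heavy/light) = {median(values):.2f}")
```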
Another quantitative approach is the accurate mass and time (AMT) tag approach developed by Richard D. Smith and coworkers at Pacific Northwest National Laboratory. In this approach, increased throughput and sensitivity are achieved by avoiding the need for tandem mass spectrometry and making use of precisely determined separation-time information and highly accurate mass determinations for peptide and protein identifications.

Affinity proteomics
Affinity proteomics uses antibodies or other affinity reagents (such as oligonucleotide-based aptamers) as protein-specific detection probes. Currently, this method can interrogate several thousand proteins, typically from biofluids such as plasma, serum, or cerebrospinal fluid (CSF). A key differentiator for this technology is the ability to analyze hundreds or thousands of samples in a reasonable timeframe (a matter of days or weeks); mass spectrometry-based methods are not scalable to this level of sample throughput for proteomics analyses.

Protein chips
Balancing the use of mass spectrometers in proteomics and in medicine is the use of protein microarrays. The aim behind protein microarrays is to print thousands of protein-detecting features for the interrogation of biological samples. Antibody arrays are an example in which a host of different antibodies are arrayed to detect their respective antigens from a sample of human blood. Another approach is the arraying of multiple protein types for the study of properties like protein–DNA, protein–protein and protein–ligand interactions. Ideally, functional proteomic arrays would contain the entire complement of the proteins of a given organism. The first version of such arrays consisted of 5,000 purified proteins from yeast deposited onto glass microscope slides. Despite the success of the first chip, implementing protein arrays remained a greater challenge. Proteins are inherently much more difficult to work with than DNA: they have a broad dynamic range, are less stable than DNA, and their structure is difficult to preserve on glass slides, though it is essential for most assays. The global ICAT technology has striking advantages over protein chip technologies.

Reverse-phased protein microarrays
This is a newer and promising microarray application for the diagnosis, study, and treatment of complex diseases such as cancer. The technology merges laser capture microdissection (LCM) with microarray technology to produce reverse-phase protein microarrays. In this type of microarray, the proteins themselves are immobilized, with the intent of capturing various stages of disease within an individual patient. When used with LCM, reverse-phase arrays can monitor the fluctuating state of the proteome among different cell populations within a small area of human tissue. This is useful for profiling the status of cellular signaling molecules among a cross-section of tissue that includes both normal and cancerous cells. This approach is useful in monitoring the status of key factors in normal prostate epithelium and invasive prostate cancer tissues. LCM dissects these tissues, and protein lysates are arrayed onto nitrocellulose slides, which are probed with specific antibodies. This method can track all kinds of molecular events and can compare diseased and healthy tissues within the same patient, enabling the development of treatment strategies and diagnosis. The ability to acquire proteomic snapshots of neighboring cell populations using reverse-phase microarrays in conjunction with LCM has a number of applications beyond the study of tumors.
The approach can provide insights into the normal physiology and pathology of all tissues and is invaluable for characterizing developmental processes and anomalies.

Protein Detection via Bioorthogonal Chemistry
Recent advancements in bioorthogonal chemistry have revealed applications in protein analysis. The use of small organic molecules that react selectively with proteins has yielded extensive methods to tag them. Unnatural amino acids and various functional groups represent new and growing technologies in proteomics. Specific biomolecules that can be metabolized in cells or tissues are inserted into proteins or glycans. Such a molecule carries an affinity tag; by modifying the protein, it allows the protein to be detected. Azidohomoalanine (AHA) exploits this strategy: it is incorporated into proteins via the methionyl-tRNA synthetase, and its affinity tag allows newly synthesized proteins to be detected. This has allowed AHA to assist in determining the identity of newly synthesized proteins created in response to perturbations and in identifying proteins secreted by cells. Recent studies using ketone and aldehyde condensations show that they are best suited for in vitro or cell-surface labeling. However, using ketones and aldehydes as bioorthogonal reporters revealed slow kinetics, indicating that, while effective for labeling, reagent concentrations must be high. Certain proteins can be detected via their reactivity toward azide groups: non-proteinogenic amino acids can bear azide groups, which react with phosphines in the Staudinger ligation. This reaction has already been used to label other biomolecules in living cells and animals. The bioorthogonal field is expanding and is driving further applications within proteomics. It is worth noting the limitations and benefits: rapid reactions can form bioconjugates with low amounts of reactants, whereas reactions with slow kinetics, like aldehyde and ketone condensations, are effective but require high concentrations, making them cost-inefficient.

Practical applications
New drug discovery
One major development to come from the study of human genes and proteins has been the identification of potential new drugs for the treatment of disease. This relies on genome and proteome information to identify proteins associated with a disease, which computer software can then use as targets for new drugs. For example, if a certain protein is implicated in a disease, its 3D structure provides the information needed to design drugs that interfere with the action of the protein. A molecule that fits the active site of an enzyme but cannot be released by the enzyme inactivates the enzyme. This is the basis of new drug-discovery tools, which aim to find new drugs to inactivate proteins involved in disease. As genetic differences among individuals are found, researchers expect to use these techniques to develop personalized drugs that are more effective for the individual. Proteomics is also used to reveal complex plant–insect interactions that help identify candidate genes involved in the defensive response of plants to herbivory. A branch of proteomics called chemoproteomics provides numerous tools and techniques to detect protein targets of drugs.

Interaction proteomics and protein networks
Interaction proteomics is the analysis of protein interactions at scales ranging from binary interactions to the proteome- or network-wide level. Most proteins function via protein–protein interactions, and one goal of interaction proteomics is to identify binary protein interactions, protein complexes, and interactomes.
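As a toy illustration of the step from binary interactions to complexes, the sketch below treats the interactome as an undirected graph and reports its connected components; the protein names and edges are invented, and real complex detection relies on far more sophisticated clustering of weighted networks.

```python
# A minimal sketch: derive candidate complexes from binary interactions
# by taking connected components of the interaction graph. Toy data.
from collections import defaultdict, deque

binary_interactions = [("A", "B"), ("B", "C"), ("D", "E")]

graph = defaultdict(set)
for p, q in binary_interactions:
    graph[p].add(q)
    graph[q].add(p)

def connected_components(g):
    seen, components = set(), []
    for start in g:
        if start in seen:
            continue
        queue, component = deque([start]), set()
        while queue:
            node = queue.popleft()
            if node in component:
                continue
            component.add(node)
            queue.extend(g[node] - component)
        seen |= component
        components.append(component)
    return components

# Two candidate complexes: {A, B, C} and {D, E}
print(connected_components(dict(graph)))
```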
Several methods are available to probe protein–protein interactions. While the most traditional method is yeast two-hybrid analysis, a powerful emerging method is affinity purification followed by protein mass spectrometry using tagged protein baits. Other methods include surface plasmon resonance (SPR), protein microarrays, dual polarisation interferometry, microscale thermophoresis, kinetic exclusion assay, and experimental methods such as phage display, as well as in silico computational methods. Knowledge of protein–protein interactions is especially useful in regard to biological networks and systems biology, for example in cell signaling cascades and gene regulatory networks (GRNs, where knowledge of protein–DNA interactions is also informative). Proteome-wide analysis of protein interactions, and integration of these interaction patterns into larger biological networks, is crucial towards understanding systems-level biology.

Expression proteomics
Expression proteomics includes the analysis of protein expression at a larger scale. It helps identify the main proteins in a particular sample and those proteins differentially expressed in related samples, such as diseased versus healthy tissue. If a protein is found only in a diseased sample, it can be a useful drug target or diagnostic marker. Proteins with the same or similar expression profiles may also be functionally related. Technologies such as 2D-PAGE and mass spectrometry are used in expression proteomics.

Biomarkers
The National Institutes of Health has defined a biomarker as "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention." Understanding the proteome, the structure and function of each protein, and the complexities of protein–protein interactions is critical for developing the most effective diagnostic techniques and disease treatments in the future. For example, proteomics is highly useful in the identification of candidate biomarkers (proteins in body fluids that are of value for diagnosis), the identification of bacterial antigens that are targeted by the immune response, and the identification of possible immunohistochemistry markers of infectious or neoplastic diseases. An interesting use of proteomics is the use of specific protein biomarkers to diagnose disease. A number of techniques allow testing for proteins produced during a particular disease, which helps to diagnose the disease quickly. Techniques include the western blot, immunohistochemical staining, enzyme-linked immunosorbent assay (ELISA), and mass spectrometry. Secretomics, a subfield of proteomics that studies secreted proteins and secretion pathways using proteomic approaches, has recently emerged as an important tool for the discovery of biomarkers of disease.

Proteogenomics
In proteogenomics, proteomic technologies such as mass spectrometry are used to improve gene annotations. Parallel analysis of the genome and the proteome facilitates discovery of post-translational modifications and proteolytic events, especially when comparing multiple species (comparative proteogenomics).

Structural proteomics
Structural proteomics includes the analysis of protein structures at large scale. It compares protein structures and helps identify the functions of newly discovered genes. Structural analysis also helps reveal where drugs bind to proteins and where proteins interact with each other.
This understanding is achieved using different technologies such as X-ray crystallography and NMR spectroscopy.

Bioinformatics for proteomics (proteome informatics)
Much proteomics data is collected with the help of high-throughput technologies such as mass spectrometry and microarrays. It would often take weeks or months to analyze the data and perform comparisons by hand. For this reason, biologists and chemists are collaborating with computer scientists and mathematicians to create programs and pipelines to computationally analyze protein data. Using bioinformatics techniques, researchers are capable of faster analysis and data storage. A good place to find lists of current programs and databases is the ExPASy bioinformatics resource portal. The applications of bioinformatics-based proteomics include medicine, disease diagnosis, biomarker identification, and many more.

Protein identification
Mass spectrometry and microarrays produce peptide fragmentation information but do not give identification of the specific proteins present in the original sample. Due to the lack of specific protein identification, past researchers were forced to decipher the peptide fragments themselves. However, there are currently programs available for protein identification. These programs take the peptide sequences output from mass spectrometry and microarrays and return information about matching or similar proteins. This is done through algorithms that perform alignments with proteins from known databases such as UniProt and PROSITE to predict, with a degree of certainty, which proteins are in the sample.

Protein structure
The biomolecular structure forms the 3D configuration of the protein. Understanding a protein's structure aids in identification of the protein's interactions and function. It used to be that the 3D structure of proteins could only be determined using X-ray crystallography and NMR spectroscopy. As of 2017, cryo-electron microscopy is a leading technique, solving difficulties with crystallization (in X-ray crystallography) and conformational ambiguity (in NMR); resolution was 2.2 Å as of 2015. Now, through bioinformatics, there are computer programs that can in some cases predict and model the structure of proteins. These programs use the chemical properties of amino acids and the structural properties of known proteins to predict the 3D model of sample proteins. This also allows scientists to model protein interactions on a larger scale. In addition, biomedical engineers are developing methods to factor the flexibility of protein structures into comparisons and predictions.

Post-translational modifications
Most programs available for protein analysis are not written for proteins that have undergone post-translational modifications. Some programs will accept post-translational modifications to aid in protein identification but then ignore the modification during further protein analysis. It is important to account for these modifications, since they can affect the protein's structure. In turn, computational analysis of post-translational modifications has gained the attention of the scientific community. Current post-translational modification programs are only predictive. Chemists, biologists, and computer scientists are working together to create and introduce new pipelines that allow for analysis of post-translational modifications that have been experimentally identified for their effect on the protein's structure and function.
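In the spirit of the identification programs described under "Protein identification" above, here is a deliberately simplified sketch of matching observed peptides against a protein database by substring search. The sequences are toy examples rather than real UniProt entries, and production tools additionally score matches probabilistically and control the false discovery rate.

```python
# A minimal sketch of peptide-to-protein matching by substring search.
# Both the database and the observed peptides are invented toy data.
database = {
    "PROT_A": "MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFKDLGE",
    "PROT_B": "MSEQNNTEMTFQIQRIYTKDISFEAPNAPHVFQKDWQPEV",
}

observed_peptides = ["RGVFRRDTHK", "FEAPNAPHVF", "NOTINANYPROT"]

for peptide in observed_peptides:
    hits = [name for name, seq in database.items() if peptide in seq]
    print(f"{peptide}: {hits if hits else 'no match'}")
```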
Computational methods in studying protein biomarkers
One example of the use of bioinformatics and of computational methods is the study of protein biomarkers. Computational predictive models have shown that extensive and diverse feto-maternal protein trafficking occurs during pregnancy and can be readily detected non-invasively in maternal whole blood. This computational approach circumvented a major limitation to fetal proteomic analysis of maternal blood, namely the abundance of maternal proteins interfering with the detection of fetal proteins. Computational models can use fetal gene transcripts previously identified in maternal whole blood to create a comprehensive proteomic network of the term neonate. Such work shows that the fetal proteins detected in a pregnant woman's blood originate from a diverse group of tissues and organs of the developing fetus. The proteomic networks contain many biomarkers that are proxies for development and illustrate the potential clinical application of this technology as a way to monitor normal and abnormal fetal development. An information-theoretic framework has also been introduced for biomarker discovery, integrating biofluid and tissue information. This new approach takes advantage of the functional synergy between certain biofluids and tissues, with the potential for clinically significant findings not possible if tissues and biofluids were considered individually. By conceptualizing tissue–biofluid pairs as information channels, significant biofluid proxies can be identified and then used for the guided development of clinical diagnostics. Candidate biomarkers are then predicted based on information-transfer criteria across the tissue–biofluid channels. Significant biofluid–tissue relationships can be used to prioritize the clinical validation of biomarkers.
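One speculative way to make this information-channel framing concrete is to estimate the mutual information between a marker's presence in a tissue and in a biofluid across samples, as in the sketch below; the binary presence/absence encoding and the data are assumptions chosen for illustration, not the published method.

```python
# A speculative sketch of the tissue-biofluid "channel" idea: mutual
# information between presence calls across samples. Data are invented.
import math
from collections import Counter

tissue   = [1, 1, 0, 1, 0, 0, 1, 0]  # marker detected in tissue, per sample
biofluid = [1, 1, 0, 1, 0, 1, 1, 0]  # marker detected in biofluid, per sample

def mutual_information(xs, ys):
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

print(f"MI(tissue; biofluid) = {mutual_information(tissue, biofluid):.3f} bits")
```

Under this framing, markers whose tissue and biofluid signals share more information would be prioritized for clinical validation.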
Emerging trends
A number of emerging concepts have the potential to improve the current features of proteomics. Obtaining absolute quantification of proteins and monitoring post-translational modifications are two tasks that impact the understanding of protein function in healthy and diseased cells. Further, the throughput and sensitivity of proteomic assays, often measured as samples analyzed per day and depth of proteome coverage, respectively, have driven the development of cutting-edge instrumentation and methodologies. For many cellular events, protein concentrations do not change; rather, protein function is modulated by post-translational modifications (PTMs). Methods of monitoring PTMs are an underdeveloped area in proteomics. Selecting a particular subset of proteins for analysis substantially reduces protein complexity, making it advantageous for diagnostic purposes where blood is the starting material. Another important aspect of proteomics, not yet addressed, is that proteomics methods should focus on studying proteins in the context of their environment. The increasing use of chemical cross-linkers, introduced into living cells to fix protein–protein, protein–DNA and other interactions, may partially ameliorate this problem. The challenge is to identify suitable methods of preserving relevant interactions. Another goal for studying proteins is the development of more sophisticated methods to image proteins and other molecules in living cells and in real time.

Systems biology
Advances in quantitative proteomics would clearly enable more in-depth analysis of cellular systems. Another research frontier is the analysis of single cells, and of protein covariation across single cells, which reflects biological processes such as protein complex formation, immune functions, the cell cycle, and the priming of cancer cells for drug resistance. Biological systems are subject to a variety of perturbations (cell cycle, cellular differentiation, carcinogenesis, environment (biophysical), etc.). Transcriptional and translational responses to these perturbations result in functional changes to the proteome in response to the stimulus. Therefore, describing and quantifying proteome-wide changes in protein abundance is crucial towards understanding biological phenomena more holistically, on the level of the entire system. In this way, proteomics can be seen as complementary to genomics, transcriptomics, epigenomics, metabolomics, and other -omics approaches in integrative analyses attempting to define biological phenotypes more comprehensively. As an example, The Cancer Proteome Atlas provides quantitative protein expression data for ~200 proteins in over 4,000 tumor samples with matched transcriptomic and genomic data from The Cancer Genome Atlas. Similar datasets in other cell types, tissue types, and species, particularly using deep shotgun mass spectrometry, will be an immensely important resource for research in fields like cancer biology, developmental and stem cell biology, medicine, and evolutionary biology.

Human plasma proteome
Characterizing the human plasma proteome has become a major goal in the proteomics arena, but it is also the most challenging proteome of all human tissues. Plasma contains immunoglobulins, cytokines, protein hormones, and secreted proteins indicative of infection, on top of resident hemostatic proteins. It also contains tissue-leakage proteins, due to the blood circulating through different tissues in the body. The blood thus contains information on the physiological state of all tissues, and, combined with its accessibility, this makes the blood proteome invaluable for medical purposes. Characterizing the proteome of blood plasma is nevertheless a daunting challenge: the plasma proteome encompasses a dynamic range of more than 10¹⁰ between the most abundant protein (albumin) and the least abundant (some cytokines), which is thought to be one of the main challenges for proteomics. Temporal and spatial dynamics further complicate the study of the human plasma proteome: the turnover of some proteins is much faster than that of others, and the protein content of an artery may vary substantially from that of a vein. All these differences make even the simplest proteomic task, cataloging the proteome, seem out of reach. To tackle this problem, priorities need to be established. Capturing the most meaningful subset of proteins among the entire proteome to generate a diagnostic tool is one such priority. Secondly, since cancer is associated with enhanced glycosylation of proteins, methods that focus on this part of proteins will also be useful. Again, multiparameter analysis best reveals a pathological state. As these technologies improve, disease profiles should be continually related to the respective gene expression changes. Due to the above-mentioned problems, plasma proteomics long remained challenging. However, technological advancements and continuous developments seem to have led to a revival of plasma proteomics, as shown recently by a technology called plasma proteome profiling. Using such technologies, researchers have been able to investigate inflammation processes in mice and the heritability of plasma proteomes, and to show the effect of a common lifestyle change such as weight loss on the plasma proteome.
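The quoted dynamic range can be made concrete with a line of arithmetic, as in the sketch below; the albumin concentration is a typical literature value used here as an assumption, and the low end is simply derived from the stated 10¹⁰ range.

```python
# Illustrative arithmetic for the >10^10 plasma dynamic range noted above.
# The albumin concentration is an assumed, typical value.
import math

albumin_molar = 6e-4          # most abundant plasma protein (approx.)
dynamic_range = 1e10          # from the text above
low_end_molar = albumin_molar / dynamic_range

print(f"low-abundance end: ~{low_end_molar:.0e} M, "
      f"{math.log10(dynamic_range):.0f} orders of magnitude below albumin")
```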
Journals
Numerous journals are dedicated to the field of proteomics and related areas. Note that journals dealing with proteins are usually more focused on structure and function, while proteomics journals are more focused on the large-scale analysis of whole proteomes or at least large sets of proteins. Some relevant proteomics journals are listed below (with their publishers).
Molecular and Cellular Proteomics (ASBMB)
Journal of Proteome Research (ACS)
Journal of Proteomics (Elsevier)
Proteomics (Wiley)
See also
Activity-based proteomics
Bottom-up proteomics
Cytomics
Functional genomics
Heat stabilization
Human proteome project
Immunoproteomics
List of biological databases
List of omics topics in biology
PEGylation
Phosphoproteomics
Protein production
Proteogenomics
Proteomic chemistry
Secretomics
Shotgun proteomics
Top-down proteomics
Systems biology
Yeast two-hybrid system
TCP-seq
Glycomics
Protein databases
Human Protein Atlas
Human Protein Reference Database
National Center for Biotechnology Information (NCBI)
PeptideAtlas
Protein Data Bank (PDB)
Protein Information Resource (PIR)
Proteomics Identifications Database (PRIDE)
Proteopedia—The collaborative, 3D encyclopedia of proteins and other molecules
Swiss-Prot
UniProt
Research centers
European Bioinformatics Institute
Netherlands Proteomics Centre (NPC)
References
External links
Genomics
Omics
Proteomics
[ "Biology" ]
7,722
[ "Bioinformatics", "Omics" ]
55,174
https://en.wikipedia.org/wiki/Falun%20Gong
Falun Gong or Falun Dafa is a new religious movement. Falun Gong was founded by its leader Li Hongzhi in China in the early 1990s. Falun Gong has its global headquarters in Dragon Springs, a compound in Deerpark, New York, United States, near the residence of Li Hongzhi. Led by Li Hongzhi, who is viewed by adherents as a deity-like figure, Falun Gong practitioners operate a variety of organizations in the United States and elsewhere, including the dance troupe Shen Yun. They are known for their opposition to the Chinese Communist Party (CCP), espousing anti-evolutionary views, opposition to homosexuality and feminism, and rejection of modern medicine, among other views described as "ultra-conservative". Falun Gong also operates the Epoch Media Group, which is known for its subsidiaries, New Tang Dynasty Television and The Epoch Times newspaper. The latter has been broadly noted as a politically far-right media entity, and it has received significant attention in the United States for promoting conspiracy theories, such as QAnon and anti-vaccine misinformation, and for producing advertisements for former U.S. President Donald Trump. It has also drawn attention in Europe for promoting far-right politicians, primarily in France and Germany.

Falun Gong emerged from the qigong movement in China in 1992, combining meditation, qigong exercises, and moral teachings rooted in Buddhist and Taoist traditions. While supported by some government agencies, Falun Gong's rapid growth and independence from state control led several top officials to perceive it as a threat, resulting in periodic acts of harassment in the late 1990s. On April 25, 1999, over 10,000 Falun Gong practitioners gathered peacefully outside the central government compound in Beijing, seeking official recognition of the right to practice their faith without interference. In July 1999, the government of China implemented a ban on Falun Gong, categorizing it as an "illegal organization". Mass arrests, widespread torture, and abuses followed. In 2008, U.S. government reports cited estimates that as much as half of China's labor camp population was made up of Falun Gong practitioners. In 2009, human rights groups estimated that at least 2,000 Falun Gong practitioners had died from persecution by that time. A 2022 United States Department of State report on religious freedom in China stated that "Falun Gong practitioners reported societal discrimination in employment, housing, and business opportunities". According to the same report: "Prior to the government's 1999 ban on Falun Gong, the government [of China] estimated there were 70 million adherents. Falun Gong sources estimate that tens of millions continue to practice privately, and Freedom House estimates there are seven to 20 million practitioners."

Beliefs and practices
Falun Gong is entirely based around the teachings of its autocratic founder and leader: China-born Li Hongzhi. According to NBC News, to his followers, Li is "a God-like figure who can levitate, walk through walls and see into the future. His ultra-conservative and controversial teachings include a rejection of modern science, art and medicine, and a denunciation of homosexuality, feminism and general worldliness." Li instructs his followers to downplay his controversial teachings when speaking to outsiders.
Central teachings
According to its teachings, Falun Gong aspires to enable the practitioner to ascend spiritually through moral rectitude and the practice of a set of exercises and meditation. The three stated tenets of the belief are truthfulness (zhēn), compassion (shàn), and forbearance (rěn). These principles have been repeated by Falun Gong members to outsiders as a tactic for evading deeper inquiry, and followers have been instructed by Li to lie about the practice. Together these principles are regarded as the fundamental nature of the cosmos, the criteria for differentiating right from wrong, and are held to be the highest manifestations of the Tao. Adherence to and cultivation of these virtues is regarded as a fundamental part of Falun Gong practice. In Zhuan Falun, the foundational text published in 1995, Li Hongzhi writes: "It doesn't matter how mankind's moral standard changes[...] The nature of the cosmos doesn't change, and it is the only standard for determining who's good and who's bad. So to be a cultivator you have to take the nature of the cosmos as your guide for improving yourself." Practice of Falun Gong consists of two features: performance of the exercises, and the refinement of one's xinxing (moral character, temperament). In Falun Gong's central text, Li states that xinxing "includes virtue (which is a type of matter), it includes forbearance, it includes awakening to things, it includes giving up things—giving up all the desires and all the attachments that are found in an ordinary person—and you also have to endure hardship, to name just a few things." The elevation of one's moral character is achieved, on the one hand, by aligning one's life with truth, compassion, and tolerance; and on the other, by abandoning desires and "negative thoughts and behaviors, such as greed, profit, lust, desire, killing, fighting, theft, robbery, deception, jealousy, etc."

Among the central concepts found in the teachings of Falun Gong are the existence of 'virtue' (dé) and 'karma' (yè). The former is generated through doing good deeds and suffering, while the latter is accumulated through doing wrong deeds. A person's ratio of karma to virtue is said to determine their fortunes in this life or the next. While virtue engenders good fortune and enables spiritual transformation, an accumulation of karma results in suffering, illness, and alienation from the nature of the universe. Spiritual elevation is achieved through the elimination of negative karma and the accumulation of virtue. Practitioners believe that through a process of moral cultivation, one can achieve the Tao and obtain special powers and a level of divinity. Falun Gong's teachings posit that human beings are originally and innately good—even divine—but that they descended into a realm of delusion and suffering after developing selfishness and accruing karma. The practice holds that reincarnation exists, with the cycle of rebirth shaped by the accumulation of karma—a concept somewhat analogous to the Christian notion of "reaping what one sows." This perspective helps explain the perceived unfairness of differences among individuals, such as between the rich and the poor, while also encouraging moral behavior despite these inequalities. To re-ascend and return to the "original, true self", Falun Gong practitioners are supposed to assimilate themselves to the qualities of truthfulness, compassion, and tolerance, let go of "attachments and desires", and suffer to repay karma.
Traditional Chinese cultural thought and opposition to modernity are two focuses of Li Hongzhi's teachings. Falun Gong echoes traditional Chinese beliefs that humans are connected to the universe through mind and body, and Li seeks to challenge "conventional mentalities" concerning the nature and genesis of the universe, time-space, and the human body. The practice draws on East Asian mysticism and traditional Chinese medicine, and claims to have the power to heal incurable illnesses. Falun Gong describes modern science as too limited, and views traditional Chinese research and practice as valid. Li says that he is a being who has come to help humankind from the destruction it could face as the result of rampant evil. When asked if he was a human being, Li replied "You can think of me as a human being." In his book Zhuan Falun, Li claims to have cultivated supernatural powers starting at age eight. According to Radio France International, Zhuan Falun also promises to teach practitioners to cultivate supernatural powers such as "see[ing] through a wall or into a human body".

Exercises
In addition to its moral philosophy, Falun Gong consists of four standing exercises and one sitting meditation. The exercises are regarded as secondary to moral elevation, though they are still an essential component of Falun Gong cultivation practice. The first exercise, called "Buddha Stretching a Thousand Arms", is intended to facilitate the free flow of energy through the body and open up the meridians. The second exercise, "Falun Standing Stance", involves holding four static poses—each of which resembles holding a wheel—for an extended period. The objective of this exercise is that it "enhances wisdom, increases strength, raises a person's level, and strengthens divine powers". The third, "Penetrating the Cosmic Extremes", involves three sets of movements, which aim to enable the expulsion of bad energy (e.g., pathogenic or black qi) and the absorption of good energy into the body. Through practice of this exercise, the practitioner aspires to cleanse and purify the body. The fourth exercise, "Falun Cosmic Orbit", seeks to circulate energy freely throughout the body. Unlike the first through fourth exercises, the fifth exercise is performed in the seated lotus position. Called "Reinforcing Supernatural Powers", it is a meditation intended to be maintained as long as possible. Falun Gong exercises can be practiced individually or in group settings, and can be performed for varying lengths of time in accordance with the needs and abilities of the individual practitioner. Porter writes that practitioners of Falun Gong are encouraged to read Falun Gong books and practice its exercises on a regular basis, preferably daily. Falun Gong exercises are practiced in group settings in parks, university campuses, and other public spaces in over 70 countries worldwide, and are taught for free by volunteers. In addition to the five exercises, in 2001 another meditation activity was introduced called "sending righteous thoughts", which is intended to reduce persecution on the spiritual plane. Discussions of supernatural skills also feature prominently within the movement, and the existence of these skills gained a level of mainstream acceptance in China's scientific community in the 1980s. Falun Gong's teachings hold that practitioners can acquire supernatural skills through a combination of moral cultivation, meditation, and exercises.
These include—but are not limited to—precognition, clairaudience, telepathy, and divine sight (via the opening of the third eye or celestial eye). However, Falun Gong stresses that these powers can be developed only as a result of moral practice, and should not be pursued or casually displayed. According to David Ownby, Falun Gong teaches that "pride in one's abilities, or the desire to show off, are marks of dangerous attachments", and Li warns his followers not to be distracted by the pursuit of such powers.

Social practices
Falun Gong differentiates itself from Buddhist monastic traditions in that it places great importance on participation in the secular world. Falun Gong practitioners are required to maintain regular jobs and family lives, to observe the laws of their respective governments, and are instructed not to distance themselves from society. An exception is made for Buddhist Bhikkhus and Bhikkhunīs, who are permitted to continue a monastic lifestyle while practicing Falun Gong. As part of its emphasis on ethical behavior, Falun Gong's teachings prescribe a strict personal morality for practitioners. They are expected to do good deeds, and conduct themselves with patience and forbearance when encountering difficulties. For instance, Li stipulates that a practitioner of Falun Gong must "not hit back when attacked, not talk back when insulted." In addition, they must "abandon negative thoughts and behaviors", such as greed, deception, jealousy, etc. The teachings contain injunctions against smoking and the consumption of alcohol, as these are considered addictions that are detrimental to health and mental clarity. Practitioners of Falun Gong are forbidden to kill living things—including animals for the purpose of obtaining food—though they are not required to adopt a vegetarian diet.

In addition to these things, practitioners of Falun Gong must abandon a variety of worldly attachments and desires. In the course of cultivation practice, the student of Falun Gong aims to relinquish the pursuit of fame, monetary gain, sentimentality, and other entanglements. Li's teachings repeatedly emphasize the emptiness of material pursuits; although practitioners of Falun Gong are not encouraged to leave their jobs or eschew money, they are expected to give up the psychological attachments to these things. Falun Gong doctrine counsels against participation in political or social issues. Excessive interest in politics is viewed as an attachment to worldly power and influence, and Falun Gong aims for transcendence of such pursuits. According to Hu Ping, "Falun Gong deals only with purifying the individual through exercise, and does not touch on social or national concerns. It has not suggested or even intimated a model for social change. Many religions[...] pursue social reform to some extent[...] but there is no such tendency evident in Falun Gong."

Sexual desire and lust are treated as attachments to be discarded, though Falun Gong students are still generally expected to marry and have families. All sexual relations outside the confines of monogamous, heterosexual marriage are regarded as immoral. Li Hongzhi taught that homosexuality makes one "unworthy of being human", creates bad karma, and is comparable to organized crime. He also taught that "disgusting homosexuality shows the dirty abnormal psychology of the gay who has lost his ability of reasoning", and that homosexuality is a "filthy, deviant state of mind".
Li additionally stated in a 1998 speech in Switzerland that the gods' "first target of annihilation would be homosexuals". Although gay, lesbian, and bisexual people may practice Falun Gong, founder Li stated that they must "give up the bad conduct" of all same-sex sexual activity. Falun Gong's cosmology includes the belief that different ethnicities each have a correspondence to their own heavens, and that individuals of mixed race lose some aspect of this connection. Falun Gong's teachings include belief in reincarnation and hold that one's soul (original spirit) always maintains a single racial identity, despite having a body of mixed race. Investigative journalist Ethan Gutmann noted that interracial marriage is common in the Falun Gong community.

Texts
Li Hongzhi authored the first book of Falun Gong teachings in April 1993; titled China Falun Gong, or simply Falun Gong, it is an introductory text that discusses qigong, Falun Gong's relationship to Buddhism, the principles of cultivation practice, and the improvement of moral character (xinxing). The book also provides illustrations and explanations of the exercises and meditation. The main body of teachings is articulated in the book Zhuan Falun, published in Chinese in January 1995. The book is divided into nine "lectures", and was based on edited transcriptions of the talks Li gave throughout China in the preceding three years. Falun Gong texts have since been translated into an additional 40 languages. In addition to these central texts, Li has published several books, lectures, articles, and books of poetry, which are made available on Falun Gong websites. The Falun Gong teachings use numerous untranslated Chinese religious and philosophical terms, and make frequent allusion to characters and incidents in Chinese folk literature and concepts drawn from Chinese popular religion. This, coupled with the literal translation style of the texts, which imitate the colloquial style of Li's speeches, can make Falun Gong scriptures difficult to approach for Westerners.

Symbols
The main symbol of the practice is the falun (Dharma wheel, or dharmachakra in Sanskrit). In Buddhism, the dharmachakra represents the completeness of the doctrine. To "turn the wheel of dharma" (zhuan falun) means to preach the Buddhist doctrine, and is the title of Falun Gong's main text. Despite the invocation of Buddhist language and symbols, the law wheel as understood in Falun Gong has distinct connotations, and is held to represent the universe. It is conceptualized by an emblem consisting of one large and four small counter-clockwise swastika symbols, representing the Buddha, and four small Taiji (yin-yang) symbols of the Daoist tradition.

Dharma-ending period
Li situates his teaching of Falun Gong amidst the "Dharma-ending period" (mòfǎ), described in Buddhist scriptures as an age of moral decline when the teachings of Buddhism would need to be rectified. The current era is described in Falun Gong's teachings as the "Fa rectification" period (zhengfa, which might also be translated as "to correct the dharma"), a time of cosmic transition and renewal. The process of rectification is necessitated by the moral decline and degeneration of life in the universe, and in the post-1999 context, the persecution of Falun Gong by the Chinese government has come to be viewed as a tangible symptom of this moral decay.
Through the process of Fa rectification, life will be reordered according to the moral and spiritual quality of each, with good people being saved and ascending to higher spiritual planes, and bad ones being eliminated or cast down. In this paradigm, Li assumes the role of rectifying the Dharma by disseminating his moral teachings. Some scholars, such as Maria Hsia Chang and Susan Palmer, have described Li's rhetoric about the "Fa rectification" and providing salvation "in the final period of the Last Havoc" as apocalyptic. However, Benjamin Penny, a professor of Chinese history at the Australian National University, argues that Li's teachings are better understood in the context of a "Buddhist notion of the cycle of the Dharma or the Buddhist law". Richard Gunde wrote that, unlike apocalyptic groups in the West, Falun Gong does not fixate on death or the end of the world, and instead "has a simple, innocuous ethical message". Li Hongzhi does not discuss a "time of reckoning", and has rejected predictions of an impending apocalypse in his teachings.

Extraterrestrials
In the 1990s, Li repeatedly claimed that aliens were responsible for scientific inventions through the manipulation of scientists. For example, in a 1999 interview with Time, Li attributed the invention of computers and airplanes to extraterrestrials, as well as war and violence. However, his position on aliens seemed fairly inconsistent to observers Graeme Lang and Lu Yunfeng. In the Time interview, Li claimed that aliens were attempting to replace humans through a cloning process, in which human bodies would be cloned with no soul, so that the aliens could replace the soul and inhabit human bodies (which to him are perfect). Li Hongzhi alleged that extraterrestrials disguise themselves as humans to corrupt and manipulate humanity. According to an ABC investigation, while some practitioners stated that this was metaphorical, a former member said she was taught it as literal truth.

Categorization
Scholars describe Falun Gong as a new religious movement, and the organization is regularly featured in handbooks describing new religious movements. Adherents, however, may reject this term. Yuezhi Zhao describes Falun Gong as "a multifaceted and totalizing movement that means different things to different people, ranging from a set of physical exercises and a praxis of transformation to a moral philosophy and a new knowledge system." In the cultural context of China, Falun Gong is generally described either as a system of qigong, or a type of "cultivation practice" (xiulian), a process by which an individual seeks spiritual perfection, often through both physical and moral conditioning. Varieties of cultivation practice are found throughout Chinese history, spanning Buddhist, Daoist, and Confucian traditions. Benjamin Penny writes: "the best way to describe Falun Gong is as a cultivation system. Cultivation systems have been a feature of Chinese life for at least 2,500 years." Qigong practices can also be understood as part of this broader tradition of "cultivation practice". In the West, Falun Gong is frequently classified as a religion on the basis of its theological and moral teachings, its concerns with spiritual cultivation and transformation, and its extensive body of scripture. Falun Gong practitioners themselves have sometimes disavowed this classification, however. This rejection reflects the relatively narrow definition of "religion" in contemporary China.
According to David Ownby, religion in China has been defined since 1912 to refer to "world-historical faiths" that have "well-developed institutions, clergy, and textual traditions"—namely, Buddhism, Daoism, Islam, Protestantism, and Catholicism. Moreover, if Falun Gong had described itself as a religion in China, it likely would have invited immediate suppression. These historical and cultural circumstances notwithstanding, the practice has often been described as a form of Chinese religion.

Approaches to media: The Epoch Times, Shen Yun, and Wikipedia
The performance arts group Shen Yun and the media organization The Epoch Times are the major outreach organizations of Falun Gong. Both promote the spiritual and political teachings of Falun Gong. They, along with a variety of other organizations such as New Tang Dynasty Television (NTD), operate as extensions of Falun Gong. These extensions promote the new religious movement and its teachings. In the case of The Epoch Times, they also promote conspiracy theories such as QAnon and anti-vaccine misinformation, as well as far-right politics in both Europe and the United States. Around the time of the 2016 United States presidential election, The Epoch Times began running articles supportive of Donald Trump and critical of his opponents. Falun Gong extensions have also been active in promoting the European radical right. The exact financial and structural connections between Falun Gong, Shen Yun, and The Epoch Times remain unclear. According to NBC News:

The Epoch Media Group, along with Shen Yun, a dance troupe known for its ubiquitous advertising and unsettling performances, make up the outreach effort of Falun Gong, a relatively new spiritual practice that combines ancient Chinese meditative exercises, mysticism and often ultraconservative cultural worldviews. Falun Gong's founder has referred to Epoch Media Group as "our media", and the group's practice heavily informs The Epoch Times coverage, according to former employees who spoke with NBC News. The Epoch Times, digital production company NTD and the heavily advertised dance troupe Shen Yun make up the nonprofit network that Li calls "our media". Financial documents paint a complicated picture of more than a dozen technically separate organizations that appear to share missions, money and executives. Though the source of their revenue is unclear, the most recent financial records from each organization paint a picture of an overall business thriving in the Trump era.

According to scholar James R. Lewis, writing in 2018, Falun Gong adherents have attempted to control English Wikipedia articles covering the group and articles related to it. Lewis highlights Falun Gong's extensive internet presence, noting that sympathetic editors have contributed to English Wikipedia entries associated with Falun Gong to the point where "Falun Gong followers and/or sympathizers de facto control the relevant pages on Wikipedia". He argues that this is particularly important for Falun Gong as an organization because of the search engine optimization results of these entries, and because the entries can influence other media entities. Lewis also notes how this fits in as part of Falun Gong's general media strategy, alongside Falun Gong media such as The Epoch Times, New Tang Dynasty, Sound of Hope Radio, and, as Lewis discusses, the Rachlin media group. Lewis reports that the Rachlin media group is Falun Gong's de facto PR firm, operated by Gail Rachlin, spokesperson for the Falun Dafa Information Centre.
Lewis says that Amnesty International does not independently verify its reports from Falun Gong groups, accepting material directly from Falun Gong organizations as fact. According to Lewis, "[Falun Gong] has thus been able to influence other media via its presence on the web, through its direct press releases, and through its own media."

Ultrasurf, Freegate, the Open Technology Fund, and whistleblower allegations
In the early 2000s, Falun Gong adherents in the United States developed Ultrasurf and Freegate, freeware intended to circumvent Chinese government internet censorship. According to NPR:

Adherents of Falun Gong first developed Ultrasurf nearly two decades ago to get around censors in China and elsewhere. Early on, Ultrasurf seemed a highly promising tool in aiding activists and journalists to talk securely online. It earlier received development money from the State Department and the predecessor agency to USAGM.

A Berkman Klein Center for Internet and Society report on the circumvention landscape in 2007 found Ultrasurf's performance to be "the best of any tool tested in filtering countries, the only tool to display okay speed for both image heavy and simple, text oriented sites." A Wired article described Ultrasurf as "one of the most important free-speech tools on the Internet, used by millions from China to Saudi Arabia." Beyond China, Freegate gained popularity among Iranian protesters soon after its Farsi version was introduced in July 2008. During the Green Movement protests surrounding the 2009 election, its servers were overwhelmed by Iranian Internet users. In 2010, the US State Department under the Obama administration offered a $1.5 million grant to the Global Internet Freedom Consortium, founded by the Falun Gong adherents who developed Ultrasurf and Freegate, drawing opposition from the Chinese government. A 2011 Center for a New American Security report recognized the need for the US government to fund high-performing technologies like Ultrasurf and Freegate, despite the stress this might cause in the U.S.-China relationship, but recommended that the US government diversify the technologies it funds.

In recent years, Ultrasurf has been a major point of contention, in large part because it is not open source, meaning that it cannot be reviewed by outside engineers for vulnerabilities and back doors. Additionally, as reported by The Verge, since the 2000s the software has drawn criticism "for its content filtering (which blocks pornography) and its ability to surveil user traffic, which is often impossible by design in competing tools". Although it receives public funding, its creators and owners have rejected attempts at allowing outside parties to review its effectiveness and utility. A 2020 audit by the U.S. State Department concluded that "censoring Ultrasurf nation-wide would have been trivial for a moderate-budget adversary". After conservative documentary filmmaker Michael Pack was appointed CEO of the U.S. Agency for Global Media during the Trump administration in 2020, he tied up $19 million in federal funds from other projects for the Ultrasurf project. Numerous other projects, including other secure communication projects, lost funding during this period. Ultrasurf eventually received $249,000 of the allotted funds. Once it received funding, only "four people abroad used it to access Voice of America and Radio Free Asia, a key purpose for its subsidy" during December 2020 and January 2021. Two days before U.S.
President Joe Biden's 2021 inauguration, Pack appointed a columnist from The Epoch Times to the board of directors for the networks his agency oversaw. This columnist had claimed the January 6 insurrection was a "false flag operation". During his eight months in office, Pack regularly appeared in The Epoch Times, where he also discussed Ultrasurf. As of 2020, Pack, along with other USAGM officials he did not fire during his time there, faced a criminal inquiry in response to whistleblower allegations that the "concerted effort to divert funds to the Falun Gong software Ultrasurf was a criminal conspiracy".

Organization
Spiritual authority is vested exclusively in the teachings of founder Li Hongzhi. Volunteer "assistants" or "contact persons" do not hold authority over other practitioners, regardless of how long they have practiced Falun Gong. Li stipulates that practitioners of Falun Gong cannot collect money or charge fees, conduct healings, or teach or interpret doctrine for others. There is no system of membership within the practice and no rituals of worship. Falun Gong operates through a global, networked, and largely virtual online community. In particular, electronic communications, email lists, and a collection of websites are the primary means of coordinating activities and disseminating Li Hongzhi's teachings. Outside mainland China, a network of volunteer "contact persons", regional Falun Dafa Associations, and university clubs exists in approximately 80 countries. Li Hongzhi's teachings are principally spread through the Internet. In most mid- to large-sized cities, Falun Gong practitioners organize regular group meditation or study sessions in which they practice Falun Gong exercises and read Li Hongzhi's writings. The exercise and meditation sessions are described as informal groups of practitioners who gather in public parks—usually in the morning—for one to two hours. Group study sessions typically take place in the evenings in private residences or university or high school classrooms, and are described by David Ownby as the closest thing to a regular "congregational experience" that Falun Gong offers. Individuals who are too busy, isolated, or who simply prefer solitude may elect to practice privately. When there are expenses to be covered (such as the rental of facilities for large-scale conferences), costs are borne by self-nominated and relatively affluent individual members of the community.

Within China
In 1993, the Beijing-based Falun Dafa Research Society was accepted as a branch of the state-run China Qigong Research Society (CQRS), which oversaw the administration of the country's various qigong schools and sponsored activities and seminars. As per the requirements of the CQRS, Falun Gong was organized into a nationwide network of assistance centers, "main stations", "branches", "guidance stations", and local practice sites, mirroring the structure of the qigong society or even of the CCP itself. Falun Gong assistants were self-selecting volunteers who taught the exercises, organized events, and disseminated new writings from Li Hongzhi. The Falun Dafa Research Society provided advice to students on meditation techniques, translation services, and coordination for the practice nationwide. Following its departure from the CQRS in 1996, Falun Gong came under increased scrutiny from authorities and responded by adopting a more decentralized and loose organizational structure.
In 1997, the Falun Dafa Research Society was formally dissolved, along with the regional "main stations". Yet practitioners continued to organize themselves at local levels, connected through electronic communications, interpersonal networks and group exercise sites. Both Falun Gong sources and Chinese government sources claimed that there were some 1,900 "guidance stations" and 28,263 local Falun Gong exercise sites nationwide by 1999, though they disagree over the extent of vertical coordination among these organizational units. In response to the persecution that began in 1999, Falun Gong was driven underground, the organizational structure grew yet more informal within China, and the internet took precedence as a means of connecting practitioners. Following the persecution of Falun Gong in 1999, Chinese authorities sought to portray Falun Gong as a hierarchical and well-funded organization. James Tong writes that it was in the government's interest to portray Falun Gong as highly organized in order to justify its repression of the group: "The more organized the Falun Gong could be shown to be, then the more justified the regime's repression in the name of social order was." He concluded that the Party's claims lacked "both internal and external substantiating evidence", and that despite the arrests and scrutiny, the authorities never "credibly countered Falun Gong rebuttals". Dragon Springs compound Falun Gong operates out of Dragon Springs, a compound located in Deerpark, New York. Falun Gong founder and leader Li Hongzhi resides near the compound, along with "hundreds" of Falun Gong adherents. Members of Falun Gong extension Shen Yun live and rehearse in the compound, which also contains schools and temples. The compound is registered as a church, Dragon Springs Buddhist, which gives it tax exemptions and greater privacy. Scholar Andrew Junker noted that in 2019 the Falun Gong media extension The Epoch Times had an office near Dragon Springs, in Middletown, which published a special local edition. The compound has been a point of controversy among former residents. According to NBC News: [F]our former compound residents and former Falun Gong practitioners who spoke to NBC News... said that life in Dragon Springs is tightly controlled by Li, that internet access is restricted, the use of medicines is discouraged, and arranged relationships are common. Two former residents on visas said they were offered to be set up with U.S. residents at the compound. Tiger Huang, a former Dragon Springs resident who was on a United States student visa from Taiwan, said she was set up on three dates on the compound, and she believed her ability to stay in the United States was tied to the arrangement. "The purpose of setting up the dates was obvious", Huang said. Her now-husband, a former Dragon Springs resident, confirmed the account. Huang said she was told by Dragon Springs officials her visa had expired and was told to go back to Taiwan after months of dating a nonpractitioner in the compound. She later learned that her visa had not expired when she was told to leave the country. Acquired by Falun Gong in 2000, the site is closed to visitors and features guarded gates; it has been a point of contention for some concerned Deerpark residents.
In 2019, Falun Gong sought permission to expand the site, wishing to add a 920-seat concert hall, a new parking garage, a wastewater treatment plant and a conversion of meditation space into residential space large enough to bring the total residential capacity to 500 people. These plans met with opposition from the Delaware Riverkeeper Network regarding the wastewater treatment facility and the elimination of local wetlands, which would impact local waterways such as the Basher Kill and Neversink River. Local residents opposed the expansion because it would increase traffic and reduce the rural character of the area. Falun Gong adherents living in the area have claimed that they have experienced discrimination from local residents. After visiting in 2019, Junker noted that "the secrecy of Dragon Springs was obvious and a source of tension for the town." Junker adds that Dragon Springs's website says its restricted access is for security reasons, and that the site claims the compound houses orphans and refugees. Demography Prior to July 1999, official Chinese government estimates placed the number of Falun Gong practitioners at 70 million nationwide, rivalling membership in the CCP. By the launch of the persecution on 22 July 1999, most Chinese government estimates put the Falun Gong population between 2 and 3 million, though some publications maintained an estimate of 40 million. The Falun Gong organization estimated in the same period that the total number of practitioners in China was between 70 and 80 million, though sociologist David A. Palmer notes these numbers were likely highly inflated and gives a more reasonable estimate of 10 million. Other sources have estimated the Falun Gong population in China to have peaked between 10 and 70 million practitioners. The number of Falun Gong practitioners still practicing in China today is difficult to confirm, though Freedom House estimates that seven to 20 million continue to practice privately. Demographic surveys conducted in China in 1998 found a population that was mostly female and elderly. Of 34,351 Falun Gong practitioners surveyed, 27% were male and 73% female. Only 38% were under 50 years old. Falun Gong attracted a range of other individuals, from young college students to bureaucrats, intellectuals and Party officials. Surveys in China from the 1990s found that between 23 and 40% of practitioners held university degrees at the college or graduate level—several times higher than the general population. Falun Gong is practiced by tens, and possibly hundreds, of thousands outside China, with the largest communities found in Taiwan and North American cities with large Chinese populations, such as New York and Toronto. Demographic surveys by Palmer and Ownby in these communities found that 90% of practitioners are ethnic Chinese. The average age was approximately 40. Among survey respondents, 56% were female and 44% male; 80% were married. The surveys found the respondents to be highly educated: 9% held PhDs, 34% had master's degrees, and 24% had a bachelor's degree. As of 2008, the most commonly reported reasons for being attracted to Falun Gong were intellectual content, cultivation exercises, and health benefits. Non-Chinese Falun Gong practitioners tend to fit the profile of "spiritual seekers"—people who had tried a variety of qigong, yoga, or religious practices before finding Falun Gong.
According to sociologist Richard Madsen, who specializes in studying modern Chinese culture, Chinese scientists with doctorates from prestigious American universities who practice Falun Gong claim that modern physics (for example, superstring theory) and biology (specifically the pineal gland's function) provide a scientific basis for their beliefs. From their point of view, "Falun Dafa is knowledge rather than religion, a new form of science rather than faith". History inside China 1992–1996 Li Hongzhi introduced Falun Gong to the public on 13 May 1992, in Changchun, Jilin Province. Several months later, in September 1992, Falun Gong was admitted as a branch of qigong under the administration of the state-run China Qigong Science Research Society (CQRS). Li was recognized as a qigong master, and was authorized to teach his practice nationwide. Like many qigong masters at the time, Li toured major cities in China from 1992 to 1994 to teach the practice. He was granted a number of awards by PRC governmental organizations. According to David Ownby, Professor of History and Director of the Center for East Asian Studies at the Université de Montréal, Li became an "instant star of the qigong movement", and Falun Gong was embraced by the government as an effective means of lowering health care costs, promoting Chinese culture, and improving public morality. In December 1992, for instance, Li and several Falun Gong students participated in the Asian Health Expo in Beijing, where he reportedly "received the most praise [of any qigong school] at the fair, and achieved very good therapeutic results", according to the fair's organizer. The event helped cement Li's popularity, and journalistic reports of Falun Gong's healing powers spread. In 1993, Li received a letter of appreciation from the Ministry of Public Security for providing treatment to around 100 police officers injured while on duty. Falun Gong differentiated itself from other qigong groups through its emphasis on morality, low cost, and health benefits. It rapidly spread via word-of-mouth, attracting a wide range of practitioners from all walks of life, including numerous members of the Chinese Communist Party. From 1992 to 1994, Li did charge fees for the seminars he was giving across China, though the fees were considerably lower than those of competing qigong practices, and the local qigong associations received a substantial share. Li justified the fees as being necessary to cover travel costs and other expenses, and on some occasions, he donated the money earned to charitable causes. In 1994, Li ceased charging fees altogether, thereafter stipulating that Falun Gong must always be taught for free, and its teachings made available without charge (including online). Although some observers believe Li continued to earn substantial income through the sale of Falun Gong books, others dispute this, asserting that most Falun Gong books in circulation were bootleg copies. With the publication of the books Falun Gong and Zhuan Falun, Li made his teachings more widely accessible. Zhuan Falun, published in January 1995 at an unveiling ceremony held in the auditorium of the Ministry of Public Security, became a best-seller in China. In 1995, Chinese authorities began looking to Falun Gong to solidify its organizational structure and ties to the party-state. Li was approached by the Chinese National Sports Committee, the Ministry of Public Health, and the CQRS to jointly establish a Falun Gong association.
Li declined the offer. The same year, the CQRS issued a new regulation mandating that all qigong denominations establish a Chinese Communist Party branch. Li again refused. Tensions continued to mount between Li and the CQRS in 1996. In the face of Falun Gong's rise in popularity—a large part of which was attributed to its low cost—competing qigong masters accused Li of undercutting them. According to Schechter, the qigong society to which Li and other qigong masters belonged asked Li to raise his tuition fees, but Li emphasized the need for the teachings to be free of charge. In March 1996, Falun Gong withdrew from the CQRS in response to mounting disagreements, after which it operated outside the official sanction of the state. Falun Gong representatives attempted to register with other government entities, but were rebuffed. Li and Falun Gong were then outside the circuit of personal relations and financial exchanges through which masters and their qigong organizations could find a place within the state system, and outside the protections this afforded. 1996–1999 Falun Gong's departure from the state-run CQRS corresponded to a wider shift in the government's attitudes towards qigong practices. As qigong's detractors in government grew more influential, authorities began attempting to rein in the growth and influence of these groups, some of which had amassed tens of millions of followers. In the mid-1990s, the state-run media began publishing articles critical of qigong. Falun Gong was initially shielded from the mounting criticism, but following its withdrawal from the CQRS in March 1996, it lost this protection. On 17 June 1996, the Guangming Daily, an influential state-run newspaper, published a polemic against Falun Gong in which its central text, Zhuan Falun, was described as an example of "feudal superstition". The author wrote that the history of humanity is a "struggle between science and superstition", and called on Chinese publishers not to print "pseudo-scientific books of the swindlers". The article was followed by at least twenty more in newspapers nationwide. Soon after, on 24 July, the Central Propaganda Department banned all publication of Falun Gong books (though the ban was not consistently enforced). The state-administered Buddhist Association of China also began issuing criticisms of Falun Gong, urging lay Buddhists not to take up the practice. The events were an important challenge to Falun Gong, and one that practitioners did not take lightly. Thousands of Falun Gong followers wrote to the Guangming Daily and to the CQRS to protest the measures, claiming that they violated Hu Yaobang's 1982 'Triple No' directive, which prohibited the media from either encouraging or criticizing qigong practices. In other instances, Falun Gong practitioners staged peaceful demonstrations outside media or local government offices to request retractions of perceived unfair coverage. The polemics against Falun Gong were part of a larger movement opposing qigong organizations in the state-run media. Although Falun Gong was not the only target of the media criticism, nor the only group to protest, theirs was the most mobilized and steadfast response. Many of Falun Gong's protests against negative media portrayals were successful, resulting in the retraction of several newspaper stories critical of the practice. This contributed to practitioners' belief that the media claims against them were false or exaggerated, and that their stance was justified.
In June 1998, He Zuoxiu, an outspoken critic of qigong and a fierce defender of Marxism, appeared on a talk show on Beijing Television and openly disparaged qigong groups, making particular mention of Falun Gong. Falun Gong practitioners responded with peaceful protests and by lobbying the station for a retraction. The reporter responsible for the program was reportedly fired, and a program favorable to Falun Gong was aired several days later. Falun Gong practitioners also mounted demonstrations at 14 other media outlets. In 1997, the Ministry of Public Security launched an investigation into whether Falun Gong should be deemed xie jiao (邪教, "heretical teaching"). The report concluded that "no evidence has appeared thus far". The following year, however, on 21 July 1998, the Ministry of Public Security issued Document No. 555, "Notice of the Investigation of Falun Gong". The document asserted that Falun Gong is a "heretical teaching", and mandated that another investigation be launched to seek evidence in support of the conclusion. Falun Gong practitioners reported having phone lines tapped, homes ransacked and raided, and Falun Gong exercise sites disrupted by public security agents. During this period, even as criticism of qigong and Falun Gong mounted in some circles, the practice maintained a number of high-profile supporters in the government. In 1998, Qiao Shi, the recently retired Chairman of the Standing Committee of the National People's Congress, initiated his own investigation into Falun Gong. After months of investigations, his group concluded that "Falun Gong has hundreds of benefits for the Chinese people and China, and does not have one single bad effect." In May of the same year, China's National Sports Commission launched its own survey of Falun Gong. Based on interviews with over 12,000 Falun Gong practitioners in Guangdong province, they stated that they were "convinced the exercises and effects of Falun Gong are excellent. It has done an extraordinary amount to improve society's stability and ethics." The practice's founder, Li Hongzhi, was largely absent from the country during the period of rising tensions with the government. In March 1995, Li had left China to teach his practice first in France and then in other countries, and in 1998 he obtained permanent residency in the United States. By 1999, estimates provided by the State Sports Commission suggested there were 70 million Falun Gong practitioners in China. An anonymous employee of China's National Sports Commission was quoted at the time in an interview with U.S. News & World Report as speculating that if 100 million people took up Falun Gong and other forms of qigong, there would be a dramatic reduction in health care costs, and that "Premier Zhu Rongji is very happy about that." Tianjin and Zhongnanhai protests By the late 1990s, the Chinese government's relationship to the growing Falun Gong movement had become increasingly tense. Reports of discrimination and surveillance by the Public Security Bureau were escalating, and Falun Gong practitioners were routinely organizing sit-in demonstrations responding to media articles they deemed to be unfair. The conflicting investigations launched by the Ministry of Public Security on one side and the State Sports Commission and Qiao Shi on the other reflected disagreements among China's elites over how to regard the growing practice. In April 1999, an article critical of Falun Gong was published in Tianjin Normal University's Youth Reader magazine.
The article was authored by physicist He Zuoxiu, who, as Porter and Gutmann indicate, is a relative of Politburo member and public security secretary Luo Gan. The article cast qigong, and Falun Gong in particular, as superstitious and harmful to youth. Falun Gong practitioners responded by picketing the magazine's offices, requesting a retraction of the article. Unlike past instances in which Falun Gong protests were successful, on 22 April the Tianjin demonstration was broken up by the arrival of three hundred riot police. Some of the practitioners were beaten, and forty-five were arrested. Other Falun Gong practitioners were told that if they wished to appeal further, they needed to take the issue up with the Ministry of Public Security in Beijing. The Falun Gong community quickly mobilized a response, and on the morning of 25 April, upwards of 10,000 practitioners gathered near the central appeals office to demand an end to the escalating harassment against the movement and to request the release of the Tianjin practitioners. According to Benjamin Penny, practitioners sought redress from the leadership of the country by going to them and, "albeit very quietly and politely, making it clear that they would not be treated so shabbily." They sat or read quietly on the sidewalks surrounding Zhongnanhai. Five Falun Gong representatives met with Premier Zhu Rongji and other senior officials to negotiate a resolution. The Falun Gong representatives were assured that the regime supported physical exercises for health improvement and did not consider Falun Gong to be anti-government. President Jiang Zemin was alerted to the demonstration by Secretary of the Central Political and Legal Affairs Commission Luo Gan, and was reportedly angered by the audacity of the demonstration—the largest since the 1989 Tiananmen Square protests and massacre. Jiang called for resolute action to suppress the group, and reportedly criticized Premier Zhu for being "too soft" in his handling of the situation. That evening, Jiang composed a letter indicating his desire to see Falun Gong "defeated". In the letter, Jiang expressed concerns over the size and popularity of Falun Gong, and in particular about the large number of senior CCP members found among Falun Gong practitioners. He believed it possible that foreign forces were behind Falun Gong's protests (the practice's founder, Li Hongzhi, had emigrated to the United States), and expressed concern about their use of the internet to coordinate a large-scale demonstration. Jiang also intimated that Falun Gong's moral philosophy was at odds with the atheist values of Marxism–Leninism, and therefore constituted a form of ideological competition. Falun Gong holds Jiang personally responsible for the decision to persecute the practice. Peerman cited such reasons as suspected personal jealousy of Li Hongzhi; Saich points to Jiang's anger at Falun Gong's widespread appeal and to ideological struggle as causes of the crackdown that followed. Willy Wo-Lap Lam suggests Jiang's decision to suppress Falun Gong was related to a desire to consolidate his power within the Politburo. According to Human Rights Watch, senior officials were far from unified in their support for the crackdown. Persecution On 20 July 1999, security forces abducted and detained thousands of Falun Gong practitioners whom they identified as leaders.
Two days later, on 22 July, the PRC Ministry of Civil Affairs outlawed the Falun Dafa Research Society as an illegal organization that was "engaged in illegal activities, advocating superstition and spreading fallacies, hoodwinking people, inciting and creating disturbances, and jeopardizing social stability". The same day, the Ministry of Public Security issued a circular forbidding citizens from practicing Falun Gong in groups, possessing Falun Gong's teachings, displaying Falun Gong banners or symbols, or protesting against the ban. The aim of the ensuing campaign was to "eradicate" the group through a combination of means that included the publication and distribution of propaganda denouncing it and the imprisonment and coercive thought reform of its practitioners, sometimes resulting in deaths. In October 1999, four months after the imposition of the ban, legislation was passed to outlaw "heterodox religions" and sentence Falun Gong devotees to prison terms. Hundreds of thousands of Falun Gong practitioners are estimated to have been extrajudicially imprisoned, and practitioners currently in detention are reportedly subjected to forced labor, psychiatric abuse, torture, and other coercive methods of thought reform at the hands of Chinese authorities. The U.S. Department of State and the Congressional-Executive Commission on China cite estimates that as much as half of China's reeducation-through-labor camp population is made up of Falun Gong practitioners. Researcher Ethan Gutmann estimates that Falun Gong practitioners represent an average of 15 to 20 percent of the total "laogai" population, a population that includes detainees in re-education-through-labor camps as well as in prisons and other forms of administrative detention. Former detainees of the labor camp system have reported that Falun Gong practitioners comprise one of the largest groups of prisoners; in some labor camp and prison facilities, they comprise the majority of detainees, and they are often said to receive the longest sentences and the worst treatment. A 2013 report on labor reeducation camps by Amnesty International found that in some cases, Falun Gong practitioners "constituted on average from one third to 100 per cent of the total population" of certain camps. According to Johnson, the campaign against Falun Gong extends to many aspects of society, including the media apparatus, the police force, the military, the education system, and workplaces. An extra-constitutional body, the "610 Office", was created to "oversee" the effort. Human Rights Watch (2002) commented that families and workplaces were urged to cooperate with the government. Causes Observers have attempted to explain the Party's rationale for banning Falun Gong as stemming from a variety of factors. Many of these explanations centre on institutional causes, such as Falun Gong's size and popularity, its independence from the state, and internal politics within the Chinese government. Other scholars have noted that Chinese authorities were troubled by Falun Gong's moral and spiritual content, which put it at odds with aspects of the official Marxist ideology. Still others have pointed to China's history of bloody sectarian revolts as a possible factor leading to the crackdown.
Xinhua News Agency, the official news organization of the Chinese government, declared that Falun Gong is "opposed to the Communist Party of China and the central government, preaches idealism, theism and feudal superstition." Xinhua also asserted that "the so-called 'truth, kindness and forbearance' principle preached by [Falun Gong] has nothing in common with the socialist ethical and cultural progress we are striving to achieve", and argued that it was necessary to crush Falun Gong in order to preserve the "vanguard role and purity" of the Chinese Communist Party. Other articles appearing in the state-run media in the first days and weeks after the ban was imposed posited that Falun Gong must be defeated because its "theistic" philosophy was at odds with the Marxist–Leninist paradigm and the secular values of materialism. Willy Wo-Lap Lam writes that Jiang Zemin's campaign against Falun Gong may have been used to promote allegiance to himself; Lam quotes one party veteran as saying "by unleashing a Mao-style movement [against Falun Gong], Jiang is forcing senior cadres to pledge allegiance to his line." The Washington Post reported that sources indicated not all of the Politburo Standing Committee shared Jiang's view that Falun Gong should be eradicated, and that Jiang alone made the decision to crack down. Human Rights Watch commented that the crackdown on Falun Gong reflects historical efforts by the CCP to eradicate religion, which the government believes is inherently subversive. The Chinese government protects five "patriotic", state-sanctioned religious groups. Unregistered religions that fall outside the state-sanctioned organizations are thus vulnerable to suppression. The Globe and Mail wrote: "any group that does not come under the control of the Party is a threat". Craig S. Smith of The New York Times wrote that the party feels increasingly threatened by any belief system that challenges its ideology and has an ability to organize itself. That Falun Gong, whose belief system represented a revival of traditional Chinese religion, was being practiced by a large number of Communist Party members and members of the military was particularly disturbing to Jiang Zemin; according to Julia Ching, "Jiang accepts the threat of Falun Gong as an ideological one: spiritual beliefs against militant atheism and historical materialism. He [wished] to purge the government and the military of such beliefs." Yuezhi Zhao points to several other factors that may have led to a deterioration of the relationship between Falun Gong and the Chinese state and media. These included infighting within China's qigong establishment, the influence of qigong opponents among China's leaders, and the struggles from mid-1996 to mid-1999 between Falun Gong and the Chinese power elite over the status and treatment of the movement. According to Zhao, Falun Gong practitioners have established a "resistance identity"—one that stands against prevailing pursuits of wealth, power, scientific rationality, and "the entire value system associated with China's project of modernization." In China, the practice represented an indigenous spiritual and moral tradition and a cultural revitalization movement, standing in sharp contrast to "Marxism with Chinese characteristics". Vivienne Shue similarly writes that Falun Gong presented a comprehensive challenge to the CCP's legitimacy.
Shue argues that Chinese rulers have historically derived their legitimacy from their claim to possess an exclusive connection to the "Truth". In imperial China, truth was based on a Confucian and Daoist cosmology, whereas in the case of the Communist Party, truth is represented by Marxism–Leninism and historical materialism. Falun Gong challenged the Marxist–Leninist paradigm, reviving an understanding based on more traditional Buddhist or Daoist conceptions. David Ownby contends that Falun Gong also challenged the Communist Party's hegemony over the Chinese nationalist discourse: "[Falun Gong's] evocation of a different vision of Chinese tradition and its contemporary values are now so threatening to the state and the party because it denies them the sole right to define the meaning of Chinese nationalism, and it even denies them the sole right to define the meaning of Chineseness." Maria Chang commented that since the overthrow of the Qin dynasty, "Millenarian movements had exerted a profound impact on the course of Chinese history", culminating in the Chinese Revolutions of 1949, which brought the Chinese Communists to power. Patsy Rahn (2002) describes a paradigm of conflict between Chinese sectarian groups and the rulers whom they often challenge. According to Rahn, the history of this paradigm goes back to the collapse of the Han dynasty: "The pattern of a ruling power keeping a watchful eye on sectarian groups, at times threatened by them, at times raising campaigns against them, began as early as the second century and continued throughout the dynastic period, through the Mao era and into the present." Conversion program According to James Tong, the regime aimed at both the coercive dissolution of the Falun Gong denomination and the "transformation" of its practitioners. By 2000, the Party had escalated its campaign by sentencing "recidivist" practitioners to "re-education through labor" in an effort to have them renounce their beliefs and "transform" their thoughts. Terms were also arbitrarily extended by police, while some practitioners had ambiguous charges levied against them, such as "disrupting social order", "endangering national security", or "subverting the socialist system". According to Bejesky, the majority of long-term Falun Gong detainees are processed administratively through this system instead of the criminal justice system. Upon completion of their re-education sentences, those practitioners who refused to recant were then incarcerated in "legal education centers" set up by provincial authorities to "transform minds". Much of the conversion program relied on Mao-style techniques of indoctrination and thought reform, in which Falun Gong practitioners were organized to view anti-Falun Gong television programs and enrolled in study sessions whose core content was traditional Marxism and materialism. The government-sponsored image of the conversion process emphasizes psychological persuasion and a variety of "soft-sell" techniques; this is the "ideal norm" in regime reports, according to Tong. Falun Gong reports, on the other hand, depict "disturbing and sinister" forms of coercion against practitioners who fail to renounce their beliefs.
Among them are cases of severe beatings; psychological torment, corporal punishment, forced intense heavy labor, and stress positions; solitary confinement in squalid conditions; "heat treatment" including burning and freezing; electric shocks delivered to sensitive parts of the body that may result in nausea, convulsions, or fainting; "devastative" forced feeding; sticking bamboo strips into fingernails; deprivation of food, sleep, and use of the toilet; rape and gang rape; asphyxiation; and threat, extortion, and termination of employment and student status. The cases appear verifiable, and the great majority identify (1) the individual practitioner, often with age, occupation, and residence; (2) the time and location at which the alleged abuse took place, down to the level of the district, township, village, and often the specific jail institution; and (3) the names and ranks of the alleged perpetrators. Many such reports include lists of the names of witnesses and descriptions of injuries, Tong says. The publication of "persistent abusive, often brutal behavior by named individuals with their official title, place, and time of torture" suggests that there is no official will to cease and desist such activities. Deaths Due to the difficulty in corroborating reports of torture deaths in China, estimates of the number of Falun Gong practitioners who have been killed as a result of the persecution vary widely. In 2009, The New York Times reported that, according to human rights groups, the repressions had claimed "at least 2,000" lives. Amnesty International said at least 100 Falun Gong practitioners had reportedly died in the 2008 calendar year, either in custody or shortly after their release. Based on extensive interviews, investigative journalist Ethan Gutmann estimated that 65,000 Falun Gong practitioners were killed for their organs from 2000 to 2008, while researchers David Kilgour and David Matas reported that "the source of 41,500 transplants for the six-year period 2000 to 2005 is unexplained". Chinese authorities do not publish statistics on Falun Gong practitioners killed amidst the crackdown. In individual cases, however, authorities have denied that deaths in custody were due to torture. Organ harvesting allegations In 2006, allegations emerged that a large number of Falun Gong practitioners had been killed to supply China's organ transplant industry. These allegations prompted an investigation by former Canadian Secretary of State David Kilgour and human rights lawyer David Matas. The Kilgour–Matas report was published in July 2006, and concluded that "the government of China and its agencies in numerous parts of the country, in particular hospitals but also detention centers and 'people's courts', since 1999 have put to death a large but unknown number of Falun Gong prisoners of conscience." The report, which was based mainly on circumstantial evidence, called attention to the extremely short wait times for organs in China—one to two weeks for a liver compared with 32.5 months in Canada—arguing that this was indicative of organs being procured on demand. It also tracked a significant increase in the number of annual organ transplants in China beginning in 1999, corresponding with the onset of the persecution of Falun Gong. Despite very low levels of voluntary organ donation, China performs the second-highest number of transplants per year.
Kilgour and Matas also presented self-accusatory material from Chinese transplant center websites advertising the immediate availability of organs from living donors, and transcripts of interviews in which hospitals told prospective transplant recipients that they could obtain Falun Gong organs. In May 2008, two United Nations Special Rapporteurs reiterated requests for the Chinese authorities to respond to the allegations and to explain a source for the organs that would account for the sudden increase in organ transplants in China since 2000. Chinese officials have responded by denying the organ harvesting allegations and insisting that China abides by World Health Organization principles that prohibit the sale of human organs without written consent from donors. Responding to a U.S. House of Representatives resolution calling for an end to abusive transplant practices against religious and ethnic minorities, a Chinese embassy spokesperson said "the so-called organ harvesting from death-row prisoners is totally a lie fabricated by Falun Gong." In August 2009, Manfred Nowak, the United Nations Special Rapporteur on Torture, said, "The Chinese government has yet to come clean and be transparent... It remains to be seen how it could be possible that organ transplant surgeries in Chinese hospitals have risen massively since 1999, while there are never that many voluntary donors available." In 2014, investigative journalist Ethan Gutmann published the results of his own investigation. Gutmann conducted extensive interviews with former detainees in Chinese labor camps and prisons, as well as former security officers and medical professionals with knowledge of China's transplant practices. He reported that organ harvesting from political prisoners likely began in Xinjiang province in the 1990s, and then spread nationwide. Gutmann estimates that some 64,000 Falun Gong prisoners may have been killed for their organs between 2000 and 2008. In a 2016 update, David Kilgour concluded that his earlier report had underestimated the scale of the practice, putting the volume of organs harvested since the persecution of Falun Gong began at 150,000 to 200,000. Media outlets have extrapolated from this study a death toll of 1.5 million. Ethan Gutmann estimated from this update that 60,000 to 110,000 organs are harvested in China annually, observing (in paraphrase) that it is "difficult but plausible to harvest 3 organs from a single body", and calling the harvest "a new form of genocide using the most respected members of society." In June 2019, the China Tribunal—an independent tribunal set up by the International Coalition to End Transplant Abuse in China—concluded that detainees, including imprisoned followers of the Falun Gong movement, are still being killed for organ harvesting. The Tribunal, chaired by Sir Geoffrey Nice QC, said it was "certain that Falun Gong [was] a source—probably the principal source—of organs for forced organ harvesting". In June 2021, the Special Procedures of the United Nations Human Rights Council voiced concerns over having "received credible information that detainees from ethnic, linguistic or religious minorities may be forcibly subjected to blood tests and organ examinations such as ultrasound and x-rays, without their informed consent; while other prisoners are not required to undergo such examinations."
The press release stated that the UN's human rights experts "were extremely alarmed by reports of alleged 'organ harvesting' targeting minorities, including Falun Gong practitioners, Uyghurs, Tibetans, Muslims and Christians, in detention in China." Media campaign The Chinese government's campaign against Falun Gong was driven by large-scale propaganda through television, newspapers, radio, and the internet. The propaganda campaign focused on allegations that Falun Gong jeopardized social stability, was deceptive and dangerous, and was anti-science and threatened progress; it also argued that Falun Gong's moral philosophy was incompatible with a Marxist social ethic. China scholars Daniel Wright and Joseph Fewsmith stated that for several months after Falun Gong was outlawed, China Central Television's evening news contained little but anti-Falun Gong rhetoric; the government operation was "a study in all-out demonization", they wrote. Falun Gong was compared to "a rat crossing the street that everyone shouts out to squash" by Beijing Daily; other officials said it would be a "long-term, complex and serious" struggle to "eradicate" Falun Gong. State propaganda initially used the appeal of scientific rationalism to argue that Falun Gong's worldview was in "complete opposition to science" and communism. For example, the People's Daily asserted on 27 July 1999 that the fight against Falun Gong "was a struggle between theism and atheism, superstition and science, idealism and materialism." Other editorials declared that Falun Gong's "idealism and theism" are "absolutely contradictory to the fundamental theories and principles of Marxism", and that "the 'truth, kindness and forbearance' principle preached by [Falun Gong] has nothing in common with the socialist ethical and cultural progress we are striving to achieve." Suppressing Falun Gong was presented as a necessary step to maintaining the "vanguard role" of the CCP in Chinese society. Despite Party efforts, the initial charges leveled against Falun Gong failed to elicit widespread popular support for the persecution of the group. In the months following July 1999, the rhetoric in the state-run press escalated to include charges that Falun Gong was colluding with foreign, "anti-China" forces. In October 1999, three months after the persecution began, the People's Daily newspaper declared Falun Gong a xiejiao (邪教). A direct translation of that term is "heretical teaching", but during the anti-Falun Gong propaganda campaign it was rendered as "evil cult" in English. According to a Washington Post report, it was Jiang Zemin who issued the order to label Falun Gong a "cult". In Mainland China, the term xiejiao has been used to target religious organizations that do not submit to Communist Party authority. Ian Johnson argued that applying the 'cult' label to Falun Gong effectively "cloaked the government's crackdown with the legitimacy of the West's anticult movement." He wrote that Falun Gong does not satisfy common definitions of a cult: "its members marry outside the group, have outside friends, hold normal jobs, do not live isolated from society, do not believe that the world's end is imminent and do not give significant amounts of money to the organisation... it does not advocate violence and is at heart an apolitical, inward-oriented discipline, one aimed at cleansing oneself spiritually and improving one's health."
David Ownby similarly wrote that "the entire issue of the supposed cultic nature of Falun Gong was a red herring from the beginning, cleverly exploited by the Chinese state to blunt the appeal of Falun Gong". According to John Powers and Meg Y. M. Lee, because Falun Gong was categorized in popular perception as an "apolitical, qigong exercise club", it was not seen as a threat to the government. The most critical strategy in the suppression campaign, therefore, was to convince people to reclassify Falun Gong under a number of "negatively charged religious labels", like "evil cult", "sect", or "superstition". The group's silent protests were reclassified as creating "social disturbances". In this process of relabelling, the government was attempting to tap into a "deep reservoir of negative feelings related to the historical role of quasi-religious cults as a destabilising force in Chinese political history." A turning point in the propaganda campaign came on the eve of Chinese New Year on 23 January 2001, when five people attempted to set themselves ablaze on Tiananmen Square. The official Chinese press agency, Xinhua News Agency, and other state media asserted that the self-immolators were practitioners, though the Falun Dafa Information Center disputed this on the grounds that the movement's teachings explicitly forbid suicide and killing, further alleging that the event was "a cruel (but clever) piece of stunt-work." The incident received international news coverage, and video footage of the burnings was broadcast later inside China by China Central Television (CCTV). The broadcasts showed images of a 12-year-old girl, Liu Siying, burning, and interviews with the other participants in which they stated a belief that self-immolation would lead them to paradise. However, one of the CNN producers on the scene reported not having seen a child there. Falun Gong sources and other commentators pointed out that the main participants' account of the incident and other aspects of the participants' behavior were inconsistent with Falun Gong's teachings. Media Channel and the International Education Development (IED) agree that the supposed self-immolation incident was staged by the CCP to "prove" that Falun Gong brainwashes its followers into committing suicide and therefore had to be banned as a threat to the nation. IED's statement at the 53rd UN session describes China's violent assault on Falun Gong practitioners as state terrorism and asserts that the self-immolation "was staged by the government." Washington Post journalist Phillip Pan wrote that the two self-immolators who died were not actually Falun Gong practitioners. On 21 March 2001, Liu Siying died suddenly, after having appeared lively and been deemed ready to leave the hospital and go home. Time reported that prior to the self-immolation incident, many Chinese had felt that Falun Gong posed no real threat, and that the state's crackdown had gone too far. After the event, however, the mainland Chinese media campaign against Falun Gong gained significant traction. As public sympathy for Falun Gong declined, the government began sanctioning the "systematic use of violence" against the group. In February 2001, the month following the Tiananmen Square self-immolation incident, Jiang Zemin convened a rare Central Work Conference to stress the importance of continuity in the anti-Falun Gong campaign and unite senior party officials behind the effort.
Under Jiang's leadership, the crackdown on Falun Gong became part of the Chinese political ethos of "upholding stability"—much the same rhetoric employed by the party during the 1989 Tiananmen Square protests and massacre. Jiang's message was echoed at the 2001 National People's Congress, where Falun Gong's eradication was tied to China's economic progress. Though less prominent on the national agenda, the persecution of Falun Gong carried on after Jiang's retirement; successive high-level "strike hard" campaigns against Falun Gong were initiated in both 2008 and 2009. In 2010, a three-year campaign was launched to renew attempts at the coercive "transformation" of Falun Gong practitioners. In the education system Anti-Falun Gong propaganda efforts have also permeated the Chinese education system. Following Jiang Zemin's 1999 ban of Falun Gong, then-Minister of Education Chen Zhili launched an active campaign to promote the Party's line on Falun Gong within all levels of academic institutions, including graduate schools, universities and colleges, middle schools, primary schools, and kindergartens. Her efforts included a "Cultural Revolution-like pledge" in Chinese schools that required faculty members, staff, and students to publicly denounce Falun Gong. Teachers who did not comply with Chen's program were dismissed or detained; uncooperative students were refused academic advancement, expelled from school, or sent to "transformation" camps to alter their thinking. Chen also worked to spread the anti-Falun Gong academic propaganda movement overseas, using domestic educational funding to donate aid to foreign institutions and encouraging them to oppose Falun Gong. Falun Gong's response to the persecution Falun Gong's response to the persecution in China began in July 1999 with appeals to local, provincial, and central petitioning offices in Beijing. It soon progressed to larger demonstrations, with hundreds of Falun Gong practitioners traveling daily to Tiananmen Square to perform Falun Gong exercises or raise banners in defense of the practice. These demonstrations were invariably broken up by security forces, and the practitioners involved were arrested—sometimes violently—and detained. By 25 April 2000, a total of more than 30,000 practitioners had been arrested on the square; seven hundred Falun Gong followers were arrested during a demonstration in the square on 1 January 2001. Public protests continued well into 2001. Writing in the Wall Street Journal, Ian Johnson observed that "Falun Gong faithful have mustered what is arguably the most sustained challenge to authority in 50 years of Communist rule." By late 2001, demonstrations in Tiananmen Square had become less frequent, and the practice was driven deeper underground. As public protest fell out of favor, practitioners established underground "material sites", which produced literature and DVDs to counter the portrayal of Falun Gong in the official media. Practitioners then distributed these materials, often door-to-door. Falun Gong sources estimated in 2009 that over 200,000 such sites existed across China. The production, possession, or distribution of these materials is frequently grounds for security agents to incarcerate or sentence Falun Gong practitioners. In 2002, Falun Gong activists in China tapped into television broadcasts, replacing regular state-run programming with their own content.
One of the more notable instances occurred in March 2002, when Falun Gong practitioners in Changchun intercepted eight cable television networks in Jilin Province and, for nearly an hour, televised a program titled "Self-Immolation or a Staged Act?". The six Falun Gong practitioners involved were captured over the next few months. Two were killed immediately, while the other four had all died by 2010 as a result of injuries sustained while imprisoned. Outside China, Falun Gong practitioners established international media organizations to gain wider exposure for their cause and challenge narratives of the Chinese state-run media. These include The Epoch Times newspaper, New Tang Dynasty Television, and the Sound of Hope radio station. According to Zhao, through The Epoch Times it can be discerned that Falun Gong is building a "de facto media alliance" with China's democracy movements in exile, as demonstrated by its frequent printing of articles by prominent overseas Chinese critics of the PRC government. In 2004, The Epoch Times published a collection of nine editorials that presented a critical history of the Chinese Communist Party. This catalyzed the Tuidang movement, which encourages Chinese citizens to renounce their affiliations to the Chinese Communist Party, including ex post facto renunciations of the Communist Youth League and Young Pioneers. The Epoch Times claims that tens of millions have renounced the Chinese Communist Party as part of the movement, though these numbers have not been independently verified. In 2006, Falun Gong practitioners in the United States formed Shen Yun Performing Arts, a dance and music company that tours internationally. During Shen Yun's 2024 season, the company's eight touring troupes performed over 800 shows on five continents. By 2024, Shen Yun had accumulated $266 million in assets, mainly through ticket sales and by keeping its costs down through numerous volunteer hours and, at times, the personal savings of Falun Gong adherents. Falun Gong software developers in the United States are also responsible for the creation of several popular censorship-circumvention tools employed by internet users in China. Falun Gong practitioners outside China have filed dozens of lawsuits against Jiang Zemin, Luo Gan, Bo Xilai, and other Chinese officials alleging genocide and crimes against humanity. According to International Advocates for Justice, Falun Gong has filed the largest number of human rights lawsuits in the 21st century, and the charges are among the most severe international crimes defined by international criminal law. As of 2006, 54 civil and criminal lawsuits were under way in 33 countries. In many instances, courts have refused to adjudicate the cases on the grounds of sovereign immunity. In late 2009, however, separate courts in Spain and Argentina indicted Jiang Zemin and Luo Gan on charges of crimes against humanity and genocide, and asked for their arrest—the rulings are acknowledged to be largely symbolic and unlikely to be carried out. The court in Spain also indicted Bo Xilai, Jia Qinglin and Wu Guanzheng. Falun Gong practitioners and their supporters also filed a lawsuit in May 2011 against the technology company Cisco Systems, alleging that the company helped design and implement a surveillance system for the Chinese government to suppress Falun Gong. Cisco denied customizing its technology for this purpose. Falun Gong outside China Li Hongzhi began teaching Falun Gong internationally in March 1995.
His first stop was in Paris where, at the invitation of the Chinese ambassador, he held a lecture seminar at the PRC embassy. This was followed by lectures in Sweden in May 1995. Between 1995 and 1999, Li gave lectures in the United States, Canada, Australia, New Zealand, Germany, Switzerland, and Singapore. Falun Gong's growth outside China largely corresponded to the migration of students from mainland China to the West in the early-to-mid-1990s. Falun Gong associations and clubs began appearing in Europe, North America and Australia, with activities centered mainly on university campuses. Translations of Falun Gong teachings began appearing in the late 1990s. As the practice began proliferating outside China, Li Hongzhi was beginning to receive recognition in the United States and elsewhere in the Western world. In May 1999, Li was welcomed to Toronto with greetings from the city's mayor and the provincial lieutenant governor, and in the two months that followed also received recognition from the cities of Chicago and San Jose. Although the practice was beginning to attract an overseas constituency in the 1990s, it remained relatively unknown outside China until the spring of 1999, when tensions between Falun Gong and the CCP became a subject of international media coverage. With the increased attention, the practice gained a greater following outside China. Following the launch of the CCP's suppression campaign against Falun Gong, the overseas presence became vital to the practice's resistance in China and its continued survival. Falun Gong practitioners overseas have responded to the persecution in China through regular demonstrations and parades, and through the creation of media outlets, performing arts companies, and censorship-circumvention software mainly intended to reach mainland Chinese audiences. In its study of transnational repression committed by governments, Freedom House has reported that practitioners of Falun Gong have been targeted by the Chinese government's transnational repression campaign. International reception Since 1999, numerous Western governments and human rights organizations have expressed condemnation of the Chinese government's suppression of Falun Gong. Members of the United States Congress have also made public pronouncements and introduced several resolutions in support of Falun Gong. In 2010, U.S. House of Representatives Resolution 605 called for "an immediate end to the campaign to persecute, intimidate, imprison, and torture Falun Gong practitioners", condemned the Chinese authorities' efforts to distribute "false propaganda" about the practice worldwide, and expressed sympathy to persecuted Falun Gong practitioners and their families. Adam Frank writes that in reporting on Falun Gong, the Western tradition of casting the Chinese as "exotic" predominated, and that while the facts were generally correct in Western media coverage, "the normalcy that millions of Chinese practitioners associated with the practice had all but disappeared." David Ownby wrote that alongside these tactics, the "cult" label applied to Falun Gong by the Chinese authorities never entirely went away in the minds of some Westerners, and the stigma still plays a role in wary public perceptions of Falun Gong. To counter support for Falun Gong in the West, the Chinese government expanded its efforts against the group internationally.
This included visits to newspaper offices by diplomats to "extol the virtues of Communist China and the evils of Falun Gong", linking support for Falun Gong with "jeopardizing trade relations", and sending letters to local politicians telling them to withdraw support for the practice. According to Perry Link, pressure on Western institutions also takes more subtle forms, including academic self-censorship, whereby research on Falun Gong could result in the denial of a visa for fieldwork in China, or exclusion and discrimination by business and community groups that have connections with China and fear angering the Chinese government. Although the persecution of Falun Gong has drawn considerable condemnation outside China, some observers assert that Falun Gong has failed to attract the level of sympathy and sustained attention afforded to other Chinese dissident groups. Katrina Lantos Swett, vice chair of the United States Commission on International Religious Freedom, has said most Americans are aware of the suppression of "Tibetan Buddhists and unregistered Christian groups or pro-democracy and free speech advocates such as Liu Xiaobo and Ai Weiwei", and yet "know little to nothing about China's assault on the Falun Gong". Ethan Gutmann, a journalist reporting on China since the early 1990s, has attempted to explain this apparent dearth of public sympathy for Falun Gong as stemming, in part, from the group's shortcomings in public relations. Unlike the democracy activists or Tibetans, who have found a comfortable place in Western perceptions, "Falun Gong marched to a distinctly Chinese drum", Gutmann writes. Moreover, practitioners' attempts at getting their message across carried some of the uncouthness of Communist Party culture, including a perception that practitioners tended to exaggerate, create "torture tableaux straight out of a Cultural Revolution opera", or "spout slogans rather than facts". This is coupled with a general skepticism in the West toward persecuted refugees. Gutmann also says that media organizations and human rights groups self-censor on the topic, given the PRC government's vehement attitude toward the practice and the potential repercussions that may follow for making overt representations on Falun Gong's behalf. Richard Madsen writes that Falun Gong lacks robust backing from the American constituencies that usually support religious freedom. For instance, Falun Gong's conservative moral beliefs have alienated some liberal constituencies in the West (e.g. its teachings against promiscuity and homosexual behavior). He also states that Christian conservatives have not given Falun Gong the support they give Chinese Christians. Madsen charges that the American political center does not want to push the human rights issue so hard that it would disrupt commercial and political relations with China. Thus, Falun Gong practitioners have largely had to rely on their own resources in responding to suppression. In August 2007, the newly reestablished Rabbinic Sanhedrin deliberated on the persecution of the movement by the Chinese government at the request of Falun Gong. See also Freedom of religion in China Human rights in China List of new religious movements References Bibliography External links Falun Gong Meditation New religious movements Qigong Creationism Pseudoscience Anti-communism in China Right-wing politics in China Conservatism in China
Falun Gong
[ "Biology" ]
17,655
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
55,175
https://en.wikipedia.org/wiki/Biotope
A biotope is an area of uniform environmental conditions providing a living place for a specific assemblage of plants and animals. Biotope is almost synonymous with the term "habitat", which is more commonly used in English-speaking countries. However, in some countries these two terms are distinguished: the subject of a habitat is a population, while the subject of a biotope is a biocoenosis or "biological community". It is an English loanword derived from the German Biotop, which in turn came from the Greek bios (meaning 'life') and topos ('place'). (The related word geotope has made its way into the English language by the same route, from the German Geotop.) Ecology The concept of a biotope was first advocated by Ernst Haeckel (1834–1919), a German zoologist famous for the recapitulation theory. In his book General Morphology (1866), which defines the term "ecology", he stresses the importance of the concept of habitat as a prerequisite for an organism's existence. Haeckel also explains that within an ecosystem, the biota is shaped by environmental factors (such as water, soil, and geographical features) and by interactions among living things; the original idea of a biotope was thus closely related to evolutionary theory. Following this, F. Dahl, a professor at the Berlin Zoological Museum, referred to this ecological system as a "biotope" (Biotop) (1908). Biotope restoration Although the term "biotope" is considered to be a technical word with respect to ecology, in recent years it has also come into more general use in administrative and civic activities. Since the 1970s the term "biotope" has received great attention as a keyword throughout Europe (mainly Germany) for the preservation, regeneration, and creation of natural environmental settings. Used in this context, the term "biotope" often refers to a smaller, more specific ecology that is closely tied to everyday human life. In Germany especially, activities related to regenerating biotopes are enthusiastically received. These activities include: making roof gardens reconstructing rivers to restore their natural qualities leaving bushes or trees on farms building nature parks along motorways making school gardens or ponds by considering the ecosystem bearing in mind ecological considerations in private gardens Various sectors play a part in these activities, including architecture, civil engineering, urban planning, traffic, agriculture, river engineering, limnology, biology, education, landscape gardening, and domestic gardening. In all these fields, people are seeking viable ways for humans to respect and coexist with other living things; used in this sense, the term "biotope" implies a comprehensive environmental approach. Characteristics The following four points are the chief characteristics of biotopes. Microscale A biotope is generally not considered to be a large-scale phenomenon. For example, a biotope might be a neighbouring park, a back garden, a potted plant, a terrarium or a fish tank on a porch. In other words, the biotope is not a macroscopic but a microscopic approach to preserving the ecosystem and biological diversity. Biotopes thus fit into ordinary people's daily activities and lives, with more people being able to take part in biotope creation and continuing management. Biotope networks It is commonly emphasised that biotopes should not be isolated (although there are exceptions, such as manmade closed ecological systems, which are specifically designed for no exchange of materials with the outside world).
Instead, biotopes need to be connected to each other and to the surrounding life; without these connections to life-forms such as animals and plants, biotopes would not work effectively as places in which diverse organisms live. One of the most effective strategies for regenerating biotopes is therefore to plan a stretch of biotopes, not just a single point, along which animals and plants can come and go. (Such an organic traffic course is called a corridor.) In the stretch method, the centre of the network would be large green tracts of land: a forest, natural park, or cemetery. By connecting such parcels of land with smaller biotope areas such as a green belt along the river, small town parks, gardens, or even roadside trees, biotopes can exist in a network. In other words, a biotope is an open, not a closed, system, and networking biotopes is a practicable strategy. Human daily life The term "biotope" does not apply to biosphere reserves, which are completely separate from humans and become the object of human admiration. Instead, it is an active part of human daily life. For example, an ornamental flower bed may be considered a biotope (though a rather small one) since it enhances the experience of daily life. An area that has many functions, such as human living space, and is home to other living things, whether plant or animal, can likewise be considered a biotope. Artificial When artificial items are introduced to a biotope setting, their design and arrangement are of great importance for biotope regeneration. Tree-planting areas with uneven surfaces encourage plants to sprout and small insects to nest. A mat or net made from natural fibres will gradually biodegrade as it is exposed to the weather. So there is no binary opposition between the natural and the artificial in a biotope; rather, such artificial materials are widely used. Germany It is especially characteristic in Germany, the birthplace of the term biotope, that the authorities take the initiative in conserving biotopes, maintaining consistency with urban or rural planning and considering the regions' history and landscape. Legal basis Since 1976, the federal nature protection law, the Bundesnaturschutzgesetz (BNatSchG), has required that wild animals and plants and their communities be protected as part of the ecosystem in the specific diversity that has grown naturally and historically, and that their biotope and other living conditions be protected, preserved, developed, and restored (Number 9, Clause 1, Article 2). The law also requires that certain kinds of biotope that are especially rich in species should not be harmed by development. So there is a law that mandates the protection of biotopes; such legal protections were uncommon at the time. There is also a provincial law corresponding to the federal one. Landscape plan Many German states are obliged by law to produce a landscape plan (Landschaftsplan) as part of their urban planning, though these plans vary somewhat from place to place. The purpose of the Landschaftsplan is to protect the region's environment and landscape. These plans use text and figures to describe the present environmental state and proposed remedies. They consider, for example, the regional lie of the land, climate, wind direction, soil, ground water, type of biotope, distribution of animals and plants, inhabitants' welfare and competition with development projects. Citizen welfare Biotope preservation in cities also emphasises recreation and relaxation for citizens and improving the urban environment.
For example, in the reserve of Karlsruhe in Baden-Württemberg people can cycle on the bike path or walk the dog, although it is forbidden to gather plants and animals there or walk in the exclusion zone. At the core of biotope preservation is the idea that civic life is improved when it is surrounded by a rich profusion of nature rooted in local history and culture, and that protecting nature and preserving the landscape serve that end. Aquaria The term "biotope" is also often used by aquarium hobbyists to describe an aquarium setup that tries to simulate the natural habitat of a specific assemblage of fish. The idea is to replicate conditions such as water parameters, natural plants, substrate, water type (fresh, saline or brackish), and lighting, and to include other native fish which usually live together in nature and, as such, represent a particular real-world biotope. An example of one South American biotope type might be "Forest creek tributary of Rio Negro near Barcelos, Brazil" with many branches, twigs, roots, dead leaves, light sandy substrate, tannin-stained water and subdued lighting with floating plants, along with Nannostomus eques, Paracheirodon axelrodi, Hemigrammus bleheri, and Dicrossus filamentosus. "South American" is not itself a biotope, as South America contains thousands of distinct biotopes in different regions. Artificial closed ecological systems The term "biotope" can also be used to describe manmade closed ecological systems, occasionally referred to as CES. Examples of these include the Biosphere 2 project and, to a lesser degree, the Eden Project, which contain areas of uniform environmental conditions and house numerous species of plants, animals and fungi; therefore, these can be considered biotopes. Homemade closed ecological systems, often incorrectly referred to as ecospheres (after the commercial product known as the EcoSphere) or as jarrariums, also fall under the definition of biotope. These homemade ecosystems (which also include closed terrariums) are typically assembled in jars (hence the name jarrarium) or sealed glass tanks with the intention of mimicking a larger ecosystem. They are often made by collecting material (including soil, plants, small insects and, for aquatic ecosystems, water) from an existing ecosystem and sealing it in an airtight container. Hobbyists build these closed ecosystems because they enjoy the idea of having an entire ecosystem on their windowsill, or because they are interested in studying the viability of small-scale, closed-loop ecological systems for the purpose of potentially creating life-support systems. See also Dieter Duhm Ecological land classification Ecotope Geotope Microclimate Biotopes of national importance in Switzerland References External links CORINE Biotopes Abstract on-line at EEA of CORINE Biotypes – The design, compilation and use of an inventory of sites of major importance for nature conservation in the European Community CORINE Biotopes Manual – Habitats of the European Community German Biotopes (www.biolflor.de) Swiss Plant Biotopes MarLIN The Marine Life Information Network for Britain & Ireland Biotope Aquariums at Badman's Tropical Fish Ecology terminology Ecosystems Environmental soil science Habitat Fishkeeping
Biotope
[ "Biology", "Environmental_science" ]
2,120
[ "Ecology terminology", "Environmental soil science", "Symbiosis", "Ecosystems" ]
55,182
https://en.wikipedia.org/wiki/Jakarta%20Project
The Jakarta Project created and maintained open source software for the Java platform. It operated as an umbrella project under the auspices of the Apache Software Foundation, and all Jakarta products were released under the Apache License. On December 21, 2011, the Jakarta project was retired because no subprojects remained. In 2018, Jakarta EE, a part of the Eclipse Enterprise for Java (EE4J) project, became the new name for the Java EE platform at the Eclipse Foundation. Subprojects Major contributions by the Jakarta Project include tools, libraries and frameworks such as: BCEL - a Java byte code manipulation library BSF - a scripting framework Cactus - a unit testing framework for server-side Java classes Apache JMeter - a load- and stress-testing tool Slide - a content repository primarily using WebDAV The following projects were formerly part of Jakarta, but now form independent projects within the Apache Software Foundation: Ant - a build tool Commons - a collection of useful classes intended to complement Java's standard library (illustrated in the sketch at the end of this entry) HiveMind - a services and configuration microkernel Maven - a project build and management tool POI - a pure Java implementation of Microsoft's popular file formats Struts - a web application development framework Tapestry - a component object model based on JavaBeans properties and strong specifications Tomcat - a JSP/Servlet container Turbine - a rapid development web application framework Velocity - a template engine Project name Jakarta is named after the conference room at Sun Microsystems where the majority of discussions leading to the project's creation took place. At the time, Sun's Java software division was headquartered in a Cupertino building where the conference room names were all coffee references. References External links The Jakarta home page Java platform Jakarta
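To give a concrete sense of what a Jakarta-era library offered, the following is a minimal, hypothetical sketch of typical Commons usage; it is not from the original article. It assumes the Commons Lang artifact is on the classpath; the package name shown (org.apache.commons.lang3) is from the later Commons Lang 3 releases, while Jakarta-era versions used org.apache.commons.lang.

import org.apache.commons.lang3.StringUtils;

public class CommonsDemo {
    public static void main(String[] args) {
        String raw = "  jakarta  ";

        // isBlank handles null, empty, and whitespace-only strings in one call
        if (!StringUtils.isBlank(raw)) {
            // trim the input and capitalize its first letter
            String name = StringUtils.capitalize(raw.trim());
            System.out.println(name); // prints "Jakarta"
        }

        // join an array with a separator, without manual loop bookkeeping
        System.out.println(StringUtils.join(new String[] {"Ant", "Tomcat", "Velocity"}, ", "));
    }
}

Utility classes of this kind predate equivalent conveniences in the Java standard library, which is one reason Commons remains among the most widely used Apache projects.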
Jakarta Project
[ "Technology" ]
355
[ "Computing platforms", "Java platform" ]
55,184
https://en.wikipedia.org/wiki/Hop%20%28telecommunications%29
In telecommunications, a hop is a portion of a signal's journey from source to receiver. Examples include: The excursion of a radio wave from the Earth to the ionosphere and back to the Earth. The number of hops indicates the number of reflections from the ionosphere. A similar excursion from an earth station to a communications satellite to another station, counted similarly except that if the return trip is not by satellite, then it is only a half hop. In computer networks, a hop is the step from one network segment to the next. References Telecommunications engineering Radio frequency propagation
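To make the computer-network sense of "hop" concrete, the following hypothetical Java sketch (not from the original article) models how a packet's time-to-live (TTL) counter is decremented once per hop, which is how IP routers bound the length of a packet's journey:

public class HopDemo {
    // Simulate a packet crossing a path of routers; each forwarding step is one hop.
    static int hopsTraveled(int ttl, int pathLength) {
        int hops = 0;
        while (ttl > 0 && hops < pathLength) {
            ttl--;   // each router decrements TTL by one
            hops++;  // forwarding to the next network segment is one hop
        }
        return hops; // the packet is dropped if TTL reaches zero mid-path
    }

    public static void main(String[] args) {
        System.out.println(hopsTraveled(64, 12)); // 12: packet reaches its destination
        System.out.println(hopsTraveled(3, 12));  // 3: TTL expired after three hops
    }
}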
Hop (telecommunications)
[ "Physics", "Engineering" ]
117
[ "Physical phenomena", "Telecommunications engineering", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves", "Electrical engineering" ]
55,188
https://en.wikipedia.org/wiki/Barbara%20McClintock
Barbara McClintock (June 16, 1902 – September 2, 1992) was an American scientist and cytogeneticist who was awarded the 1983 Nobel Prize in Physiology or Medicine. McClintock received her PhD in botany from Cornell University in 1927. There she started her career as a leader in the development of maize cytogenetics, the focus of her research for the rest of her life. From the late 1920s, McClintock studied chromosomes and how they change during reproduction in maize. She developed the technique for visualizing maize chromosomes and used microscopic analysis to demonstrate many fundamental genetic ideas. One of those ideas was the notion of genetic recombination by crossing-over during meiosis—a mechanism by which chromosomes exchange information. She produced the first genetic map for maize, linking regions of the chromosome to physical traits. She demonstrated the role of the telomere and centromere, regions of the chromosome that are important in the conservation of genetic information. She was recognized as among the best in the field, awarded prestigious fellowships, and elected a member of the National Academy of Sciences in 1944. During the 1940s and 1950s, McClintock discovered transposons and used them to demonstrate that genes are responsible for turning physical characteristics on and off. She developed theories to explain the suppression and expression of genetic information from one generation of maize plants to the next. Due to skepticism of her research and its implications, she stopped publishing her data in 1953. Later, she made an extensive study of the cytogenetics and ethnobotany of maize races from South America. McClintock's research became well understood in the 1960s and 1970s, as other scientists confirmed the mechanisms of genetic change and protein expression that she had demonstrated in her maize research in the 1940s and 1950s. Awards and recognition for her contributions to the field followed, including the Nobel Prize in Physiology or Medicine, awarded to her in 1983 for the discovery of genetic transposition; as of 2023, she remains the only woman to have received an unshared Nobel Prize in that category. Early life Barbara McClintock was born Eleanor McClintock on June 16, 1902, in Hartford, Connecticut, the third of four children born to homeopathic physician Thomas Henry McClintock and Sara Handy McClintock. Thomas McClintock was the child of British immigrants. Marjorie, the oldest child, was born in October 1898; Mignon, the second daughter, was born in November 1900. The youngest, Malcolm Rider (called Tom), was born 18 months after Barbara. When she was a young girl, her parents determined that Eleanor, a "feminine" and "delicate" name, was not appropriate for her, and chose Barbara instead. McClintock was an independent child from a very young age, a trait she later identified as her "capacity to be alone". From the age of three until she began school, McClintock lived with an aunt and uncle in Brooklyn, New York, in order to reduce the financial burden on her parents while her father established his medical practice. She was described as a solitary and independent child. She was close to her father, but had a difficult relationship with her mother, a tension that began when she was young. The McClintock family moved to Brooklyn in 1908 and McClintock completed her secondary education there at Erasmus Hall High School; she graduated early in 1919. She discovered her love of science and reaffirmed her solitary personality during high school.
She wanted to continue her studies at Cornell University's College of Agriculture. Her mother resisted sending McClintock to college for fear that she would be unmarriageable, a common attitude at the time. McClintock was almost prevented from starting college, but her father allowed her to enroll just before registration began, and she matriculated at Cornell in 1919. Education and research at Cornell McClintock began her studies at Cornell's College of Agriculture in 1919. There, she participated in student government and was invited to join a sorority, though she soon realized that she preferred not to join formal organizations. Instead, McClintock took up music, specifically jazz. She studied botany, receiving a BSc in 1923. Her interest in genetics began when she took her first course in that field in 1921. The course was based on a similar one offered at Harvard University, and was taught by C. B. Hutchison, a plant breeder and geneticist. Hutchison was impressed by McClintock's interest, and telephoned to invite her to participate in the graduate genetics course at Cornell in 1922. McClintock pointed to Hutchison's invitation as a catalyst for her interest in genetics: "Obviously, this telephone call cast the die for my future. I remained with genetics thereafter." Although it has been reported that women could not major in genetics at Cornell, and therefore her MS and PhD—earned in 1925 and 1927, respectively—were officially awarded in botany, recent research has revealed that women were permitted to earn graduate degrees in Cornell's Plant Breeding Department during the time that McClintock was a student at Cornell. During her graduate studies and postgraduate appointment as a botany instructor, McClintock was instrumental in assembling a group that studied the new field of cytogenetics in maize. This group brought together plant breeders and cytologists, and included Marcus Rhoades, future Nobel laureate George Beadle, and Harriet Creighton. Rollins A. Emerson, head of the Plant Breeding Department, supported these efforts, although he was not a cytologist himself. She also worked as a research assistant for Lowell Fitz Randolph and then for Lester W. Sharp, both Cornell botanists. McClintock's cytogenetic research focused on developing ways to visualize and characterize maize chromosomes. This particular part of her work influenced a generation of students, as it was included in most textbooks. She also developed a technique using carmine staining to visualize maize chromosomes, and showed for the first time the morphology of the 10 maize chromosomes. This discovery was made possible because she observed cells from the microspore as opposed to the root tip. By studying the morphology of the chromosomes, McClintock was able to link specific chromosomes with groups of traits that were inherited together. Marcus Rhoades noted that McClintock's 1929 Genetics paper on the characterization of triploid maize chromosomes triggered scientific interest in maize cytogenetics, and attributed to her 10 of the 17 significant advances in the field that were made by Cornell scientists between 1929 and 1935. In 1930, McClintock was the first person to describe the cross-shaped interaction of homologous chromosomes during meiosis. The following year, McClintock and Creighton proved the link between chromosomal crossover during meiosis and the recombination of genetic traits. They observed how the recombination of chromosomes seen under a microscope correlated with new traits.
Until this point, it had only been hypothesized that genetic recombination could occur during meiosis; it had not been demonstrated genetically. McClintock published the first genetic map for maize in 1931, showing the order of three genes on maize chromosome 9. This information provided necessary data for the crossing-over study she published with Creighton; they also showed that crossing-over occurs in sister chromatids as well as homologous chromosomes. In 1938, she produced a cytogenetic analysis of the centromere, describing the organization and function of the centromere, as well as the fact that it can divide. McClintock's breakthrough publications, and support from her colleagues, led to her being awarded several postdoctoral fellowships from the National Research Council. This funding allowed her to continue to study genetics at Cornell, the University of Missouri, and the California Institute of Technology, where she worked with E. G. Anderson. During the summers of 1931 and 1932, she worked at the University of Missouri with geneticist Lewis Stadler, who introduced her to the use of X-rays as a mutagen. Exposure to X-rays can increase the rate of mutation above the natural background level, making it a powerful research tool for genetics. Through her work with X-ray-mutagenized maize, she identified ring chromosomes, which form when the ends of a single chromosome fuse together after radiation damage. From this evidence, McClintock hypothesized that there must be a structure on the chromosome tip that would normally ensure stability. She showed that, in the generations following irradiation, the loss of ring chromosomes at meiosis caused variegation in maize foliage as a result of chromosomal deletion. During this period, she demonstrated the presence of the nucleolus organizer region on maize chromosome 6, which is required for the assembly of the nucleolus. In 1933, she established that cells can be damaged when nonhomologous recombination occurs. During this same period, McClintock hypothesized that the tips of chromosomes are protected by telomeres. McClintock received a fellowship from the Guggenheim Foundation that made possible six months of training in Germany during 1933 and 1934. She had planned to work with Curt Stern, who had demonstrated crossing-over in Drosophila just weeks after McClintock and Creighton had done so; however, Stern emigrated to the United States. Instead, she worked with geneticist Richard B. Goldschmidt, who was a director of the Kaiser Wilhelm Institute for Biology in Berlin. She left Germany early amidst mounting political tension in Europe and returned to Cornell, where she found that the university would not hire a woman professor. In 1936, she accepted an assistant professorship offered to her by Lewis Stadler in the Department of Botany at the University of Missouri in Columbia. While still at Cornell, she had been supported by a two-year Rockefeller Foundation grant obtained for her through Emerson's efforts. University of Missouri During her time at Missouri, McClintock expanded her research on the effect of X-rays on maize cytogenetics. McClintock observed the breakage and fusion of chromosomes in irradiated maize cells. She was also able to show that, in some plants, spontaneous chromosome breakage occurred in the cells of the endosperm. Over the course of mitosis, she observed that the ends of broken chromatids were rejoined after the chromosome replication.
In the anaphase of mitosis, the broken chromosomes formed a chromatid bridge, which was broken when the chromatids moved towards the cell poles. The broken ends were rejoined in the interphase of the next mitosis, and the cycle was repeated, causing massive mutation, which she could detect as variegation in the endosperm. This breakage–fusion–bridge cycle was a key cytogenetic discovery for several reasons. First, it showed that the rejoining of chromosomes was not a random event, and second, it demonstrated a source of large-scale mutation. For this reason, it remains an area of interest in cancer research today. Although her research was progressing at Missouri, McClintock was not satisfied with her position at the university. She recalled being excluded from faculty meetings, and was not made aware of positions available at other institutions. In 1940, she wrote to Charles Burnham, "I have decided that I must look for another job. As far as I can make out, there is nothing more for me here. I am an assistant professor at $3,000 and I feel sure that that is the limit for me." Initially, McClintock's position had been created especially for her by Stadler, and might have depended on his presence at the university. McClintock believed she would not gain tenure at Missouri, even though, according to some accounts, she knew she would be offered a promotion from Missouri in the spring of 1942. Recent evidence reveals that McClintock more likely decided to leave Missouri because she had lost trust in her employer and in the university administration, after discovering that her job would be in jeopardy if Stadler were to leave for Caltech, as he had considered doing. The university's retaliation against Stadler amplified her sentiments. In early 1941, she took a leave of absence from Missouri in hopes of finding a position elsewhere. She accepted a visiting professorship at Columbia University, where her former Cornell colleague Marcus Rhoades was a professor. Rhoades also offered to share his research field at Cold Spring Harbor on Long Island. In December 1941, she was offered a research position by Milislav Demerec, the newly appointed acting director of the Carnegie Institution of Washington's Department of Genetics at Cold Spring Harbor Laboratory; McClintock accepted his invitation despite her qualms and became a permanent member of the faculty. Cold Spring Harbor After her year-long temporary appointment, McClintock accepted a full-time research position at Cold Spring Harbor Laboratory. There, she was highly productive and continued her work with the breakage–fusion–bridge cycle, using it to substitute for X-rays as a tool for mapping new genes. In 1944, in recognition of her prominence in the field of genetics during this period, McClintock was elected to the National Academy of Sciences—only the third woman to be elected. The following year she became the first female president of the Genetics Society of America; she had been elected its vice-president in 1939. In 1944 she undertook a cytogenetic analysis of Neurospora crassa at the suggestion of George Beadle, who used the fungus to demonstrate the one gene–one enzyme relationship. He invited her to Stanford to undertake the study. She successfully described the number of chromosomes, or karyotype, of N. crassa and described the entire life cycle of the species.
Beadle said, "Barbara, in two months at Stanford, did more to clean up the cytology of Neurospora than all other cytological geneticists had done in all previous time on all forms of mold." N. crassa has since become a model species for classical genetic analysis. Discovery of controlling elements In the summer of 1944 at Cold Spring Harbor Laboratory, McClintock began systematic studies on the mechanisms of the mosaic color patterns of maize seed and the unstable inheritance of this mosaicism. She identified two new dominant and interacting genetic loci that she named Dissociation (Ds) and Activator (Ac). She found that the Dissociation did not just dissociate or cause the chromosome to break, it also had a variety of effects on neighboring genes when the Activator was also present, which included making certain stable mutations unstable. In early 1948, she made the surprising discovery that both Dissociation and Activator could transpose, or change position, on the chromosome. She observed the effects of the transposition of Ac and Ds by the changing patterns of coloration in maize kernels over generations of controlled crosses, and described the relationship between the two loci through intricate microscopic analysis. She concluded that Ac controls the transposition of the Ds from chromosome 9, and that the movement of Ds is accompanied by the breakage of the chromosome. When Ds moves, the aleurone-color gene is released from the suppressing effect of the Ds and transformed into the active form, which initiates the pigment synthesis in cells. The transposition of Ds in different cells is random, it may move in some but not others, which causes color mosaicism. The size of the colored spot on the seed is determined by stage of the seed development during dissociation. McClintock also found that the transposition of Ds is determined by the number of Ac copies in the cell. Between 1948 and 1950, she developed a theory by which these mobile elements regulated the genes by inhibiting or modulating their action. She referred to Dissociation and Activator as "controlling units"—later, as "controlling elements"—to distinguish them from genes. She hypothesized that gene regulation could explain how complex multicellular organisms made of cells with identical genomes have cells of different function. McClintock's discovery challenged the concept of the genome as a static set of instructions passed between generations. In 1950, she reported her work on Ac/Ds and her ideas about gene regulation in a paper entitled "The origin and behavior of mutable loci in maize" published in the journal Proceedings of the National Academy of Sciences. In summer 1951, she reported her work on the origin and behavior of mutable loci in maize at the annual symposium at Cold Spring Harbor Laboratory, presenting a paper of the same name. The paper delved into the instability caused by Ds and Ac or just Ac in four genes, along with the tendency of those genes to unpredictably revert to the wild phenotype. She also identified "families" of transposons, which did not interact with one another. Her work on controlling elements and gene regulation was conceptually difficult and was not immediately understood or accepted by her contemporaries; she described the reception of her research as "puzzlement, even hostility". Nevertheless, McClintock continued to develop her ideas on controlling elements. 
She published a paper in Genetics in 1953, in which she presented all her statistical data, and undertook lecture tours to universities throughout the 1950s to speak about her work. She continued to investigate the problem and identified a new element that she called Suppressor-mutator (Spm), which, although similar to Ac/Ds, acts in a more complex manner. Like Ac/Ds, some versions could transpose on their own and some could not; unlike Ac/Ds, when present, it fully suppressed the expression of mutant genes that would normally not be entirely suppressed. Based on the reactions of other scientists to her work, McClintock felt she risked alienating the scientific mainstream, and from 1953 stopped publishing accounts of her research on controlling elements. The origins of maize In 1957, McClintock received funding from the National Academy of Sciences to start research on indigenous strains of maize in Central America and South America. She was interested in studying the evolution of maize through chromosomal changes, and being in South America would allow her to work on a larger scale. McClintock explored the chromosomal, morphological, and evolutionary characteristics of various races of maize. After extensive work in the 1960s and 1970s, McClintock and her collaborators published the seminal study The Chromosomal Constitution of Races of Maize, leaving their mark on paleobotany, ethnobotany, and evolutionary biology. Rediscovery McClintock officially retired from her position at the Carnegie Institution in 1967, and was made a Distinguished Service Member of the Carnegie Institution of Washington. This honor allowed her to continue working with graduate students and colleagues at Cold Spring Harbor Laboratory as scientist emerita; she continued to live in the town. In 1973, she reflected in writing on her decision 20 years earlier to stop publishing detailed accounts of her work on controlling elements. The importance of McClintock's contributions was revealed in the 1960s, when the work of French geneticists François Jacob and Jacques Monod described the genetic regulation of the lac operon, a concept she had demonstrated with Ac/Ds in 1951. Following Jacob and Monod's 1961 Journal of Molecular Biology paper "Genetic regulatory mechanisms in the synthesis of proteins", McClintock wrote an article for American Naturalist comparing the lac operon and her work on controlling elements in maize. Even late in the twentieth century, McClintock's contribution to biology was still not widely acknowledged as amounting to the discovery of genetic regulation. McClintock was widely credited with discovering transposition after other researchers finally discovered the process in bacteria, yeast, and bacteriophages in the late 1960s and early 1970s. During this period, molecular biology had developed significant new technology, and scientists were able to show the molecular basis for transposition. In the 1970s, Ac and Ds were cloned by other scientists and were shown to be class II transposons. Ac is a complete transposon that can produce a functional transposase, which is required for the element to move within the genome. Ds has a mutation in its transposase gene, which means that it cannot move without another source of transposase. Thus, as McClintock observed, Ds cannot move in the absence of Ac. Spm has also been characterized as a transposon.
Subsequent research has shown that transposons typically do not move unless the cell is placed under stress, such as by irradiation or the breakage–fusion–bridge cycle, and thus their activation during stress can serve as a source of genetic variation for evolution. McClintock understood the role of transposons in evolution and genome change well before other researchers grasped the concept. Today, Ac/Ds is used as a tool in plant biology to generate mutant plants for the characterization of gene function. Honors and recognition In 1947, McClintock received the Achievement Award from the American Association of University Women. She was elected a Fellow of the American Academy of Arts and Sciences in 1959. In 1967, McClintock was awarded the Kimber Genetics Award; three years later, in 1970, she was given the National Medal of Science by Richard Nixon. She was the first woman to be awarded the National Medal of Science. Cold Spring Harbor named a building in her honor in 1973. She received the Louis and Bert Freedman Foundation Award and the Lewis S. Rosenstiel Award in 1978. In 1981, she became the first recipient of the MacArthur Foundation Grant, and was awarded the Albert Lasker Award for Basic Medical Research, the Wolf Prize in Medicine and the Thomas Hunt Morgan Medal by the Genetics Society of America. In 1982, she was awarded the Louisa Gross Horwitz Prize from Columbia University for her research in the "evolution of genetic information and the control of its expression." Most notably, she received the Nobel Prize for Physiology or Medicine in 1983, the first woman to win that prize unshared, and the first American woman to win any unshared Nobel Prize in the sciences. It was given to her by the Nobel Foundation for discovering "mobile genetic elements"; this was more than 30 years after she had first described the phenomenon of controlling elements. The Swedish Academy of Sciences compared her scientific career to that of Gregor Mendel when awarding her the Prize. She was elected a Foreign Member of the Royal Society (ForMemRS) in 1989. McClintock received the Benjamin Franklin Medal for Distinguished Achievement in the Sciences of the American Philosophical Society in 1993. She had previously been elected to the APS in 1946. She was awarded 14 Honorary Doctor of Science degrees and an Honorary Doctor of Humane Letters. In 1986 she was inducted into the National Women's Hall of Fame. During her final years, McClintock led a more public life, especially after Evelyn Fox Keller's 1983 biography of her, A Feeling for the Organism, brought McClintock's story to the public. She remained a regular presence in the Cold Spring Harbor community, and gave talks on mobile genetic elements and the history of genetics research for the benefit of junior scientists. An anthology of her 43 publications, The Discovery and Characterization of Transposable Elements: The Collected Papers of Barbara McClintock, was published in 1987. The McClintock Prize is named in her honor. Laureates of the award include David Baulcombe, Detlef Weigel, Robert A. Martienssen, Jeffrey D. Palmer and Susan R. Wessler. In May 2005 the U.S. Postal Service issued a panel of first-class stamps honoring Barbara McClintock, along with Richard Feynman, Josiah Willard Gibbs, and John von Neumann. Later years McClintock spent her later years, after the Nobel Prize, as a key leader and researcher in her field at Cold Spring Harbor Laboratory on Long Island, New York.
McClintock died of natural causes in Huntington, New York, on September 2, 1992, at the age of 90; she never married or had children. Legacy McClintock was the subject of a 1983 biography by physicist Evelyn Fox Keller, titled A Feeling for the Organism. Keller argued that because McClintock felt like an outsider within her field (in part because of her sex), she was able to look at her scientific subjects from a perspective different from the dominant one, leading to several important insights. Keller shows how this led many of her colleagues to reject her ideas and undermine her abilities for many years. For example, when McClintock presented her findings that the genetics of maize did not conform to Mendelian distributions, geneticist Sewall Wright expressed the belief that she did not understand the underlying mathematics of her work, a belief he had also expressed towards other women at the time. In addition, geneticist Lotte Auerbach recounted that Joshua Lederberg returned from a visit to McClintock's lab with the remark: "By God, that woman is either crazy or a genius." As Auerbach recounts, McClintock had thrown Lederberg and his colleagues out after half an hour "because of their arrogance. She was intolerant of arrogance ... She felt she had crossed a desert alone and no one had followed her." In 2001, a second biography, The Tangled Field: Barbara McClintock's Search for the Patterns of Genetic Control by science historian Nathaniel C. Comfort, challenged this narrative. Comfort's biography contests the claim that McClintock was marginalized by other scientists, which he calls the "McClintock Myth" and argues was perpetuated both by McClintock herself and by the earlier Keller biography. Comfort asserts that McClintock was not discriminated against because of her gender, citing that she was well regarded by her professional peers, even in the early years of her career. Many recent biographical works on women in science feature accounts of McClintock's work and experience. She is held up as a role model for girls in such works of children's literature as Edith Hope Fine's Barbara McClintock, Nobel Prize Geneticist, Deborah Heiligman's Barbara McClintock: Alone in Her Field and Mary Kittredge's Barbara McClintock. A more recent biography for young adults by Naomi Pasachoff, Barbara McClintock, Genius of Genetics, provides a new perspective, based on the current literature. On May 4, 2005, the United States Postal Service issued the "American Scientists" commemorative postage stamp series, a set of four 37-cent self-adhesive stamps in several configurations. The scientists depicted were Barbara McClintock, John von Neumann, Josiah Willard Gibbs, and Richard Feynman. McClintock was also featured in a 1989 four-stamp issue from Sweden which illustrated the work of eight Nobel Prize-winning geneticists. A laboratory building at Cold Spring Harbor Laboratory was named for her. A street has been named after her in the new "Adlershof Development Society" science park in Berlin. A 103,835-square-foot residence hall at Cornell University was named for McClintock in 2022. Some of McClintock's personality and scientific achievements are referred to in Jeffrey Eugenides's 2011 novel The Marriage Plot, which tells the story of a yeast geneticist named Leonard who has bipolar disorder. He works at a laboratory loosely based on Cold Spring Harbor.
The character reminiscent of McClintock is a reclusive geneticist at the fictional laboratory, who makes the same discoveries as her factual counterpart. Judith Pratt wrote a play about McClintock, called MAIZE, which was read at Artemesia Theatre in Chicago in 2015 and was produced in Ithaca, NY, the home of Cornell University, in February–March 2018. Key publications McClintock, B., Kato Yamakake, T. A. & Blumenschein, A. (1981). Chromosome constitution of races of maize. Its significance in the interpretation of relationships between races and varieties in the Americas. Chapingo, Mexico: Escuela Nacional de Agricultura, Colegio de Postgraduados. See also Timeline of women in science Citations References Archives and research collections The Barbara McClintock Papers – Profiles in Science, National Library of Medicine. Barbara McClintock Papers, 1927–1991 at the American Philosophical Society External links Cold Spring Harbor Laboratory Archives, Barbara McClintock: A Brief Biographical Sketch Enhancer and Gene Trap Transposon Mutagenesis in Arabidopsis, comprehensive article on the use of Ac/Ds and other transposons for plant mutagenesis Barbara McClintock archive on New Scientist Barbara McClintock on Pnas.org American geneticists American evolutionary biologists Theoretical biologists Women Nobel laureates 1902 births 1992 deaths American women botanists American women evolutionary biologists Women food scientists American women geneticists American women physiologists American Nobel laureates Nobel laureates in Physiology or Medicine Fellows of the American Academy of Arts and Sciences Members of the United States National Academy of Sciences National Medal of Science laureates Recipients of the Albert Lasker Award for Basic Medical Research Wolf Prize in Medicine laureates MacArthur Fellows Foreign members of the Royal Society Cornell University College of Agriculture and Life Sciences alumni Erasmus Hall High School alumni University of Missouri faculty American people of British descent Scientists from Brooklyn Scientists from Columbia, Missouri Scientists from Hartford, Connecticut 20th-century American botanists 20th-century American women scientists Scientists from Missouri Scientists from New York (state) American lecturers Plant geneticists Graduate Women in Science members
Barbara McClintock
[ "Technology" ]
6,049
[ "Women Nobel laureates", "Women in science and technology" ]
55,199
https://en.wikipedia.org/wiki/M%C3%B6ssbauer%20effect
The Mössbauer effect, or recoilless nuclear resonance fluorescence, is a physical phenomenon discovered by Rudolf Mössbauer in 1958. It involves the resonant and recoil-free emission and absorption of gamma radiation by atomic nuclei bound in a solid. Its main application is in Mössbauer spectroscopy. In the Mössbauer effect, a narrow resonance for nuclear gamma emission and absorption results from the momentum of recoil being delivered to a surrounding crystal lattice rather than to the emitting or absorbing nucleus alone. When this occurs, no gamma energy is lost to the kinetic energy of recoiling nuclei at either the emitting or absorbing end of a gamma transition: emission and absorption occur at the same energy, resulting in strong, resonant absorption. History The emission and absorption of X-rays by gases had been observed previously. It was expected that a similar phenomenon would be found for gamma rays, which are created by nuclear transitions (as opposed to X-rays, which are typically produced by electronic transitions). However, attempts to observe nuclear resonance produced by gamma rays in gases failed due to energy being lost to recoil, preventing resonance (the Doppler effect also broadens the gamma-ray spectrum). Mössbauer observed resonance in nuclei of solid iridium, which raised the question of why gamma-ray resonance was possible in solids but not in gases. Mössbauer proposed that, for the case of atoms bound into a solid, a fraction of the nuclear events could occur essentially without recoil under certain circumstances. He attributed the observed resonance to this recoil-free fraction of nuclear events. The Mössbauer effect was one of the last major discoveries in physics to be originally reported in the German language. The first reports in English were a pair of letters describing independent repetitions of the experiment. The discovery was rewarded with the Nobel Prize in Physics in 1961, together with Robert Hofstadter's research of electron scattering in atomic nuclei. Description In general, gamma rays are produced by nuclear transitions from an unstable high-energy state to a stable low-energy state. The energy of the emitted gamma ray corresponds to the energy of the nuclear transition, minus an amount of energy that is lost as recoil to the emitting atom. If the lost recoil energy is small compared with the energy linewidth of the nuclear transition, then the gamma-ray energy still corresponds to the energy of the nuclear transition and the gamma ray can be absorbed by a second atom of the same type as the first. This emission and subsequent absorption is called resonant fluorescence. Additional recoil energy is also lost during absorption, so in order for resonance to occur, the recoil energy must actually be less than half the linewidth for the corresponding nuclear transition. The amount of energy lost to the recoiling body ($E_R$) can be found from momentum conservation: $|p_R| = |p_\gamma|$, where $p_R$ is the momentum of the recoiling matter and $p_\gamma$ the momentum of the gamma ray. Substituting energy into the equation gives: $E_R = \frac{E_\gamma^2}{2Mc^2}$, where $E_R$ (about $2\times10^{-3}$ eV for $^{57}$Fe) is the energy lost as recoil, $E_\gamma$ is the energy of the gamma ray (14.4 keV for $^{57}$Fe), $M$ (about 57 u for $^{57}$Fe) is the mass of the emitting or absorbing body, and $c$ is the speed of light. In the case of a gas, the emitting and absorbing bodies are atoms, so the mass is relatively small, resulting in a large recoil energy, which prevents resonance.
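As a rough worked check of the equation above (the numbers are the standard values for the 14.4 keV Mössbauer transition of $^{57}$Fe; this illustration is not part of the original text), the recoil energy of a single free atom can be estimated and compared with the natural linewidth:

% Recoil energy of a free 57Fe atom emitting a 14.4 keV gamma ray.
% Assumed values: E_gamma = 14.4 keV, Mc^2 ~ 57 x 931.5 MeV ~ 5.3 x 10^10 eV.
E_R = \frac{E_\gamma^2}{2Mc^2}
    = \frac{(1.44\times10^{4}\,\mathrm{eV})^2}{2\times(5.3\times10^{10}\,\mathrm{eV})}
    \approx 2\times10^{-3}\,\mathrm{eV}
% The natural linewidth of this transition is only Gamma ~ 5 x 10^-9 eV,
% so the free-atom recoil shifts the line by several hundred thousand times
% its width and resonance is destroyed. If the nucleus is bound in a crystal
% of, say, 10^18 atoms, M grows by the same factor and E_R becomes utterly
% negligible compared with Gamma.

This back-of-the-envelope comparison is what the following discussion formalizes: recoil-free events are those in which the lattice as a whole, rather than the single nucleus, takes up the momentum.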
(The same equation applies for recoil energy losses in X-rays, but the photon energy is much less, resulting in a lower energy loss, which is why gas-phase resonance could be observed with X-rays.) In a solid, the nuclei are bound to the lattice and do not recoil as in a gas. The lattice as a whole recoils, but the recoil energy is negligible because the $M$ in the above equation is the mass of the entire lattice. However, the energy in a decay can be taken up or supplied by lattice vibrations. The energy of these vibrations is quantised in units known as phonons. The Mössbauer effect occurs because there is a finite probability of a decay involving no phonons. Thus in a fraction of the nuclear events (the recoil-free fraction, given by the Lamb–Mössbauer factor), the entire crystal acts as the recoiling body, and these events are essentially recoil-free. In these cases, since the recoil energy is negligible, the emitted gamma rays have the appropriate energy and resonance can occur. In general (depending on the half-life of the decay), gamma rays have very narrow line widths. This means they are very sensitive to small changes in the energies of nuclear transitions. In fact, gamma rays can be used as a probe to observe the effects of interactions between a nucleus and its electrons and those of its neighbors. This is the basis for Mössbauer spectroscopy, which combines the Mössbauer effect with the Doppler effect to monitor such interactions. Zero-phonon optical transitions, a process closely analogous to the Mössbauer effect, can be observed in lattice-bound chromophores at low temperatures. See also Isomeric shift Mössbauer rotor experiments Mössbauer spectroscopy Nuclear spectroscopy Perturbed angular correlation Pound–Rebka experiment References Further reading External links Condensed matter physics Nuclear physics Physical phenomena
Mössbauer effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,067
[ "Physical phenomena", "Phases of matter", "Materials science", "Condensed matter physics", "Nuclear physics", "Matter" ]