Dataset columns: id (int64, values 580 to 79M); url (string, lengths 31 to 175); text (string, lengths 9 to 245k); source (string, lengths 1 to 109); categories (string, 160 classes); token_count (int64, values 3 to 51.8k).
45,247,887
https://en.wikipedia.org/wiki/Operation%20Match
Operation Match was the first computer dating service in the United States, begun in 1965. Its predecessor, created in London, was the St. James Computer Dating Service (later to become Com-Pat), started by Joan Ball in 1964. The initial idea was to pair Ivy League men with students at the Seven Sisters women’s colleges. Users filled out a 75-point paper questionnaire, covering hobbies, education, physical appearance, race and attitudes towards sex, which could then be mailed in with a $3 fee. The questionnaire was geared to young college students seeking a date, not a marriage partner. Questions included "Do you believe in a God who answers prayer?" and "Is extensive sexual activity in preparation for marriage part of 'growing up?'" Participants were asked to answer each question twice, once describing themselves and once describing their ideal date. The questionnaires were transferred to punched cards and processed on an IBM 7090 computer at the Avco service bureau in Wilmington, Massachusetts. A week or two later, the user received an IBM 1401 printout in the mail listing the names and telephone numbers of five potential matches. Approximately 90,000 questionnaires were completed within six months of launch, and more than 100,000 respondents were paired. Operation Match was started by Harvard University undergraduate students Jeffrey C. Tarr, David L. Crump and Vaughan Morrill, with help from Douglas H. Ginsburg, then a student at Cornell University. Tarr, Crump and Ginsburg formed a company named Compatibility Research, Inc. and rolled out the service in several cities. References History of human–computer interaction Online dating services of the United States 1965 establishments in the United States
Operation Match
Technology
346
4,490,764
https://en.wikipedia.org/wiki/Logarithmic%20mean
In mathematics, the logarithmic mean is a function of two non-negative numbers which is equal to their difference divided by the logarithm of their quotient. This calculation is applicable in engineering problems involving heat and mass transfer. Definition The logarithmic mean is defined as L(x, y) = (y - x) / (ln y - ln x) for positive numbers x and y with x ≠ y, and L(x, x) = x. Inequalities The logarithmic mean of two numbers is smaller than the arithmetic mean and the generalized mean with exponent greater than 1, but larger than the geometric mean and the harmonic mean. The inequalities are strict unless both numbers are equal. Toyesh Prakash Sharma generalises the arithmetic-logarithmic-geometric mean inequality to any whole number n; one choice of n recovers the classical arithmetic-logarithmic-geometric mean inequality, and other choices of n yield further results (see the bibliography for the proof). Derivation Mean value theorem of differential calculus From the mean value theorem applied to the natural logarithm, there exists a value ξ in the interval between x and y where the derivative equals the slope of the secant line: 1/ξ = (ln y - ln x) / (y - x). The logarithmic mean is obtained by solving for ξ: ξ = (y - x) / (ln y - ln x). Integration The logarithmic mean can also be interpreted as the area under an exponential curve. The area interpretation allows the easy derivation of some basic properties of the logarithmic mean. Since the exponential function is monotonic, the integral over an interval of length 1 is bounded by the smaller and the larger of the two numbers. The homogeneity of the integral operator is transferred to the mean operator, that is, L(cx, cy) = c L(x, y). Two other useful integral representations are also known. Generalization Mean value theorem of differential calculus One can generalize the mean to n variables by considering the mean value theorem for divided differences for higher derivatives of the logarithm; the result is expressed in terms of a divided difference of the logarithm, and for two variables it reduces to the ordinary logarithmic mean. Integral The integral interpretation can also be generalized to more variables, but it leads to a different result. Given the standard simplex and an appropriate measure which assigns the simplex a volume of 1, one obtains an expression that can be simplified using divided differences of the exponential function. Connection to other means The logarithmic mean is related to the arithmetic mean, the geometric mean and the harmonic mean. See also A different mean which is related to logarithms is the geometric mean. The logarithmic mean is a special case of the Stolarsky mean. Logarithmic mean temperature difference Log semiring References Citations Bibliography Oilfield Glossary: Term 'logarithmic mean' Toyesh Prakash Sharma, "A generalisation of the Arithmetic-Logarithmic-Geometric Mean Inequality", Parabola Magazine, Vol. 58, No. 2, 2022, pp. 1–5, https://www.parabola.unsw.edu.au/files/articles/2020-2029/volume-58-2022/issue-2/vol58_no2_3.pdf Mean Means
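The following short Python sketch (illustrative only; the function name and the test values are not from the article) implements the definition above and checks numerically that the logarithmic mean lies between the geometric and arithmetic means:

import math

def logarithmic_mean(x, y):
    # L(x, y) = (y - x) / (ln y - ln x) for distinct positive x, y; L(x, x) = x.
    if x <= 0 or y <= 0:
        raise ValueError("arguments must be positive")
    if x == y:
        return x
    return (y - x) / (math.log(y) - math.log(x))

# Check the inequality chain: geometric mean <= logarithmic mean <= arithmetic mean.
x, y = 2.0, 8.0
gm = math.sqrt(x * y)        # 4.0
lm = logarithmic_mean(x, y)  # 6 / ln 4, approximately 4.33
am = (x + y) / 2             # 5.0
assert gm <= lm <= am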
Logarithmic mean
Physics,Mathematics
611
63,067,144
https://en.wikipedia.org/wiki/Boltzmann%20sampler
A Boltzmann sampler is an algorithm intended for random sampling of combinatorial structures. If the object size is viewed as its energy, and the argument of the corresponding generating function is interpreted in terms of the temperature of the physical system, then a Boltzmann sampler returns an object from a classical Boltzmann distribution. The concept of Boltzmann sampler was proposed by Philippe Duchon, Philippe Flajolet, Guy Louchard and Gilles Schaeffer in 2004. Description The concept of Boltzmann sampling is closely related to the symbolic method in combinatorics. Let be a combinatorial class with an ordinary generating function which has a nonzero radius of convergence , i.e. is complex analytic. Formally speaking, if each object is equipped with a non-negative integer size , then the generating function is defined as where denotes the number of objects of size . The size function is typically used to denote the number of vertices in a tree or in a graph, the number of letters in a word, etc. A Boltzmann sampler for the class with a parameter such that , denoted as returns an object with probability Construction Finite sets If is finite, then an element is drawn with probability proportional to . Disjoint union If the target class is a disjoint union of two other classes, , and the generating functions and of and are known, then the Boltzmann sampler for can be obtained as where stands for "if the random variable is 1, then execute , else execute ". More generally, if the disjoint union is taken over a finite set, the resulting Boltzmann sampler can be represented using a random choice with probabilities proportional to the values of the generating functions. Cartesian product If is a class constructed of ordered pairs where and , then the corresponding Boltzmann sampler can be obtained as i.e. by forming a pair with and drawn independently from and . Sequence If is composed of all the finite sequences of elements of with size of a sequence additively inherited from sizes of components, then the generating function of is expressed as , where is the generating function of . Alternatively, the class admits a recursive representation This gives two possibilities for . where stands for "draw a random variable ; if the value is returned, then execute independently times and return the sequence obtained". Here, stands for the geometric distribution . Recursive classes As the first construction of the sequence operator suggests, Boltzmann samplers can be used recursively. If the target class is a part of the system where each of the expressions involves only disjoint union, cartesian product and sequence operator, then the corresponding Boltzmann sampler is well defined. Given the argument value , the numerical values of the generating functions can be obtained by Newton iteration. Labelled structures Boltzmann sampling can be applied to labelled structures. For a labelled combinatorial class , exponential generating function is used instead: where denotes the number of labelled objects of size . The operation of cartesian product and sequence need to be adjusted to take labelling into account, and the principle of construction remains the same. In the labelled case, the Boltzmann sampler for a labelled class is required to output an object with probability Labelled sets In the labelled universe, a class can be composed of all the finite sets of elements of a class with order-consistent relabellings. 
In this case, the exponential generating function of the set class is the exponential of the generating function of the underlying class. The Boltzmann sampler for the set class draws the number of components from a standard Poisson distribution and then generates each component independently. Labelled cycles In the cycle construction, a class is composed of all the finite sequences of elements of a class, where two sequences are considered equivalent if one can be obtained from the other by a cyclic shift. The exponential generating function of the cycle class is expressed in terms of the exponential generating function of the underlying class. The Boltzmann sampler for the cycle class draws the number of components from a logarithmic (log-law) distribution and then generates each component independently. Properties Let N denote the random size of the object generated by the sampler. Then the first and second moments of N can be expressed in terms of the generating function, evaluated at the chosen parameter, and its derivatives. Examples Binary trees The class of binary trees can be defined by a recursive specification; its generating function satisfies a quadratic equation and can be evaluated as a solution of that equation. The resulting Boltzmann sampler can be described recursively. Set partitions Consider the partitions of a set into several non-empty blocks, the blocks themselves being unordered. Using the symbolic method, the class of set partitions can be expressed as a set of non-empty sets, and the corresponding generating function is equal to exp(exp(z) - 1). Therefore, the Boltzmann sampler can be described using the positive Poisson distribution, which is a Poisson distribution conditioned to take only positive values. Further generalisations The original Boltzmann samplers described by Philippe Duchon, Philippe Flajolet, Guy Louchard and Gilles Schaeffer only support the basic unlabelled operations of disjoint union, cartesian product and sequence, and two additional operations for labelled classes, namely the set and the cycle construction. Since then, the scope of combinatorial classes for which a Boltzmann sampler can be constructed has expanded. Unlabelled structures The admissible operations for unlabelled classes include such additional operations as Multiset, Cycle and Powerset. Boltzmann samplers for these operations have been described by Philippe Flajolet, Éric Fusy and Carine Pivoteau. Differential specifications Consider a labelled combinatorial class. The derivative operation is defined as follows: take a labelled object and replace the atom with the largest label by a distinguished, unlabelled atom, thereby reducing the size of the resulting object by 1. If a class has a given exponential generating function, then the exponential generating function of its derivative class is the derivative of that function. A differential specification is a recursive specification in which the defining expression involves only the standard operations of union, product, sequence, cycle and set, and does not involve differentiation. Boltzmann samplers for differential specifications have been constructed by Olivier Bodini, Olivier Roussel and Michèle Soria. Multi-parametric Boltzmann samplers A multi-parametric Boltzmann distribution for multiparametric combinatorial classes is defined similarly to the classical case. Assume that each object is equipped with a composite size, which is a vector of non-negative integers. Each of the size functions can reflect one of the parameters of a data structure, such as the number of leaves of a certain colour in a tree, the height of the tree, etc. 
The corresponding multivariate generating function is then associated with the multi-parametric class. A Boltzmann sampler for the multiparametric class, with a vector parameter inside the domain of analyticity of the generating function, returns an object with a probability defined analogously to the univariate case. Multiparametric Boltzmann samplers have been constructed by Olivier Bodini and Yann Ponty. A polynomial-time algorithm for finding the numerical values of the parameters, given the target parameter expectations, can be obtained by formulating an auxiliary convex optimisation problem. Applications Boltzmann sampling can be used to generate algebraic data types for property-based testing. Software Random Discrete Objects Suite (RDOS): http://lipn.fr/rdos/ Combstruct package in Maple: https://www.maplesoft.com/support/help/Maple/view.aspx?path=combstruct Haskell package Boltzmann Brain: https://github.com/maciej-bendkowski/boltzmann-brain References Combinatorial algorithms
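To make the construction rules concrete, here is a short Python sketch of a Boltzmann sampler for the binary-tree example discussed above. It assumes the specification B = E + Z × B × B (a tree is either an empty leaf of size 0 or an internal node of size 1 with two subtrees), so that B(x) = 1 + x·B(x)^2; this size convention and the function names are assumptions for illustration, not taken from the cited papers or software:

import math
import random

def gamma_binary_tree(x):
    # Boltzmann sampler for B = E + Z*B*B with 0 < x < 1/4, the radius of
    # convergence of B(x) = (1 - sqrt(1 - 4x)) / (2x).
    if not 0 < x < 0.25:
        raise ValueError("x must lie strictly inside (0, 1/4)")
    B = (1 - math.sqrt(1 - 4 * x)) / (2 * x)
    def sample():
        # Disjoint union rule: choose 'leaf' with probability 1/B(x),
        # otherwise an internal node with probability x * B(x)**2 / B(x).
        if random.random() < 1 / B:
            return None                 # leaf, size 0
        # Cartesian product rule: the two subtrees are sampled independently.
        return (sample(), sample())     # internal node, size 1
    return sample()

def size(tree):
    return 0 if tree is None else 1 + size(tree[0]) + size(tree[1])

# Example: sizes fluctuate from call to call, as expected of a Boltzmann sampler.
print([size(gamma_binary_tree(0.2)) for _ in range(5)])

For x close to 1/4 the expected tree size grows without bound, so in practice the parameter is tuned to the desired expected size, or the output is rejected until it falls in a target size window.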
Boltzmann sampler
Mathematics
1,566
56,814,106
https://en.wikipedia.org/wiki/ADS%201359
ADS 1359 is a quadruple star system in the constellation Cassiopeia. It is composed of two Sun-like stars in an eclipsing binary with a 2.5-day period, which is in turn orbited by an A-type main-sequence star with a 185-year orbital period. There is also HD 236848, a distant proper-motion companion. It is very faintly visible to the naked eye under ideal observing conditions. Visual binary The visual binary was discovered by Sherburne Wesley Burnham at Dearborn Observatory in Chicago in 1880. A first preliminary orbit was calculated in 1971 by astronomer Georgije Popović using observations from 1880 to 1967. Improved orbits were calculated in 1995, 2009 and 2017. The two stars were more widely separated when they were discovered than they were in 2010. The orbit has a high eccentricity and the separation of the two stars varies over a wide range. Eclipsing binary ADS 1359 was discovered by the Hipparcos spacecraft to be a detached eclipsing binary and given the variable star designation V773 Cassiopeiae. The derived period of variability was 1.29 days, exactly half the orbital period of the inner pair, since each orbit produces two almost-identical eclipses. The eclipsing stars are the inner pair of the system. The two stars combined are still about eight times fainter than the third star, so the eclipses decrease the overall brightness of V773 Cas by less than 0.1 magnitudes. HD 236848 The Washington Double Star Catalog lists a 16th magnitude companion as component C and a 10th magnitude companion as component D. Component D is HD 236848, and it shares the same space motion and distance as the inner three stars. References Cassiopeia (constellation) Cassiopeiae, V773 10543 8115 499 Durchmusterung objects Spectroscopic binaries Eclipsing binaries G-type main-sequence stars A-type main-sequence stars 4
ADS 1359
Astronomy
408
6,699,371
https://en.wikipedia.org/wiki/TerraTec
TerraTec Electronic GmbH is a German manufacturer of sound cards, computer speakers, webcams, computer mice, video grabbers and TV tuner cards. TerraTec is mainly known for its sound cards and is the largest German producer of them. The company was founded by Walter Grieger and Heiko Meertz in 1994 in Nettetal, Germany. Both Grieger and Meertz are still CEOs of the company. For a time, TerraTec mainly produced graphics cards, but it later dropped that product line and focused on sound cards. TerraTec also distributes hardware and software products, such as the professional studio software Cubase for musicians and hardware such as the PhonoPreAmpiVinyl for digitizing recordings from vinyl or tape to digital audio formats. TerraTec also produces the "Axon" brand of pitch-to-MIDI (guitar synthesizer) converters. Axon's current models include the AX100 and the AX50USB. Products Sound hardware ISA cards EWS 64 EWS 88 PCI card EWX 24/96 Maestro series Gold series Base 1 Base 2 WT64 (wavetable add-on) PCI cards Solo 1 DMX DMX XFire 1024 DMX 6Fire PCI 128i PCI 512i Digital Aureon 7.1 Aureon 7.1 Universe External devices DMX 6Fire USB Phase series Aureon USB series Aureon 8Fire series Aureon 7.1 FireWire TV and Video Capture hardware TerraTV series (analog TV/FM tuner) TerraCAM series (webcams) VideoSystem Cameo (video editing card) See also Cubase References External links Manufacturing companies established in 1994 Computer companies of Germany Audio equipment manufacturers of Germany Computer hardware companies Computer peripheral companies German brands Companies based in North Rhine-Westphalia
TerraTec
Technology
370
13,313,106
https://en.wikipedia.org/wiki/Astrophysics%20and%20Space%20Science
Astrophysics and Space Science is a bimonthly peer-reviewed scientific journal covering astronomy, astrophysics, space science, and the astrophysical aspects of astrobiology. It was established in 1968 and is published by Springer Science+Business Media. From 2016 to 2020, the journal had two editors-in-chief, Prof. Elias Brinks and Prof. Jeremy Mould; since 2020, Prof. Elias Brinks has been the sole editor-in-chief. Earlier editors-in-chief were Zdeněk Kopal (University of Manchester, 1968–1993) and Michael A. Dopita (Australian National University, 1994–2015). Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.830. References External links Space science journals Academic journals established in 1968 Springer Science+Business Media academic journals Bimonthly journals Astrophysics journals English-language journals Plasma science journals
Astrophysics and Space Science
Physics,Astronomy
205
34,040,115
https://en.wikipedia.org/wiki/Apache%20OpenOffice
Apache OpenOffice (AOO) is an open-source office productivity software suite. It is one of the successor projects of OpenOffice.org and the designated successor of IBM Lotus Symphony. As of 2014, it was a close cousin of LibreOffice, Collabora Online and NeoOffice. It contains a word processor (Writer), a spreadsheet (Calc), a presentation application (Impress), a drawing application (Draw), a formula editor (Math), and a database management application (Base). Apache OpenOffice's default file format is the OpenDocument Format (ODF), an ISO/IEC standard. It can also read and write a wide variety of other file formats, with particular attention to those from Microsoft Office, although it cannot save documents in Microsoft's post-2007 Office Open XML formats; it can only import them. Apache OpenOffice is developed for Linux, macOS and Windows, with ports to other operating systems. It is distributed under the Apache-2.0 license. The first release was version 3.4.0, on 8 May 2012. The most recent significant feature release was version 4.1, which was made available in 2014. The project has continued to release minor updates that fix bugs, update dictionaries and sometimes include feature enhancements. The most recent maintenance release was 4.1.15 on December 22, 2023. Difficulties maintaining a sufficient number of contributors to keep the project viable have persisted for several years. In January 2015, the project reported a lack of active developers and code contributions. There have been continual problems providing timely fixes to security vulnerabilities since 2015. Downloads of the software peaked in 2013 with an average of just under 148,000 per day, compared to about 50,000 in 2019 and 2020. As of January 2025, the Apache Software Foundation has classed its security status as "amber" with multiple unfixed security issues over a year old. History After acquiring Sun Microsystems in January 2010, Oracle Corporation continued developing OpenOffice.org and StarOffice, which it renamed Oracle Open Office. In September 2010, the majority of outside OpenOffice.org developers left the project due to concerns over Sun's, and then Oracle's, management of the project, to form The Document Foundation (TDF). TDF released the fork LibreOffice in January 2011, which most Linux distributions soon moved to, including Oracle Linux in 2012. In April 2011, Oracle stopped development of OpenOffice.org and laid off the remaining development team. Its reasons for doing so were not disclosed; some speculate that it was due to the loss of mindshare with much of the community moving to LibreOffice, while others suggest it was a commercial decision. In June 2011 Oracle contributed the OpenOffice.org trademarks and source code to the Apache Software Foundation, which Apache re-licensed under the Apache License. IBM, to whom Oracle had contractual obligations concerning the code, appears to have preferred that OpenOffice.org be spun out to the Apache Software Foundation rather than see other options pursued or the project abandoned by Oracle. Additionally, in March 2012, in the context of donating IBM Lotus Symphony to the Apache OpenOffice project, IBM expressed a preference for permissive licenses, such as the Apache license, over copyleft licenses. The developer pool for the Apache project was seeded by IBM employees, who, from project inception through to 2015, did the majority of the development. 
The project was accepted into the Apache Incubator on 13 June 2011, the Oracle code drop was imported on 29 August 2011, Apache OpenOffice 3.4 was released 8 May 2012 and Apache OpenOffice graduated as a top-level Apache project on 18 October 2012. IBM donated the Lotus Symphony codebase to the Apache Software Foundation in 2012, and Symphony was deprecated in favour of Apache OpenOffice. Many features and bug fixes, including a reworked sidebar, were merged. The IAccessible2 screen reader support from Symphony was ported and included in the AOO 4.1 release (April 2014), although its first appearance in an open source software release was as part of LibreOffice 4.2 in January 2014. IBM ceased official participation by the release of AOO 4.1.1. In September 2016, OpenOffice's project management committee chair Dennis Hamilton began a discussion of possibly discontinuing the project, after the Apache board had put the project on monthly reporting due to its ongoing problems handling security issues. Naming By December 2011, the project was being called Apache OpenOffice.org (Incubating); in 2012, the project chose the name Apache OpenOffice, a name used in the 3.4 press release. Features Components Fonts Apache OpenOffice includes OpenSymbol, DejaVu, the Gentium fonts, and the Apache-licensed ChromeOS fonts Arimo (sans serif), Tinos (serif) and Cousine (monospace). OpenOffice Basic Apache OpenOffice includes OpenOffice Basic, a programming language similar to Microsoft Visual Basic for Applications (VBA). Apache OpenOffice has some Microsoft VBA macro support. OpenOffice Basic is available in Writer, Calc, Draw, Impress and Base. File formats Apache OpenOffice obtains its handling of file formats from OpenOffice.org, excluding some which were supported only by copyleft libraries, such as WordPerfect support. There is no definitive list of what formats the program supports other than the program's behaviour. Notable claimed improvements in file format handling in 4.0 include improved interoperability with Microsoft's 2007 format Office Open XML (DOCX, XLSX, PPTX), although it cannot write OOXML, only read it to some degree. Use of Java Apache OpenOffice does not bundle a Java virtual machine with the installer. The office suite requires Java for "full functionality", but Java is only needed for specific functions. If a function requires Java, the message "OpenOffice requires a Java runtime environment (JRE) to perform this task" is displayed. Supported operating systems Apache OpenOffice 4.1.0 was released for x86 and x86-64 versions of Microsoft Windows XP or later, Linux (32-bit and 64-bit), and Mac OS X 10.4 "Tiger" or later. Other operating systems are supported by community ports; completed ports for 3.4.1 included various other Linux platforms, FreeBSD, OS/2 and derivatives like ArcaOS, Solaris SPARC, and ports of 3.4.0 for Mac OS X v10.4–v10.5 PowerPC and Solaris x86. Development Apache OpenOffice does not "release early, release often"; it eschews time-based release schedules, releasing only "when it is ready". Apache OpenOffice has lost its initial developer participation. Between March 2014 and March 2015 it had only sixteen developers; the top four (by changesets) were IBM employees, and IBM had ceased official participation by the release of 4.1.1. In January 2015, the project reported that it was struggling to attract new volunteers because of a lack of mentoring and was badly in need of contributions from experienced developers. 
Industry analysts noted the project's inactivity, describing it as "all but stalled" and "dying" and noting its inability to maintain OpenOffice infrastructure or security. Red Hat developer Christian Schaller sent an open letter to the Apache Software Foundation in August 2015 asking them to direct Apache OpenOffice users towards LibreOffice "for the sake of open source and free software", which was widely covered and echoed by others. The project produced two minor updates in 2017, although there was concern about the potential bugginess of the first of these releases. Patricia Shanahan, the release manager for the previous year's update, noted: "I don't like the idea of changes going out to millions of users having only been seriously examined by one programmer — even if I'm that programmer." Brett Porter, then Apache Software Foundation chairman, asked if the project should "discourage downloads". The next update, released in November 2018, included fixes for regressions introduced in previous releases. The Register published an article in October 2018 entitled "Apache OpenOffice, the Schrodinger's app: No one knows if it's dead or alive, no one really wants to look inside", which found there were 141 code committers at the time of publication, compared to 140 in 2014; this was a change from the sustained growth experienced prior to 2014. The article concluded: "Reports of AOO's death appear to have been greatly exaggerated; the project just looks that way because it's moving slowly." Security Between October 2014 and July 2015 the project had no release manager. During this period, in April 2015, a known remote code execution security vulnerability in Apache OpenOffice 4.1.1 was announced (), but the project did not have the developers available to release the software fix. Instead, the Apache project published a workaround for users, leaving the vulnerability in the download. Former PMC chair Andrea Pescetti volunteered as release manager in July 2015 and version 4.1.2 was released in October 2015. It was revealed in October 2016 that 4.1.2 had been distributed with a known security hole () for nearly a year as the project had not had the development resources to fix it. 4.1.3 was known to have security issues since at least January 2017, but fixes to them were delayed by an absent release manager for 4.1.4. The Apache Software Foundation January 2017 Board minutes were edited after publication to remove mention of the security issue, which Jim Jagielski of the ASF board claimed would be fixed by May 2017. Fixes were finally released in October 2017. Further unfixed problems showed up in February 2019, with The Register unable to get a response from the developers, although the existing proof-of-concept exploit doesn't work with OpenOffice out-of-the-box. Version 4.1.11 was released in October 2021 with a fix for a remote code execution security vulnerability () that was publicly revealed the previous month. The project had been notified in early May 2021. The security hole had been fixed in LibreOffice since 2014. In October 2024, the Apache Software Foundation reported further problems, describing OpenOffice's security health status as "amber", with "three issues in OpenOffice over 365 days old and a number of other open issues not fully triaged." Releases Oracle had improved Draw (adding SVG), Writer (adding ODF 1.2) and Calc in the OpenOffice.org 3.4 beta release (12 April 2011), though it cancelled the project only a few days later. 
Apache OpenOffice 3.4 was released on 8 May 2012. It differed from the thirteen-month-older OpenOffice.org 3.4 beta mainly in license-related details. Notably, the project removed both code and fonts which were under licenses unacceptable to Apache. Language support was considerably reduced, to 15 languages from 121 in OpenOffice.org 3.3. Java, required for the database application, was no longer bundled with the software. 3.4.1, released 23 August 2012, added five languages back, with a further eight added 30 January 2013. Version 4.0 was released 23 July 2013. Features include merging the Symphony code drop, reimplementing the sidebar-style interface from Symphony, improved install, MS Office interoperability enhancements, and performance improvements. 4.0.1 added nine new languages. Version 4.1 was released in April 2014. Various features lined up for 4.1 include comments on text ranges, IAccessible2, in-place editing of input fields, interactive cropping, importing pictures from files and other improvements. 4.1.1 (released 14 August 2014) fixed critical issues in 4.1. 4.1.2 (released in October 2015) was a bugfix release, with improvements in packaging and removal of the HWP file format support associated with the vulnerability . 4.1.3 (September 2016) had updates to the existing language dictionaries, enhanced build tools for AOO developers, a bug fix for databases on macOS, and a security fix for vulnerability . 4.1.4 contained security fixes. Version 4.1.5 was released in December 2017, containing bug fixes. Distribution As a result of harmful downloads being offered by scammers, the project strongly recommends all downloads be made via its official download page, which is managed off-site by SourceForge. SourceForge reported 30 million downloads for the Apache OpenOffice 3.4 series by January 2013, making it one of SourceForge's top downloads; the project claimed 50 million downloads of Apache OpenOffice 3.4.x as of 15 May 2013, slightly over one year after the release of 3.4.0 (8 May 2012), 85,083,221 downloads of all versions by 1 January 2014, 100 million by April 2014, 130 million by the end of 2014 and 200 million by November 2016. As of May 2012 (the first million downloads), 87% of downloads via SourceForge were for Windows, 11% for Mac OS X and 2% for Linux; statistics for the first 50 million downloads remained consistent, at 88% Windows, 10% Mac OS X, and 2% Linux. Apache OpenOffice is available in the FreeBSD ports tree. Derivatives Derivatives include AndrOpen Office, a port for Android, and Office 700 for iOS, both ported by Akikazu Yoshikawa. LibreOffice also used some changes from Apache OpenOffice. In 2013, 4.5% of new commits in LibreOffice 4.1 came from Apache contributors; in 2016, only 11 commits from Apache OpenOffice were merged into LibreOffice, representing 0.07% of LibreOffice's commits for the period. LibreOffice earlier rebased its LGPL-3.0-or-later codebase on the Apache OpenOffice 3.4 source code (though it used MPL-2.0, not the Apache-2.0) to allow wider (but still copyleft) licensing under MPL-2.0 and LGPL-3.0-or-later. Older versions of NeoOffice included stability fixes from Apache OpenOffice, though NeoOffice 2017 and later versions are based on LibreOffice 4.4 which was released mid-2014. 
References See also List of office suites Comparison of office suites External links 2012 software OpenOffice Cross-platform free software Diagramming software Formerly proprietary software Free and open-source software Free PDF software Free software programmed in C++ Free software programmed in Java (programming language) Office suites Office suites for macOS Office suites for Windows Open-source office suites OpenOffice Portable software Software using the Apache license Spreadsheet software Unix software
Apache OpenOffice
Mathematics
3,143
50,607,692
https://en.wikipedia.org/wiki/Cortinarius%20aerugineoconicus
Cortinarius aerugineoconicus is a basidiomycete fungus of the genus Cortinarius native to New Zealand, where it grows under southern beech trees. References External links aerugineoconicus Fungi of New Zealand Fungi described in 1990 Taxa named by Egon Horak Fungus species
Cortinarius aerugineoconicus
Biology
66
29,995,070
https://en.wikipedia.org/wiki/MAN%20Energy%20Solutions
MAN Energy Solutions SE (Societas Europaea) is a German manufacturer, based in Augsburg, of large diesel engines and turbomachinery for maritime and stationary applications. The company develops and manufactures two-stroke and four-stroke diesel engines, as well as gas turbines, steam turbines, and compressors. MAN Energy Solutions also offers turbochargers, propellers, gas engines, and chemical reactors. Additionally, it produces ship engines that run on synthetic fuels and develops technologies for carbon capture and storage (CCS), large heat pumps, and electrolysers for the production of green hydrogen. The company employs around 15,000 people at more than 140 international locations, particularly in Germany, Denmark, France, Switzerland, the Czech Republic, India, and China. MAN Energy Solutions was previously part of the Power Engineering business sector of MAN SE, which had been a subsidiary of Volkswagen AG from 2011. Prior to the merger of MAN SE into Traton, MAN Energy Solutions was sold to its parent company, Volkswagen. History Background MAN originated from three German predecessor companies: the St. Antony ironworks in Oberhausen, the Sander'sche Maschinenfabrik, and the Eisengießerei Klett & Comp. St. Antony was founded in 1758 and became Gutehoffnungshütte (GHH) in 1873, after several changes in ownership and restructuring. Sander'sche Maschinenfabrik was established in Augsburg in 1840, also underwent multiple mergers and transformations, and was merged with Maschinenbau-Actien-Gesellschaft Nürnberg (formerly Eisengießerei Klett & Comp.) in 1908 to become Maschinenfabrik Augsburg-Nürnberg (MAN). Between 1893 and 1897, MAN, in collaboration with Rudolf Diesel, developed the first diesel engine, a single-cylinder four-stroke engine, which achieved a specific consumption of 324 g/kWh and produced 17.8 hp (13.1 kW) at 154 rpm. In 1899, following several experiments on the four-stroke engine by Hugo Güldner, the first two-stroke diesel engine was built. After the turn of the century, MAN developed the first diesel engine with an opposed-piston engine design. MAN built the first diesel power plant in Kyiv in 1905. In 1910, MAN, in collaboration with the shipyard Blohm+Voss, began building upright two-stroke marine engines. Two years later, the ocean-going vessel MS Selandia became the first to be powered by diesel engines. In 1921, Gutehoffnungshütte acquired a majority stake in MAN, creating a conglomerate that encompassed traditional ironworks, modern mechanical engineering, and the production of various commercial vehicles. The conglomerate operated under the abbreviation GHH. From 1926, MAN began designing high-performance two-stroke engines for marine use. Developments after World War II During World War II, the company experienced revenue growth as Gutehoffnungshütte became involved in armament manufacturing. Production primarily included tanks, diesel engines for submarines, and shell casings in support of the Nazi war effort. After global shipbuilding increasingly shifted to Far East Asia in the 1970s and engine manufacturers' development costs rose, several large diesel engine manufacturers withdrew from the market at the end of the 1970s. MAN took over the ship engine design and building company Burmeister & Wain in 1979/80. Since then, the development of four-stroke engines has been carried out in Augsburg, while two-stroke engines are designed in Copenhagen. The marketing name for the largest two-stroke engines still has "B&W" in it. 
In 1982, the first two-stroke large diesel engine with over 50% efficiency was built. In 1986, the GHH Group was integrated into the MAN Group. From MAN Diesel & Turbo SE to MAN Energy Solutions SE The company was formed in March 2010 as MAN Diesel & Turbo SE from the merger of the two former MAN companies MAN Diesel and MAN Turbo. In addition, the Volkswagen Group acquired the majority of the share capital and voting rights of MAN SE in 2011. In 2013, the company commissioned a methanisation reactor for Europe's first power-to-gas plant in Werlte. Since 2015, the company has been manufacturing marine engines that run on synthetic fuels. In 2018, the company changed its name to MAN Energy Solutions SE and shifted its strategy more towards sustainable energy. By 2030, it aims for more than 50% of its business to consist of sustainable technologies. Additionally, the company planned to expand its offerings to include hybrid, storage, and other digital service technologies. Furthermore, the company announced its commitment to the development of low-emission gases as ship fuels. In the same year, MAN Energy Solutions introduced the dual-fuel two-stroke engine MAN B&W ME-LGIP for LPG operation. Also in 2018, MAN Energy Solutions was spun off from the truck and bus business of MAN by Volkswagen. Further developments In 2019, the company ventured into the hydrogen economy sector by acquiring a 40% stake in H-Tec Systems, which was then increased to around 99% by 2021. This acquisition marked the company's entry into the production of hydrogen through power-to-X electrolysers. In 2021, MAN Energy Solutions began developing a seawater-based heat pump for Esbjerg, Denmark. In the same year, the company announced a hydrogen configuration for its gas-powered four-stroke engines and collaborated with Elbdeich to launch the Elbblue container ship running on synthetic methane. The company is also involved in creating Norway's first CO2 capture plant for eco-friendly cement production, supplying a compressor system for CO2 capture and compression. In 2022, the MAN 49/60 four-stroke engine series was introduced to the market. These engines are capable of running on LNG, diesel, HFO, and sustainable fuels in both gas and diesel modes. In 2023, the company conducted trials using carbon-free ammonia as fuel with a two-stroke diesel engine. As a result, MAN Energy Solutions announced plans to provide ammonia propulsion for maritime operations by 2026. In the same year, MAN Energy Solutions equipped 19 container vessels of the A. P. Møller-Mærsk shipping company with methanol engines. In 2024, MAN Energy Solutions signed a contract with Karpowership of Istanbul for the delivery of 48 dual-fuel engines for its power ship fleet, with the engines to be distributed across multiple floating power plants. Products MAN Energy Solutions manufactures large engines for ships and power plants, including gas engines used, for instance, in gas power plants for electricity and district heating generation. Additionally, the product range includes energy storage products and the construction of large heat pumps. Other business areas of the company encompass the production and distribution of gas and steam turbines, power plant technology, and reactors for the chemical industry. 
MAN Energy Solutions is also involved in equipping container freighters with eco-friendly methanol engines. The company offers engines running on green ship fuels produced from CO2 and hydrogen. Moreover, MAN Energy Solutions supplies electrolysers for green hydrogen production. MAN Energy Solutions is also working on the development of a marine engine based on the combustion of climate-neutral fuels such as methanol or ammonia. Additionally, the company offers complete ship propulsion systems, turbomachinery sets for the oil and gas industry as well as the process industry, and complete power plants. Marine Engines Two-stroke engines are developed at the company's base in Copenhagen. Due to their size, the engines are manufactured by international licensees in the immediate vicinity of dockyards. The engines propel large container vessels, freighters and oil tankers. Low-speed diesel engines do not require a transmission system because they are directly connected to the propellers by drive shafts. MAN's market-leading position in two-stroke marine engines means approximately 50% of global trade is moved by MAN engines. MAN Energy Solutions also offers medium-speed four-stroke engines that can be operated using liquid or gaseous fuel. Medium-speed engines are deployed to propel all types of merchant vessels, but are also used in passenger ships due to their compact nature and their amenability to flexible mounting. As well as cruise liners, other areas of use for medium-speed engines include specialist vessels such as tugs, dredgers or cable-laying ships. Smaller medium-speed four-stroke engines are also used in high-speed ferries and naval vessels. Additionally, the company provides complete propulsion systems, including the main engine, gearbox and propeller. Turbocharger MAN Energy Solutions builds exhaust-gas turbochargers using single-stage radial and axial turbines to create high charging pressures. The performance spectrum of these chargers, which are used both in two-stroke and four-stroke marine engines and in stationary systems, ranges from around 300 kW to 30,000 kW of engine power. Power Plants In the stationary sector, MAN diesel engines are primarily used for power plants and emergency power supplies. MAN Energy Solutions products range from small emergency power generators to turnkey power plants. The power plants are fuelled with heavy fuel oil, diesel, gas or renewable fuels, among others. The gas-powered four-stroke engines developed by the company are hydrogen-ready, allowing for the blending of up to 25% hydrogen into the gas power plant engines to reduce CO2 emissions. Additionally, the engines can operate on biogases and synthetic fuels such as green hydrogen. Turbomachinery For industrial processes, including the production of fertilizers, iron and steel, as well as for petrochemical-manufacturing applications, MAN Energy Solutions develops and produces a variety of compressors, as well as steam and gas turbines for power generation. Furthermore, the company offers gas-compression systems for the oil and gas industry (Upstream, Midstream and Downstream). This includes hermetically sealed compressors using magnetic bearings as well as high-pressure barrel compressors. MAN Energy Solutions also produces isothermal compressors for use in the production of industrial gases. These are supplied for standard industrial applications, such as in the manufacture of speciality chemicals and metal products. 
Other applications include the handling of bulk carbon dioxide and the production of bulk quantities of oxygen and nitrogen. MAN Energy Solutions manufactures the following turbomachinery: Turbo compressors (radial, axial, multishaft compressors) Expanders (process gas turbines) Steam turbines Gas turbines Process gas screw compressors Machinery control systems/regulation systems Chemical reactors and apparatus In Deggendorf (Germany) MAN Energy Solutions produces tubular reactor systems for the chemical and petrochemical industries and research organisations under the brand DWE Reactors. MAN Energy Solutions Deggendorf is also active in the development of technology for e-fuels. The company designs and manufactures facilities for the production of climate-neutral synthetic fuels, such as environmentally friendly methanol and synthetic natural gas. These fuels are produced using green hydrogen and are used, for example, for the low-emission propulsion of large container ships. Company structure As of 2024, MAN Energy Solutions employs around 15,000 people at over 140 locations. In 2023, the company generated sales of around 4 billion euros. MAN Energy Solutions is wholly owned by the Volkswagen Group. Awards MAN Energy Solutions was awarded the 16th National German Sustainability Award in 2023. Literature Reuß, Hans-Jürgen (2011). "Zweitakt-Motorenprogramm ganz auf Gas eingestellt. MAN Diesel & Turbo führt in Kopenhagen neuen Motor mit Wechselbetrieb von Diesel auf Gas vor". Hansa – International Maritime Journal (in German) (7). Hamburg: Schiffahrts-Verlag Hansa: 43–44. . References External links MAN Website Companies based in Augsburg Electrical generation engine manufacturers Marine engine manufacturers Gas turbine manufacturers Engine manufacturers of Germany MAN SE Diesel engine manufacturers Steam turbine manufacturers Locomotive engine manufacturers Turbocharger manufacturers Industrial machine manufacturers Gas engine manufacturers Volkswagen Group
MAN Energy Solutions
Engineering
2,489
4,432,401
https://en.wikipedia.org/wiki/Euryhaline
Euryhaline organisms are able to adapt to a wide range of salinities. An example of a euryhaline fish is the short-finned molly, Poecilia sphenops, which can live in fresh water, brackish water, or salt water. The green crab (Carcinus maenas) is an example of a euryhaline invertebrate that can live in salt and brackish water. Euryhaline organisms are commonly found in habitats such as estuaries and tide pools where the salinity changes regularly. However, some organisms are euryhaline because their life cycle involves migration between freshwater and marine environments, as is the case with salmon and eels. The opposite of euryhaline organisms are stenohaline ones, which can only survive within a narrow range of salinities. Most freshwater organisms are stenohaline and will die in seawater; similarly, most marine organisms are stenohaline and cannot live in fresh water. Osmoregulation Osmoregulation is the active process by which an organism maintains its level of water content. The osmotic pressure in the body is homeostatically regulated in such a manner that it keeps the organism's fluids from becoming too diluted or too concentrated. Osmotic pressure is a measure of the tendency of water to move into one solution from another by osmosis. With respect to osmoregulation, organisms fall into two major groups: osmoconformers and osmoregulators. Osmoconformers match their body osmolarity to their environment actively or passively. Most marine invertebrates are osmoconformers, although their ionic composition may be different from that of seawater. Osmoregulators tightly regulate their body osmolarity, which always stays constant, and are more common in the animal kingdom. Osmoregulators actively control their internal salt concentrations regardless of the salt concentration in the environment. Freshwater fish are an example: their gills actively take up salt from the environment using mitochondria-rich cells. Water will diffuse into the fish, so it excretes a very hypotonic (dilute) urine to expel all the excess water. A marine fish has an internal osmotic concentration lower than that of the surrounding seawater, so it tends to lose water (to the more negative surroundings) and gain salt. It actively excretes salt through the gills. Most fish are stenohaline, which means they are restricted to either salt or fresh water and cannot survive in water with a different salt concentration than they are adapted to. However, some fish show a tremendous ability to effectively osmoregulate across a broad range of salinities; fish with this ability are known as euryhaline species, e.g., salmon. Salmon inhabit two very different environments, marine and fresh water, and adapt to both through behavioral and physiological modifications. Some marine fish, such as sharks, have adopted a different, efficient osmoregulatory mechanism to conserve water: they retain urea in their blood at relatively high concentration. Urea is damaging to living tissue, so to cope with this problem these fish also retain trimethylamine oxide, which counteracts urea's toxicity. Sharks, whose internal solute concentration is slightly higher than that of seawater (above about 1000 mOsm), do not drink water as other marine fish do. Euryhaline fish The level of salinity in intertidal zones can also be quite variable. Low salinities can be caused by rainwater or river inputs of freshwater. Estuarine species must be especially euryhaline, or able to tolerate a wide range of salinities. 
High salinities occur in locations with high evaporation rates, such as in salt marshes and high intertidal pools. Shading by plants, especially in the salt marsh, can slow evaporation and thus ameliorate salinity stress. In addition, salt marsh plants tolerate high salinities by several physiological mechanisms, including excreting salt through salt glands and preventing salt uptake into the roots. Despite having a regular freshwater presence, the Atlantic stingray is physiologically euryhaline and no population has evolved the specialized osmoregulatory mechanisms found in the river stingrays of the family Potamotrygonidae. This may be due to the relatively recent date of freshwater colonization (under one million years), and/or possibly incomplete genetic isolation of the freshwater populations, as they remain capable of surviving in salt water. Freshwater Atlantic stingrays have only 30-50% the concentration of urea and other osmolytes in their blood compared to marine populations. However, the osmotic pressure between their internal fluids and external environment still causes water to diffuse into their bodies, and they must produce large quantities of dilute urine (at 10 times the rate of marine individuals) to compensate. Partial list Atlantic stingray Bull shark Green chromide Herring Lamprey Mummichog Molly Guppy Puffer fish Salmon Shad Striped bass Sturgeon Tilapia Trout Barramundi Mangrove jack White perch Killifish Desert pupfish Other euryhaline organisms See also Fish migration Osmoregulation Stenohaline Osmoconformer References Aquatic ecology
Euryhaline
Biology
1,110
743,435
https://en.wikipedia.org/wiki/Muscarinic%20acetylcholine%20receptor
Muscarinic acetylcholine receptors (mAChRs) are acetylcholine receptors that form G protein-coupled receptor complexes in the cell membranes of certain neurons and other cells. They play several roles, including acting as the main end-receptor stimulated by acetylcholine released from postganglionic fibers. They are mainly found in the parasympathetic nervous system, but also have a role in the sympathetic nervous system in the control of sweat glands. Muscarinic receptors are so named because they are more sensitive to muscarine than to nicotine. Their counterparts are nicotinic acetylcholine receptors (nAChRs), receptor ion channels that are also important in the autonomic nervous system. Many drugs and other substances (for example pilocarpine and scopolamine) manipulate these two distinct receptors by acting as selective agonists or antagonists. Function Acetylcholine (ACh) is a neurotransmitter found in the brain, neuromuscular junctions and the autonomic ganglia. Muscarinic receptors are used in the following roles: Recovery receptors ACh is always used as the neurotransmitter within the autonomic ganglion. Nicotinic receptors on the postganglionic neuron are responsible for the initial fast depolarization (Fast EPSP) of that neuron. As a consequence of this, nicotinic receptors are often cited as the receptor on the postganglionic neurons at the ganglion. However, the subsequent hyperpolarization (IPSP) and slow depolarization (Slow EPSP) that represent the recovery of the postganglionic neuron from stimulation are actually mediated by muscarinic receptors, types M2 and M1 respectively (discussed below). Peripheral autonomic fibers (sympathetic and parasympathetic fibers) are categorized anatomically as either preganglionic or postganglionic fibers, and further classified as either adrenergic fibers, which release noradrenaline, or cholinergic fibers, which release acetylcholine and act on acetylcholine receptors. Both preganglionic sympathetic fibers and preganglionic parasympathetic fibers are cholinergic. Most postganglionic sympathetic fibers are adrenergic: their neurotransmitter is norepinephrine, with the exception of the postganglionic sympathetic fibers to the sweat glands, the piloerectile muscles of the body hairs, and the skeletal muscle arterioles, which do not use adrenaline/noradrenaline. The adrenal medulla is considered a sympathetic ganglion and, like other sympathetic ganglia, is supplied by cholinergic preganglionic sympathetic fibers: acetylcholine is the neurotransmitter utilized at this synapse. The chromaffin cells of the adrenal medulla act as "modified neurons", releasing adrenaline and noradrenaline into the bloodstream as hormones instead of as neurotransmitters. The other postganglionic fibers of the peripheral autonomic system belong to the parasympathetic division; all are cholinergic fibers, and use acetylcholine as the neurotransmitter. Postganglionic neurons Another role for these receptors is at the junction of the innervated tissues and the postganglionic neurons in the parasympathetic division of the autonomic nervous system. Here acetylcholine is again used as a neurotransmitter, and muscarinic receptors form the principal receptors on the innervated tissue. Innervated tissue Very few parts of the sympathetic system use cholinergic receptors. In sweat glands the receptors are of the muscarinic type. 
The sympathetic nervous system also has some preganglionic nerves terminating at the chromaffin cells in the adrenal medulla, which secrete epinephrine and norepinephrine into the bloodstream. Some believe that chromaffin cells are modified postganglionic CNS fibers. In the adrenal medulla, acetylcholine is used as a neurotransmitter, and the receptor is of the nicotinic type. The somatic nervous system uses a nicotinic acetylcholine receptor at the neuromuscular junction. Higher central nervous system Muscarinic acetylcholine receptors are also present and distributed throughout the central nervous system, in post-synaptic and pre-synaptic positions. There is also some evidence for postsynaptic receptors on sympathetic neurons allowing the parasympathetic nervous system to inhibit sympathetic effects. Presynaptic membrane of the neuromuscular junction It is known that muscarinic acetylcholine receptors also appear on the pre-synaptic membrane of somatic neurons in the neuro-muscular junction, where they are involved in the regulation of acetylcholine release. Form of muscarinic receptors Muscarinic acetylcholine receptors belong to a class of metabotropic receptors that use G proteins as their signaling mechanism. In such receptors, the signaling molecule (the ligand) binds to a monomeric receptor that has seven transmembrane regions; in this case, the ligand is ACh. This receptor is bound to intracellular proteins, known as G proteins, which begin the information cascade within the cell. By contrast, nicotinic receptors form pentameric complexes and use a ligand-gated ion channel mechanism for signaling. In this case, binding of the ligands with the receptor causes an ion channel to open, permitting one or more specific types of ions (e.g., K+, Na+, Ca2+) to diffuse into or out of the cell. Receptor isoforms Classification By the use of selective radioactively labeled agonist and antagonist substances, five subtypes of muscarinic receptors have been determined, named M1–M5 (using an upper case M and subscript number). M1, M3, M5 receptors are coupled with Gq proteins, while M2 and M4 receptors are coupled with Gi/o proteins. There are other classification systems. For example, the drug pirenzepine is a muscarinic antagonist (decreases the effect of ACh), which is much more potent at M1 receptors than it is at other subtypes. The acceptance of the various subtypes proceeded in numerical order; therefore, earlier sources may recognize only M1 and M2 subtypes, while later studies recognize M3, M4, and most recently M5 subtypes. Genetic differences Meanwhile, geneticists and molecular biologists have characterised five genes that appear to encode muscarinic receptors, named m1-m5 (lowercase m; no subscript number). They code for pharmacologic types M1-M5. The receptors m1 and m2 were determined based upon partial sequencing of M1 and M2 receptor proteins. The others were found by searching for homology, using bioinformatic techniques. Difference in G proteins G proteins contain an alpha-subunit that is critical to the functioning of receptors. These subunits can take a number of forms. There are four broad classes of G protein: Gs, Gi, Gq, and G12/13. Muscarinic receptors vary in the G protein to which they are bound, with some correlation according to receptor type. G proteins are also classified according to their susceptibility to cholera toxin (CTX) and pertussis toxin (PTX, whooping cough). Gs and some subtypes of Gi (Gαt and Gαg) are susceptible to CTX. 
Only Gi is susceptible to PTX, with the exception of one subtype of Gi (Gαz) which is immune. Also, only when bound with an agonist, those G proteins normally sensitive to PTX also become susceptible to CTX. The various G-protein subunits act differently upon secondary messengers, upregulating Phospholipases, downregulating cAMP, and so on. Because of the strong correlations to muscarinic receptor type, CTX and PTX are useful experimental tools in investigating these receptors. The muscarinic acetylcholine receptor subtype sectivities of a large number of antimuscarinic drugs have been reviewed. M1 receptor This receptor is found mediating slow EPSP at the ganglion in the postganglionic nerve, is common in exocrine glands and in the CNS. It is predominantly found bound to G proteins of class Gq, which use upregulation of phospholipase C and, therefore, inositol trisphosphate and intracellular calcium as a signaling pathway. A receptor so bound would not be susceptible to CTX or PTX. However, Gi (causing a downstream decrease in cAMP) and Gs (causing an increase in cAMP) have also been shown to be involved in interactions in certain tissues, and so would be susceptible to PTX and CTX, respectively. M2 receptor The M2 muscarinic receptors are located in the heart and lungs. In the heart, they act to slow the heart rate down below the normal baseline sinus rhythm, by slowing the speed of depolarization. In humans, under resting conditions, vagal activity dominates over sympathetic activity. Hence, inhibition of M2 receptors (e.g. by atropine) will cause a raise in heart rate. They also moderately reduce contractile forces of the atrial cardiac muscle, and reduce conduction velocity of the atrioventricular node (AV node). It also serves to slightly decrease the contractile forces of the ventricular muscle. M2 muscarinic receptors act via a Gi type receptor, which causes a decrease in cAMP in the cell, inhibition of voltage-gated Ca2+ channels, and increasing efflux of K+, in general, leading to inhibitory-type effects. M3 receptor The M3 muscarinic receptors are located at many places in the body. They are located in the smooth muscles of the blood vessels, as well as in the lungs. Because the M3 receptor is Gq-coupled and mediates an increase in intracellular calcium, it typically causes contraction of smooth muscle, such as that observed during bronchoconstriction and bladder voiding. However, with respect to vasculature, activation of M3 on vascular endothelial cells causes increased synthesis of nitric oxide, which diffuses to adjacent vascular smooth muscle cells and causes their relaxation, thereby explaining the paradoxical effect of parasympathomimetics on vascular tone and bronchiolar tone. Indeed, direct stimulation of vascular smooth muscle, M3 mediates vasoconstriction in diseases wherein the vascular endothelium is disrupted. The M3 receptors are also located in many glands, which help to stimulate secretion in, for example, the salivary glands, as well as other glands of the body. Like the M1 muscarinic receptor, M3 receptors are G proteins of class Gq that upregulate phospholipase C and, therefore, inositol trisphosphate and intracellular calcium as a signaling pathway. M4 receptor M4 receptors are found in the CNS. M4 receptors are also located in erythroid progenitor cell in peripheral tissue and modulate the cAMP pathway to regulate erythroid progenitor cell differentiation. 
Therapies targeting the M4 receptor treat myelodysplastic syndrome and anemia that are refractory to erythropoietin. M4 receptors work via Gi receptors to decrease cAMP in the cell and, thus, produce generally inhibitory effects. Bronchospasm may result if they are stimulated by muscarinic agonists. M5 receptor The location of M5 receptors is not well characterized. Like the M1 and M3 muscarinic receptors, M5 receptors are coupled with G proteins of class Gq that upregulate phospholipase C and, therefore, inositol trisphosphate and intracellular calcium as a signaling pathway. Pharmacological application Ligands targeting the mAChR that are currently approved for clinical use include non-selective antagonists for the treatment of Parkinson's disease, atropine (to dilate the pupil), scopolamine (used to prevent motion sickness), and ipratropium (used in the treatment of COPD). In 2024, the United States FDA approved the drug KarXT (Cobenfy), a combination drug that pairs xanomeline (a preferentially acting M1/M4 muscarinic acetylcholine receptor agonist) with trospium (a peripherally restricted pan-mAChR antagonist) for use in schizophrenia. In early clinical trials in patients with moderate-to-high symptom severity and no history of treatment resistance, it demonstrated efficacy roughly equivalent to that of other antipsychotics (a 20-point improvement on the PANSS versus a 10-point improvement with placebo), with a notably different side-effect profile (very low rates of metabolic effects, hypotension, weight changes, or extrapyramidal symptoms), though nausea and constipation were reported at moderate rates. No trials have been published to date regarding use in combination with other antipsychotics, use in treatment-resistant patients, or head-to-head comparisons with other medications. This is the first antipsychotic drug approved that uses a muscarinic mechanism of action, and many others are in development. See also Muscarinic agonist Muscarinic antagonist Acetylcholine receptor (Cholinergic receptor) Nicotinic acetylcholine receptor Adrenergic receptor Nicotinic agonist Nicotinic antagonist Vagal escape References External links Acetylcholine Receptors (Muscarinic) Neurochemistry Neurophysiology Acetylcholine receptors
Muscarinic acetylcholine receptor
Chemistry,Biology
2,916
16,370,290
https://en.wikipedia.org/wiki/Thiophene-3-acetic%20acid
Thiophene-3-acetic acid is an organosulfur compound with the formula HO2CCH2C4H3S. It is a white solid. It is one of two isomers of thiophene acetic acid, the other being thiophene-2-acetic acid. Thiophene-3-acetic acid has attracted attention as a precursor to functionalized derivatives of polythiophene. References Thiophenes Acetic acids
Thiophene-3-acetic acid
Chemistry
104
23,453,338
https://en.wikipedia.org/wiki/C10H14O
{{DISPLAYTITLE:C10H14O}} The molecular formula C10H14O (molar mass: 150.22 g/mol, exact mass: 150.104465 u) can refer to: o-sec-Butylphenol Carvacrol Carvone Chrysanthenone Cumyl alcohol 2-Ethyl-4,5-dimethylphenol, a phenolic compound found in rosemary essential oil Levoverbenone Menthofuran Penguinone Perillaldehyde Perillene Pinocarvone Rosefuran Safranal Thymol Umbellulone Verbenone
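The quoted molar and exact (monoisotopic) masses can be reproduced by simple arithmetic. The short Python sketch below uses atomic-weight and isotope-mass values assumed from standard IUPAC tables (they are not given in this entry).

# Rough check of the masses quoted for C10H14O.
average = {"C": 12.011, "H": 1.008, "O": 15.999}           # standard atomic weights (assumed)
monoisotopic = {"C": 12.0, "H": 1.007825, "O": 15.994915}  # lightest-isotope masses (assumed)

formula = {"C": 10, "H": 14, "O": 1}

molar_mass = sum(average[el] * n for el, n in formula.items())
exact_mass = sum(monoisotopic[el] * n for el, n in formula.items())

print(round(molar_mass, 2))   # ~150.22 g/mol
print(round(exact_mass, 6))   # ~150.104465 u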
C10H14O
Chemistry
138
27,193,320
https://en.wikipedia.org/wiki/2010%20Boston%20water%20emergency
The 2010 Boston water emergency occurred on May 1, 2010, when a water pipe in Weston, Massachusetts, broke and began flooding into the Charles River. This led to unsanitary water conditions in the greater Boston area, which resulted in Governor Deval Patrick declaring a state of emergency and an order for residents to boil drinking water. The leak was stopped on May 2. On May 4, the order was lifted. President Barack Obama signed an emergency disaster declaration offering federal help, authorizing the Department of Homeland Security and Federal Emergency Management Agency to coordinate disaster relief efforts with Massachusetts. MWRA executive director Frederick Laskey called the break "catastrophic" and "everyone's worst nightmare in the water industry". Chronology At about 10:00am on May 1, a collar connecting two sections of pipe ruptured in Weston, Massachusetts, disrupting the connection between the MetroWest Water Supply Tunnel and the City Tunnel. With the water supply cut off, the emergency water supply reserve system from surrounding ponds was routed to the main water supply. The rupture worsened as the afternoon progressed, eventually resulting in the loss of access to clean water from the Quabbin and Wachusett Reservoirs for approximately two million residents of 31 cities and towns, including Boston. At the height of the spill, approximately of water entered the Charles River per hour. By evening, the Massachusetts Water Resources Authority had activated the backup water system, which was drawing water from the Chestnut Hill Reservoir, and Spot Pond Reservoir. The Sudbury Aqueduct supplied additional water to the Chestnut Hill reservoir from the Sudbury Reservoir and the Framingham #3 reservoir. Because water from these older surface reservoirs is not treated, the MWRA issued a boil order for the affected communities. The Massachusetts Water Resources Authority (MWRA) issued an emergency water notice for the Boston area. Governor Deval Patrick issued a state of emergency and a boil-water advisory for Boston and a dozen surrounding communities, affecting nearly two million people. Local agency officials used a variety of means to inform locals about the situation including location based SMS, Boston's reverse 911 citizen alert system, highway alert signs, driving through affected neighborhoods with bullhorns, and other emergency management systems. As a result of the water boil order, many residents rushed to purchase bottled water at local stores. Many stores quickly sold out of water, and bottled water companies increased shipments at the request of the Massachusetts Emergency Management Agency, maintaining availability at other stores. Local stores quickly sold out their supplies of bottled water, and the Massachusetts National Guard was dispatched to deliver additional bottled water. The state government also asked bottled-water suppliers to increase their deliveries to the area. Many cafes such as Starbucks and Dunkin' Donuts that depended on municipal water for coffee production were closed or forced to operate with limited functionality. By May 2, workers had stopped the spill and begun repairs on the pipe and MWRA officials reported steady water pressure on the night of May 2. 
Experts and officials associated with the MWRA interviewed by reporters stated that the boil-water order was necessary because the backup reservoirs were untreated and unmonitored by bacterial cultures, which take a few days to run; similar situations had resulted in bacterial contamination bad enough to cause distressing gastrointestinal symptoms in otherwise healthy adults. On May 4, 2010, at 3:00am, Massachusetts Water Resources Authority announced that Governor Patrick had lifted the water-boil order for all but one of affected communities, Saugus. In a press conference later that morning, Patrick stated that tests had since cleared the water in Saugus as well. The test results indicated that the bacteria levels in the emergency supply were not atypical for a normal day. If this had been known earlier, the boil-water order would have been unnecessary. No health effects for vulnerable classes, such as infants, pregnant women, and those with a compromised immune system, were reported in secondary sources during this event. The engineering investigation following the incident found that the break was caused by failure of the coupling bolts. Inspection of recovered bolts and bolt fragments found that the bolts were poorly manufactured and sized incorrectly for the load. Affected communities Allston Arlington Belmont Boston Brighton Brookline Canton Chelsea Everett Hanscom Air Force Base Lexington Lynnfield W.D. Malden Marblehead Medford Melrose Milton Nahant Newton Norwood Quincy Reading Revere Saugus Somerville Stoneham Swampscott Waltham Wakefield Watertown Winchester Winthrop References Boston water Boston water emergency Water emergency Engineering failures Water supply and sanitation in Massachusetts Boston water emergency Disasters in Massachusetts Weston, Massachusetts History of Middlesex County, Massachusetts
2010 Boston water emergency
Technology,Engineering
921
34,504,456
https://en.wikipedia.org/wiki/Rain%20porch
A rain porch, also commonly known as a Carolina porch, is a type of indigenous porch form found in the Southeastern United States. Some architectural scholars believe it to have originated along the coast of the Carolinas, hence the colloquial name. The defining characteristic of the rain porch is a roof that extends far beyond the edge of the porch deck and is supported with freestanding supports that rise directly from ground level, rather than the floor of the porch deck. This protects the porch deck from exposure to the elements and also leaves it well shaded from the sun most of the time. Most commonly seen on historic folk houses, the rain porch also came to be adapted to the monumental porticoes of some Greek Revival mansions, such as Rosemount and Kirkwood. The overhang became especially exaggerated in some areas with copious amounts of rainfall, such as the Eastern Shore of Mobile Bay in Alabama. Here the roof overhang ranged between beyond the porch deck, in effect creating a lower and upper porch. References Rooms
Rain porch
Engineering
204
4,407,992
https://en.wikipedia.org/wiki/Non-achromatic%20objective
A non-achromatic objective is an objective lens which is not corrected for chromatic aberration. In telescopes, they can be pre-18th-century simple single-element objective lenses, which were used before the invention of doublet achromatic lenses. They can also be specialty monochromatic lenses used in modern research telescopes and other instruments. Non-achromatic telescope objectives Early non-achromatic objectives Early telescope objectives, such as those built by Johannes Hevelius and Christiaan Huygens and his brother Constantijn Huygens, Jr., utilized single small (2"-8") positive lenses with enormous focal lengths (up to 150 feet in length in tube telescopes and up to 600 feet in non-tube aerial telescopes). This allowed the observer to use higher magnification while limiting the interfering rainbow halos caused by chromatic aberration (the uncorrected chromatic aberration fell within the large diffraction pattern at focus). Modern non-achromatic objectives Modern instruments may use a non-achromatic objective lens which is well corrected for spherical aberration and off-axis aberrations such as coma and astigmatism over the desired field of view at only one wavelength. Monochromatically corrected objectives can be found in solar telescopes working with narrow spectral lines, such as the hydrogen-alpha spectral line at 0.6562725 micrometres. They are also used in astrographic telescopes where multiple single-wavelength, narrow-band images are used in stellar classification. Other applications Non-achromatic objectives are also used in monochromatic laser applications such as collimators, beam expanders, and highly corrected pupil imaging for wavefront error sensors for adaptive optics. See also List of telescope types References Lenses Telescopes
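The remark that the uncorrected colour error of early singlet objectives fell within the large diffraction pattern at focus can be illustrated with a rough back-of-the-envelope estimate. The Python sketch below uses standard thin-lens approximations (longitudinal chromatic shift of about f/V, geometric blur circle of about D/(2V) at mid-focus, Airy-disk diameter of about 2.44·λ·f/D); the aperture, wavelength and Abbe number are assumed example values, not figures from the article, so treat the result only as an order-of-magnitude check.

import math

# Assumed example values (not from the article)
D = 0.05             # aperture diameter in metres (~2-inch singlet)
V = 60.0             # Abbe number of a simple crown glass (assumption)
wavelength = 550e-9  # mid-visible wavelength in metres

# Focal length at which the chromatic blur circle (~D / (2V)) just fits
# inside the Airy-disk diameter (~2.44 * wavelength * f / D):
f_min = D**2 / (4.88 * wavelength * V)
print(f"focal length needed: ~{f_min:.0f} m (~{f_min/0.3048:.0f} ft)")

On this estimate the required focal length grows with the square of the aperture, which is consistent with the extremely long tube and aerial telescopes mentioned above.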
Non-achromatic objective
Astronomy
371
64,600,200
https://en.wikipedia.org/wiki/Ezra%20Brown
Ezra Abraham "Bud" Brown (born January 22, 1944, in Reading, Pennsylvania) is an American mathematician active in combinatorics, algebraic number theory, elliptic curves, graph theory, expository mathematics and cryptography. He spent most of his career at Virginia Tech where he is now Alumni Distinguished Professor Emeritus of Mathematics. Education and career Brown earned a BA at Rice University in 1965. He then studied mathematics at Louisiana State University (LSU), getting an MS in 1967 and a PhD in 1969 with the dissertation "Representations of Discriminantal Divisors by Binary Quadratic Forms" under Gordon Pall. He joined Virginia Tech in 1969 becoming Assistant Professor (1969–73), Associate Professor (1973–81), Professor (1981–2005), and Alumni Distinguished Professor of Mathematics and Distinguished Professor of Mathematics (2005–2017). He retired from Virginia Tech in 2017. Brown became interested in elliptic curves while at LSU and this has remained one of his principal areas of research along with quadratic forms and algebraic number theory in general. His extensive expository writing has garnered him many awards from the MAA, including the Chauvenet Prize, the Allendoerfer Award (3 times) and the Pólya Award (3 times). His books include The Unity of Combinatorics (MAA, 2020), co-authored with Richard K. Guy. Personal life While at LSU he met his future wife Jo. Brown remained at Virginia Tech until his retirement in 2017. At the age of 16 Brown taught himself to play the piano, and in college he acted in several musicals and joined an a cappella chorus. In 1989, he joined the Blacksburg Master Chorale and the chorus of Opera Roanoke. Starting in 2011 he took his love of music and math to MathFest where he an his fellow mathematicians composed new words to old show tunes and even took part in a Gilbert-and-Sullivan Singalong at MathFest 2016 with his "Biscuits of Number Theory" co-editor Art Benjamin. Brown and his mathematical grandfather, L. E. Dickson, have the same birthday. Selected publications papers 2018 "Five Families Around a Well: A New Look at an Old Problem" (with Matthew Crawford) 2015 "Many More Names of (7,3,1)]" 2012 "Why Ellipses Are Not Elliptic Curves" (with Adrian Rice) 2009 "Kirkman's Schoolgirls Wearing Hats and Walking Through Fields of Numbers" (with Keith Mellinger) 2005 "Phoebe Floats!"Phoebe Floats! 2004 "The Fabulous (11, 5, 2) Biplane" 2002 "The Many Names of (7,3,1)" 2000 "Three Fermat Trails to Elliptic Curves" 1999 "Square Roots from 1; 24, 51, 10 to Dan Shanks" books 2020 (with Richard K. Guy) The Unity of Combinatorics, American Mathematical Society, 2009 (co-edited with Arthur T. Benjamin) Biscuits of Number Theory, MAA Publications, 1990 (translation from German) Regiomontanus: His Life and Work Awards 2022 MAA Chauvenet Prize (with Matthew Crawford) for "Five Families Around a Well: A New Look at an Old Problem" 2014 MAA Sister Helen Christensen Service Award 2013 MAA Allendoerfer Award (with Adrian Rice) for "Why Ellipses Are Not Elliptic Curves" 2010 MAA Allendoerfer Award (with Keith Mellinger) for "Kirkman's Schoolgirls Wearing Hats and Walking Through Fields of Numbers" 2003 MAA Allendoerfer Award for "The Many Names of (7,3,1)" 2001 MAA Pólya Award for "Three Fermat Trails to Elliptic Curves" 2000 MAA Pólya Award for "Square Roots from 1; 24, 51, 10 to Dan Shanks" 2006 MAA Pólya Award for "Phoebe Floats!" 1999 MAA John M. Smith Award for Distinguished College or University Teaching 1997 Virginia Tech Edward S. 
Diggs Teaching Scholar Award Omicron Delta Kappa G. Burke Johnston Award References External links Personal web page 1945 births Living people Virginia Tech faculty Louisiana State University alumni People from New Orleans Rice University alumni 20th-century American mathematicians 21st-century American mathematicians American algebraists Combinatorialists
Ezra Brown
Mathematics
868
50,830,846
https://en.wikipedia.org/wiki/Naval%20Radiological%20Defense%20Laboratory
The United States Naval Radiological Defense Laboratory (NRDL) was an early military lab created to study the effects of radiation and nuclear weapons. The facility was based at the Hunter's Point Naval Shipyard in San Francisco, California. History The NRDL was formed in 1946 to manage testing, decontamination, and disposition of US Navy ships contaminated by the Operation Crossroads nuclear tests in the Pacific. A number of ships that survived the atomic detonations were towed to Hunter's Point for detailed study and decontamination. Some of the ships were cleaned and sold for scrap. The aircraft carrier , which had been heavily damaged and contaminated with nuclear fallout by Operation Crossroads explosions in July 1946, was brought to the NRDL for study. After years of trying in vain to decontaminate the ship enough that it could be safely sold for scrap, the Navy ultimately packed the ship full of nuclear waste and scuttled the radioactive hulk off California near the Farallon Islands in January 1951. The ship's wreck was discovered resting upright under 790 m of water in 2009. The NRDL used several buildings at the Hunter's Point shipyard from 1946 to 1969. Working with the newly formed US Atomic Energy Commission (predecessor to the U.S. Nuclear Regulatory Commission established in 1974), the Navy conducted a wide variety of radiation experiments on materials and animals at the lab, including the construction of a cyclotron on the site for use in radiation experiments and storage for various nuclear materials. Activities An article published 2 May 2001 in SF Weekly detailed various aspects of nuclear testing at NRDL from declassified records: Contamination The first use of radioactive materials at NRDL predated the issuing of licenses by the Atomic Energy Commission, but the AEC later issued licenses for a broad spectrum of radioactive materials to be used in research at the NRDL. Radioactive materials specific to nuclear weapon testing were exempted from AEC licensing. For closure of the NRDL in 1969, the AEC issued licenses for decommissioning activities. AEC licenses for the shipyard and NRDL were terminated in the 1970s. The NRDL testing and decontamination activities caused significant contamination of the shipyard site. The NRDL and the military radiation training school at nearby Naval Station Treasure Island loaded the nuclear waste left from experiments into steel barrels and sent weekly barges to dump them offshore near the Gulf of the Farallons, which is a US National Wildlife Refuge and a major commercial fishery. Between 1946 and 1970, records estimate the lab and naval station dumped an estimated 47,000 drums of nuclear waste in the Pacific Ocean 30 miles west of San Francisco, creating the first and largest offshore nuclear waste dump in the United States. The USGS states the barrels contain only "low-level radioactive waste," but this is disputed by historical records and experts. The US Navy completed a Historical Radiological Assessment of the Hunter's Point Shipyard in 2004, including the known NRDL facilities on the property, years after the SF Weekly article cited declassified documents showing that many sites and buildings used by NRDL were not included in the Navy's list of sites with potential for radiological contamination. Many of the buildings formerly used by NRDL had been razed by that point. 
The former shipyard site is still being decontaminated, and has been split into multiple parcels to allow the Navy to declare them clean and safe for redevelopment separately. While Lennar has built and sold hundreds of new condominium units on the site of the former Hunters Point Naval Shipyard, regulators, activists, and cleanup workers have claimed that the site is still heavily contaminated and that the company contracted to handle the cleanup and testing, Tetra Tech, has repeatedly violated established cleanup protocols, deliberately falsified radiation test results at the site to falsely show that there is little remaining radiation, and fired employees who attempted to force workers to perform radiation tests as required. References External links Installations of the United States Navy in California Nuclear technology Military science Pollution United States Navy installations United States Navy shipyards Naval Shipyard Naval Shipyard Military Superfund sites 1870 establishments in California Superfund sites in California Military facilities in the San Francisco Bay Area Military in the San Francisco Bay Area Ships involved in Operation Crossroads Military research of the United States Bayview–Hunters Point, San Francisco
Naval Radiological Defense Laboratory
Physics
865
17,621,644
https://en.wikipedia.org/wiki/Tacit%20Networks
Tacit Networks, Inc. is an I.T. company based in South Plainfield, New Jersey. It was founded in 2000. Their product lines are: iShared which provides Wide area file services and WAN optimization. Mobiliti (via the acquisition of Mobiliti) which provides backup, synchronization and offline access services to mobile users. On January 30, 2004, Tacit Networks acquired the assets of AttachStor. The AttachStor technology provided the basis for the email acceleration feature in the iShared product. On December 30, 2005, Tacit Networks acquired the assets of Mobiliti and integrated the Mobiliti product line into its portfolio. On May 15, 2006, Packeteer acquired Tacit Networks and integrated the iShared and Mobiliti product lines into the Packeteer portfolio. See also Wide area file services WAN optimization References External links iShared product page on Packeteer.com Network performance Defunct computer companies of the United States Defunct computer hardware companies Computer companies established in 2000 Computer companies disestablished in 2006 Companies based in Middlesex County, New Jersey South Plainfield, New Jersey 2006 mergers and acquisitions
Tacit Networks
Technology
234
20,710,906
https://en.wikipedia.org/wiki/HD%20181433%20d
HD 181433 d is an extrasolar planet located approximately 87 light years away in the constellation of Pavo, orbiting the star HD 181433. The planet has a minimum mass of 0.54 Jupiter masses and takes 2172 days to orbit the star. The average orbital distance is 3.00 AU. At periastron its distance from the star is 1.56 AU, similar to Mars' distance from the Sun; at apastron the distance is 4.44 AU. These distances correspond to an orbital eccentricity of 0.48. References External links HD 181433 Pavo (constellation) Exoplanets discovered in 2008 Giant planets Exoplanets detected by radial velocity
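The quoted periastron and apastron distances follow directly from the semi-major axis and eccentricity via r_peri = a(1 − e) and r_apo = a(1 + e); a quick check in Python:

a = 3.00   # semi-major axis in AU
e = 0.48   # orbital eccentricity

r_peri = a * (1 - e)   # 1.56 AU
r_apo  = a * (1 + e)   # 4.44 AU
print(r_peri, r_apo)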
HD 181433 d
Astronomy
143
53,661,843
https://en.wikipedia.org/wiki/List%20of%20federal%20subjects%20of%20Russia%20by%20life%20expectancy
Life expectancy in Russia is 70.06 years, according to official data for 2021. Russia's historical maximum life expectancy was 73.3 years, achieved in 2019. Life expectancy decreased by 1.8 years in 2020 and a further 1.48 years in 2021, due largely to the effect of the COVID-19 pandemic on Russia's aging society. There have been significant regional differences in COVID-19's impact on life expectancy, with this indicator decreasing by 2.42 years in Voronezh Oblast while simultaneously increasing by 0.89 years in Chechnya during this period. Life expectancy varies greatly between regions of Russia. Russians in the predominantly Muslim, abstinent North Caucasus and in the cities of federal importance have relatively high life expectancies, and Ingushetia is considered a "blue zone" due to its especially favourable statistics. Life expectancy is relatively low in many regions of the Russian Far East, and as of 2022 Chukotka has the lowest life expectancy in Russia. On average, Russians in towns live slightly longer than those in rural areas. However, in some regions the opposite pattern is observed, or the urban–rural gap changes direction from year to year. Annual estimates of life expectancy are provided by the World Health Organization. According to the WHO, healthy life expectancy (HALE) in Russia in 2019 was 64.2 years: 60.7 for men and 67.5 for women. Also according to the WHO, Russia, Ukraine and Belarus exhibit the world's highest difference in life expectancy between women and men. Official Russian data 2021 List of the federal subjects of Russia by life expectancy provided by the Russian statistical agency Rosstat in 2022. In recent years Rosstat has published life expectancy data once every two years, so the next release of official Russian data is expected in 2024. by federal subject by federal district Life expectancy in federal districts that have undergone boundary changes is shown only after the last change in their configuration. Charts and maps Charts and maps for Russia Comparison of Russia with other countries Official Russian data 2019 Detailed data for 2019, which was the most successful year in terms of longevity, and annual dynamics from 2014 to 2021. See also References Health in Russia life expectancy Life expectancy Russia, life expectancy Russia Life expectancy
List of federal subjects of Russia by life expectancy
Biology
475
9,402,045
https://en.wikipedia.org/wiki/Pinwheel%20tiling
In geometry, pinwheel tilings are non-periodic tilings defined by Charles Radin and based on a construction due to John Conway. They are the first known non-periodic tilings to each have the property that their tiles appear in infinitely many orientations. Definition Let be the right triangle with side length , and . Conway noticed that can be divided in five isometric copies of its image by the dilation of factor . The pinwheel tiling is obtained by repeatedly inflating by a factor of and then subdividing each tile in this manner. Conversely, the tiles of the pinwheel tiling can be grouped into groups of five that form a larger pinwheel tiling. In this tiling, isometric copies of appear in infinitely many orientations because the small angle of , , is not a rational multiple of . Radin found a collection of five prototiles, each of which is a marking of , so that the matching rules on these tiles and their reflections enforce the pinwheel tiling. All of the vertices have rational coordinates, and tile orientations are uniformly distributed around the circle. Generalizations Radin and Conway proposed a three-dimensional analogue which was dubbed the quaquaversal tiling. There are other variants and generalizations of the original idea. One gets a fractal by iteratively dividing in five isometric copies, following the Conway construction, and discarding the middle triangle (ad infinitum). This "pinwheel fractal" has Hausdorff dimension . Use in architecture Federation Square, a building complex in Melbourne, Australia, features the pinwheel tiling. In the project, the tiling pattern is used to create the structural sub-framing for the facades, allowing for the facades to be fabricated off-site, in a factory and later erected to form the facades. The pinwheel tiling system was based on the single triangular element, composed of zinc, perforated zinc, sandstone or glass (known as a tile), which was joined to 4 other similar tiles on an aluminum frame, to form a "panel". Five panels were affixed to a galvanized steel frame, forming a "mega-panel", which were then hoisted onto support frames for the facade. The rotational positioning of the tiles gives the facades a more random, uncertain compositional quality, even though the process of its construction is based on pre-fabrication and repetition. The same pinwheel tiling system is used in the development of the structural frame and glazing for the "Atrium" at Federation Square, although in this instance, the pin-wheel grid has been made "3-dimensional" to form a portal frame structure. References External links Pinwheel at the Tilings Encyclopedia Dynamic Pinwheel made in GeoGebra Discrete geometry Aperiodic tilings
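For reference, the standard Radin–Conway prototile is the right triangle with legs 1 and 2 (hypotenuse √5), each of the five sub-copies is scaled by 1/√5, its small angle is arctan(1/2), and discarding one of the five copies at every stage gives the dimension quoted for the "pinwheel fractal". These specific values are standard facts about the construction supplied here because the symbols have not survived in the text above; the Python sketch simply evaluates them.

import math

legs = (1, 2)
hypotenuse = math.hypot(*legs)   # sqrt(5): the third side of the prototile
scale = 1 / hypotenuse           # each of the 5 sub-tiles is smaller by this factor

# Pinwheel fractal: keep 4 of the 5 sub-triangles at every subdivision step.
hausdorff_dim = math.log(4) / math.log(hypotenuse)
print(round(hypotenuse, 4), round(scale, 4), round(hausdorff_dim, 4))   # 2.2361, 0.4472, 1.7227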
Pinwheel tiling
Physics,Mathematics
575
36,942,251
https://en.wikipedia.org/wiki/Ricci%20scalars%20%28Newman%E2%80%93Penrose%20formalism%29
In the Newman–Penrose (NP) formalism of general relativity, independent components of the Ricci tensors of a four-dimensional spacetime are encoded into seven (or ten) Ricci scalars which consist of three real scalars , three (or six) complex scalars and the NP curvature scalar . Physically, Ricci-NP scalars are related with the energy–momentum distribution of the spacetime due to Einstein's field equation. Definitions Given a complex null tetrad and with the convention , the Ricci-NP scalars are defined by (where overline means complex conjugate) Remark I: In these definitions, could be replaced by its trace-free part or by the Einstein tensor because of the normalization (i.e. inner product) relations that Remark II: Specifically for electrovacuum, we have , thus and therefore is reduced to Remark III: If one adopts the convention , the definitions of should take the opposite values; that is to say, after the signature transition. Alternative derivations According to the definitions above, one should find out the Ricci tensors before calculating the Ricci-NP scalars via contractions with the corresponding tetrad vectors. However, this method fails to fully reflect the spirit of Newman–Penrose formalism and alternatively, one could compute the spin coefficients and then derive the Ricci-NP scalars via relevant NP field equations that while the NP curvature scalar could be directly and easily calculated via with being the ordinary scalar curvature of the spacetime metric . Electromagnetic Ricci-NP scalars According to the definitions of Ricci-NP scalars above and the fact that could be replaced by in the definitions, are related with the energy–momentum distribution due to Einstein's field equations . In the simplest situation, i.e. vacuum spacetime in the absence of matter fields with , we will have . Moreover, for electromagnetic field, in addition to the aforementioned definitions, could be determined more specifically by where denote the three complex Maxwell-NP scalars which encode the six independent components of the Faraday-Maxwell 2-form (i.e. the electromagnetic field strength tensor) Remark: The equation for electromagnetic field is however not necessarily valid for other kinds of matter fields. For example, in the case of Yang–Mills fields there will be where are Yang–Mills-NP scalars. See also Newman–Penrose formalism Weyl scalar References General relativity
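The explicit formulas have not survived in the text above; for orientation, the LaTeX block below restates the Ricci-NP scalars in one commonly used convention (signature (−,+,+,+) with l·n = −1 and m·m̄ = 1, as in the remark about sign conventions), quoted from standard references rather than from this article, so signs should be checked against the reader's own convention. The final line is the electrovacuum relation in geometrized units under one common normalization.

\Phi_{00} = \tfrac{1}{2} R_{ab}\, l^a l^b , \qquad
\Phi_{11} = \tfrac{1}{4} R_{ab}\,\bigl(l^a n^b + m^a \bar{m}^b\bigr) , \qquad
\Phi_{22} = \tfrac{1}{2} R_{ab}\, n^a n^b ,
\Phi_{01} = \tfrac{1}{2} R_{ab}\, l^a m^b , \qquad
\Phi_{02} = \tfrac{1}{2} R_{ab}\, m^a m^b , \qquad
\Phi_{12} = \tfrac{1}{2} R_{ab}\, m^a n^b ,
\Phi_{10} = \overline{\Phi_{01}} , \qquad
\Phi_{20} = \overline{\Phi_{02}} , \qquad
\Phi_{21} = \overline{\Phi_{12}} , \qquad
\Lambda = \tfrac{1}{24} R .
% Electrovacuum (geometrized units, one common normalization):
\Phi_{ij} = 2\, \phi_i\, \overline{\phi_j} .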
Ricci scalars (Newman–Penrose formalism)
Physics
503
30,598,162
https://en.wikipedia.org/wiki/Zorn%20ring
In mathematics, a Zorn ring is an alternative ring in which for every non-nilpotent x there exists an element y such that xy is a non-zero idempotent . named them after Max August Zorn, who studied a similar condition in . For associative rings, the definition of Zorn ring can be restated as follows: the Jacobson radical J(R) is a nil ideal and every right ideal of R which is not contained in J(R) contains a nonzero idempotent. Replacing "right ideal" with "left ideal" yields an equivalent definition. Left or right Artinian rings, left or right perfect rings, semiprimary rings and von Neumann regular rings are all examples of associative Zorn rings. References Non-associative algebras Ring theory
Zorn ring
Mathematics
177
1,247,989
https://en.wikipedia.org/wiki/Tunnel%20ionization
In physics, tunnel ionization is a process in which electrons in an atom (or a molecule) tunnel through the potential barrier and escape from the atom (or molecule). In an intense electric field, the potential barrier of an atom (molecule) is distorted drastically. Therefore, as the length of the barrier that electrons have to pass decreases, the electrons can escape from the atom's potential more easily. Tunneling ionization is a quantum mechanical phenomenon since in the classical picture an electron does not have sufficient energy to overcome the potential barrier of the atom. When the atom is in a DC external field, the Coulomb potential barrier is lowered and the electron has an increased, non-zero probability of tunnelling through the potential barrier. In the case of an alternating electric field, the direction of the electric field reverses after the half period of the field. The ionized electron may come back to its parent ion. The electron may recombine with the nucleus (nuclei) and its kinetic energy is released as light (high harmonic generation). If the recombination does not occur, further ionization may proceed by collision between high-energy electrons and a parent atom (molecule). This process is known as non-sequential ionization. DC tunneling ionization Tunneling ionization from the ground state of a hydrogen atom in an electrostatic (DC) field was solved schematically by Lev Landau, using parabolic coordinates. This provides a simplified physical system that given it proper exponential dependence of the ionization rate on the applied external field. When , the ionization rate for this system is given by: Landau expressed this in atomic units, where . In SI units the previous parameters can be expressed as: , . The ionization rate is the total probability current through the outer classical turning point. This rate is found using the WKB approximation to match the ground state hydrogen wavefunction through the suppressed coulomb potential barrier. A more physically meaningful form for the ionization rate above can be obtained by noting that the Bohr radius and hydrogen atom ionization energy are given by , , where is the Rydberg energy. Then, the parameters and can be written as , . so that the total ionization rate can be rewritten . This form for the ionization rate emphasizes that the characteristic electric field needed for ionization is proportional to the ratio of the ionization energy to the characteristic size of the electron's orbital . Thus, atoms with low ionization energy (such as alkali metals) with electrons occupying orbitals with high principal quantum number (i.e. far down the periodic table) ionize most easily under a DC field. Furthermore, for a hydrogenic atom, the scaling of this characteristic ionization field goes as , where is the nuclear charge. This scaling arises because the ionization energy scales as and the orbital radius as . More accurate and general formulas for the tunneling from Hydrogen orbitals can also be obtained. As an empirical point of reference, the characteristic electric field for the ordinary hydrogen atom is about (or ) and the characteristic frequency is . AC electric field The ionization rate of a hydrogen atom in an alternating electric field, like that of a laser, can be treated, in the appropriate limit, as the DC ionization rate averaged over a single period of the electric field's oscillation. 
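The explicit form of Landau's DC-field result discussed above does not appear in the text; for orientation, the weak-field ionization rate of ground-state hydrogen in a static field is usually quoted in atomic units as the expression below (standard textbook form, supplied here from general references rather than from this article), where \(\mathcal{E}\) is the applied field strength.

w \;=\; \frac{4}{\mathcal{E}}\,\exp\!\left(-\frac{2}{3\,\mathcal{E}}\right)
\qquad (\text{atomic units},\ \mathcal{E}\ll 1)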
Multiphoton and tunnel ionization of an atom or a molecule describes the same process by which a bounded electron, through the absorption of more than one photon from the laser field, is ionized. The difference between them is a matter of definition under different conditions. They can henceforth be called multiphoton ionization (MPI) whenever the distinction is not necessary. The dynamics of the MPI can be described by finding the time evolution of the state of the atom which is described by the Schrödinger equation. When the intensity of the laser is strong, the lowest-order perturbation theory is not sufficient to describe the MPI process. In this case, the laser field on larger distances from the nucleus is more important than the Coulomb potential and the dynamic of the electron in the field should be properly taken into account. The first work in this category was published by Leonid Keldysh. He modeled the MPI process as a transition of the electron from the ground state of the atom to the Volkov states (the state of a free electron in the electromagnetic field). In this model, the perturbation of the ground state by the laser field is neglected and the details of atomic structure in determining the ionization probability are not taken into account. The major difficulty with Keldysh's model was its neglect of the effects of Coulomb interaction on the final state of the electron. As is observed from the figure, the Coulomb field is not very small in magnitude compared to the potential of the laser at larger distances from the nucleus. This is in contrast to the approximation made by neglecting the potential of the laser at regions near the nucleus. A. M. Perelomov, V. S. Popov and M. V. Terent'ev included the Coulomb interaction at larger internuclear distances. Their model (which is called the PPT model after their initials) was derived for short-range potential and includes the effect of the long-range Coulomb interaction through the first-order correction in the quasi-classical action. In the quasi-static limit, the PPT model approaches the ADK model by M. V. Ammosov, N. B. Delone, and V. P. Krainov. Many experiments have been carried out on the MPI of rare gas atoms using strong laser pulses, through measuring both the total ion yield and the kinetic energy of the electrons. Here, one only considers the experiments designed to measure the total ion yield. Among these experiments are those by S. L. Chin et al., S. Augst et al. and T. Auguste et al. Chin et al. used a 10.6 μm CO2 laser in their experiment. Due to the very small frequency of the laser, the tunneling is strictly quasi-static, a characteristic that is not easily attainable using pulses in the near infrared or visible region of frequencies. These findings weakened the suspicion on the applicability of models basically founded on the assumption of a structureless atom. S. Larochelle et al. have compared the theoretically predicted ion versus intensity curves of rare gas atoms interacting with a :Ti:sapphire laser with experimental measurement. They have shown that the total ionization rate predicted by the PPT model fits very well the experimental ion yields for all rare gases in the intermediate regime of Keldysh parameter. Analytical formula for the rate of MPI The dynamics of the MPI can be described by finding the time evolution of the state of the atom which is described by the Schrödinger equation. 
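The Keldysh adiabaticity parameter that separates the multiphoton and quasi-static tunnelling regimes is commonly written as γ = sqrt(Ip / (2·Up)), with Up the ponderomotive energy of the laser field. The Python sketch below evaluates it for an assumed Ti:sapphire example (800 nm, 1×10^14 W/cm², hydrogen-like Ip = 13.6 eV); the numerical inputs are illustrative assumptions, not values taken from the experiments cited above.

import math

# Assumed example parameters (illustrative only)
wavelength_um = 0.8          # Ti:sapphire wavelength, micrometres
intensity = 1e14             # laser intensity, W/cm^2
ionization_potential = 13.6  # eV (hydrogen-like)

# Ponderomotive energy, standard engineering formula:
# Up [eV] ~= 9.33e-14 * I [W/cm^2] * lambda^2 [um^2]
U_p = 9.33e-14 * intensity * wavelength_um**2

# Keldysh parameter: gamma ~ 1 marks the crossover between
# the multiphoton (gamma >> 1) and tunnelling (gamma << 1) regimes.
gamma = math.sqrt(ionization_potential / (2 * U_p))
print(f"Up ~ {U_p:.1f} eV, gamma ~ {gamma:.2f}")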
The form of this equation in the electric field gauge, assuming the single active electron (SAE) approximation and using dipole approximation, is the following where is the electric field of the laser and is the static Coulomb potential of the atomic core at the position of the active electron. By finding the exact solution of equation (1) for a potential ( the magnitude of the ionization potential of the atom), the probability current is calculated. Then, the total MPI rate from short-range potential for linear polarization, , is found from where is the frequency of the laser, which is assumed to be polarized in the direction of the axis. The effect of the ionic potential, which behaves like ( is the charge of atomic or ionic core) at a long distance from the nucleus, is calculated through first order correction on the semi-classical action. The result is that the effect of ionic potential is to increase the rate of MPI by a factor of Where and is the peak electric field of laser. Thus, the total rate of MPI from a state with quantum numbers and in a laser field for linear polarization is calculated to be where is the Keldysh's adiabaticity parameter and . The coefficients , and are given by The coefficient is given by , where The ADK model is the limit of the PPT model when approaches zero (quasi-static limit). In this case, which is known as quasi-static tunnelling (QST), the ionization rate is given by . In practice, the limit for the QST regime is . This is justified by the following consideration. Referring to the figure, the ease or difficulty of tunneling can be expressed as the ratio between the equivalent classical time it takes for the electron to tunnel out the potential barrier while the potential is bent down. This ratio is indeed , since the potential is bent down during half a cycle of the field oscillation and the ratio can be expressed as , where is the tunneling time (classical time of flight of an electron through a potential barrier, and is the period of laser field oscillation. MPI of molecules Contrary to the abundance of theoretical and experimental work on the MPI of rare gas atoms, the amount of research on the prediction of the rate of MPI of neutral molecules was scarce until recently. Walsh et al. have measured the MPI rate of some diatomic molecules interacting with a CO2 laser. They found that these molecules are tunnel-ionized as if they were structureless atoms with an ionization potential equivalent to that of the molecular ground state. A. Talebpour et al. were able to quantitatively fit the ionization yield of diatomic molecules interacting with a Ti:sapphire laser pulse. The conclusion of the work was that the MPI rate of a diatomic molecule can be predicted from the PPT model by assuming that the electron tunnels through a barrier given by instead of barrier which is used in the calculation of the MPI rate of atoms. The importance of this finding is in its practicality; the only parameter needed for predicting the MPI rate of a diatomic molecule is a single parameter, . Using the semi-empirical model for the MPI rate of unsaturated hydrocarbons is feasible. This simplistic view ignores the ionization dependence on orientation of molecular axis with respect to polarization of the electric field of the laser, which is determined by the symmetries of the molecular orbitals. This dependence can be used to follow molecular dynamics using strong field multiphoton ionization. 
Tunneling time The question of how long a tunneling particle spends inside the barrier region has remained unresolved since the early days of quantum mechanics. It is sometimes suggested that the tunneling time is instantaneous because both the Keldysh and the closely related Buttiker-Landauer times are imaginary (corresponding to the decay of the wavefunction under the barrier). In a recent publication the main competing theories of tunneling time are compared against experimental measurements using the attoclock in strong laser field ionization of helium atoms. Refined attoclock measurements reveal a real and not instantaneous tunneling delay time over a large intensity regime. It is found that the experimental results are compatible with the probability distribution of tunneling times constructed using a Feynman path integral (FPI) formulation. However, later work in atomic hydrogen has demonstrated that most of the tunneling time measured in the experiment is purely from the long-range Coulomb force exerted by the ion core on the outgoing electron. References Further reading Ions Atomic physics Ionization
Tunnel ionization
Physics,Chemistry
2,358
297,117
https://en.wikipedia.org/wiki/Hydrofluoric%20acid
Hydrofluoric acid is a solution of hydrogen fluoride (HF) in water. Solutions of HF are colorless, acidic and highly corrosive. A common concentration is 49% (48-52%) but there are also stronger solutions (e.g. 70%) and pure HF has a boiling point near room temperature. It is used to make most fluorine-containing compounds; examples include the commonly used pharmaceutical antidepressant medication fluoxetine (Prozac) and the material PTFE (Teflon). Elemental fluorine is produced from it. It is commonly used to etch glass and silicon wafers. Uses Production of organofluorine compounds The principal use of hydrofluoric acid is in organofluorine chemistry. Many organofluorine compounds are prepared using HF as the fluorine source, including Teflon, fluoropolymers, fluorocarbons, and refrigerants such as freon. Many pharmaceuticals contain fluorine. Production of inorganic fluorides Most high-volume inorganic fluoride compounds are prepared from hydrofluoric acid. Foremost are Na3AlF6, cryolite, and AlF3, aluminium trifluoride. A molten mixture of these solids serves as a high-temperature solvent for the production of metallic aluminium. Other inorganic fluorides prepared from hydrofluoric acid include sodium fluoride and uranium hexafluoride. Etchant, cleaner It is used in the semiconductor industry as a major component of Wright etch and buffered oxide etch, which are used to clean silicon wafers. In a similar manner it is also used to etch glass by treatment with silicon dioxide to form gaseous or water-soluble silicon fluorides. It can also be used to polish and frost glass. SiO2 + 4 HF → SiF4(g) + 2 H2O SiO2 + 6 HF → H2SiF6 + 2 H2O A 5% to 9% hydrofluoric acid gel is also commonly used to etch all ceramic dental restorations to improve bonding. For similar reasons, dilute hydrofluoric acid is a component of household rust stain remover, in car washes in "wheel cleaner" compounds, in ceramic and fabric rust inhibitors, and in water spot removers. Because of its ability to dissolve iron oxides as well as silica-based contaminants, hydrofluoric acid is used in pre-commissioning boilers that produce high-pressure steam. Hydrofluoric acid is also useful for dissolving rock samples (usually powdered) prior to analysis. In similar manner, this acid is used in acid macerations to extract organic fossils from silicate rocks. Fossiliferous rock may be immersed directly into the acid, or a cellulose nitrate film may be applied (dissolved in amyl acetate), which adheres to the organic component and allows the rock to be dissolved around it. Oil refining In a standard oil refinery process known as alkylation, isobutane is alkylated with low-molecular-weight alkenes (primarily a mixture of propylene and butylene) in the presence of an acid catalyst derived from hydrofluoric acid. The catalyst protonates the alkenes (propylene, butylene) to produce reactive carbocations, which alkylate isobutane. The reaction is carried out at mild temperatures (0 and 30 °C) in a two-phase reaction. Production Hydrofluoric acid was first prepared in 1771, by Carl Wilhelm Scheele. It is now mainly produced by treatment of the mineral fluorite, CaF2, with concentrated sulfuric acid at approximately 265 °C. CaF2 + H2SO4 → 2 HF + CaSO4 The acid is also a by-product of the production of phosphoric acid from apatite and fluoroapatite. Digestion of the mineral with sulfuric acid at elevated temperatures releases a mixture of gases, including hydrogen fluoride, which may be recovered. 
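The fluorite route shown above (CaF2 + H2SO4 → 2 HF + CaSO4) fixes the maximum HF yield by simple stoichiometry. A minimal Python sketch, with molar masses taken from standard tables and an assumed one-tonne batch of pure fluorite (both assumptions, not figures from the article):

# Theoretical HF yield from CaF2 + H2SO4 -> 2 HF + CaSO4
M_CaF2 = 40.08 + 2 * 19.00   # ~78.08 g/mol
M_HF   = 1.008 + 19.00       # ~20.01 g/mol

fluorite_kg = 1000.0                      # assumed batch size (1 tonne, pure CaF2)
mol_CaF2 = fluorite_kg * 1000 / M_CaF2    # moles of fluorite
hf_kg = 2 * mol_CaF2 * M_HF / 1000        # two moles of HF per mole of CaF2

print(round(hf_kg, 1))   # ~513 kg of HF per tonne of fluorite, before process losses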
Because of its high reactivity toward glass, hydrofluoric acid is stored in fluorinated plastic (often PTFE) containers. Properties In dilute aqueous solution hydrogen fluoride behaves as a weak acid, Infrared spectroscopy has been used to show that, in solution, dissociation is accompanied by formation of the ion pair ·F−. + HF ⋅F−pKa = 3.17 This ion pair has been characterized in the crystalline state at very low temperature. Further association has been characterized both in solution and in the solid state. HF + F− log K = 0.6 It is assumed that polymerization occurs as the concentration increases. This assumption is supported by the isolation of a salt of a tetrameric anion and by low-temperature X-ray crystallography. The species that are present in concentrated aqueous solutions of hydrogen fluoride have not all been characterized; in addition to which is known the formation of other polymeric species, , is highly likely. The Hammett acidity function, H0, for 100% HF was first reported as -10.2, while later compilations show -11, comparable to values near -12 for pure sulfuric acid. Acidity Unlike other hydrohalic acids, such as hydrochloric acid, hydrogen fluoride is only a weak acid in dilute aqueous solution. This is in part a result of the strength of the hydrogen–fluorine bond, but also of other factors such as the tendency of HF, , and anions to form clusters. At high concentrations, HF molecules undergo homoassociation to form polyatomic ions (such as bifluoride, ) and protons, thus greatly increasing the acidity. This leads to protonation of very strong acids like hydrochloric, sulfuric, or nitric acids when using concentrated hydrofluoric acid solutions. Although hydrofluoric acid is regarded as a weak acid, it is very corrosive, even attacking glass when hydrated. Dilute solutions are weakly acidic with an acid ionization constant (or ), in contrast to corresponding solutions of the other hydrogen halides, which are strong acids (). However concentrated solutions of hydrogen fluoride are much more strongly acidic than implied by this value, as shown by measurements of the Hammett acidity function H0(or "effective pH"). During self ionization of 100% liquid HF the H0 was first measured as −10.2 and later compiled as −11, comparable to values near −12 for sulfuric acid. In thermodynamic terms, HF solutions are highly non-ideal, with the activity of HF increasing much more rapidly than its concentration. The weak acidity in dilute solution is sometimes attributed to the high H—F bond strength, which combines with the high dissolution enthalpy of HF to outweigh the more negative enthalpy of hydration of the fluoride ion. Paul Giguère and Sylvia Turrell have shown by infrared spectroscopy that the predominant solute species in dilute solution is the hydrogen-bonded ion pair ·F−. + HF ⋅F− With increasing concentration of HF the concentration of the hydrogen difluoride ion also increases. The reaction 3 HF + H2F+ is an example of homoconjugation. Health and safety In addition to being a highly corrosive liquid, hydrofluoric acid is also a powerful contact poison. Since it can penetrate tissue, poisoning can occur readily through exposure of skin or eyes, inhalation, or ingestion. Symptoms of exposure to hydrofluoric acid may not be immediately evident, and this can provide false reassurance to victims, causing them to delay medical treatment. Despite its irritating vapor, HF may reach dangerous levels without an obvious odor. 
It interferes with nerve function, meaning that burns may not initially be painful. Accidental exposures can go unnoticed, delaying treatment and increasing the extent and seriousness of the injury. Symptoms of HF exposure include irritation of the eyes, skin, nose, and throat, eye and skin burns, rhinitis, bronchitis, pulmonary edema (fluid buildup in the lungs), and bone damage due to HF strongly interacting with calcium in bones. In a concentrated form, HF can cause severe tissue destruction through lesions and mucous membrane damage, but dilute HF is still dangerous because of its high lipid affinity, leading to cellular death of nerves, blood vessels, tendons, bones, and other tissues. Hydrofluoric burns are treated with a calcium gluconate gel. In popular culture In the episodes "Cat's in the Bag..." and "Box Cutter" of the crime drama television series Breaking Bad, Walter White and Jesse Pinkman use hydrofluoric acid to chemically disincorporate bodies of gangsters. See also Vapour phase decomposition 2019 Philadelphia Energy Solutions refinery explosion References External links International Chemical Safety Card 0283 NIOSH Pocket Guide to Chemical Hazards (HF) (5HF) (6HF) (7HF) "Hydrofluoric Acid Burn", The New England Journal of Medicine—Acid burn case study Fluorides Nonmetal halides Mineral acids Acid catalysts
Hydrofluoric acid
Chemistry
1,961
279,293
https://en.wikipedia.org/wiki/Mathematics%20and%20architecture
Mathematics and architecture are related, since, as with other arts, architects use mathematics for several reasons. Apart from the mathematics needed when engineering buildings, architects use geometry: to define the spatial form of a building; from the Pythagoreans of the sixth century BC onwards, to create forms considered harmonious, and thus to lay out buildings and their surroundings according to mathematical, aesthetic and sometimes religious principles; to decorate buildings with mathematical objects such as tessellations; and to meet environmental goals, such as to minimise wind speeds around the bases of tall buildings. In ancient Egypt, ancient Greece, India, and the Islamic world, buildings including pyramids, temples, mosques, palaces and mausoleums were laid out with specific proportions for religious reasons. In Islamic architecture, geometric shapes and geometric tiling patterns are used to decorate buildings, both inside and outside. Some Hindu temples have a fractal-like structure where parts resemble the whole, conveying a message about the infinite in Hindu cosmology. In Chinese architecture, the tulou of Fujian province are circular, communal defensive structures. In the twenty-first century, mathematical ornamentation is again being used to cover public buildings. In Renaissance architecture, symmetry and proportion were deliberately emphasized by architects such as Leon Battista Alberti, Sebastiano Serlio and Andrea Palladio, influenced by Vitruvius's De architectura from ancient Rome and the arithmetic of the Pythagoreans from ancient Greece. At the end of the nineteenth century, Vladimir Shukhov in Russia and Antoni Gaudí in Barcelona pioneered the use of hyperboloid structures; in the Sagrada Família, Gaudí also incorporated hyperbolic paraboloids, tessellations, catenary arches, catenoids, helicoids, and ruled surfaces. In the twentieth century, styles such as modern architecture and Deconstructivism explored different geometries to achieve desired effects. Minimal surfaces have been exploited in tent-like roof coverings as at Denver International Airport, while Richard Buckminster Fuller pioneered the use of the strong thin-shell structures known as geodesic domes. Connected fields The architects Michael Ostwald and Kim Williams, considering the relationships between architecture and mathematics, note that the fields as commonly understood might seem to be only weakly connected, since architecture is a profession concerned with the practical matter of making buildings, while mathematics is the pure study of number and other abstract objects. But, they argue, the two are strongly connected, and have been since antiquity. In ancient Rome, Vitruvius described an architect as a man who knew enough of a range of other disciplines, primarily geometry, to enable him to oversee skilled artisans in all the other necessary areas, such as masons and carpenters. The same applied in the Middle Ages, where graduates learnt arithmetic, geometry and aesthetics alongside the basic syllabus of grammar, logic, and rhetoric (the trivium) in elegant halls made by master builders who had guided many craftsmen. A master builder at the top of his profession was given the title of architect or engineer. In the Renaissance, the quadrivium of arithmetic, geometry, music and astronomy became an extra syllabus expected of the Renaissance man such as Leon Battista Alberti. Similarly in England, Sir Christopher Wren, known today as an architect, was firstly a noted astronomer. 
Williams and Ostwald, further overviewing the interaction of mathematics and architecture since 1500 according to the approach of the German sociologist Theodor Adorno, identify three tendencies among architects, namely: to be revolutionary, introducing wholly new ideas; reactionary, failing to introduce change; or revivalist, actually going backwards. They argue that architects have avoided looking to mathematics for inspiration in revivalist times. This would explain why in revivalist periods, such as the Gothic Revival in 19th century England, architecture had little connection to mathematics. Equally, they note that in reactionary times such as the Italian Mannerism of about 1520 to 1580, or the 17th century Baroque and Palladian movements, mathematics was barely consulted. In contrast, the revolutionary early 20th-century movements such as Futurism and Constructivism actively rejected old ideas, embracing mathematics and leading to Modernist architecture. Towards the end of the 20th century, too, fractal geometry was quickly seized upon by architects, as was aperiodic tiling, to provide interesting and attractive coverings for buildings. Architects use mathematics for several reasons, leaving aside the necessary use of mathematics in the engineering of buildings. Firstly, they use geometry because it defines the spatial form of a building. Secondly, they use mathematics to design forms that are considered beautiful or harmonious. From the time of the Pythagoreans with their religious philosophy of number, architects in ancient Greece, ancient Rome, the Islamic world and the Italian Renaissance have chosen the proportions of the built environment – buildings and their designed surroundings – according to mathematical as well as aesthetic and sometimes religious principles. Thirdly, they may use mathematical objects such as tessellations to decorate buildings. Fourthly, they may use mathematics in the form of computer modelling to meet environmental goals, such as to minimise whirling air currents at the base of tall buildings. Secular aesthetics Ancient Rome Vitruvius The influential ancient Roman architect Vitruvius argued that the design of a building such as a temple depends on two qualities, proportion and symmetria. Proportion ensures that each part of a building relates harmoniously to every other part. Symmetria in Vitruvius's usage means something closer to the English term modularity than mirror symmetry, as again it relates to the assembling of (modular) parts into the whole building. In his Basilica at Fano, he uses ratios of small integers, especially the triangular numbers (1, 3, 6, 10, ...) to proportion the structure into (Vitruvian) modules. Thus the Basilica's width to length is 1:2; the aisle around it is as high as it is wide, 1:1; the columns are five feet thick and fifty feet high, 1:10. Vitruvius named three qualities required of architecture in his De architectura, : firmness, usefulness (or "Commodity" in Henry Wotton's 17th century English), and delight. These can be used as categories for classifying the ways in which mathematics is used in architecture. Firmness encompasses the use of mathematics to ensure a building stands up, hence the mathematical tools used in design and to support construction, for instance to ensure stability and to model performance. Usefulness derives in part from the effective application of mathematics, reasoning about and analysing the spatial and other relationships in a design. 
Delight is an attribute of the resulting building, resulting from the embodying of mathematical relationships in the building; it includes aesthetic, sensual and intellectual qualities. The Pantheon The Pantheon in Rome has survived intact, illustrating classical Roman structure, proportion, and decoration. The main structure is a dome, the apex left open as a circular oculus to let in light; it is fronted by a short colonnade with a triangular pediment. The height to the oculus and the diameter of the interior circle are the same, so the whole interior would fit exactly within a cube, and the interior could house a sphere of the same diameter. These dimensions make more sense when expressed in ancient Roman units of measurement: the dome spans 150 Roman feet; the oculus is 30 Roman feet in diameter; the doorway is 40 Roman feet high. The Pantheon remains the world's largest unreinforced concrete dome. Renaissance The first Renaissance treatise on architecture was Leon Battista Alberti's 1450 De re aedificatoria (On the Art of Building); it became the first printed book on architecture in 1485. It was partly based on Vitruvius's De architectura and, via Nicomachus, Pythagorean arithmetic. Alberti starts with a cube, and derives ratios from it. Thus the diagonal of a face gives the ratio 1:√2, while the diameter of the sphere which circumscribes the cube gives 1:√3. Alberti also documented Filippo Brunelleschi's discovery of linear perspective, developed to enable the design of buildings which would look beautifully proportioned when viewed from a convenient distance. The next major text was Sebastiano Serlio's Regole generali d'architettura (General Rules of Architecture); the first volume appeared in Venice in 1537; the 1545 volume (books 1 and 2) covered geometry and perspective. Two of Serlio's methods for constructing perspectives were wrong, but this did not stop his work being widely used. In 1570, Andrea Palladio published the influential I quattro libri dell'architettura (The Four Books of Architecture) in Venice. This widely printed book was largely responsible for spreading the ideas of the Italian Renaissance throughout Europe, assisted by proponents like the English diplomat Henry Wotton with his 1624 The Elements of Architecture. The proportions of each room within a villa were calculated on simple mathematical ratios like 3:4 and 4:5, and the different rooms within the house were interrelated by these ratios. Earlier architects had used these formulas for balancing a single symmetrical facade; however, Palladio's designs related to the whole, usually square, villa. Palladio permitted a range of such ratios in the Quattro libri. In 1615, Vincenzo Scamozzi published the late Renaissance treatise L'idea dell'architettura universale (The Idea of a Universal Architecture). He attempted to relate the design of cities and buildings to the ideas of Vitruvius and the Pythagoreans, and to the more recent ideas of Palladio. Nineteenth century Hyperboloid structures were used starting towards the end of the nineteenth century by Vladimir Shukhov for masts, lighthouses and cooling towers. Their striking shape is both aesthetically interesting and strong, using structural materials economically. Shukhov's first hyperboloidal tower was exhibited in Nizhny Novgorod in 1896. Twentieth century The early twentieth century movement Modern architecture, pioneered by Russian Constructivism, used rectilinear Euclidean (also called Cartesian) geometry. 
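The proportional relationships described above lend themselves to simple numerical checks. The following sketch is illustrative only: the room dimensions are invented for the example, and only the ratios themselves (Alberti's cube-derived 1:√2 and 1:√3, the Pantheon's equal interior height and diameter, Palladio's 3:4 and 4:5) come from the text.

```python
# Illustrative only: quick numerical checks of the proportional systems
# described above. The specific room dimensions below are hypothetical
# values chosen for demonstration, not measurements from any source.
import math

# Alberti's cube-derived ratios: for a cube of side 1, the diagonal of a
# face is sqrt(2) and the diameter of the circumscribing sphere (the space
# diagonal) is sqrt(3).
side = 1.0
face_diagonal = math.hypot(side, side)        # sqrt(2)
space_diagonal = math.sqrt(3) * side          # sqrt(3)
print(f"1 : {face_diagonal:.4f}  (face diagonal, 1:sqrt 2)")
print(f"1 : {space_diagonal:.4f}  (circumscribing sphere, 1:sqrt 3)")

# The Pantheon: because the height to the oculus equals the interior
# diameter, a sphere of that diameter would just fit inside the rotunda.
diameter = 150.0   # Roman feet, the dome span quoted above
height = 150.0     # equal by design
print("sphere fits exactly:", math.isclose(diameter, height))

# Palladian room ratios: check whether a room's sides are in one of the
# simple ratios Palladio favoured, such as 3:4 or 4:5.
def matches_ratio(width, length, p, q, tol=0.01):
    return abs(width / length - p / q) < tol

print(matches_ratio(18, 24, 3, 4))   # True: 18:24 reduces to 3:4
print(matches_ratio(16, 20, 4, 5))   # True: 16:20 reduces to 4:5
```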
In the De Stijl movement, the horizontal and the vertical were seen as constituting the universal. The architectural form consists of putting these two directional tendencies together, using roof planes, wall planes and balconies, which either slide past or intersect each other, as in the 1924 Rietveld Schröder House by Gerrit Rietveld. Modernist architects were free to make use of curves as well as planes. Charles Holden's 1933 Arnos Grove station has a circular ticket hall in brick with a flat concrete roof. In 1938, the Bauhaus painter László Moholy-Nagy adopted Raoul Heinrich Francé's seven biotechnical elements, namely the crystal, the sphere, the cone, the plane, the (cuboidal) strip, the (cylindrical) rod, and the spiral, as the supposed basic building blocks of architecture inspired by nature. Le Corbusier proposed an anthropometric scale of proportions in architecture, the Modulor, based on the supposed height of a man. Le Corbusier's 1955 Chapelle Notre-Dame du Haut uses free-form curves not describable in mathematical formulae. The shapes are said to be evocative of natural forms such as the prow of a ship or praying hands. The design is only at the largest scale: there is no hierarchy of detail at smaller scales, and thus no fractal dimension; the same applies to other famous twentieth-century buildings such as the Sydney Opera House, Denver International Airport, and the Guggenheim Museum, Bilbao. Contemporary architecture, in the opinion of the 90 leading architects who responded to a 2010 World Architecture Survey, is extremely diverse; the best was judged to be Frank Gehry's Guggenheim Museum, Bilbao. Denver International Airport's terminal building, completed in 1995, has a fabric roof supported as a minimal surface (i.e., its mean curvature is zero) by steel cables. It evokes Colorado's snow-capped mountains and the teepee tents of Native Americans. The architect Richard Buckminster Fuller is famous for designing strong thin-shell structures known as geodesic domes, such as the dome of the Montréal Biosphère. Sydney Opera House has a dramatic roof consisting of soaring white vaults, reminiscent of ship's sails; to make them possible to construct using standardized components, the vaults are all composed of triangular sections of spherical shells with the same radius. These have the required uniform curvature in every direction. The late twentieth century movement Deconstructivism creates deliberate disorder with what Nikos Salingaros in A Theory of Architecture calls random forms of high complexity by using non-parallel walls, superimposed grids and complex 2-D surfaces, as in Frank Gehry's Disney Concert Hall and Guggenheim Museum, Bilbao. Until the twentieth century, architecture students were obliged to have a grounding in mathematics. Salingaros argues that first "overly simplistic, politically-driven" Modernism and then "anti-scientific" Deconstructivism have effectively separated architecture from mathematics. He believes that this "reversal of mathematical values" is harmful, as the "pervasive aesthetic" of non-mathematical architecture trains people "to reject mathematical information in the built environment"; he argues that this has negative effects on society. Religious principles Ancient Egypt The pyramids of ancient Egypt are tombs constructed with mathematical proportions, but which these were, and whether the Pythagorean theorem was used, are debated. 
The ratio of the slant height to half the base length of the Great Pyramid of Giza is less than 1% from the golden ratio. If this was the design method, it would imply the use of Kepler's triangle (face angle 51°49'), but according to many historians of science, the golden ratio was not known until the time of the Pythagoreans. The proportions of some pyramids may have also been based on the 3:4:5 triangle (face angle 53°8'), known from the Rhind Mathematical Papyrus (c. 1650–1550 BC); this was first conjectured by historian Moritz Cantor in 1882. It is known that right angles were laid out accurately in ancient Egypt using knotted cords for measurement, that Plutarch recorded in Isis and Osiris (c. 100 AD) that the Egyptians admired the 3:4:5 triangle, and that a scroll from before 1700 BC demonstrated basic square formulas. Historian Roger L. Cooke observes that "It is hard to imagine anyone being interested in such conditions without knowing the Pythagorean theorem," but also notes that no Egyptian text before 300 BC actually mentions the use of the theorem to find the length of a triangle's sides, and that there are simpler ways to construct a right angle. Cooke concludes that Cantor's conjecture remains uncertain; he guesses that the ancient Egyptians probably knew the Pythagorean theorem, but "there is no evidence that they used it to construct right angles." Ancient India Vaastu Shastra, the ancient Indian canons of architecture and town planning, employs symmetrical drawings called mandalas. Complex calculations are used to arrive at the dimensions of a building and its components. The designs are intended to integrate architecture with nature, the relative functions of various parts of the structure, and ancient beliefs utilizing geometric patterns (yantra), symmetry and directional alignments. However, early builders may have come upon mathematical proportions by accident. The mathematician Georges Ifrah notes that simple "tricks" with string and stakes can be used to lay out geometric shapes, such as ellipses and right angles. The mathematics of fractals has been used to show that the reason why existing buildings have universal appeal and are visually satisfying is because they provide the viewer with a sense of scale at different viewing distances. For example, in the tall gopuram gatehouses of Hindu temples such as the Virupaksha Temple at Hampi built in the seventh century, and others such as the Kandariya Mahadev Temple at Khajuraho, the parts and the whole have the same character, with fractal dimension in the range 1.7 to 1.8. The cluster of smaller towers (shikhara, lit. 'mountain') about the tallest, central, tower which represents the holy Mount Kailash, abode of Lord Shiva, depicts the endless repetition of universes in Hindu cosmology. The religious studies scholar William J. Jackson observed of the pattern of towers grouped among smaller towers, themselves grouped among still smaller towers, that: The Meenakshi Amman Temple is a large complex with multiple shrines, with the streets of Madurai laid out concentrically around it according to the shastras. The four gateways are tall towers (gopurams) with fractal-like repetitive structure as at Hampi. The enclosures around each shrine are rectangular and surrounded by high stone walls. Ancient Greece Pythagoras (c. 569 – c. 475 B.C.) and his followers, the Pythagoreans, held that "all things are numbers". 
They observed the harmonies produced by notes with specific small-integer ratios of frequency, and argued that buildings too should be designed with such ratios. The Greek word symmetria originally denoted the harmony of architectural shapes in precise ratios from a building's smallest details right up to its entire design. The Parthenon is about 69.5 metres long, 30.9 metres wide and 13.7 metres high to the cornice. This gives a ratio of width to length of 4:9, and the same for height to width. Putting these together gives height:width:length of 16:36:81, or to the delight of the Pythagoreans 4²:6²:9². This sets the module as 0.858 m. A 4:9 rectangle can be constructed as three contiguous rectangles with sides in the ratio 3:4. Each half-rectangle is then a convenient 3:4:5 right triangle, enabling the angles and sides to be checked with a suitably knotted rope. The inner area (naos) similarly has 4:9 proportions of width to its 48.3 m length; the ratio between the diameter of the outer columns and the spacing of their centres is also 4:9. The Parthenon is considered by authors such as John Julius Norwich "the most perfect Doric temple ever built". Its elaborate architectural refinements include "a subtle correspondence between the curvature of the stylobate, the taper of the naos walls and the entasis of the columns". Entasis refers to the subtle diminution in diameter of the columns as they rise. The stylobate is the platform on which the columns stand. As in other classical Greek temples, the platform has a slight parabolic upward curvature to shed rainwater and reinforce the building against earthquakes. The columns might therefore be supposed to lean outwards, but they actually lean slightly inwards so that if they carried on, they would meet about a kilometre and a half above the centre of the building; since they are all the same height, the curvature of the outer stylobate edge is transmitted to the architrave and roof above: "all follow the rule of being built to delicate curves". The golden ratio was known in 300 B.C., when Euclid described the method of geometric construction. It has been argued that the golden ratio was used in the design of the Parthenon and other ancient Greek buildings, as well as sculptures, paintings, and vases. More recent authors such as Nikos Salingaros, however, doubt all these claims. Experiments by the computer scientist George Markowsky failed to find any preference for the golden rectangle. Islamic architecture The historian of Islamic art Antonio Fernandez-Puertas suggests that the Alhambra, like the Great Mosque of Cordoba, was designed using the Hispano-Muslim foot or codo. In the palace's Court of the Lions, the proportions follow a series of surds. A rectangle with sides 1 and √2 has (by Pythagoras's theorem) a diagonal of √3, which describes the right triangle made by the sides of the court; the series continues with √4 (giving a 1:2 ratio), √5, and so on. The decorative patterns are similarly proportioned, √2 generating squares inside circles and eight-pointed stars, √3 generating six-pointed stars. There is no evidence to support earlier claims that the golden ratio was used in the Alhambra. The Court of the Lions is bracketed by the Hall of Two Sisters and the Hall of the Abencerrajes; a regular hexagon can be drawn from the centres of these two halls and the four inside corners of the Court of the Lions. The Selimiye Mosque in Edirne, Turkey, was built by Mimar Sinan to provide a space where the mihrab could be seen from anywhere inside the building. 
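The Parthenon and Alhambra figures just quoted can be checked with a few lines of arithmetic. A minimal sketch, using only the module and ratios stated above (everything else is derived):

```python
# A minimal sketch, not from the source: checking the Parthenon's 4:9
# scheme and the Alhambra's surd series numerically.
import math

module = 0.858  # metres, as quoted above

# height : width : length = 16 : 36 : 81, i.e. 4^2 : 6^2 : 9^2
height, width, length = 16 * module, 36 * module, 81 * module
print(f"height {height:.2f} m, width {width:.2f} m, length {length:.2f} m")
print("width/length =", round(width / length, 4), "~ 4/9 =", round(4 / 9, 4))
print("height/width =", round(height / width, 4), "~ 4/9 =", round(4 / 9, 4))

# A 4:9 rectangle splits into three 3:4 rectangles; half of each is a
# 3-4-5 right triangle, which a knotted rope can check on site.
print("3-4-5 check:", math.isclose(math.hypot(3, 4), 5))

# Court of the Lions: a rectangle with sides 1 and sqrt(2) has diagonal
# sqrt(3); the surd series continues with sqrt(4) = 2, giving a 1:2 ratio.
a, b = 1.0, math.sqrt(2)
print("diagonal =", round(math.hypot(a, b), 4), "= sqrt(3) =", round(math.sqrt(3), 4))
```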
The very large central space is accordingly arranged as an octagon, formed by eight enormous pillars, and capped by a circular dome. The octagon is formed into a square with four semidomes, and externally by four exceptionally tall minarets. The building's plan is thus a circle, inside an octagon, inside a square. Mughal architecture Mughal architecture, as seen in the abandoned imperial city of Fatehpur Sikri and the Taj Mahal complex, has a distinctive mathematical order and a strong aesthetic based on symmetry and harmony. The Taj Mahal exemplifies Mughal architecture, both representing paradise and displaying the Mughal Emperor Shah Jahan's power through its scale, symmetry and costly decoration. The white marble mausoleum, decorated with pietra dura, the great gate (Darwaza-i rauza), other buildings, the gardens and paths together form a unified hierarchical design. The buildings include a mosque in red sandstone on the west, and an almost identical building, the Jawab or 'answer' on the east to maintain the bilateral symmetry of the complex. The formal charbagh ('fourfold garden') is in four parts, symbolising the four rivers of Paradise, and offering views and reflections of the mausoleum. These are divided in turn into 16 parterres. The Taj Mahal complex was laid out on a grid, subdivided into smaller grids. The historians of architecture Koch and Barraud agree with the traditional accounts that give the width of the complex as 374 Mughal yards or gaz, the main area being three 374-gaz squares. These were divided in areas like the bazaar and caravanserai into 17-gaz modules; the garden and terraces are in modules of 23 gaz, and are 368 gaz wide (16 × 23). The mausoleum, mosque and guest house are laid out on a grid of 7 gaz. Koch and Barraud observe that if an octagon, used repeatedly in the complex, is given sides of 7 units, then it has a width of 17 units (a regular octagon of side s spans s(1 + √2) across its parallel sides, about 16.9 for s = 7), which may help to explain the choice of ratios in the complex. Christian architecture The Christian patriarchal basilica of Haghia Sophia in Byzantium (now Istanbul), first constructed in 537 (and twice rebuilt), was for a thousand years the largest cathedral ever built. It inspired many later buildings including Sultan Ahmed and other mosques in the city. The Byzantine architecture includes a nave crowned by a circular dome and two half-domes, all of the same diameter, with a further five smaller half-domes forming an apse and four rounded corners of a vast rectangular interior. This was interpreted by mediaeval architects as representing the mundane below (the square base) and the divine heavens above (the soaring spherical dome). The emperor Justinian used two geometers, Isidore of Miletus and Anthemius of Tralles as architects; Isidore compiled the works of Archimedes on solid geometry, and was influenced by him. The importance of water baptism in Christianity was reflected in the scale of baptistry architecture. The oldest, the Lateran Baptistry in Rome, built in 440, set a trend for octagonal baptistries; the baptismal font inside these buildings was often octagonal, though Italy's largest baptistry, at Pisa, built between 1152 and 1363, is circular, with an octagonal font. Its height and diameter are in a ratio of 8:5. Saint Ambrose wrote that fonts and baptistries were octagonal "because on the eighth day, by rising, Christ loosens the bondage of death and receives the dead from their graves." Saint Augustine similarly described the eighth day as "everlasting ... 
hallowed by the resurrection of Christ". The octagonal Baptistry of Saint John, Florence, built between 1059 and 1128, is one of the oldest buildings in that city, and one of the last in the direct tradition of classical antiquity; it was extremely influential in the subsequent Florentine Renaissance, as major architects including Francesco Talenti, Alberti and Brunelleschi used it as the model of classical architecture. The number five is used "exuberantly" in the 1721 Pilgrimage Church of St John of Nepomuk at Zelená hora, near Žďár nad Sázavou in the Czech republic, designed by Jan Blažej Santini Aichel. The nave is circular, surrounded by five pairs of columns and five oval domes alternating with ogival apses. The church further has five gates, five chapels, five altars and five stars; a legend claims that when Saint John of Nepomuk was martyred, five stars appeared over his head. The fivefold architecture may also symbolise the five wounds of Christ and the five letters of "Tacui" (Latin: "I kept silence" [about secrets of the confessional]). Antoni Gaudí used a wide variety of geometric structures, some being minimal surfaces, in the Sagrada Família, Barcelona, started in 1882 (and not completed as of 2023). These include hyperbolic paraboloids and hyperboloids of revolution, tessellations, catenary arches, catenoids, helicoids, and ruled surfaces. This varied mix of geometries is creatively combined in different ways around the church. For example, in the Passion Façade of Sagrada Família, Gaudí assembled stone "branches" in the form of hyperbolic paraboloids, which overlap at their tops (directrices) without, therefore, meeting at a point. In contrast, in the colonnade there are hyperbolic paraboloidal surfaces that smoothly join other structures to form unbounded surfaces. Further, Gaudí exploits natural patterns, themselves mathematical, with columns derived from the shapes of trees, and lintels made from unmodified basalt naturally cracked (by cooling from molten rock) into hexagonal columns. The 1971 Cathedral of Saint Mary of the Assumption, San Francisco has a saddle roof composed of eight segments of hyperbolic paraboloids, arranged so that the bottom horizontal cross section of the roof is a square and the top cross section is a Christian cross. The building is a square on a side, and high. The 1970 Cathedral of Brasília by Oscar Niemeyer makes a different use of a hyperboloid structure; it is constructed from 16 identical concrete beams, each weighing 90 tonnes, arranged in a circle to form a hyperboloid of revolution, the white beams creating a shape like hands praying to heaven. Only the dome is visible from outside: most of the building is below ground. Several medieval churches in Scandinavia are circular, including four on the Danish island of Bornholm. One of the oldest of these, Østerlars Church from , has a circular nave around a massive circular stone column, pierced with arches and decorated with a fresco. The circular structure has three storeys and was apparently fortified, the top storey having served for defence. Mathematical decoration Islamic architectural decoration Islamic buildings are often decorated with geometric patterns which typically make use of several mathematical tessellations, formed of ceramic tiles (girih, zellige) that may themselves be plain or decorated with stripes. Symmetries such as stars with six, eight, or multiples of eight points are used in Islamic patterns. 
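As an aside on the hyperboloid structures mentioned above (Shukhov's towers, and Niemeyer's cathedral assembled from straight concrete beams), the following sketch shows why such forms can be built from straight members: a one-sheet hyperboloid of revolution is a ruled surface. The beam count, radius, height and twist below are invented for illustration and are not taken from any of the buildings discussed.

```python
# Illustrative sketch (parameters are invented, not Niemeyer's or Shukhov's):
# a hyperboloid of revolution is a ruled surface, so it can be generated by
# straight members joining a bottom ring to a rotated top ring.
import math

def straight_beams(n_beams=16, radius=10.0, height=40.0, twist_deg=120.0):
    """Return (bottom, top) endpoints of n straight beams; their envelope is
    a one-sheet hyperboloid because the top ring is rotated relative to the
    bottom ring."""
    twist = math.radians(twist_deg)
    beams = []
    for i in range(n_beams):
        a = 2 * math.pi * i / n_beams
        bottom = (radius * math.cos(a), radius * math.sin(a), 0.0)
        top = (radius * math.cos(a + twist), radius * math.sin(a + twist), height)
        beams.append((bottom, top))
    return beams

# Waist radius of the resulting hyperboloid (at mid-height) follows from the
# chord geometry of the rings: r_waist = R * cos(twist / 2).
beams = straight_beams()
r_waist = 10.0 * math.cos(math.radians(120.0) / 2)
print(f"{len(beams)} straight beams, waist radius ~ {r_waist:.2f} (vs ring radius 10)")
```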
Some of these are based on the "Khatem Sulemani" or Solomon's seal motif, which is an eight-pointed star made of two squares, one rotated 45 degrees from the other on the same centre. Islamic patterns exploit many of the 17 possible wallpaper groups; as early as 1944, Edith Müller showed that the Alhambra made use of 11 wallpaper groups in its decorations, while in 1986 Branko Grünbaum claimed to have found 13 wallpaper groups in the Alhambra, asserting controversially that the remaining four groups are not found anywhere in Islamic ornament. Modern architectural decoration Towards the end of the 20th century, novel mathematical constructs such as fractal geometry and aperiodic tiling were seized upon by architects to provide interesting and attractive coverings for buildings. In 1913, the Modernist architect Adolf Loos had declared that "Ornament is a crime", influencing architectural thinking for the rest of the 20th century. In the 21st century, architects are again starting to explore the use of ornament. 21st century ornamentation is extremely diverse. Henning Larsen's 2011 Harpa Concert and Conference Centre, Reykjavik has what looks like a crystal wall of rock made of large blocks of glass. Foreign Office Architects' 2010 Ravensbourne College, London is tessellated decoratively with 28,000 anodised aluminium tiles in red, white and brown, interlinking circular windows of differing sizes. The tessellation uses three types of tile, an equilateral triangle and two irregular pentagons. Kazumi Kudo's Kanazawa Umimirai Library creates a decorative grid made of small circular blocks of glass set into plain concrete walls. Defence Europe The architecture of fortifications evolved from medieval fortresses, which had high masonry walls, to low, symmetrical star forts able to resist artillery bombardment between the mid-fifteenth and nineteenth centuries. The geometry of the star shapes was dictated by the need to avoid dead zones where attacking infantry could shelter from defensive fire; the sides of the projecting points were angled to permit such fire to sweep the ground, and to provide crossfire (from both sides) beyond each projecting point. Well-known architects who designed such defences include Michelangelo, Baldassare Peruzzi, Vincenzo Scamozzi and Sébastien Le Prestre de Vauban. The architectural historian Siegfried Giedion argued that the star-shaped fortification had a formative influence on the patterning of the Renaissance ideal city: "The Renaissance was hypnotized by one city type which for a century and a half—from Filarete to Scamozzi—was impressed upon all utopian schemes: this is the star-shaped city." China In Chinese architecture, the tulou of Fujian province are circular, communal defensive structures with mainly blank walls and a single iron-plated wooden door, some dating back to the sixteenth century. The walls are topped with roofs that slope gently both outwards and inwards, forming a ring. The centre of the circle is an open cobbled courtyard, often with a well, surrounded by timbered galleries up to five stories high. Environmental goals Architects may also select the form of a building to meet environmental goals. For example, Foster and Partners' 30 St Mary Axe, London, known as "The Gherkin" for its cucumber-like shape, is a solid of revolution designed using parametric modelling. Its geometry was chosen not purely for aesthetic reasons, but to minimise whirling air currents at its base. 
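How a solid of revolution such as this might be set up parametrically, and faceted into flat panels, can be sketched briefly. The profile function and dimensions below are invented for illustration and are not the building's actual parameters; the real cladding is a diagrid rather than this simple ring-by-ring faceting.

```python
# A minimal, purely illustrative sketch of a parametrically modelled solid
# of revolution faceted into flat panels. Profile and dimensions are invented.
import math

def radius_profile(z, height=180.0, r_base=25.0, r_mid=32.0, r_top=12.0):
    """Smooth radius as a function of height: bulging above the base and
    narrowing towards the top (a simple quadratic blend, for illustration)."""
    t = z / height
    return (1 - t) ** 2 * r_base + 2 * (1 - t) * t * r_mid + t ** 2 * r_top

def panel_mesh(n_rings=12, n_segments=24, height=180.0):
    """Facet the surface of revolution into quadrilateral panels.
    Each panel joins two adjacent rings at two adjacent angles; such a quad
    is planar, so it could be cut from flat material."""
    rings = []
    for i in range(n_rings + 1):
        z = height * i / n_rings
        r = radius_profile(z, height)
        ring = [(r * math.cos(2 * math.pi * j / n_segments),
                 r * math.sin(2 * math.pi * j / n_segments), z)
                for j in range(n_segments)]
        rings.append(ring)
    panels = []
    for i in range(n_rings):
        for j in range(n_segments):
            k = (j + 1) % n_segments
            panels.append((rings[i][j], rings[i][k], rings[i + 1][k], rings[i + 1][j]))
    return panels

print(len(panel_mesh()), "flat quadrilateral panels")  # 12 * 24 = 288
```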
Despite the building's apparently curved surface, all the panels of glass forming its skin are flat, except for the lens at the top. Most of the panels are quadrilaterals, as they can be cut from rectangular glass with less wastage than triangular panels. The traditional yakhchal (ice pit) of Persia functioned as an evaporative cooler. Above ground, the structure had a domed shape, but had a subterranean storage space for ice and sometimes food as well. The subterranean space and the thick heat-resistant construction insulated the storage space year round. The internal space was often further cooled with windcatchers. See also Black Rock City Mathematics and art Patterns in nature Notes References External links Nexus Network Journal: Architecture and Mathematics Online The International Society of the Arts, Mathematics, and Architecture University of St Andrews: Mathematics and Architecture National University of Singapore: Mathematics in Art and Architecture Dartmouth College: Geometry in Art & Architecture Mathematics and culture Architectural theory
Mathematics and architecture
Engineering
6,771
33,074,893
https://en.wikipedia.org/wiki/List%20of%20long%20mathematical%20proofs
This is a list of unusually long mathematical proofs. Such proofs often use computational proof methods and may be considered non-surveyable. The longest mathematical proof, measured by number of published journal pages, is the classification of finite simple groups with well over 10000 pages. There are several proofs that would be far longer than this if the details of the computer calculations they depend on were published in full. Long proofs The length of unusually long proofs has increased with time. As a rough rule of thumb, 100 pages in 1900, or 200 pages in 1950, or 500 pages in 2000 is unusually long for a proof. 1799 The Abel–Ruffini theorem was nearly proved by Paolo Ruffini, but his proof, spanning 500 pages, was mostly ignored and later, in 1824, Niels Henrik Abel published a proof that required just six pages. 1890 Killing's classification of simple complex Lie algebras, including his discovery of the exceptional Lie algebras, took 180 pages in 4 papers. 1894 The ruler-and-compass construction of a polygon of 65537 sides by Johann Gustav Hermes took over 200 pages. 1905 Emanuel Lasker's original proof of the Lasker–Noether theorem took 98 pages, but has since been simplified: modern proofs are less than a page long. 1963 Odd order theorem by Feit and Thompson was 255 pages long, which at the time was over 10 times as long as what had previously been considered a long paper in group theory. 1964 Resolution of singularities. Hironaka's original proof was 216 pages long; it has since been simplified considerably down to about 10 or 20 pages. 1966 Abhyankar's proof of resolution of singularities for 3-folds in characteristic greater than 6 covered about 500 pages in several papers. In 2009, Cutkosky simplified this to about 40 pages. 1966 Discrete series representations of Lie groups. Harish-Chandra's construction of these involved a long series of papers totaling around 500 pages. His later work on the Plancherel theorem for semisimple groups added another 150 pages to these. 1968 the Novikov–Adian proof solving Burnside's problem on finitely generated infinite groups with finite exponents negatively. The three-part original paper is more than 300 pages long. (Britton later published a 282-page paper attempting to solve the problem, but his paper contained a serious gap.) 1960–1970 Fondements de la Géométrie Algébrique, Éléments de géométrie algébrique and Séminaire de géométrie algébrique. Grothendieck's work on the foundations of algebraic geometry covers many thousands of pages. Although this is not a proof of a single theorem, there are several theorems in it whose proofs depend on hundreds of earlier pages. 1974 N-group theorem. Thompson's classification of N-groups used 6 papers totaling about 400 pages, but also used earlier results of his such as the odd order theorem, which bring the total length up to more than 700 pages. 1974 Ramanujan conjecture and the Weil conjectures. While Deligne's final paper proving these conjectures was "only" about 30 pages long, it depended on background results in algebraic geometry and étale cohomology that Deligne estimated to be about 2000 pages long. 1974 4-color theorem. Appel and Haken's proof of this took 139 pages, and also depended on long computer calculations. 1974 The Gorenstein–Harada theorem classifying finite groups of sectional 2-rank at most 4 was 464 pages long. 1976 Eisenstein series. Langlands's proof of the functional equation for Eisenstein series was 337 pages long. 1983 Trichotomy theorem. 
Gorenstein and Lyons's proof for the case of rank at least 4 was 731 pages long, and Aschbacher's proof of the rank 3 case adds another 159 pages, for a total of 890 pages. 1983 Selberg trace formula. Hejhal's proof of a general form of the Selberg trace formula consisted of 2 volumes with a total length of 1322 pages. Arthur–Selberg trace formula. Arthur's proofs of the various versions of this cover several hundred pages spread over many papers. 2000 Almgren's regularity theorem. Almgren's proof was 955 pages long. 2000 Lafforgue's theorem on the Langlands conjecture for the general linear group over function fields. Laurent Lafforgue's proof of this was about 600 pages long, not counting many pages of background results. 2003 Poincaré conjecture, Geometrization theorem, Geometrization conjecture. Perelman's original proofs of the Poincaré conjecture and the Geometrization conjecture were not lengthy, but were rather sketchy. Several other mathematicians have published proofs with the details filled in, which come to several hundred pages. 2004 Quasithin groups. The classification of the simple quasithin groups by Aschbacher and Smith was 1221 pages long, one of the longest single papers ever written. 2004 Classification of finite simple groups. The proof of this is spread out over hundreds of journal articles which makes it hard to estimate its total length, which is probably around 10000 to 20000 pages. 2004 Robertson–Seymour theorem. The proof takes about 500 pages spread over about 20 papers. 2005 Kepler conjecture. Hales's proof of this involves several hundred pages of published arguments, together with several gigabytes of computer calculations. 2006 the strong perfect graph theorem, by Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas. The paper comprised 180 pages in the Annals of Mathematics. Long computer calculations There are many mathematical theorems that have been checked by long computer calculations. If these were written out as proofs many would be far longer than most of the proofs above. There is not really a clear distinction between computer calculations and proofs, as several of the proofs above, such as the 4-color theorem and the Kepler conjecture, use long computer calculations as well as many pages of mathematical argument. For the computer calculations in this section, the mathematical arguments are only a few pages long, and the length is due to long but routine calculations. Some typical examples of such theorems include: Several proofs of the existence of sporadic simple groups, such as the Lyons group, originally used computer calculations with large matrices or with permutations on billions of symbols. In most cases, such as the baby monster group, the computer proofs were later replaced by shorter proofs avoiding computer calculations. Similarly, the calculation of the maximal subgroups of the larger sporadic groups uses a lot of computer calculations. 2004 Verification of the Riemann hypothesis for the first 10¹³ zeros of the Riemann zeta function. 2007 Verification that checkers is a draw. 2008 Proofs that various Mersenne numbers with around ten million digits are prime. Calculations of large numbers of digits of π. 2010 Showing that the Rubik's Cube can be solved in 20 moves. 2012 Showing that Sudoku needs at least 17 clues. 2013 Ternary Goldbach conjecture: Every odd number greater than 5 can be expressed as the sum of three primes. 
2014 Proof of Erdős discrepancy conjecture for the particular case C=2: every ±1-sequence of length 1161 has a discrepancy at least 3; the original proof, generated by a SAT solver, had a size of 13 gigabytes and was later reduced to 850 megabytes. 2016 Solving the Boolean Pythagorean triples problem required the generation of 200 terabytes of proof. 2017 Marijn Heule, who coauthored the solution to the Boolean Pythagorean triples problem, announced a 2-petabyte-long proof that the fifth Schur number is 160. Long proofs in mathematical logic Kurt Gödel showed how to find explicit examples of statements in formal systems that are provable in that system but whose shortest proof is absurdly long. For example, the statement: "This statement cannot be proved in Peano arithmetic in less than a googolplex symbols" is provable in Peano arithmetic but the shortest proof has at least a googolplex symbols. It has a short proof in a more powerful system: in fact, it is easily provable in Peano arithmetic together with the statement that Peano arithmetic is consistent (which cannot be proved in Peano arithmetic by Gödel's incompleteness theorem). In this argument, Peano arithmetic can be replaced by any more powerful consistent system, and a googolplex can be replaced by any number that can be described concisely in the system. Harvey Friedman found some explicit natural examples of this phenomenon, giving some explicit statements in Peano arithmetic and other formal systems whose shortest proofs are ridiculously long. For example, the statement "there is an integer n such that if there is a sequence of rooted trees T₁, T₂, ..., Tₙ such that Tₖ has at most k + 10 vertices, then some tree can be homeomorphically embedded in a later one" is provable in Peano arithmetic, but the shortest proof has length at least 1000₂, where the function n₂ is defined by 0₂ = 1 and (n+1)₂ = 2^(n₂) (tetrational growth, so that 1₂ = 2, 2₂ = 4, 3₂ = 16, 4₂ = 65536, and so on). The statement is a special case of Kruskal's theorem and has a short proof in second order arithmetic. See also List of incomplete proofs Proof by intimidation References Proofs,Long Long Long
List of long mathematical proofs
Mathematics
1,963
8,505,125
https://en.wikipedia.org/wiki/Kappa%20Columbae
Kappa Columbae, Latinized from κ Columbae, is a solitary star in the southern constellation of Columba. It has an apparent visual magnitude of 4.37, which is bright enough to be seen with the naked eye. Based upon an annual parallax shift of 17.87 mas, it is located at a distance of 183 light years from the Sun. Its peculiar velocity makes it a candidate runaway star. This is an evolved K-type giant star with a stellar classification of K0.5 IIIa. The measured angular diameter of this star, after correction for limb darkening, combined with its estimated distance, yields a physical size of about 10.5 times the radius of the Sun. It has an estimated 1.76 times the mass of the Sun and is about 1.7 billion years old. The star radiates 57.5 times the solar luminosity from its outer atmosphere at an effective temperature of 4,876 K. It is catalogued as a suspected variable star. In Chinese, the asterism meaning Grandson consists of κ Columbae and θ Columbae, and κ Columbae takes its traditional Chinese name from this asterism. References K-type giants Columba (constellation) Columbae, Kappa Durchmusterung objects 043785 029807 02256
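The distance and luminosity quoted above can be cross-checked from the other figures in the article. A rough sketch, using approximate physical constants (illustrative only):

```python
# A rough consistency check of the figures quoted above. Illustrative only:
# the constants are rounded, and the inputs are the article's quoted values.
PARSEC_IN_LY = 3.2616   # light years per parsec (approximate)
T_SUN = 5772.0          # solar effective temperature in kelvin (approximate)

# Distance from the annual parallax of 17.87 mas: d[pc] = 1000 / p[mas].
parallax_mas = 17.87
distance_pc = 1000.0 / parallax_mas
print(f"distance ~ {distance_pc * PARSEC_IN_LY:.0f} light years")  # ~183, as quoted

# Luminosity from the Stefan-Boltzmann law in solar units:
# L / L_sun = (R / R_sun)^2 * (T / T_sun)^4, with R = 10.5 R_sun and T = 4,876 K.
radius_rsun = 10.5
t_eff = 4876.0
luminosity_lsun = radius_rsun ** 2 * (t_eff / T_SUN) ** 4
print(f"luminosity ~ {luminosity_lsun:.0f} times solar")  # close to the quoted 57.5
```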
Kappa Columbae
Astronomy
291
1,300,489
https://en.wikipedia.org/wiki/Glial%20fibrillary%20acidic%20protein
Glial fibrillary acidic protein (GFAP) is a protein that is encoded by the GFAP gene in humans. It is a type III intermediate filament (IF) protein that is expressed by numerous cell types of the central nervous system (CNS), including astrocytes and ependymal cells during development. GFAP has also been found to be expressed in glomeruli and peritubular fibroblasts taken from rat kidneys, Leydig cells of the testis in both hamsters and humans, human keratinocytes, human osteocytes and chondrocytes and stellate cells of the pancreas and liver in rats. GFAP is closely related to the other three non-epithelial type III IF family members, vimentin, desmin and peripherin, which are all involved in the structure and function of the cell's cytoskeleton. GFAP is thought to help to maintain astrocyte mechanical strength as well as the shape of cells, but its exact function remains poorly understood, despite the number of studies using it as a cell marker. The protein was named and first isolated and characterized by Lawrence F. Eng in 1969. In humans, it is located on the long arm of chromosome 17. Structure Type III intermediate filaments contain three domains, named the head, rod and tail domains. The specific DNA sequence for the rod domain may differ between different type III intermediate filaments, but the structure of the protein is highly conserved. This rod domain coils around that of another filament to form a dimer, with the N-terminal and C-terminal of each filament aligned. Type III filaments such as GFAP are capable of forming both homodimers and heterodimers; GFAP can polymerize with other type III proteins. GFAP and other type III IF proteins cannot assemble with keratins, the type I and II intermediate filaments: in cells that express both proteins, two separate intermediate filament networks form, which can allow for specialization and increased variability. To form networks, the initial GFAP dimers combine to make staggered tetramers, which are the basic subunits of an intermediate filament. Since rod domains alone in vitro do not form filaments, the non-helical head and tail domains are necessary for filament formation. The head and tail regions have greater variability of sequence and structure. In spite of this increased variability, the head of GFAP contains two conserved arginines and an aromatic residue that have been shown to be required for proper assembly. Function in the central nervous system GFAP is expressed in the central nervous system in astrocyte cells, and the concentration of GFAP differs between different regions in the CNS, where the highest levels are found in medulla oblongata, cervical spinal cord and hippocampus. It is involved in many important CNS processes, including cell communication and the functioning of the blood brain barrier. GFAP has been shown to play a role in mitosis by adjusting the filament network present in the cell. During mitosis, there is an increase in the amount of phosphorylated GFAP, and a movement of this modified protein to the cleavage furrow. There are different sets of kinases at work; cdc2 kinase acts only at the G2 phase transition, while other GFAP kinases are active at the cleavage furrow alone. This specificity of location allows for precise regulation of GFAP distribution to the daughter cells. Studies have also shown that GFAP knockout mice undergo multiple degenerative processes including abnormal myelination, white matter structure deterioration, and functional/structural impairment of the blood–brain barrier. 
These data suggest that GFAP is necessary for many critical roles in the CNS. GFAP is proposed to play a role in astrocyte-neuron interactions as well as cell-cell communication. In vitro, using antisense RNA, astrocytes lacking GFAP do not form the extensions usually present with neurons. Studies have also shown that Purkinje cells in GFAP knockout mice do not exhibit normal structure, and these mice demonstrate deficits in conditioning experiments such as the eye-blink task. Biochemical studies of GFAP have shown MgCl2 and/or calcium/calmodulin dependent phosphorylation at various serine or threonine residues by PKC and PKA which are two kinases that are important for the cytoplasmic transduction of signals. These data highlight the importance of GFAP for cell-cell communication. GFAP has also been shown to be important in repair after CNS injury. More specifically for its role in the formation of glial scars in a multitude of locations throughout the CNS including the eye and brain. Autoimmune GFAP astrocytopathy In 2016 a CNS inflammatory disorder associated with anti-GFAP antibodies was described. Patients with autoimmune GFAP astrocytopathy developed meningoencephalomyelitis with inflammation of the meninges, the brain parenchyma, and the spinal cord. About one third of cases were associated with various cancers and many also expressed other CNS autoantibodies. Meningoencephalitis is the predominant clinical presentation of autoimmune GFAP astrocytopathy in published case series. It also can appear associated with encephalomyelitis and parkinsonism. Disease states There are multiple disorders associated with improper GFAP regulation, and injury can cause glial cells to react in detrimental ways. Glial scarring is a consequence of several neurodegenerative conditions, as well as injury that severs neural material. The scar is formed by astrocytes interacting with fibrous tissue to re-establish the glial margins around the central injury core and is partially caused by up-regulation of GFAP. Another condition directly related to GFAP is Alexander disease, a rare genetic disorder. Its symptoms include mental and physical retardation, dementia, enlargement of the brain and head, spasticity (stiffness of arms and/or legs), and seizures. The cellular mechanism of the disease is the presence of cytoplasmic accumulations containing GFAP and heat shock proteins, known as Rosenthal fibers. Mutations in the coding region of GFAP have been shown to contribute to the accumulation of Rosenthal fibers. Some of these mutations have been proposed to be detrimental to cytoskeleton formation as well as an increase in caspase 3 activity, which would lead to increased apoptosis of cells with these mutations. GFAP therefore plays an important role in the pathogenesis of Alexander disease. Notably, the expression of some GFAP isoforms have been reported to decrease in response to acute infection or neurodegeneration. Additionally, reduction in GFAP expression has also been reported in Wernicke's encephalopathy. The HIV-1 viral envelope glycoprotein gp120 can directly inhibit the phosphorylation of GFAP and GFAP levels can be decreased in response to chronic infection with HIV-1, varicella zoster, and pseudorabies. Decreases in GFAP expression have been reported in Down's syndrome, schizophrenia, bipolar disorder and depression. 
The generally high abundance of GFAP in the CNS has led to a great interest in GFAP as a blood biomarker of acute injury to the brain and spinal cord in different types of disease mechanisms, such as traumatic brain injury and cerebrovascular disease. Elevated blood levels of GFAP are also found in neuroinflammatory diseases, such as multiple sclerosis and neuromyelitis optica, a disease targeting astrocytes. In a study of 22 child patients undergoing extracorporeal membrane oxygenation (ECMO), children with abnormally high levels of GFAP were 13 times more likely to die and 11 times more likely to suffer brain injury than children with normal GFAP levels. Interactions Glial fibrillary acidic protein has been shown to interact with MEN1 and PSEN1. Isoforms Although GFAP alpha is the only isoform which is able to assemble homomerically, GFAP has 8 different isoforms which label distinct subpopulations of astrocytes in the human and rodent brain. These isoforms include GFAP kappa, GFAP +1 and the currently best researched GFAP delta. GFAP delta appears to be linked with neural stem cells (NSCs) and may be involved in migration. GFAP+1 is an antibody which labels two isoforms. Although GFAP+1 positive astrocytes are supposedly not reactive astrocytes, they have a wide variety of morphologies including processes of up to 0.95 mm (seen in the human brain). The expression of GFAP+1 positive astrocytes is linked with old age and the onset of AD pathology. See also 17q21.31 microdeletion syndrome (Koolen–de Vries syndrome) GFAP stain References Further reading External links GeneReviews/NCBI/NIH/UW entry on Alexander disease OMIM entries on Alexander disease Proteins Biology of bipolar disorder
Glial fibrillary acidic protein
Chemistry
1,956
68,177,737
https://en.wikipedia.org/wiki/Mitoquinone%20mesylate
Mitoquinone mesylate (MitoQ) is a synthetic analogue of coenzyme Q10 which has antioxidant effects. It was first developed in New Zealand in the late 1990s. It has significantly improved bioavailability and improved mitochondrial penetration compared to coenzyme Q10, and has shown potential in a number of medical indications, being widely sold as a dietary supplement. A 2014 review found insufficient evidence for the use of mitoquinone mesylate in Parkinson's disease and other movement disorders. See also Idebenone Nicotinamide mononucleotide Pyrroloquinoline quinone References Antioxidants 1,4-Benzoquinones
Mitoquinone mesylate
Chemistry
146
11,632
https://en.wikipedia.org/wiki/Food%20and%20Drug%20Administration
The United States Food and Drug Administration (FDA or US FDA) is a federal agency of the Department of Health and Human Services. The FDA is responsible for protecting and promoting public health through the control and supervision of food safety, tobacco products, caffeine products, dietary supplements, prescription and over-the-counter pharmaceutical drugs (medications), vaccines, biopharmaceuticals, blood transfusions, medical devices, electromagnetic radiation emitting devices (ERED), cosmetics, animal foods & feed and veterinary products. The FDA's primary focus is enforcement of the Federal Food, Drug, and Cosmetic Act (FD&C). However, the agency also enforces other laws, notably Section 361 of the Public Health Service Act as well as associated regulations. Much of this regulatory-enforcement work is not directly related to food or drugs but involves other factors like regulating lasers, cellular phones, and condoms. In addition, the FDA takes control of diseases in the contexts varying from household pets to human sperm donated for use in assisted reproduction. The FDA is led by the commissioner of food and drugs, appointed by the president with the advice and consent of the Senate. The commissioner reports to the secretary of health and human services. Robert Califf is the current commissioner as of February 17, 2022. The FDA's headquarters is located in the White Oak area of Silver Spring, Maryland. The agency has 223 field offices and 13 laboratories located across the 50 states, the United States Virgin Islands, and Puerto Rico. In 2008, the FDA began to post employees to foreign countries, including China, India, Costa Rica, Chile, Belgium, and the United Kingdom. Organizational structure Department of Health and Human Services Food and Drug Administration Office of the Commissioner (C) Office of the Chief Counsel (OCC) Office of the Executive Secretariat (OES) Office of the Counselor to the Commissioner Office of Digital Transformation (ODT) Center for Biologics Evaluation and Research (CBER) Center for Devices and Radiological Health (CDRH) Center for Drug Evaluation and Research (CDER) Center for Food Safety and Applied Nutrition (CFSAN) Center for Tobacco Products (CTP) Center for Veterinary Medicine (CVM) Oncology Center of Excellence (OCE) Office of Regulatory Affairs (ORA) Office of Clinical Policy and Programs (OCPP) Office of External Affairs (OEA) Office of Food Policy and Response (OFPR) Office of Minority Health and Health Equity (OMHHE) Office of Operations (OO) Office of Policy, Legislation, and International Affairs (OPLIA) Office of the Chief Scientist (OCS) National Center for Toxicological Research (NCTR) Office of Women's Health (OWH) Location Headquarters FDA headquarters facilities are currently located in Montgomery County and Prince George's County, Maryland. White Oak Federal Research Center Since 1990, the FDA has had employees and facilities on of the White Oak Federal Research Center in the White Oak area of Silver Spring, Maryland. In 2001, the General Services Administration (GSA) began new construction on the campus to consolidate the FDA's 25 existing operations in the Washington metropolitan area, its headquarters in Rockville, and several fragmented office buildings. The first building, the Life Sciences Laboratory, was dedicated and opened with 104 employees in December 2003. the FDA campus has a population of 10,987 employees housed in approximately of space, divided into ten offices and four laboratory buildings. 
The campus houses the Office of the Commissioner (OC), the Office of Regulatory Affairs (ORA),  the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), the Center for Biologics Evaluation and Research (CBER) and offices for the Center for Veterinary Medicine (CVM). With the passing of the FDA Reauthorization Act of 2017, the FDA projects a 64% increase in employees to 18,000 over the next 15 years and wants to add approximately of office and special use space to their existing facilities. The National Capital Planning Commission approved a new master plan for this expansion in December 2018, and construction is expected to be completed by 2035, dependent on GSA appropriations. Field locations Office of Regulatory Affairs The Office of Regulatory Affairs is considered the agency's "eyes and ears", conducting the vast majority of the FDA's work in the field. Its employees, known as Consumer Safety Officers, or more commonly known simply as investigators, inspect production, warehousing facilities, investigate complaints, illnesses, or outbreaks, and review documentation in the case of medical devices, drugs, biological products, and other items where it may be difficult to conduct a physical examination or take a physical sample of the product. The Office of Regulatory Affairs is divided into five regions, which are further divided into 20 districts. The districts are based roughly on the geographic divisions of the Federal court system. Each district comprises a main district office and a number of Resident Posts, which are FDA remote offices that serve a particular geographic area. ORA also includes the Agency's network of regulatory laboratories, which analyze any physical samples taken. Though samples are usually food-related, some laboratories are equipped to analyze drugs, cosmetics, and radiation-emitting devices. Office of Criminal Investigations The Office of Criminal Investigations was established in 1991 to investigate criminal cases. To do so, OCI employs approximately 200 Special Agents nationwide who, unlike ORA Investigators, are armed, have badges, and do not focus on technical aspects of the regulated industries. Rather, OCI agents pursue and develop cases when individuals and companies commit criminal actions, such as fraudulent claims or knowingly and willfully shipping known adulterated goods in interstate commerce. In many cases, OCI pursues cases involving violations of Title 18 of the United States Code (e.g., conspiracy, false statements, wire fraud, mail fraud), in addition to prohibited acts as defined in Chapter III of the FD&C Act. OCI Special Agents often come from other criminal investigations backgrounds, and frequently work closely with the Federal Bureau of Investigation, Assistant Attorney General, and even Interpol. OCI receives cases from a variety of sources—including ORA, local agencies, and the FBI, and works with ORA Investigators to help develop the technical and science-based aspects of a case. Other locations The FDA has a number of field offices across the United States, in addition to international locations in China, India, Europe, the Middle East, and Latin America. Scope and funding As of 2021, the FDA had responsibility for overseeing $2.7 trillion in food, medical, and tobacco products. Some 54% of its budget derives from the federal government, and 46% is covered by industry user fees for FDA services. For example, pharmaceutical firms pay fees to expedite drug reviews. 
According to Forbes, pharmaceutical firms provide 75% of the FDA's drug review budget. Regulatory programs Emergency approvals (EUA) Emergency Use Authorization (EUA) is a mechanism that was created to facilitate the availability and use of medical countermeasures, including vaccines and personal protective equipment, during public health emergencies such as the Zika virus epidemic, the Ebola virus epidemic and the COVID-19 pandemic. Regulations The programs for safety regulation vary widely by the type of product, its potential risks, and the regulatory powers granted to the agency. For example, the FDA regulates almost every facet of prescription drugs, including testing, manufacturing, labeling, advertising, marketing, efficacy, and safety—yet FDA regulation of cosmetics focuses primarily on labeling and safety. The FDA regulates most products with a set of published standards enforced by a modest number of facility inspections. Inspection observations are documented on Form 483. In June 2018, the FDA released a statement regarding new guidelines to help food and drug manufacturers "implement protections against potential attacks on the U.S. food supply". One of the guidelines includes the Intentional Adulteration (IA) rule, which requires strategies and procedures by the food industry to reduce the risk of compromise in facilities and processes that are significantly vulnerable. The FDA also uses tactics of regulatory shaming, mainly through online publication of non-compliance, warning letters, and "shaming lists." Regulation by shaming harnesses firms' sensitivity to reputational damage. For example, in 2018, the agency published an online "black list", in which it named dozens of branded drug companies that are supposedly using unlawful or unethical means to attempt to impede competition from generic drug companies. The FDA frequently works with other federal agencies, including the Department of Agriculture, the Drug Enforcement Administration, Customs and Border Protection, and the Consumer Product Safety Commission. They also often work with local and state government agencies in performing regulatory inspections and enforcement actions. Food and dietary supplements The regulation of food and dietary supplements by the Food and Drug Administration is governed by various statutes enacted by the United States Congress and interpreted by the FDA. Pursuant to the Federal Food, Drug, and Cosmetic Act and accompanying legislation, the FDA has authority to oversee the quality of substances sold as food in the United States, and to monitor claims made in the labeling of both the composition and the health benefits of foods. The FDA subdivides substances that it regulates as food into various categories—including foods, food additives, added substances (human-made substances that are not intentionally introduced into food, but nevertheless end up in it), and dietary supplements. Dietary supplements or dietary ingredients include vitamins, minerals, herbs, amino acids, and enzymes. Specific standards the FDA exercises differ from one category to the next. Furthermore, legislation had granted the FDA a variety of means to address violations of standards for a given substance category. Under the Dietary Supplement Health and Education Act of 1994 (DSHEA), the FDA is responsible for ensuring that manufacturers and distributors of dietary supplements and dietary ingredients meet the current requirements. 
These manufacturers and distributors are not allowed to market adulterated or misbranded products, and they are responsible for evaluating the safety and labeling of their products. The FDA has a "Dietary Supplement Ingredient Advisory List" that includes ingredients that sometimes appear in dietary supplements but need further evaluation. An ingredient is added to this list when it is excluded from use in a dietary supplement, does not appear to be an approved food additive or recognized as safe, and/or is subject to the requirement for pre-market notification without that requirement having been satisfied. "FDA-Approved" vs. "FDA-Accepted in Food Processing" The FDA does not approve applied coatings used in the food processing industry. There is no review process to approve the composition of nonstick coatings; nor does the FDA inspect or test these materials. Through its governing of processes, however, the FDA does have a set of regulations that cover the formulation, manufacturing, and use of nonstick coatings. Hence, materials like polytetrafluoroethylene (Teflon) are not and cannot be considered FDA approved; rather, they are "FDA Compliant" or "FDA Acceptable". Medical countermeasures (MCMs) Medical countermeasures (MCMs) are products such as biologics and pharmaceutical drugs that can protect from or treat the health effects of a chemical, biological, radiological, or nuclear (CBRN) attack. MCMs can also be used for prevention and diagnosis of symptoms associated with CBRN attacks or threats. The FDA runs a program called the "FDA Medical Countermeasures Initiative" (MCMi), with programs funded by the federal government. It helps "partner" agencies and organizations prepare for public health emergencies that could require MCMs. Medications The Center for Drug Evaluation and Research uses different requirements for the three main drug product types: new drugs, generic drugs, and over-the-counter drugs. A drug is considered "new" if it is made by a different manufacturer, uses different excipients or inactive ingredients, is used for a different purpose, or undergoes any substantial change. The most rigorous requirements apply to new molecular entities: drugs that are not based on existing medications. New medications New drugs receive extensive scrutiny before FDA approval in a process called a new drug application (NDA). Under the presidency of Donald Trump, the agency worked to speed up the drug-approval process. Critics, however, argue that FDA standards are not sufficiently rigorous to prevent unsafe or ineffective drugs from getting approval. New drugs are available only by prescription by default. A change to over-the-counter (OTC) status is a separate process, and the drug must be approved through an NDA first. A drug that is approved is said to be "safe and effective when used as directed". Very rare, limited exceptions to this multi-step process involving animal testing and controlled clinical trials can be granted under compassionate use protocols. This was the case during the 2015 Ebola epidemic with the use, by prescription and authorization, of ZMapp and other experimental treatments, and for new drugs that can be used to treat debilitating and/or very rare conditions for which no existing remedies or drugs are satisfactory, or where there has not been an advance in a long period of time. 
The studies are progressively longer, gradually adding more individuals as they progress from stage I to stage III, normally over a period of years, and normally involve drug companies, the government and its laboratories, and often medical schools and hospitals and clinics. However, any exceptions to the aforementioned process are subject to strict review and scrutiny and conditions, and are only given if a substantial amount of research and at least some preliminary human testing has shown that they are believed to be somewhat safe and possibly effective. (See FDA Special Protocol Assessment about Phase III trials.) Advertising and promotion The FDA's Office of Prescription Drug Promotion (OPDP) has responsibilities that revolve around the review and regulation of prescription drug advertising and promotion. This is achieved through surveillance activities and the issuance of enforcement letters to pharmaceutical manufacturers. Advertising and promotion for over-the-counter drugs is regulated by the Federal Trade Commission. The FDA also implements regulatory oversight through engagement with third-party enforcer-firms. It expects pharmaceutical companies to ensure that third-party suppliers and labs comply with the agency's health and safety guidelines . The drug advertising regulation contains two broad requirements: (1) a company may advertise or promote a drug only for the specific indication or medical use for which it was approved by FDA. Also, an advertisement must contain a "fair balance" between the benefits and the risks (side effects) of a drug. The regulation of drug advertising in the U.S. is divided between the Food and Drug Administration (FDA) and the Federal Trade Commission (FTC), based on whether the drug in question is a prescription drug or an over-the-counter (OTC) drug. The FDA oversees the advertising of prescription drugs, while the FTC regulates the advertising of OTC drugs. The term off-label refers to the practice of prescribing a drug for a different purpose than what the FDA approved. Due to this approval requirement, manufacturers were prohibited from advertising COVID-19 vaccines during the period in which they had only been approved under Emergency Use Authorization. Post-market safety surveillance After NDA approval, the sponsor must then review and report to the FDA every single patient adverse drug experience it learns of. They must report unexpected serious and fatal adverse drug events within 15 days, and other events on a quarterly basis. The FDA also receives directly adverse drug event reports through its MedWatch program. These reports are called "spontaneous reports" because reporting by consumers and health professionals is voluntary. While this remains the primary tool of post-market safety surveillance, FDA requirements for post-marketing risk management are increasing. As a condition of approval, a sponsor may be required to conduct additional clinical trials, called Phase IV trials. In some cases, the FDA requires risk management plans called Risk Evaluation and Mitigation Strategies (REMS) for some drugs that require actions to be taken to ensure that the drug is used safely. For example, thalidomide can cause birth defects, but has uses that outweigh the risks if men and women taking the drugs do not conceive a child; a REMS program for thalidomide mandates an auditable process to ensure that people taking the drug take action to avoid pregnancy; many opioid drugs have REMS programs to avoid addiction and diversion of drugs. 
The drug isotretinoin has a REMS program called iPLEDGE. Generic drugs Generic drugs are chemical and therapeutic equivalents of name-brand drugs, normally ones whose patents have expired. Approved generic drugs should have the same dosage, safety, effectiveness, strength, stability, and quality, as well as route of administration. In general, they are less expensive than their name-brand counterparts, are manufactured and marketed by rival companies and, in the 1990s, accounted for about a third of all prescriptions written in the United States. For a pharmaceutical company to gain approval to produce a generic drug, the FDA requires scientific evidence that the generic drug is interchangeable with or therapeutically equivalent to the originally approved drug. This application is called an Abbreviated New Drug Application (ANDA). About 80% of prescription drugs sold in the United States are generics. Generic drug scandal In 1989, a major scandal erupted involving the procedures used by the FDA to approve generic drugs for sale to the public. Charges of corruption in generic drug approval first emerged in 1988 during the course of an extensive congressional investigation into the FDA. The investigation by the oversight subcommittee of the United States House Energy and Commerce Committee resulted from a complaint brought against the FDA by Mylan Laboratories Inc. of Pittsburgh. When its applications to manufacture generics were subjected to repeated delays by the FDA, Mylan, convinced that it was being discriminated against, soon began its own private investigation of the agency in 1987. Mylan eventually filed suit against two former FDA employees and four drug-manufacturing companies, charging that corruption within the federal agency resulted in racketeering and in violations of antitrust law. "The order in which new generic drugs were approved was set by the FDA employees even before drug manufacturers submitted applications" and, according to Mylan, this illegal procedure was followed to give preferential treatment to certain companies. During the summer of 1989, three FDA officials (Charles Y. Chang, David J. Brancato, Walter Kletch) pleaded guilty to criminal charges of accepting bribes from generic drug makers, and two companies (Par Pharmaceutical and its subsidiary Quad Pharmaceuticals) pleaded guilty to giving bribes. Furthermore, it was discovered that several manufacturers had falsified data submitted in seeking FDA authorization to market certain generic drugs. Vitarine Pharmaceuticals of New York, which sought approval of a generic version of the drug Dyazide, a medication for high blood pressure, submitted Dyazide, rather than its generic version, for the FDA tests. In April 1989, the FDA investigated 11 manufacturers for irregularities, and later brought that number up to 13. Dozens of drugs were eventually suspended or recalled by manufacturers. In the early 1990s, the U.S. Securities and Exchange Commission filed securities fraud charges against the Bolar Pharmaceutical Company, a major generic manufacturer based in Long Island, New York. Over-the-counter drugs Over-the-counter (OTC) drugs are drugs like aspirin that do not require a doctor's prescription. The FDA has a list of approximately 800 such approved ingredients that are combined in various ways to create more than 100,000 OTC drug products. Many OTC drug ingredients had been previously approved prescription drugs now deemed safe enough for use without a medical practitioner's supervision, such as ibuprofen. 
Ebola treatment In 2014, the FDA added an Ebola treatment being developed by Canadian pharmaceutical company Tekmira to the Fast Track program, but halted the phase 1 trials in July pending the receipt of more information about how the drug works. This was widely viewed as increasingly important in the face of a major outbreak of the disease in West Africa that began in late March 2014 and ended in June 2016. Coronavirus (COVID-19) testing During the coronavirus pandemic, FDA granted emergency use authorization for personal protective equipment (PPE), in vitro diagnostic equipment, ventilators and other medical devices. On March 18, 2020, FDA inspectors postponed most foreign facility inspections and all domestic routine surveillance facility inspections. In contrast, the USDA's Food Safety and Inspection Service (FSIS) continued inspections of meatpacking plants, which resulted in 145 FSIS field employees who tested positive for COVID-19, and three who died. Vaccines, blood and tissue products, and biotechnology The Center for Biologics Evaluation and Research is the branch of the FDA responsible for ensuring the safety and efficacy of biological therapeutic agents. These include blood and blood products, vaccines, allergenics, cell and tissue-based products, and gene therapy products. New biologics are required to go through a premarket approval process called a Biologics License Application (BLA), similar to that for drugs. The original authority for government regulation of biological products was established by the 1902 Biologics Control Act, with additional authority established by the 1944 Public Health Service Act. Along with these Acts, the Federal Food, Drug, and Cosmetic Act applies to all biologic products, as well. Originally, the entity responsible for regulation of biological products resided under the National Institutes of Health; this authority was transferred to the FDA in 1972. Medical and radiation-emitting devices The Center for Devices and Radiological Health (CDRH) is the branch of the FDA responsible for the premarket approval of all medical devices, as well as overseeing the manufacturing, performance and safety of these devices. The definition of a medical device is given in the FD&C Act, and it includes products from the simple toothbrush to complex devices such as implantable neurostimulators. CDRH also oversees the safety performance of non-medical devices that emit certain types of electromagnetic radiation. Examples of CDRH-regulated devices include cellular phones, airport baggage screening equipment, television receivers, microwave ovens, tanning booths, and laser products. CDRH regulatory powers include the authority to require certain technical reports from the manufacturers or importers of regulated products, to require that radiation-emitting products meet mandatory safety performance standards, to declare regulated products defective, and to order the recall of defective or noncompliant products. CDRH also conducts limited amounts of direct product testing. "FDA-Cleared" vs "FDA-Approved" Clearance requests are required for medical devices that prove they are "substantially equivalent" to the predicate devices already on the market. Approved requests are for items that are new or substantially different and need to demonstrate "safety and efficacy", for example they may be inspected for safety in case of new toxic hazards. Both aspects need to be proved or provided by the submitter to ensure proper procedures are followed. 
Cosmetics Cosmetics are regulated by the Center for Food Safety and Applied Nutrition, the same branch of the FDA that regulates food. Cosmetic products are not, in general, subject to premarket approval by the FDA unless they make "structure or function claims" that make them into drugs (see Cosmeceutical). However, all color additives must be specifically FDA approved before manufacturers can include them in cosmetic products sold in the U.S. The FDA regulates cosmetics labeling, and cosmetics that have not been safety tested must bear a warning to that effect. According to the industry advocacy group the American Council on Science and Health (ACSH), though the cosmetic industry is primarily responsible for its own product safety, the FDA can intervene when necessary to protect the public. In general, though, cosmetics do not require pre-market approval or testing. The ACSH says that companies must place a warning note on their products if they have not been tested, and that experts in cosmetic ingredient review also play a role in monitoring safety through influence on ingredients, but they lack legal authority. According to the ACSH, the cosmetic ingredient review panel has reviewed about 1,200 ingredients and has suggested that several hundred be restricted, but there is no standard or systematic method for reviewing chemicals for safety, nor a clear definition of what 'safety' even means that would ensure all chemicals are tested on the same basis. However, on December 29, 2022, President Biden signed the Consolidated Appropriations Act, 2023, which includes the Modernization of Cosmetics Regulation Act of 2022 (MoCRA), a stricter regulatory regime than the previous rules. MoCRA requires compliance with matters such as serious adverse event reporting, safety substantiation, additional labeling, record keeping, and Good Manufacturing Practices (GMP). MoCRA also grants the FDA mandatory recall authority and directs it to establish regulations covering GMP, fragrance allergen labeling, and testing methods for cosmetics containing talc. Veterinary products The Center for Veterinary Medicine (CVM) is a center of the FDA that regulates food additives and drugs that are given to animals. CVM regulates animal drugs, animal food (including pet food), and animal medical devices. The FDA's requirements to prevent the spread of bovine spongiform encephalopathy are also administered by CVM through inspections of feed manufacturers. CVM does not regulate vaccines for animals; these are handled by the United States Department of Agriculture. Tobacco products The FDA regulates tobacco products with authority established by the 2009 Family Smoking Prevention and Tobacco Control Act. This Act requires color warnings on cigarette packages and printed advertising, and text warnings from the U.S. Surgeon General. The nine new graphic warning labels were announced by the FDA in June 2011 and were scheduled to be required to appear on packaging by September 2012. The implementation date is uncertain, due to ongoing proceedings in the case of R.J. Reynolds Tobacco Co. v. U.S. Food and Drug Administration. R.J. Reynolds, Lorillard, Commonwealth Brands, Liggett Group and Santa Fe Natural Tobacco Company have filed suit in Washington, D.C. federal court claiming that the graphic labels are an unconstitutional way of forcing tobacco companies to engage in anti-smoking advocacy on the government's behalf. 
A First Amendment lawyer, Floyd Abrams, is representing the tobacco companies in the case, contending requiring graphic warning labels on a lawful product cannot withstand constitutional scrutiny. The Association of National Advertisers and the American Advertising Federation have also filed a brief in the suit, arguing that the labels infringe on commercial free speech and could lead to further government intrusion if left unchallenged. In November 2011, Federal judge Richard Leon of the U.S. District Court for the District of Columbia temporarily halted the new labels, likely delaying the requirement that tobacco companies display the labels. The U.S. Supreme Court ultimately could decide the matter. In July 2017, the FDA announced a plan that would reduce the current levels of nicotine permitted in tobacco cigarettes. The proposed regulation, identified as RIN 0910-AI76, titled "Tobacco Product Standard for Nicotine Yield of Cigarettes and Certain Other Combusted Tobacco Products," seeks to reduce the nicotine content in cigarettes to approximately 0.7 milligrams per gram of tobacco. Regulation of living organisms With acceptance of premarket notification 510(k) k033391 in January 2004, the FDA granted Ronald Sherman permission to produce and market medical maggots for use in humans or other animals as a prescription medical device. Medical maggots represent the first living organism allowed by the Food and Drug Administration for production and marketing as a prescription medical device. In June 2004, the FDA cleared Hirudo medicinalis (medicinal leeches) as the second living organism legal to use as a medical device. The FDA also requires that milk be pasteurized to remove bacteria. International Cooperation In February 2011, President Barack Obama and Canadian Prime Minister Stephen Harper issued a "Declaration on a Shared Vision for Perimeter Security and Economic Competitiveness" and announced the creation of the Canada-United States Regulatory Cooperation Council (RCC) "to increase regulatory transparency and coordination between the two countries." Under the RCC mandate, the FDA and Health Canada undertook a "first of its kind" initiative by selecting "as its first area of alignment common cold indications for certain over-the-counter antihistamine ingredients (GC 2013-01-10)." A more recent example of the FDA's international work is their 2018 cooperation with regulatory and law-enforcement agencies worldwide through Interpol as part of Operation Pangea XI. The FDA targeted 465 websites that illegally sold potentially dangerous, unapproved versions of opioid, oncology, and antiviral prescription drugs to U.S. consumers. The agency focused on transaction laundering schemes in order to uncover the complex online drug network. Science and research programs The FDA carries out research and development activities to develop technology and standards that support its regulatory role, with the objective of resolving scientific and technical challenges before they become impediments. The FDA's research efforts include the areas of biologics, medical devices, drugs, women's health, toxicology, food safety and applied nutrition, and veterinary medicine. Data management The FDA has collected a large amount of data through the decades. The OpenFDA project was created to enable easy access of the data for the public and was officially launched in June 2014. 
History Up until the 20th century, there were few federal laws regulating the contents and sale of domestically produced food and pharmaceuticals, with one exception being the Vaccine Act of 1813. The history of the FDA can be traced to the latter part of the 19th century and the Division of Chemistry of the U.S. Department of Agriculture, which itself derived from the Copyright and Patent Clause. Under Harvey Washington Wiley, appointed chief chemist in 1883, the Division began conducting research into the adulteration and misbranding of food and drugs on the American market. Wiley's advocacy came at a time when the public had become aroused to hazards in the marketplace by muckraking journalists like Upton Sinclair, and became part of a general trend for increased federal regulations in matters pertinent to public safety during the Progressive Era. The Biologics Control Act of 1902 was put in place after a diphtheria antitoxin derived from tetanus-contaminated serum caused the deaths of thirteen children in St. Louis, Missouri. The serum was originally collected from a horse named Jim who had contracted tetanus. In June 1906, President Theodore Roosevelt signed into law the Pure Food and Drug Act of 1906, also known as the "Wiley Act" after its chief advocate. The Act prohibited, under penalty of seizure of goods, the interstate transport of food that had been "adulterated". The Act applied similar penalties to the interstate marketing of "adulterated" drugs, in which the "standard of strength, quality, or purity" of the active ingredient was not either stated clearly on the label or listed in the United States Pharmacopeia or the National Formulary. The responsibility for examining food and drugs for such "adulteration" or "misbranding" was given to Wiley's USDA Bureau of Chemistry. Wiley used these new regulatory powers to pursue an aggressive campaign against the manufacturers of foods with chemical additives, but the Chemistry Bureau's authority was soon checked by judicial decisions, which narrowly defined the bureau's powers and set high standards for proof of fraudulent intent. In 1927, the Bureau of Chemistry's regulatory powers were reorganized under a new USDA body, the Food, Drug, and Insecticide Administration. This name was shortened to the Food and Drug Administration (FDA) three years later. By the 1930s, muckraking journalists, consumer protection organizations, and federal regulators began mounting a campaign for stronger regulatory authority by publicizing a list of injurious products that had been ruled permissible under the 1906 law, including radioactive beverages, mascara that could cause blindness, and worthless "cures" for diabetes and tuberculosis. The resulting proposed law did not get through the Congress of the United States for five years, but was rapidly enacted into law following the public outcry over the 1937 Elixir Sulfanilamide tragedy, in which over 100 people died after using a drug formulated with a toxic, untested solvent. President Franklin Delano Roosevelt signed the Federal Food, Drug, and Cosmetic Act into law on June 24, 1938. The new law significantly increased federal regulatory authority over drugs by mandating a pre-market review of the safety of all new drugs, as well as banning false therapeutic claims in drug labeling without requiring that the FDA prove fraudulent intent. The law also authorized the FDA to issue minimum food standards of identity for all mass-produced foods to reduce food fraud. 
Soon after passage of the 1938 Act, the FDA began to designate certain drugs as safe for use only under the supervision of a medical professional, and the category of "prescription-only" drugs was securely codified into law by the Durham-Humphrey Amendment in 1951. These developments confirmed extensive powers for the FDA to enforce post-marketing recalls of ineffective drugs. Outside of the US, the drug thalidomide was marketed for the relief of general nausea and morning sickness, but caused birth defects and even the death of thousands of babies when taken during pregnancy. American mothers were largely unaffected as Frances Oldham Kelsey of the FDA refused to authorize the medication for market. In 1962, the Kefauver-Harris Amendment to the FD&C Act was passed, which represented a "revolution" in FDA regulatory authority. The most important change was the requirement that all new drug applications demonstrate "substantial evidence" of the drug's efficacy for a marketed indication, in addition to the existing requirement for pre-marketing demonstration of safety. This marked the start of the FDA approval process in its modern form. These reforms had the effect of increasing the time, and the difficulty, required to bring a drug to market. One of the most important statutes in establishing the modern American pharmaceutical market was the 1984 Drug Price Competition and Patent Term Restoration Act, more commonly known as the "Hatch-Waxman Act" after its chief sponsors. The act extended the patent exclusivity terms of new drugs, and tied those extensions, in part, to the length of the FDA approval process for each individual drug. For generic manufacturers, the Act created a new approval mechanism, the Abbreviated New Drug Application (ANDA), in which the generic drug manufacturer need only demonstrate that their generic formulation has the same active ingredient, route of administration, dosage form, strength, and pharmacokinetic properties ("bioequivalence") as the corresponding brand-name drug. This Act has been credited with, in essence, creating the modern generic drug industry. Concerns about the length of the drug approval process were brought to the fore early in the AIDS epidemic. In the mid- and late 1980s, ACT-UP and other HIV activist organizations accused the FDA of unnecessarily delaying the approval of medications to fight HIV and opportunistic infections. Partly in response to these criticisms, the FDA issued new rules to expedite approval of drugs for life-threatening diseases, and expanded pre-approval access to drugs for patients with limited treatment options. All of the initial drugs approved for the treatment of HIV/AIDS were approved through these accelerated approval mechanisms. Frank Young, then commissioner of the FDA, was behind the Action Plan Phase II, established in August 1987 for quicker approval of AIDS medication. In two instances, state governments have sought to legalize drugs that the FDA has not approved. Under the theory that federal law, passed pursuant to Constitutional authority, overrules conflicting state laws, federal authorities still claim the authority to seize, arrest, and prosecute for possession and sales of these substances, even in states where they are legal under state law. The first wave was the legalization by 27 states of laetrile in the late 1970s. This drug was used as a treatment for cancer, but scientific studies both before and after this legislative trend found it ineffective. 
The second wave concerned medical marijuana in the 1990s and 2000s. Though Virginia passed legislation allowing doctors to recommend cannabis for glaucoma or the side effects of chemotherapy, a more widespread trend began in California with the Compassionate Use Act of 1996. When the FDA requested Endo Pharmaceuticals on June 8, 2017, to remove oxymorphone hydrochloride from the market, it was the first request in FDA history to recall an effective drug over its potential for misuse. 21st-century reforms Critical Path Initiative The Critical Path Initiative is the FDA's effort to stimulate and facilitate a national effort to modernize the sciences through which FDA-regulated products are developed, evaluated, and manufactured. The Initiative was launched in March 2004, with the release of a report entitled Innovation/Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products. Patients' rights to access unapproved drugs The Compassionate Investigational New Drug program was created after Randall v. U.S. ruled in favor of Robert C. Randall in 1978, creating a program for medical marijuana. A 2006 court case, Abigail Alliance v. von Eschenbach, would have forced radical changes in FDA regulation of unapproved drugs. The Abigail Alliance argued that the FDA must license drugs for use by terminally ill patients with "desperate diagnoses", after they have completed Phase I testing. The case won an initial appeal in May 2006, but that decision was reversed by a March 2007 rehearing. The US Supreme Court declined to hear the case, and the final decision denied the existence of a right to unapproved medications. Critics of the FDA's regulatory power argue that the FDA takes too long to approve drugs that might ease pain and human suffering faster if brought to market sooner. The AIDS crisis created some political efforts to streamline the approval process. However, these limited reforms were targeted for AIDS drugs, not for the broader market. This has led to the call for more robust and enduring reforms that would allow patients, under the care of their doctors, access to drugs that have passed the first round of clinical trials. Post-marketing drug safety monitoring The widely publicized recall of Vioxx, a non-steroidal anti-inflammatory drug (NSAID) now estimated to have contributed to fatal heart attacks in thousands of Americans, played a strong role in driving a new wave of safety reforms at both the FDA rulemaking and statutory levels. The FDA approved Vioxx in 1999, and initially hoped it would be safer than previous NSAIDs due to its reduced risk of intestinal tract bleeding. However, a number of pre and post-marketing studies suggested that Vioxx might increase the risk of myocardial infarction, and results from the APPROVe trial in 2004 conclusively demonstrated this. Faced with numerous lawsuits, the manufacturer voluntarily withdrew it from the market. The example of Vioxx has been prominent in an ongoing debate over whether new drugs should be evaluated on the basis of their absolute safety, or their safety relative to existing treatments for a given condition. In the wake of the Vioxx recall, there were widespread calls by major newspapers, medical journals, consumer advocacy organizations, lawmakers, and FDA officials for reforms in the FDA's procedures for pre- and post-market drug safety regulation. In 2006, a Congressional committee was appointed by the Institute of Medicine to review pharmaceutical safety regulation in the U.S. 
and to issue recommendations for improvements. The committee was composed of 16 experts, including leaders in clinical medicine medical research, economics, biostatistics, law, public policy, public health, and the allied health professions, as well as current and former executives from the pharmaceutical, hospital, and health insurance industries. The authors found major deficiencies in the current FDA system for ensuring the safety of drugs on the American market. Overall, the authors called for an increase in the regulatory powers, funding, and independence of the FDA. Some of the committee's recommendations were incorporated into drafts of the PDUFA IV amendment, which was signed into law as the Food and Drug Administration Amendments Act of 2007. As of 2011, Risk Minimization Action Plans (RiskMAPS) have been created to ensure risks of a drug never outweigh the benefits of that drug within the post-marketing period. This program requires that manufacturers design and implement periodic assessments of their programs' effectiveness. The Risk Minimization Action Plans are set in place depending on the overall level of risk a prescription drug is likely to pose to the public. Pediatric drug testing Prior to the 1990s, only 20% of all drugs prescribed for children in the United States were tested for safety or efficacy in a pediatric population. This became a major concern of pediatricians as evidence accumulated that the physiological response of children to many drugs differed significantly from those drugs' effects on adults. Children react differently to the drugs because of many reasons, including size, weight, etc. There were several reasons that few medical trials were done with children. For many drugs, children represented such a small proportion of the potential market, that drug manufacturers did not see such testing as cost-effective. Also, the belief that children are ethically restricted in their ability to give informed consent brought increased governmental and institutional hurdles to approval of these clinical trials, and greater concerns about legal liability. Thus, for decades, most medicines prescribed to children in the U.S. were done so in a non-FDA-approved, "off-label" manner, with dosages "extrapolated" from adult data through body weight and body-surface-area calculations. In an initial FDA attempt to address this issue they produced the 1994 FDA Final Rule on Pediatric Labeling and Extrapolation, which allowed manufacturers to add pediatric labeling information, but required drugs that had not been tested for pediatric safety and efficacy to bear a disclaimer to that effect. However, this rule failed to motivate many drug companies to conduct additional pediatric drug trials. In 1997, the FDA proposed a rule to require pediatric drug trials from the sponsors of New Drug Applications. However, this new rule was successfully preempted in federal court as exceeding the FDA's statutory authority. While this debate was unfolding, Congress used the Food and Drug Administration Modernization Act of 1997 to pass incentives that gave pharmaceutical manufacturers a six-month patent term extension on new drugs submitted with pediatric trial data. The Best Pharmaceuticals for Children Act of 2007 reauthorized these provisions and allowed the FDA to request NIH-sponsored testing for pediatric drug testing, although these requests are subject to NIH funding constraints. 
In the Pediatric Research Equity Act of 2003, Congress codified the FDA's authority to mandate manufacturer-sponsored pediatric drug trials for certain drugs as a "last resort" if incentives and publicly funded mechanisms proved inadequate. Priority review voucher (PRV) The priority review voucher is a provision of the Food and Drug Administration Amendments Act of 2007, which awards a transferable "priority review voucher" to any company that obtains approval for a treatment for a neglected tropical diseases. The system was first proposed by Duke University faculty David Ridley, Henry Grabowski, and Jeffrey Moe in their 2006 Health Affairs paper: "Developing Drugs for Developing Countries". President Obama signed into law the Food and Drug Administration Safety and Innovation Act of 2012, which extended the authorization until 2017. Rules for generic biologics Since the 1990s, many successful new drugs for the treatment of cancer, autoimmune diseases, and other conditions have been protein-based biotechnology drugs, regulated by the Center for Biologics Evaluation and Research. Many of these drugs are extremely expensive; for example, the anti-cancer drug Avastin costs $55,000 for a year of treatment, while the enzyme replacement therapy drug Cerezyme costs $200,000 per year, and must be taken by Gaucher's disease patients for life. Biotechnology drugs do not have the simple, readily verifiable chemical structures of conventional drugs, and are produced through complex, often proprietary, techniques, such as transgenic mammalian cell cultures. Because of these complexities, the 1984 Hatch-Waxman Act did not include biologics in the Abbreviated New Drug Application (ANDA) process. This precluded the possibility of generic drug competition for biotechnology drugs. In February 2007, identical bills were introduced into the House to create an ANDA process for the approval of generic biologics, but were not passed. Mobile medical applications In 2013, a guidance was issued to regulate mobile medical applications and protect users from their unintended use. This guidance distinguishes the apps subjected to regulation based on the marketing claims of the apps. Incorporation of the guidelines during the development phase of these apps has been proposed for expedited market entry and clearance. Electronic Submissions Gateway (ESG) To standardize, automate and streamline the flow of regulatory data, FDA introduced an Electronic Submissions Gateway (ESG) in 2006. This gateway allows reporting organizations to send regulatory submissions to different centers over the internet, packaged in a center-specific format and enveloped as a GNU-compatible .tar.gz file, through either a FDA-specific WebTrader application or via a more generic B2B communication protocol called AS2 (Applicability Statement 2). For WebTrader, which is recommended for manual, small-volume submissions, users would typically install a client application on their computers and upload the package through it to FDA server. In AS2, which is recommended for automated or high-volume submissions, users can use any standard AS2 software to transmit the package to FDA by including additional routing details on top of standard AS2, in the form of custom HTTP request headers. Criticism The FDA has regulatory oversight over a large array of products that affect the health and life of American citizens. As a result, the FDA's powers and decisions are carefully monitored by several governmental and non-governmental organizations. 
A $1.8million 2006 Institute of Medicine report on pharmaceutical regulation in the U.S. found major deficiencies in the current FDA system for ensuring the safety of drugs on the American market. Overall, the authors called for an increase in the regulatory powers, funding, and independence of the FDA. A 2022 article from Politico raised concerns that food is not a high priority at the FDA. The report explains the FDA has structural and leadership problems in the food division and is often deferential to industry. This might be attributed to lobbying and influence of big food companies in Washington, D.C. See also Adverse reaction Adverse event Adverse drug reaction Biosecurity Biosecurity in the United States Drug Efficacy Study Implementation Food and Drug Administration Modernization Act of 1997 FDA Food Safety Modernization Act of 2011 FDA Fast Track Development Program (for drugs) Food and Drug Administration Amendments Act of 2007 (e.g. drugs) Food and Drug Administration Safety and Innovation Act of 2012 (GAIN/QIDP etc.) Inverse benefit law Investigational Device Exemption (for use in clinical trials) Kefauver Harris Amendment 1962 – required "proof-of-efficacy" for drugs International: Food Administration International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) African Union: African Medicines Agency Australia: Therapeutic Goods Administration Brazil: National Health Surveillance Agency Canada: Marketed Health Products Directorate Canada: Health Canada Denmark: Danish Medicines Agency European Union: European Medicines Agency Germany: Federal Institute for Drugs and Medical Devices India: Food Safety and Standards Authority of India India: Central Drugs Standard Control Organization Japan: Ministry of Health, Labour and Welfare (MHLW) Japan: Pharmaceuticals and Medical Devices Agency Mexico: Federal Commission for the Protection against Sanitary Risk Philippines: Food and Drug Administration (FDA) Singapore: Health Sciences Authority United Kingdom: Medicines and Healthcare products Regulatory Agency United States: Food and Drug Administration Notes References Further reading External links Food and Drug Administration in the Federal Register Food and Drug Administration in the Code of Federal Regulations Strategic Plan (archived) Online books by United States Food and Drug Administration at The Online Books Page Food and Drug Administration apportionments on OpenOMB 1906 establishments in the United States American medical research Government agencies established in 1906 Regulators of biotechnology products National agencies for drug regulation
Food and Drug Administration
Chemistry,Biology
9,846
839,112
https://en.wikipedia.org/wiki/Fumaric%20acid
Fumaric acid or trans-butenedioic acid is an organic compound with the formula HO2CCH=CHCO2H. A white solid, fumaric acid occurs widely in nature. It has a fruit-like taste and has been used as a food additive. Its E number is E297. The salts and esters are known as fumarates. Fumarate can also refer to the ion (in solution). Fumaric acid is the trans isomer of butenedioic acid, while maleic acid is the cis isomer. Biosynthesis and occurrence It is produced in eukaryotic organisms from succinate in complex 2 of the electron transport chain via the enzyme succinate dehydrogenase. Fumaric acid is found in fumitory (Fumaria officinalis), bolete mushrooms (specifically Boletus fomentarius var. pseudo-igniarius), lichen, and Iceland moss. Fumarate is an intermediate in the citric acid cycle used by cells to produce energy in the form of adenosine triphosphate (ATP) from food. It is formed by the oxidation of succinate by the enzyme succinate dehydrogenase. Fumarate is then converted by the enzyme fumarase to malate. Human skin naturally produces fumaric acid when exposed to sunlight. Fumarate is also a product of the urea cycle. Uses Food Fumaric acid has been used as a food acidulant since 1946. It is approved for use as a food additive in the EU, USA and Australia and New Zealand. As a food additive, it is used as an acidity regulator and can be denoted by the E number E297. It is generally used in beverages and baking powders for which requirements are placed on purity. Fumaric acid is used in the making of wheat tortillas as a food preservative and as the acid in leavening. It is generally used as a substitute for tartaric acid and occasionally in place of citric acid, at a rate of 1 g of fumaric acid to every ~1.5 g of citric acid, in order to add sourness, similarly to the way malic acid is used. As well as being a component of some artificial vinegar flavors, such as "Salt and Vinegar" flavored potato chips, it is also used as a coagulant in stove-top pudding mixes. The European Commission Scientific Committee on Animal Nutrition, part of DG Health, found in 2014 that fumaric acid is "practically non-toxic" but high doses are probably nephrotoxic after long-term use. Medicine Fumaric acid was developed as a medicine to treat the autoimmune condition psoriasis in the 1950s in Germany as a tablet containing 3 esters, primarily dimethyl fumarate, and marketed as Fumaderm by Biogen Idec in Europe. Biogen would later go on to develop the main ester, dimethyl fumarate, as a treatment for multiple sclerosis. In patients with relapsing-remitting multiple sclerosis, the ester dimethyl fumarate (BG-12, Biogen) significantly reduced relapse and disability progression in a phase 3 trial. It activates the Nrf2 antioxidant response pathway, the primary cellular defense against the cytotoxic effects of oxidative stress. Other uses Fumaric acid is used in the manufacture of polyester resins and polyhydric alcohols and as a mordant for dyes. When fumaric acid is added to their feed, lambs produce up to 70% less methane during digestion. Synthesis Fumaric acid is produced based on catalytic isomerisation of maleic acid in aqueous solutions at low pH. It precipitates from the reaction solution. Maleic acid is accessible in large volumes as a hydrolysis product of maleic anhydride, produced by catalytic oxidation of benzene or butane. Historic and laboratory routes Fumaric acid was first prepared from succinic acid. 
A traditional synthesis involves oxidation of furfural (from the processing of maize) using chlorate in the presence of a vanadium-based catalyst. Reactions The chemical properties of fumaric acid can be anticipated from its component functional groups. This weak acid forms a diester; it undergoes bromination across the double bond and is a good dienophile. Safety The oral LD50 is 10 g/kg. See also Citric acid cycle (TCA cycle) Fumarate reductase Photosynthesis Maleic acid, the cis isomer of fumaric acid Succinic acid References External links International Chemical Safety Card 1173 Dicarboxylic acids Food additives Food acidity regulators Urea cycle Citric acid cycle compounds Nephrotoxins E-number additives Alkene derivatives Metabolic intermediates Biomolecules Carboxylic acid-based monomers
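As a worked example of the citric-acid substitution ratio given in the Food section above, a minimal Python sketch; the function name and the recipe quantity are illustrative, not values from the article.

def fumaric_equivalent(citric_acid_grams, ratio=1.5):
    """Approximate grams of fumaric acid giving similar sourness to a given mass of
    citric acid, using the ~1 g fumaric : 1.5 g citric rule of thumb."""
    return citric_acid_grams / ratio

# Example: a recipe calling for 6 g of citric acid needs roughly 4 g of fumaric acid.
print(fumaric_equivalent(6.0))  # 4.0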
Fumaric acid
Chemistry,Biology
1,052
72,900,340
https://en.wikipedia.org/wiki/Neural%20radiance%20field
A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. The NeRF model enables downstream applications of novel view synthesis, scene geometry reconstruction, and obtaining the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. First introduced in 2020, it has since gained significant attention for its potential applications in computer graphics and content creation. Algorithm The NeRF algorithm represents a scene as a radiance field parametrized by a deep neural network (DNN). The network predicts a volume density and view-dependent emitted radiance given the spatial location (x, y, z) and viewing direction in Euler angles (θ, Φ) of the camera. By sampling many points along camera rays, traditional volume rendering techniques can produce an image. Data collection A NeRF needs to be retrained for each unique scene. The first step is to collect images of the scene from different angles and their respective camera poses. These images are standard 2D images and do not require a specialized camera or software. Any camera is able to generate datasets, provided the settings and capture method meet the requirements for SfM (Structure from Motion). This requires tracking of the camera position and orientation, often through some combination of SLAM, GPS, or inertial estimation. Researchers often use synthetic data to evaluate NeRF and related techniques. For such data, images (rendered through traditional non-learned methods) and respective camera poses are reproducible and error-free. Training For each sparse viewpoint (image and camera pose) provided, camera rays are marched through the scene, generating a set of 3D points with a given radiance direction (into the camera). For these points, volume density and emitted radiance are predicted using a multi-layer perceptron (MLP). An image is then generated through classical volume rendering. Because this process is fully differentiable, the error between the predicted image and the original image can be minimized with gradient descent over multiple viewpoints, encouraging the MLP to develop a coherent model of the scene. Variations and improvements Early versions of NeRF were slow to optimize and required that all input views were taken with the same camera in the same lighting conditions. These performed best when limited to orbiting around individual objects, such as a drum set, plants or small toys. Since the original paper in 2020, many improvements have been made to the NeRF algorithm, with variations for special use cases. Fourier feature mapping In 2020, shortly after the release of NeRF, the addition of Fourier feature mapping improved training speed and image accuracy. Deep neural networks struggle to learn high-frequency functions in low-dimensional domains, a phenomenon known as spectral bias. To overcome this shortcoming, points are mapped to a higher-dimensional feature space before being fed into the MLP: γ(v) = [a_1 cos(2π b_1·v), a_1 sin(2π b_1·v), ..., a_m cos(2π b_m·v), a_m sin(2π b_m·v)], where v is the input point, the b_j are the frequency vectors, and the a_j are coefficients. This allows for rapid convergence to high-frequency functions, such as pixels in a detailed image (a minimal code sketch of this mapping is given below). Bundle-adjusting neural radiance fields One limitation of NeRFs is the requirement of knowing accurate camera poses to train the model. Pose estimation methods are often not completely accurate, and in some settings the camera pose cannot be known at all. 
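A minimal sketch of the Fourier feature mapping described above, assuming NumPy; the Gaussian-sampled frequency matrix B, its scale, and the dimensions are illustrative assumptions, not values from the papers.

import numpy as np

def fourier_features(v, B, a=None):
    """Map low-dimensional points v to Fourier features.

    v : (N, d) array of input points (e.g. 3D positions).
    B : (m, d) array of frequency vectors b_j.
    a : optional (m,) array of coefficients a_j (defaults to 1).
    Returns an (N, 2m) array [a_j cos(2*pi*b_j.v), a_j sin(2*pi*b_j.v)].
    """
    if a is None:
        a = np.ones(B.shape[0])
    proj = 2.0 * np.pi * v @ B.T  # (N, m) dot products b_j . v
    return np.concatenate([a * np.cos(proj), a * np.sin(proj)], axis=-1)

# Illustrative usage: 3D sample points mapped to a 256-dimensional feature space.
rng = np.random.default_rng(0)
B = 10.0 * rng.standard_normal((128, 3))  # frequencies drawn from a Gaussian (scale 10 is an assumption)
points = rng.uniform(-1.0, 1.0, size=(4, 3))
features = fourier_features(points, B)  # shape (4, 256), fed to the MLP in place of raw coordinates

The original NeRF paper uses a simpler axis-aligned positional encoding for the same purpose; the Gaussian-sampled frequencies above follow the later Fourier-feature work.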
These imperfections result in artifacts and suboptimal convergence. To address this, a method was developed to optimize the camera pose along with the volumetric function itself. Called Bundle-Adjusting Neural Radiance Field (BARF), the technique uses a dynamic low-pass filter to go from coarse to fine adjustment, minimizing error by finding the geometric transformation to the desired image. This corrects imperfect camera poses and greatly improves the quality of NeRF renders. Multiscale representation Conventional NeRFs struggle to represent detail at all viewing distances, producing blurry images up close and overly aliased images from distant views. In 2021, researchers introduced mip-NeRF (the name comes from mipmap), a technique that improves the sharpness of detail at different viewing scales. Rather than sampling a single ray per pixel, the technique fits a gaussian to the conical frustum cast by the camera. This improvement effectively anti-aliases across all viewing scales. mip-NeRF also reduces overall image error and converges faster, while being roughly half the size of a ray-based NeRF. Learned initializations In 2021, researchers applied meta-learning to assign initial weights to the MLP. This rapidly speeds up convergence by effectively giving the network a head start in gradient descent. Meta-learning also allowed the MLP to learn an underlying representation of certain scene types. For example, given a dataset of famous tourist landmarks, an initialized NeRF could partially reconstruct a scene given one image. NeRF in the wild Conventional NeRFs are vulnerable to slight variations in input images (objects, lighting), often resulting in ghosting and artifacts. As a result, NeRFs struggle to represent dynamic scenes, such as bustling city streets with changes in lighting and dynamic objects. In 2021, researchers at Google developed a new method for accounting for these variations, named NeRF in the Wild (NeRF-W). This method splits the neural network (MLP) into three separate models. The main MLP is retained to encode the static volumetric radiance. However, it operates in sequence with a separate MLP for appearance embedding (changes in lighting, camera properties) and an MLP for transient embedding (changes in scene objects). This allows the NeRF to be trained on diverse photo collections, such as those taken by mobile phones at different times of day. Relighting In 2021, researchers added more outputs to the MLP at the heart of NeRFs. The output now included: volume density, surface normal, material parameters, distance to the first surface intersection (in any direction), and visibility of the external environment in any direction. The inclusion of these new parameters lets the MLP learn material properties, rather than pure radiance values. This facilitates a more complex rendering pipeline, calculating direct and global illumination, specular highlights, and shadows. As a result, the NeRF can render the scene under any lighting conditions with no re-training. Plenoctrees Although NeRFs had reached high levels of fidelity, their costly compute time made them useless for many applications requiring real-time rendering, such as VR/AR and interactive content. Introduced in 2021, Plenoctrees (plenoptic octrees) enabled real-time rendering of pre-trained NeRFs through division of the volumetric radiance function into an octree. Rather than predicting radiance for a single viewing direction, the viewing direction is removed from the network input and a full spherical radiance function is predicted for each region (a minimal sketch of evaluating such a function follows below). 
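The Plenoctrees paper represents this per-region spherical radiance with spherical-harmonic (SH) coefficients. A minimal sketch of evaluating view-dependent color from degree-0 and degree-1 real SH coefficients follows; the coefficient values are placeholders, not from any trained model, and the real method uses higher SH degrees and a sigmoid rather than a clamp.

import numpy as np

def sh_basis(d):
    """d : (3,) unit viewing direction -> (4,) real SH basis values (standard normalizations)."""
    x, y, z = d
    return np.array([
        0.282095,        # Y_0^0
        0.488603 * y,    # Y_1^-1
        0.488603 * z,    # Y_1^0
        0.488603 * x,    # Y_1^1
    ])

def sh_color(coeffs, d):
    """coeffs : (3, 4) SH coefficients per RGB channel.
    Returns the view-dependent RGB color for viewing direction d, clamped to [0, 1]."""
    c = coeffs @ sh_basis(d)  # (3,) raw radiance per channel
    return np.clip(c, 0.0, 1.0)

# Illustrative octree leaf: mostly red, slightly brighter when viewed from +z.
coeffs = np.array([[1.5, 0.0, 0.3, 0.0],
                   [0.2, 0.0, 0.1, 0.0],
                   [0.2, 0.0, 0.0, 0.0]])
view = np.array([0.0, 0.0, 1.0])  # unit viewing direction
rgb = sh_color(coeffs, view)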
This factorization makes rendering over 3000x faster than conventional NeRFs. Sparse Neural Radiance Grid Similar to Plenoctrees, this method enabled real-time rendering of pre-trained NeRFs. To avoid querying the large MLP for each point, this method bakes NeRFs into Sparse Neural Radiance Grids (SNeRG). A SNeRG is a sparse voxel grid containing opacity and color, with learned feature vectors to encode view-dependent information. A lightweight, more efficient MLP is then used to produce view-dependent residuals to modify the color and opacity. To enable this compressive baking, small changes to the NeRF architecture were made, such as running the MLP once per pixel rather than for each point along the ray. These improvements make SNeRG extremely efficient, outperforming Plenoctrees. Instant NeRFs In 2022, researchers at Nvidia enabled real-time training of NeRFs through a technique known as Instant Neural Graphics Primitives. An innovative input encoding reduces computation, enabling real-time training of a NeRF, an improvement orders of magnitude above previous methods. The speedup stems from the use of spatial hash functions, which have constant (O(1)) access times, and parallelized architectures which run fast on modern GPUs (a minimal sketch of such a hash-grid lookup appears at the end of this article). Related techniques Plenoxels Plenoxel (plenoptic volume element) uses a sparse voxel representation instead of the neural-network representation used in NeRFs. Plenoxel also completely removes the MLP, instead directly performing gradient descent on the voxel coefficients. Plenoxel can match the fidelity of a conventional NeRF in orders of magnitude less training time. Published in 2022, this method disproved the importance of the MLP, showing that the differentiable rendering pipeline is the critical component. Gaussian splatting Gaussian splatting is a newer method that can outperform NeRF in render time and fidelity. Rather than representing the scene as a volumetric function, it uses a sparse cloud of 3D gaussians. First, a point cloud is generated (through structure from motion) and converted to gaussians of initial covariance, color, and opacity. The gaussians are directly optimized through stochastic gradient descent to match the input image. This saves computation by removing empty space and foregoing the need to query a neural network for each point. Instead, all the gaussians are simply "splatted" onto the screen, where they overlap to produce the desired image. Photogrammetry Traditional photogrammetry is not neural, instead using robust geometric equations to obtain 3D measurements. NeRFs, unlike photogrammetric methods, do not inherently produce dimensionally accurate 3D geometry. While their results are often sufficient for extracting accurate geometry (e.g., via marching cubes), the process is fuzzy, as with most neural methods. This limits NeRF to cases where the output image is valued, rather than raw scene geometry. However, NeRFs excel in situations with unfavorable lighting. For example, photogrammetric methods completely break down when trying to reconstruct reflective or transparent objects in a scene, while a NeRF is able to infer the geometry. Applications NeRFs have a wide range of applications, and are starting to grow in popularity as they become integrated into user-friendly tools. Content creation NeRFs have huge potential in content creation, where on-demand photorealistic views are extremely valuable. The technology democratizes a space previously only accessible by teams of VFX artists with expensive assets. 
Neural radiance fields now allow anyone with a camera to create compelling 3D environments. NeRF has been combined with generative AI, allowing users with no modelling experience to instruct changes in photorealistic 3D scenes. NeRFs have potential uses in video production, computer graphics, and product design. Interactive content The photorealism of NeRFs makes them appealing for applications where immersion is important, such as virtual reality or video games. NeRFs can be combined with classical rendering techniques to insert synthetic objects and create believable virtual experiences. Medical imaging NeRFs have been used to reconstruct 3D CT scans from sparse or even single X-ray views. The model demonstrated high-fidelity renderings of chest and knee data. If adopted, this method can save patients from excess doses of ionizing radiation, allowing for safer diagnosis. Robotics and autonomy The unique ability of NeRFs to understand transparent and reflective objects makes them useful for robots interacting in such environments. The use of NeRF allowed a robot arm to precisely manipulate a transparent wine glass, a task where traditional computer vision would struggle. NeRFs can also generate photorealistic human faces, making them valuable tools for human-computer interaction. Traditionally rendered faces can be uncanny, while other neural methods are too slow to run in real time. References Machine learning algorithms Computer vision
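As a supplement to the "Instant NeRFs" section above: a minimal sketch of a multiresolution spatial-hash lookup of the kind that gives Instant Neural Graphics Primitives its constant-time feature access. The table sizes, level count, feature width, and the untrained random tables are illustrative assumptions; the large prime multipliers follow the spatial-hashing scheme cited in that work.

import numpy as np

def hash_index(cell, table_size):
    """Spatial hash of an integer 3D cell coordinate into a table of size table_size."""
    x, y, z = (int(c) for c in cell)
    h = (x * 1) ^ (y * 2654435761) ^ (z * 805459861)
    return h % table_size

def hash_encode(x, tables, resolutions):
    """Look up and trilinearly interpolate a learned feature for point x at each resolution level.

    x           : (3,) point with coordinates in [0, 1).
    tables      : list of (T, F) arrays of trainable feature vectors (random placeholders here).
    resolutions : list of grid resolutions, one per level.
    Returns the concatenated (len(tables) * F,) feature vector fed to the small MLP.
    """
    feats = []
    for table, res in zip(tables, resolutions):
        pos = x * res
        base = np.floor(pos).astype(np.int64)
        frac = pos - base
        acc = np.zeros(table.shape[1])
        for corner in range(8):  # 8 corners of the surrounding grid cell
            offset = np.array([(corner >> i) & 1 for i in range(3)])
            w = np.prod(np.where(offset == 1, frac, 1.0 - frac))  # trilinear weight
            idx = hash_index(base + offset, table.shape[0])
            acc += w * table[idx]
        feats.append(acc)
    return np.concatenate(feats)

# Illustrative usage with untrained tables: 4 levels, 2 features per table entry.
rng = np.random.default_rng(0)
resolutions = [16, 32, 64, 128]
tables = [rng.standard_normal((2**14, 2)) for _ in resolutions]
feature = hash_encode(np.array([0.3, 0.7, 0.1]), tables, resolutions)  # shape (8,)

The real implementation keeps the tables on the GPU, trains them jointly with a very small MLP, and uses many more levels; the structure of the lookup is what provides the constant-time access mentioned above.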
Neural radiance field
Engineering
2,427
52,961,252
https://en.wikipedia.org/wiki/Transition%20metal%20boryl%20complex
In chemistry, a transition metal boryl complex is a molecular species with a formally anionic boron center coordinated to a transition metal. They have the formula LnM-BR2 or LnM-(BR2LB) (L = ligand, R = H, organic substituent, LB = Lewis base). One example is (C5Me5)Mn(CO)2(BH2PMe3) (Me = methyl). Such compounds, especially those derived from catecholborane and the related pinacolborane, are intermediates in transition metal-catalyzed borylation reactions. Synthesis Oxidative addition is the main route to metal boryl complexes. Both B-H and B-B bonds add to low-valent metal complexes. For example, catecholborane oxidatively adds to Pt(0) to give the boryl hydride. C6H4O2BH + Pt(PR3)2 → C6H4O2B-Pt(PR3)2H Addition of diboron tetrafluoride to Vaska's complex gives the triboryl iridium(III) derivative: 2B2F4 + IrCl(CO)(PPh3)2 → Ir(BF2)3(CO)(PPh3)2 + ClBF2 References Boron Coordination chemistry
Transition metal boryl complex
Chemistry
293
76,010,836
https://en.wikipedia.org/wiki/Exidia%20crenata
Exidia crenata is a species of fungus in the family Auriculariaceae. It has the English name of amber jelly roll. Basidiocarps (fruit bodies) are gelatinous, brown to orange-brown, and turbinate (top-shaped). It typically grows on dead attached twigs and branches of broadleaved trees and is found in North America. Taxonomy The species was originally described from North Carolina in 1822 by German-American mycologist Lewis David de Schweinitz as Tremella crenata. It was transferred to the genus Exidia by Fries in the same year. Exidia crenata was widely considered a synonym of the European Exidia recisa until molecular research, based on cladistic analysis of DNA sequences, showed that the American species is distinct. Description The gelatinous fruit bodies are amber, wide, and thick. They can be translucent and tend to be moist and/or glossy. The spore print is white. Similar species Similar species include E. recisa and members of Auricularia and Phaeotremella. Habitat and distribution Exidia crenata is a wood-rotting species, typically found on dead attached twigs and branches of broadleaf trees, particularly oak. It is widely distributed in eastern North America, where it can be found from September through May, thriving in winter. References Auriculariales Fungi described in 1822 Fungi of North America Fungus species Taxa named by Lewis David de Schweinitz
Exidia crenata
Biology
306
2,136,757
https://en.wikipedia.org/wiki/Alarm%20signal
In animal communication, an alarm signal is an antipredator adaptation in the form of signals emitted by social animals in response to danger. Many primates and birds have elaborate alarm calls for warning conspecifics of approaching predators. For example, the alarm call of the blackbird is a familiar sound in many gardens. Other animals, like fish and insects, may use non-auditory signals, such as chemical messages. Visual signs such as the white tail flashes of many deer have been suggested as alarm signals; they are less likely to be received by conspecifics, so have tended to be treated as a signal to the predator instead. Different calls may be used for predators on the ground or from the air. Often, the animals can tell which member of the group is making the call, so that they can disregard those of little reliability. Evidently, alarm signals promote survival by allowing the receivers of the alarm to escape from the source of peril; this can evolve by kin selection, assuming the receivers are related to the signaller. However, alarm calls can increase individual fitness, for example by informing the predator it has been detected. Alarm calls are often high-frequency sounds because these sounds are harder to localize. Selective advantage This cost/benefit tradeoff of alarm calling behaviour has sparked many interesting debates among evolutionary biologists seeking to explain the occurrence of such apparently "self-sacrificing" behaviour. The central question is this: "If the ultimate purpose of any animal behaviour is to maximize the chances that an organism's own genes are passed on, with maximum fruitfulness, to future generations, why would an individual deliberately risk destroying itself (its entire genome) for the sake of saving others (other genomes)?". Altruism Some scientists have used the evidence of alarm-calling behaviour to challenge the theory that "evolution works only/primarily at the level of the gene and of the gene's 'interest' in passing itself along to future generations." If alarm-calling is truly an example of altruism, then human understanding of natural selection becomes more complicated than simply "survival of the fittest gene". Other researchers, generally those who support the selfish gene theory, question the authenticity of this "altruistic" behaviour. For instance, it has been observed that vervets sometimes emit calls in the presence of a predator, and sometimes do not. Studies show that these vervets may call more often when they are surrounded by their own offspring and by other relatives who share many of their genes. Other researchers have shown that some forms of alarm calling, for example, "aerial predator whistles" produced by Belding's ground squirrels, do not increase the chances that a caller will get eaten by a predator; the alarm call is advantageous to both caller and recipient by frightening and warding off the predator. Predator-directed signaling Another theory suggests that alarm signals function to attract further predators, which fight over the prey organism, giving it a better chance of escape. Others still suggest they are a deterrent to predators, communicating the prey's alertness to the predator. One such case is the western swamphen (Porphyrio porphyrio), which gives conspicuous visual tail flicks (see also aposematism, handicap principle and stotting).
Further research Considerable research effort continues to be directed toward the purpose and ramifications of alarm-calling behaviour, because, to the extent that this research has the ability to comment on the occurrence or non-occurrence of altruistic behaviour, these findings can be applied to the understanding of altruism in human behaviour. Monkeys with alarm calls Vervet monkeys Vervet monkeys (Chlorocebus pygerythrus) are some of the most studied monkeys when it comes to vocalization and alarm calls within the nonhuman primates. They are most known for making alarm calls in the presence of their most common predators (leopards, eagles, and snakes). Alarm calls of the vervet monkey are considered arbitrary in relation to the predator that they signify, in the sense that while the calls may be distinct to the threat that the monkeys are perceiving, the calls do not mimic the actual sounds of the predator; it is like yelling "Danger!" when seeing an angry dog rather than making barking sounds. This type of alarm call is seen as the earliest example of symbolic communication (the relationship between signifier and signified is arbitrary and purely conventional) in nonhuman primates. However, there is much debate on whether the vervet monkeys' alarm calls are actual "words" in the sense of purposely manipulating sounds to communicate specific meaning, or are unintentional sounds that are made when interacting with an outside stimulus. This is similar to how small children who cannot yet communicate with words make random noises when they are played with or are stimulated by something in their immediate environment. As children grow and begin learning how to communicate, the noises they make are very broad in relation to their environment. They begin to recognize the things in their environment, but there are more things than known words or noises, so a certain sound may reference multiple things. As children get older, they can become more specific about the noises and words made in relation to the things in their environment. It is thought that as vervet monkeys get older they are able to learn and break the broad categories into more specific subcategories tied to a specific context. In an experiment conducted by Dr. Tabitha Price, custom software was used to gather the acoustic sounds of male and female vervet monkeys from East Africa and male vervet monkeys from South Africa. The point of the experiment was to gather the acoustic sounds of these monkeys when stimulated by the presence of snakes (mainly pythons), raptors, terrestrial animals (mostly leopards), and aggression, and then to determine whether the calls could be distinguished given a known context. The experiment determined that while the vervet monkeys were able to categorize different predators and members of different social groups, their ability to communicate specific threats was not proven. The chirps and barks that vervet monkeys make as an eagle swoops in are the same chirps and barks that are made in moments of high arousal. Similarly, the barks made for leopards are the same that are made during aggressive interactions. Their environment is too complex for them to communicate specifically about everything in it. In an experiment conducted by Dr. Julia Fischer, a drone was flown over vervet monkeys and the calls they produced were recorded. The vervet monkeys made alarm calls that were almost identical to the eagle calls of East African vervets.
When a sound recording of the drone was played back a few days later to a monkey that was alone and away from the main group, it looked up and scanned the sky. Dr. Fischer concluded that vervet monkeys can be exposed to a new threat once and understand what it means. It is still debated whether or not vervet monkeys are actually aware of what the alarm calls mean. One side of the argument is that the monkeys give alarm calls because they are simply excited. The other side of the argument is that the alarm calls create mental representations of predators in the listeners' minds. The common middle-ground argument is that they give alarm calls because they want to elicit a certain response in others, not necessarily because they want the group to think that there is a specific threat near. Ultimately there is not enough evidence to determine whether the calls simply identify a threat or call for a specific action in response to the threat. Campbell's mona monkeys Campbell's mona monkeys also generate alarm calls, but in a different way than vervet monkeys. Instead of having discrete calls for each predator, Campbell's monkeys have two distinct types of calls that are modified by an acoustic continuum of affixes that change their meaning. It has been suggested that this is a homology to human morphology. Similarly, the cotton-top tamarin is able to use a limited vocal range of alarm calls to distinguish between aerial and land predators. Both the Campbell monkey and the cotton-top tamarin have demonstrated abilities similar to vervet monkeys' ability to distinguish likely direction of predation and appropriate responses. That these three species use vocalizations to warn others of danger has been called by some proof of proto-language in primates. However, there is some evidence that this behavior does not refer to the predators themselves but to threat, distinguishing calls from words. Barbary macaque Another species that exhibits alarm calls is the Barbary macaque. Barbary macaque mothers are able to recognize their own offspring's calls and behave accordingly. Diana monkeys Diana monkeys also produce alarm signals. Adult males respond to each other's calls, showing that calling can be contagious. Their calls differ based on signaller sex, threat type, habitat, and caller ontogenetic or lifetime predator experience. Diana monkeys emit different alarm calls as a result of their sex. Male alarm calls are primarily used for resource defence, male–male competition, and communication between groups of conspecifics. Female alarm calls are mainly used for communication within groups of conspecifics to avoid predation. Alarm calls are also predator-specific. In Taï National Park, Côte d'Ivoire, Diana monkeys are preyed on by leopards, eagles, and chimpanzees, but only emit alarm calls for leopards and eagles. When threatened by chimpanzees, they use silent, cryptic behaviour and when threatened by leopards or eagles, they emit predator-specific alarm signals. When researchers play recordings of alarm calls produced by chimpanzees in response to predation by leopards, about fifty per cent of nearby Diana monkeys switch from a chimpanzee antipredator response to a leopard antipredator response. The tendency to switch responses is especially prominent among Diana monkey populations that live within the main range of the chimpanzee community.
This shift in antipredator response suggests that the monkeys interpret chimpanzee-produced, leopard-induced alarm calls as evidence for the presence of a leopard. When the same monkeys are then played recordings of leopard growls, their reactions confirm that they had anticipated the presence of a leopard. There are three possible cognitive mechanisms explaining how Diana monkeys recognize chimpanzee-produced, leopard-induced alarm calls as evidence for a nearby leopard: associative learning, causal reasoning, or a specialized learning programme driven by adaptive antipredator behaviour necessary for survival. In Taï National Park and Tiwai Island, Sierra Leone, specific acoustic markers in the alarm calls of Diana monkeys convey both threat type and caller familiarity information to a receiver. In Taï National Park, males respond to eagle alarm signals based on predator type and caller familiarity. When the caller is unfamiliar to the receiver, the response call is a 'standard' eagle alarm call, characterized by a lack of frequency transition at the onset of the call. When the caller is familiar, the response call is an atypical eagle alarm call, characterized by a frequency transition at onset, and the response is faster than to that of an unfamiliar caller. On Tiwai Island, males respond in the opposite way to eagle alarm signals. When the caller is familiar, the response call is a 'standard' eagle alarm call, without a frequency transition at onset. When the caller is unfamiliar, the response call is an atypical eagle alarm call, with a frequency transition at onset. The differences in alarm call responses are due to differences in habitat. In Taï National Park, there is a low predation risk from eagles, high primate abundance, strong intergroup competition, and a tendency for group encounters to result in high levels of aggression. Therefore, even familiar males are a threat to whom males respond with aggression and an atypical eagle alarm call. Only unfamiliar males, who are likely to be solitary and non-threatening, do not receive an aggressive response and receive only a typical alarm call. On Tiwai Island, there is a high predation risk from eagles, low primate abundance, a tendency for group encounters to result in peaceful retreats, low resource competition, and frequent sharing of foraging areas. Therefore, there is a lack of aggression towards familiar conspecifics to whom receivers respond with a 'standard' eagle call. There is only aggression towards unfamiliar conspecifics, to whom receivers respond with an atypical call. Simply put, a response with a typical eagle alarm call prioritizes the risk of predation, while a response with an atypical alarm call prioritizes social aggression. Diana monkeys also display a predisposition for flexibility in acoustic variation of alarm call assembly related to caller ontogenetic or lifetime predator experience. In Taï National Park and on Tiwai Island, monkeys have a predisposition to threat-specific alarm signals. In Taï National Park, males produce three threat-specific calls in response to three threats: eagles, leopards, and general disturbances. On Tiwai Island, males produce two threat-specific calls in response to two groups of threats: eagles, and leopards or general disturbances. The latter are likely grouped together because leopards have not been present on the island for at least 30 years. Other primates, such as Guereza monkeys and putty-nosed monkeys, also have two main predator-specific assemblies of alarm calls. 
Predator-specific alarm signals differ based on call sequence assembly. General disturbances in Taï National Park and both general disturbances and leopards on Tiwai Island result in alarm calls assembled into long sequences. Conversely, leopards in Taï National Park result in alarm calls that typically begin with voiced inhalations followed by a small number of calls. These differences in alarm call arrangement between habitats are due to ontogenetic experience; specifically, a lack of experience with leopards on Tiwai Island causes them to be classified in the same predator category as general disturbances, and accordingly, leopards receive the same type of alarm call arrangement. Sexual selection for predator-specific alarm signals In guenons, sexual selection is responsible for the evolution of loud calls from predator-specific alarm calls. Loud calls travel long distances, greater than that of the home range, and can be used as beneficial alarm calls to warn conspecifics or showcase their awareness of and deter a predator. A spectrogram of a subadult male call shows that the call is a composition of elements from a female alarm call and male loud call, suggesting the transition from the former to the latter during puberty and suggesting that alarm calls gave rise to loud calls through sexual selection. Evidence of sexual selection in loud calls includes structural adaptations for long-range communication, coincidence of loud calls and sexual maturity, and sexual dimorphism in loud calls. Controversy over the semantic properties of alarm calls Not all scholars of animal communication accept the interpretation of alarm signals in monkeys as having semantic properties or transmitting "information". Prominent spokespersons for this opposing view are Michael Owren and Drew Rendall, whose work on this topic has been widely cited and debated. The alternative to the semantic interpretation of monkey alarm signals as suggested in the cited works is that animal communication is primarily a matter of influence rather than information, and that vocal alarm signals are essentially emotional expressions influencing the animals that hear them. In this view monkeys do not designate predators by naming them, but may react with different degrees of vocal alarm depending on the nature of the predator and its nearness on detection, as well as by producing different types of vocalization under the influence of the monkey's state and movement during the different types of escape required by different predators. Other monkeys may learn to use these emotional cues along with the escape behaviour of the alarm signaller to help make a good decision about the best escape route for themselves, without there having been any naming of predators. Chimpanzees with alarm calls Chimpanzees emit alarm calls in response to predators, such as leopards and snakes. They produce three types of alarm calls: acoustically variable 'hoos', 'barks', and 'SOS screams'. Alarm signalling is impacted by receiver knowledge and caller age, can be coupled with receiver monitoring, and is important to the understanding of the evolution of hominoid communication. Receiver knowledge Alarm signalling varies depending on the receiver's knowledge of a certain threat. Chimpanzees are significantly more likely to produce an alarm call when conspecifics are unaware of a potential threat or were not nearby when a previous alarm call was emitted.
When judging if conspecifics are unaware of potential dangers, chimpanzees do not solely look for behavioural cues, but also assess receiver mental states and use this information to target signalling and monitoring. In a recent experiment, caller chimpanzees were shown a fake snake as a predator and were played pre-recorded calls from receivers. Some receivers emitted calls that were snake-related, and therefore represented receivers with knowledge of the predator, while other receivers emitted calls that were not snake-related, and therefore represented receivers without knowledge of the predator. In response to the non-snake-related calls from receivers, the signallers increased their vocal and nonvocal signalling and coupled it with increased receiver monitoring. Caller age Chimpanzee age impacts the frequency of alarm signalling. Chimpanzees over 80 months of age are more likely to produce an alarm call than those less than 80 months of age. There are several hypotheses for this lack of alarm calling in infants zero to four years of age. The first hypothesis is a lack of motivation to produce alarm calls because of mothers in close proximity that minimize the infant's perception of a threat or that respond to a threat before the infant can. Infants may also be more likely to use distress calls to catch their mother's attention in order for her to produce an alarm call. Infants might also lack the physical ability to produce alarm calls or lack the necessary experience to classify unfamiliar objects as dangerous and worthy of an alarm signal. Therefore, alarm calling may require advanced levels of development, perception, categorization, and social cognition. Other factors Other factors, such as signaller arousal, receiver identity, or increased risk of predation from calling, do not have a significant effect on the frequency of alarm call production. Receiver monitoring However, while alarm signals can be coupled with receiver monitoring, there is a lack of consensus on the definition, starting age, and purpose of monitoring. It is either defined as the use of three successive gaze alternations, from a threat to a nearby conspecific and back to the threat, or as the use of two gaze alternations. Moreover, while some studies only report gaze alternation as starting in late juveniles, other studies report gaze alternation in infants as early as five months of age. In infants and juveniles, it is potentially a means of social referencing or social learning through which younger chimpanzees check the reactions of more experienced conspecifics in order to learn about new situations, such as potential threats. It has also been proposed to be a communicative behaviour or simply the result of shifts in attention between different environmental elements. Evolution of hominoid communication The evolution of hominoid communication is evident through chimpanzee 'hoo' vocalizations and alarm calls. Researchers propose that communication evolved as natural selection diversified 'hoo' vocalizations into context-dependent 'hoos' for travel, rest, and threats. Context-dependent communication is beneficial and likely maintained by selection as it facilitates cooperative activities and social cohesion between signallers and receivers that can increase the likelihood of survival. Alarm calls in chimpanzees also point to the evolution of hominoid language.
Callers assess conspecifics' knowledge of threats, fill their need for information, and, in doing so, use social cues and intentionality to inform communication. Filling a gap in information and incorporating social cues and intentionality into communication are all components of human language. These shared elements between chimpanzee and human communication suggest an evolutionary basis, most likely that the last common ancestor of humans and chimpanzees also possessed these linguistic abilities. False alarm calls Deceptive alarm calls are used by male swallows (Hirundo rustica). Males give these false alarm calls when females leave the nest area during the mating season, and are thus able to disrupt extra-pair copulations. As this is likely to be costly to females, it can be seen as an example of sexual conflict. Counterfeit alarm calls are also used by thrushes to avoid intraspecific competition. By sounding a bogus alarm call normally used to warn of aerial predators, they can frighten other birds away, allowing them to eat undisturbed. Vervets seem to be able to understand the referent of alarm calls instead of merely the acoustic properties, and if another species' specific alarm call (terrestrial or aerial predator, for instance) is used incorrectly too regularly, the vervet will learn to ignore the analogous vervet call as well. Alarm pheromones Alarm signals need not be communicated only by auditory means. For example, many animals may use chemosensory alarm signals, communicated by chemicals known as pheromones. Minnows and catfish release alarm pheromones (Schreckstoff) when injured, which cause nearby fish to hide in dense schools near the bottom. At least two species of freshwater fish produce chemicals known as disturbance cues, which initiate a coordinated antipredator defence by increasing group cohesion in response to fish predators. Chemical communication about threats is also known among plants, though it is debated to what extent this function has been reinforced by actual selection. Lima beans release volatile chemical signals that are received by nearby plants of the same species when infested with spider mites. This 'message' allows the recipients to prepare themselves by activating defence genes, making them less vulnerable to attack, and also attracting another mite species that is a predator of spider mites (indirect defence). Although it is conceivable that other plants are only intercepting a message primarily functioning to attract "bodyguards", some plants spread this signal on to others themselves, suggesting an indirect benefit from increased inclusive fitness. Deceptive chemical alarm signals are also employed. For example, the wild potato, Solanum berthaultii, emits the aphid alarm-pheromone, (E)-β-farnesene, from its leaves, which functions as a repellent against the green peach aphid, Myzus persicae. See also Group selection Kin selection Mobbing call References External links Chickadees' alarm-call carry information about size, threat of predator The Trek of the Pika "A story complete with sounds of pika and marmot calls" 2002-10-30 Characteristics of arctic ground squirrel alarm calls Oecologia Volume 7, Number 2 / June, 1971 Why do Yellow-bellied Marmots Call? Daniel T. Blumstein & Kenneth B. Armitage Department of Systematics and Ecology, University of Kansas Alarm calls of Belding's ground squirrels to aerial predators: nepotism or self-preservation?
Signalling theory Animal communication Antipredator adaptations Emergency communication Survival skills Articles containing video clips Chemical ecology
Alarm signal
Chemistry,Biology
4,763
24,035,669
https://en.wikipedia.org/wiki/RESCO
A Renewable Energy Service Company (RESCO) is an energy service company (ESCO) which provides energy to consumers from renewable energy sources, usually solar photovoltaics, wind power or micro hydro. RESCOs include investor-owned and publicly owned companies, cooperatives, and community organisations. Main characteristics The main characteristics of a RESCO are: The household serviced does not own the generation equipment, which is owned by an external organisation such as a Government agency or the RESCO; The user does not carry out maintenance; all maintenance and repair service is provided by the RESCO; The user pays a service charge that covers the capital repayment requirement and the cost of providing for maintenance and repairs. The concept is much like that of a conventional electric utility in that the generation equipment is not owned by the user and the electricity that is generated is made available to the customer for a fee. The fee charged to the user includes any required capital replacement cost and all operating, maintenance and repair costs plus a profit for the operating organisation. There are two significant differences between the conventional utility approach and that of the RESCO. For a RESCO: Generation may be distributed among many households instead of being centralised at a power station; Many organisations regulated by the government may provide services independently of each other. RESCOs & rural electrification RESCOs have been very successful in the expansion of rural electrification projects worldwide because: Low income rural households receive electricity without having to invest in renewable energy equipment, something that they would not normally be able to afford due to the high initial cost. Equipment is properly maintained and components replaced by the RESCO, making sure that the service is not interrupted. Equipment is owned by an organization that directly or indirectly represents the users (beneficiaries of the funding). As a result of all this, donors are prepared to contribute funding to the RESCO concept because it makes their aid (1) effective, (2) sustainable and (3) accountable. RESCOs in the Pacific Pacific Governments have provided rural electrification to remote locations by means of four main institutional set-ups, which are: Engaging the power utility in carrying out the rural electrification. This is the approach followed in the Marshall Islands by the Marshall Electric Company (MEC) or in the Federated States of Micronesia by the Yap State Public Service Corporation (YSPSC) and the Chuuk Power Utility Corporation (CPUC). Setting up a different entity for the rural electrification (RESCO), but still centralising much of the decision-making process in Government officials through the nomination of a board of directors. This is the strategy applied in Kiribati, with the Kiribati Solar Energy Company Ltd (KSEC). Decentralising operation and maintenance by engaging the private sector, while keeping the ownership of the equipment in the hands of a government department. This is the solution followed in Fiji with the RESCO charter and in Pohnpei State (Federated States of Micronesia). Decentralising operation and maintenance by engaging the users, and transferring ownership of the equipment to the organisation that represents them. This was the solution followed in Tonga through the Ha'apai District Solar Electricity Committee before deciding to engage the power utility in providing the service.
See also Off-the-grid Low carbon technology Renewable energy development Renewable energy economy Renewable energy in Africa Soft energy technologies Sustainable energy References External links Charter for Renewable Energy Based Rural Electrification with Participation of Private Enterprises Solar Home Systems: Technical/Management Model in Kiribati Renewable energy organizations
RESCO
Engineering
710
34,468,733
https://en.wikipedia.org/wiki/Giorgio%20Garuzzo
Giorgio Garuzzo, born in 1938 in Paesana, a small village in the Piedmont Alps near Cuneo, is an Italian electronics engineer, manager and industrialist, who took a central part in some of the most important developments in Italian industry in the past 50 years. The Istituto Garuzzo per le Arti Visive (IGAV) is a non-profit organisation, established in 2005, funded and managed by his family with the object of supporting contemporary art and specifically to help young emerging Italian artists to become known in the international arena. The Olivetti/General Electric period Garuzzo received a degree in Electronic Engineering in November 1961 after following the first graduation course in the then-new discipline at the Politecnico di Torino (Polytechnic University of Turin), and joined the Laboratorio di Ricerche Elettroniche Olivetti in Borgolombardo, near Milan, where a host of researchers and engineers were developing the first family of Italian mainframe computers, a business idea of the visionary entrepreneur Adriano Olivetti. He worked on the Olivetti Elea 9003 and 6001 computers, which gave more than 100 large Italian companies their first experience of informatics. When Olivetti was forced by supporting financial institutions to sell its electronic division to General Electric, Giorgio Garuzzo followed, working at the Pregnana Milanese laboratories of General Electric Information System Italia (GEISI) as chief of the engineering planning department on the new computer generations GE115 and GE130, of which some 5,000 units were sold across the world. The Fiat period In 1973 Garuzzo joined Gilardini, a listed holding company managed by a maverick entrepreneur, Carlo De Benedetti, whom Gianni Agnelli, the charismatic Fiat chairman, suddenly and unexpectedly hired in 1976 as “amministratore delegato” (chief executive officer) of the Fiat group, the largest Italian private enterprise, which at the time employed more than 300,000 people. While De Benedetti stayed at Fiat a mere 100 days, Giorgio Garuzzo, who had followed him as his personal advisor, remained for 20 years, orchestrating some of the most important achievements of the Group in the period. In a book published in 2006 ("Fiat - I segreti di un'epoca") he describes the events and realisations of his Fiat experience, in the context of the Italian social and economic environment of the period 1976–1996. In 1977 he promoted the merging of seven machine tool firms to create Comau SpA, a company specialized in welding equipment, whose “robogate” computerized and flexible manufacturing system (FMS) was widely used from the 1980s onward to assemble cars for many makes all over the world. Between 1979 and 1984, heading the Fiat Component Sector, Giorgio Garuzzo re-organized and managed more than 50 companies in the field of components for automotive and industrial applications, including promoting the development of the multi-point electronically controlled gasoline fuel injection system of Magneti Marelli SpA (the first European alternative to the offer from the German company Bosch), a product which gradually replaced the Weber carburettors, which had been very successful in the past but were becoming obsolete because they were less suited to fuel saving and emission control.
From 1984, as C.E.O., he managed the return to profitability of Iveco SA, the manufacturer of commercial vehicles and heavy trucks, and developed it with the acquisition and incorporation of Ford Truck and Seddon Atkinson in the UK, Pegaso in Spain, Ashok Leyland (the second largest producer of commercial vehicles in India, in conjunction with the Hinduja group) and Astra in Italy. The technology transfer and the joint venture for the production of diesel engines and the Iveco Daily light commercial vehicle that he signed in 1985 with the Nanjing Automobile Corporation was one of the first initiatives to be started under the new course of China towards a market economy inaugurated by Deng Xiaoping in the early 1980s. The same year he signed a consortium agreement with Oto Melara for the development of the C1 Ariete battle tank and the B1 Centauro wheeled tank destroyer. He personally guided the program to re-design the full range of Iveco products: vehicles from 3 tonne weight up to the 56 tonne maxi-code vehicle, and engines from 56 to 1250 HP. The R&D effort and the rationalisation of 22 plants in 5 European countries was a major task that took five years to complete and cost more than 5 trillion Italian lire. Given its full product range, Iveco became one of the two leaders of the European market, with a 22% market share in 1990. In 1989, Garuzzo negotiated the acquisition of Ford New Holland, which had resulted from an earlier merging of Ford Tractor and New Holland Agriculture, a world leader in agricultural machinery. The integration with Fiat Geotech (which in turn included Fiat Trattori, Laverda and Hesston) led to the creation of a world leader under the simplified logo New Holland (later to become CNH). In 1991, a year of deep crisis for the car sector and for Fiat Auto, he was appointed chief operating officer (C.O.O. or “Direttore Generale”) of the Fiat group and chairman of Fiat Auto SpA, IVECO N.V. and New Holland N.V. He was one of the founding members of ACEA, the European Automobile Manufacturers Association, which he chaired in 1994 and 1995. In 1993, he was questioned by prosecutor Antonio Di Pietro in connection with the investigation called “Tangentopoli” or “Mani Pulite”, with allegations of bribery in the sale of buses by an Iveco dealer to the Milan municipality, but he suffered no adverse judicial consequences. He was forced to leave Fiat in 1996, when the Group had recovered from the bottom of the crisis, after a disagreement with the incumbent Fiat C.E.O. Cesare Romiti, of whom he said he held "a different approach to life and business". Current activity After working ten years in the private equity industry, in 2007 Garuzzo co-founded Mid Industry Capital, a holding company listed on the Milan Stock Exchange with a paid-in capital of €100 million, which he has chaired since the beginning. The company's aim is the acquisition of small or medium enterprises and their development in the medium-long term, by contributing funds, management and expertise, with a strong entrepreneurial attitude. The IGAV foundation The “Istituto Garuzzo per le Arti Visive” (IGAV) non-profit organisation has been particularly active in art relations between Italy and China. Its main exhibitions included “Nature and Metamorphosis” (Shanghai, Beijing, and Spoleto, 2006), “Subtle Energy of Matter” (Shanghai, Beijing, Shenzhen, Seoul, and Saluzzo, 2008), “Behind Body Boundaries” (Moscow, 2008), “Contemporary Energy.
Italian Attitudes” (jointly with Premio Terna, Shanghai, 2010), and a participation in the Shanghai Expo 2010. As an official participant in the “Year of China in Italy”, it contributed to organizing the exhibition “China New Design” (Milan and Turin, 2011, with the Ullens Center for Contemporary Art). IGAV's permanent collections of contemporary art are located in the palace of Castiglia, the 13th-century castle of the Marquisate of Saluzzo, which also served as a prison in the 19th and 20th centuries. References External links Istituto Garuzzo per le Arti Visive 1938 births Living people Chief operating officers Electronics engineers Italian industrialists Polytechnic University of Turin alumni
Giorgio Garuzzo
Engineering
1,579
21,499,105
https://en.wikipedia.org/wiki/Iridium%2033
Iridium 33 was a communications satellite launched by Russia for Iridium Communications. It was launched into low Earth orbit from Site 81/23 at the Baikonur Cosmodrome at 01:36 UTC on 14 September 1997, by a Proton-K rocket with a Block DM2 upper stage. The launch was arranged by International Launch Services (ILS). It was operated in Plane 3 of the Iridium satellite constellation, with an ascending node of 230.9°. Mission Iridium 33 was part of a commercial communications network consisting of a constellation of 66 LEO spacecraft. The system uses L-Band to provide global communications services through portable handsets. Commercial service began in 1998. The system employs ground stations with a master control complex in Lansdowne, Virginia, a backup in Italy, and a third engineering center in Chandler, Arizona. Spacecraft The spacecraft was 3-axis stabilized, with a hydrazine propulsion system. It had 2 solar panels with 1-axis articulation. The system employed L-Band using FDMA/TDMA to provide voice at 4.8 kbps and data at 2400 bps with a 16 dB margin. Each satellite had 48 spot beams for Earth coverage and used Ka-Band for crosslinks and ground commanding. Destruction On 10 February 2009, at 16:56 UTC, at about 800 km altitude, Kosmos 2251 (1993-036A) (a derelict Strela satellite) and Iridium 33 collided, resulting in the destruction of both spacecraft. NASA reported that a large amount of space debris was produced by the collision: 1,347 pieces of debris from Kosmos 2251 and 528 from Iridium 33. See also Kessler syndrome References Communications satellites in low Earth orbit 2009 in spaceflight Satellite collisions Satellites formerly orbiting Earth Spacecraft launched in 1997 Iridium satellites Spacecraft that broke apart in space
Iridium 33
Technology
381
77,886,772
https://en.wikipedia.org/wiki/WD%202317%2B1830
WD 2317+1830 (SDSS J231726.72+183049.6) is one of the first white dwarfs with lithium detected in its atmosphere. The white dwarf is surrounded by a debris disk and is actively accreting material. Researchers suggest that the presence of alkali metals indicates the accretion of crust material. Another work, however, cautions against using alkali metals as the sole indicator of crust material. They suggest that such objects could be polluted by mantle material instead. An analysis in 2024 finds that the abundance of lithium is in agreement with Big Bang nucleosynthesis (BBN) and galactic nucleosynthesis. WD 2317+1830 likely was a star with sub-solar metallicity, which is evident from its old age, as well as from its thick disk or halo kinematics. This low metallicity means that the planetesimals that formed around this old white dwarf had a composition more similar to BBN abundances. The lithium enhancement is not in agreement with the accretion of terrestrial continental crust material. The accretion of an exotic exoplanet is not ruled out, but the accretion of a primitive planetesimal is more likely. The accretion of an exomoon as a lithium source is excluded. WD 2317+1830 was first discovered in 2021 from Gaia and SDSS data as a candidate white dwarf. A first spectral analysis was published in 2020, identifying it as a DZ white dwarf. In 2021 observations with the Gran Telescopio Canarias were published. The white dwarf is massive and has a mass of 1.00 ± 0.02 solar masses. The cooling age was determined to be 9.5 ± 0.2 Gyr and the total age is 9.7 ± 0.2 Gyr. A more recent work found a higher temperature and a younger cooling age of about 6.4 Gyr. The researchers detected sodium, lithium and weak calcium absorption. The researchers also detected infrared excess, indicative of a debris disk, around this white dwarf. The disk is inclined by 70°, has an inner disk temperature of 1,500 K and an outer disk temperature of 500 K. In the past WD 2317+1830 had a mass of 4.8 ± 0.2 solar masses and was likely a B-type star. See also List of exoplanets and planetary debris around white dwarfs WD J2356−209 is another cool white dwarf with sodium detected LSPM J0207+3331 is another old white dwarf with a disk detected References white dwarfs circumstellar disks Pegasus (constellation)
WD 2317+1830
Astronomy
549
9,554,041
https://en.wikipedia.org/wiki/XM%20PCR
The XM PCR is a satellite receiver sold by XM Radio and discontinued in 2004, amidst piracy concerns. Programs allowed users to record every song played on an XM channel, quickly and cheaply building an MP3 library. History The Personal Computer Receiver (PCR) was first announced in 2003. The next year, XM pulled the PCRs from the market, reportedly due to music piracy. Enhancements Several enhancements have been created for the PCR, both software and hardware. In the software arena, PCR Replacement programs have been sprouting up on Internet forums and web sites. These are software packages that replace the interface included with the PCR, XMMT. Several features have been added to these new programs, including the ability to rip songs and build an MP3 library, time shift shows so that the user can listen at a more convenient time, control the radio via a web browser, and stream audio to other computers. Some web sites also offer a playlist log, which allows a user to browse a list of all the recently played songs or shows. A hardware modification has also been discovered that allows the addition of a TOSLINK optical output, allowing users to connect the PCR to the optical digital input on a home theater receiver. Replacements The XM Direct receiver, also marketed as the XM Commander, can now serve the same purpose as the PCR. While the XM Direct is intended for automotive use, the unit itself is controlled by RS-232 command signals, and so is easily adapted to PC control. When combined with a "smart cable", which is really just a USB to Serial cable and a wiring adapter to connect to the XM Direct's control port, the XM Direct supports some features not found on the original PCR. The XM Mini tuner may also hold promise for hardware tweakers. It uses the newest XM tuner and is much smaller than the XM Direct. Like the Direct, the Mini is designed to be used with an external system, in this case a home theater receiver. Unlike the Direct, the Mini is also capable of receiving XM's newest technologies, including HD audio. References XM Satellite Radio Satellite broadcasting
XM PCR
Engineering
454
55,090,703
https://en.wikipedia.org/wiki/Mibampator
Mibampator (developmental code name LY-451395) is a positive allosteric modulator (PAM) of the AMPA receptor (AMPAR), an ionotropic glutamate receptor, which was under development by Eli Lilly for the treatment of agitation/aggression in Alzheimer's disease but was never marketed. It reached phase II clinical trials prior to the discontinuation of its development. Mibampator belongs to the biarylpropylsulfonamide group of AMPAR PAMs, which also includes LY-404187, LY-503430, and PF-04958242 among others. It is a "high-impact" AMPAR potentiator, unlike "low-impact" AMPAR potentiators from other classes like CX-516 and its congener farampator (CX-691, ORG-24448), and is able to elicit comparatively more robust increases in AMPAR signaling. In animals, high-impact AMPAR potentiators enhance cognition and memory at low doses, but produce motor coordination disruptions, convulsions, and neurotoxicity at higher doses. Mibampator failed to produce cognitive improvement in patients with Alzheimer's disease, though it did show improvements in neuropsychiatric measures. A caveat of the study was that the maximally tolerated dosage of the drug could not be used due to toxicity, and dosages in the same range in rodents notably failed to improve memory-related behavior. See also AMPA receptor positive allosteric modulator References External links Mibampator - AdisInsight Abandoned drugs AMPA receptor positive allosteric modulators Nootropics Sulfonamides
Mibampator
Chemistry
367
497,481
https://en.wikipedia.org/wiki/Hormesis
Hormesis is a two-phased dose-response relationship to an environmental agent whereby low-dose amounts have a beneficial effect and high-dose amounts are either inhibitory to function or toxic. Within the hormetic zone, the biological response to low-dose amounts of some stressors is generally favorable (a numerical sketch of such a biphasic dose-response curve is given at the end of this article). An example is the breathing of oxygen, which is required in low amounts (in air) via respiration in living animals, but can be toxic in high amounts, even in a managed clinical setting. In toxicology, hormesis is a dose-response phenomenon to xenobiotics or other stressors. In physiology and nutrition, hormesis has regions extending from low-dose deficiencies to homeostasis, and potential toxicity at high levels. Physiological concentrations of an agent above or below homeostasis may adversely affect an organism, where the hormetic zone is a region of homeostasis of balanced nutrition. In pharmacology, the hormetic zone is similar to the therapeutic window. In the context of toxicology, the hormesis model of dose response is vigorously debated. The biochemical mechanisms by which hormesis works (particularly in applied cases pertaining to behavior and toxins) remain under early laboratory research and are not well understood. Etymology The term "hormesis" derives from Greek hórmēsis for "rapid motion, eagerness", itself from an ancient Greek verb meaning "to excite". The same Greek root provides the word hormone. The term "hormetics" is used for the study of hormesis. The word hormesis was first reported in English in 1943. History A form of hormesis famous in antiquity was Mithridatism, the practice whereby Mithridates VI of Pontus supposedly made himself immune to a variety of toxins by regular exposure to small doses. Mithridate and theriac, polypharmaceutical electuaries claiming descent from his formula and initially including flesh from poisonous animals, were consumed for centuries by emperors, kings, and queens as protection against poison and ill health. In the Renaissance, the Swiss doctor Paracelsus said, "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison." German pharmacologist Hugo Schulz first described such a phenomenon in 1888 following his own observations that the growth of yeast could be stimulated by small doses of poisons. This was coupled with the work of German physician Rudolph Arndt, who studied animals given low doses of drugs, eventually giving rise to the Arndt–Schulz rule. Arndt's advocacy of homeopathy contributed to the rule's diminished credibility in the 1920s and 1930s. The term "hormesis" was coined and used for the first time in a scientific paper by Chester M. Southam and J. Ehrlich in 1943 in the journal Phytopathology, volume 33, pp. 517–541. In 2004, Edward Calabrese evaluated the concept of hormesis. Over 600 substances show a U-shaped dose–response relationship; Calabrese and Baldwin wrote: "One percent (195 out of 20,285) of the published articles contained 668 dose-response relationships that met the entry criteria [of a U-shaped response indicative of hormesis]". Examples Carbon monoxide Carbon monoxide is produced in small quantities across phylogenetic kingdoms, where it has essential roles as a neurotransmitter (subcategorized as a gasotransmitter). The majority of endogenous carbon monoxide is produced by heme oxygenase; the loss of heme oxygenase and subsequent loss of carbon monoxide signaling has catastrophic implications for an organism.
In addition to physiological roles, small amounts of carbon monoxide can be inhaled or administered in the form of carbon monoxide-releasing molecules as a therapeutic agent. Regarding the hormetic curve graph: Deficiency zone: an absence of carbon monoxide signaling has toxic implications. Hormetic zone / region of homeostasis: a small amount of carbon monoxide has a positive effect (essential as a neurotransmitter, beneficial as a pharmaceutical). Toxicity zone: excessive exposure results in carbon monoxide poisoning. Oxygen Many organisms maintain a hormesis relationship with oxygen, which follows a hormetic curve similar to carbon monoxide: Deficiency zone: hypoxia / asphyxia. Hormetic zone / region of homeostasis. Toxicity zone: oxidative stress. Physical exercise Physical exercise intensity may exhibit a hormetic curve. Individuals with low levels of physical activity are at risk for some diseases; however, individuals engaged in moderate, regular exercise may experience less disease risk. Mitohormesis The possible effect of small amounts of oxidative stress is under laboratory research. Mitochondria are sometimes described as "cellular power plants" because they generate most of the cell's supply of adenosine triphosphate (ATP), a source of chemical energy. Reactive oxygen species (ROS) have been dismissed as unwanted byproducts of oxidative phosphorylation in mitochondria by the proponents of the free-radical theory of aging promoted by Denham Harman. The free-radical theory states that compounds inactivating ROS would lead to a reduction of oxidative stress and thereby produce an increase in lifespan, although this theory holds only in basic research. However, in over 19 clinical trials, "nutritional and genetic interventions to boost antioxidants have generally failed to increase life span." Whether this concept applies to humans remains to be shown, although a 2007 epidemiological study supports the possibility of mitohormesis, indicating that supplementation with beta-carotene, vitamin A or vitamin E may increase disease prevalence in humans. More recent studies have reported that rapamycin exhibits hormesis, where low doses can enhance cellular longevity by partially inhibiting mTOR, unlike higher doses that are toxic due to complete inhibition. This partial inhibition of mTOR (by the hormetic effect of low-dose rapamycin) modulates mTOR–mitochondria cross-talk, thereby demonstrating mitohormesis and consequently reducing oxidative damage, metabolic dysregulation, and mitochondrial dysfunction, thus slowing cellular aging. Alcohol Alcohol is believed to be hormetic in preventing heart disease and stroke, although the benefits of light drinking may have been exaggerated. The gut microbiome of a typical healthy individual naturally ferments small amounts of ethanol, and in rare cases dysbiosis leads to auto-brewery syndrome; therefore, whether the benefits of alcohol are derived from the behavior of consuming alcoholic drinks or arise as a homeostasis factor in normal physiology via metabolites from commensal microbiota remains unclear. In 2012, researchers at UCLA found that tiny amounts (1 mM, or 0.005%) of ethanol doubled the lifespan of Caenorhabditis elegans, a roundworm frequently used in biological studies, when the worms were starved of other nutrients. Higher doses of 0.4% provided no longevity benefit. However, worms exposed to 0.005% did not develop normally (their development was arrested).
The authors argue that the worms were using ethanol as an alternative energy source in the absence of other nutrition, or had initiated a stress response. They did not test the effect of ethanol on worms fed a normal diet. Methylmercury In 2010, a paper in the journal Environmental Toxicology & Chemistry showed that low doses of methylmercury, a potent neurotoxic pollutant, improved the hatching rate of mallard eggs. The author of the study, Gary Heinz, who led the study for the U.S. Geological Survey at the Patuxent Wildlife Research Center in Beltsville, stated that other explanations are possible. For instance, the flock he studied might have harbored some low, subclinical infection, and mercury, well known to be antimicrobial, might have killed the infection that otherwise hurt reproduction in the untreated birds. Radiation Ionizing radiation Hormesis has been observed in a number of cases in humans and animals exposed to chronic low doses of ionizing radiation. A-bomb survivors who received high doses exhibited shortened lifespan and increased cancer mortality, but those who received low doses had lower cancer mortality than the Japanese average. In Taiwan, recycled radiocontaminated steel was inadvertently used in the construction of over 100 apartment buildings, causing the long-term exposure of 10,000 people. The average dose rate was 50 mSv/year and a subset of the population (1,000 people) received a total dose over 4,000 mSv over ten years. Under the linear no-threshold model widely used by regulatory bodies, the expected cancer deaths in this population would have been 302, with 70 caused by the extra ionizing radiation and the remainder caused by natural background radiation. The observed cancer rate, though, was quite low, at 7 cancer deaths, when 232 would be predicted by the LNT model had they not been exposed to the radiation from the building materials. Ionizing radiation hormesis appears to be at work. Chemical and ionizing radiation combined No experiment can be performed in perfect isolation. Thick lead shielding around a chemical dose experiment to rule out the effects of ionizing radiation can be built and rigorously controlled for in the laboratory, but certainly not in the field. Likewise, the same applies to ionizing radiation studies. Ionizing radiation is released when an unstable atom decays, creating new substances and energy in the form of an electromagnetic wave. The resulting materials are then free to interact with any environmental elements, and the energy released can also be used as a catalyst in further ionizing radiation interactions. The resulting confusion in the low-dose exposure field (radiation and chemical) arises from a lack of consideration of this concept, as described by Mothersill and Seymour. Nucleotide excision repair Veterans of the Gulf War (1991) who suffered from the persistent symptoms of Gulf War Illness (GWI) were likely exposed to stresses from toxic chemicals and/or radiation. The DNA-damaging (genotoxic) effects of such exposures can be, at least partially, overcome by the DNA nucleotide excision repair (NER) pathway. Lymphocytes from GWI veterans exhibited a significantly elevated level of NER repair. It was suggested that this increased NER capability in exposed veterans was likely a hormetic response, that is, an induced protective response resulting from battlefield exposure.
Applications Effects in aging One of the areas where the concept of hormesis has been explored extensively with respect to its applicability is aging. Since the basic survival capacity of any biological system depends on its homeostatic ability, biogerontologists proposed that exposing cells and organisms to mild stress should result in an adaptive or hormetic response with various biological benefits. This idea is supported by preliminary evidence showing that repetitive mild stress exposure may have anti-aging effects in laboratory models. Some mild stresses used for such studies on the application of hormesis in aging research and interventions are heat shock, irradiation, prooxidants, hypergravity, and food restriction. Compounds that may modulate stress responses in cells in this way have been termed "hormetins". Controversy Hormesis suggests that dangerous substances may have benefits at low doses. Concerns exist that the concept has been leveraged by lobbyists to weaken environmental regulations of some well-known toxic substances in the US. Radiation controversy The hypothesis of hormesis has generated the most controversy when applied to ionizing radiation. This hypothesis is called radiation hormesis. For policy-making purposes, the commonly accepted model of dose response in radiobiology is the linear no-threshold model (LNT), which assumes a strictly linear dependence between the risk of radiation-induced adverse health effects and radiation dose, implying that there is no safe dose of radiation for humans. Nonetheless, many countries including the Czech Republic, Germany, Austria, Poland, and the United States have radon therapy centers whose primary operating principle is the assumption of radiation hormesis, or the beneficial impact of small doses of radiation on human health. Countries such as Germany and Austria at the same time have imposed very strict antinuclear regulations, which have been described as radiophobic inconsistency. The United States National Research Council (part of the National Academy of Sciences), the National Council on Radiation Protection and Measurements (a body commissioned by the United States Congress) and the United Nations Scientific Committee on the Effects of Ionizing Radiation all agree that radiation hormesis is not clearly shown, nor clearly the rule for radiation doses. The United States–based National Council on Radiation Protection and Measurements stated in 2001 that evidence for radiation hormesis is insufficient and radiation protection authorities should continue to apply the LNT model for purposes of risk estimation. A 2005 report commissioned by the French National Academy concluded that evidence for hormesis occurring at low doses is sufficient and LNT should be reconsidered as the methodology used to estimate risks from low-level sources of radiation, such as deep geological repositories for nuclear waste. Policy consequences Hormesis remains largely unknown to the public, and taking it into account when assessing the exposure risk of small doses of a possible toxin would require a policy change. See also Calorie restriction Michael Ristow Petkau effect Radiation hormesis Stochastic resonance Mithridatism Antifragility Xenohormesis References External links International Dose-Response Society Clinical pharmacology Radiobiology Toxicology Health paradoxes
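To make the biphasic shape described in this article concrete, the sketch below evaluates the Brain–Cousens modification of the log-logistic dose-response model, a functional form commonly used to fit hormetic data (for example in the R drc package). All parameter values are hypothetical and chosen only to produce a visible low-dose stimulation followed by high-dose inhibition; they do not describe any real substance.

import numpy as np

def brain_cousens(dose, b, c, d, e, f):
    """Brain-Cousens hormetic dose-response model.

    response = c + (d - c + f*dose) / (1 + exp(b*(log(dose) - log(e))))
    d is the zero-dose (control) response, c the lower limit at high dose,
    f the hormesis (low-dose stimulation) term, b the slope, e a position
    parameter. Doses must be strictly positive here; as dose -> 0 the model
    tends to the control response d.
    """
    dose = np.asarray(dose, dtype=float)
    return c + (d - c + f * dose) / (1.0 + np.exp(b * (np.log(dose) - np.log(e))))

if __name__ == "__main__":
    params = dict(b=2.0, c=10.0, d=100.0, e=5.0, f=30.0)  # hypothetical values
    doses = np.array([0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 50.0])
    for x, y in zip(doses, brain_cousens(doses, **params)):
        print(f"dose {x:6.2f} -> response {y:6.1f}")
    # Low doses lift the response above the control level of 100 (stimulation),
    # while high doses drive it down toward the lower limit of 10 (toxicity).

In practice such a curve would be fitted to measured dose-response data rather than evaluated at made-up parameters; the point here is only to show the U- or inverted-U-shaped relationship that distinguishes hormesis from a monotonic dose-response model.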
Hormesis
Chemistry,Biology,Environmental_science
2,773
52,427,266
https://en.wikipedia.org/wiki/Nandrolone%20acetate
Nandrolone acetate, also known as 19-nortestosterone 17β-acetate or as estr-4-en-17β-ol-3-one 17β-acetate, is a synthetic, injected anabolic–androgenic steroid (AAS) and a derivative of 19-nortestosterone (nandrolone) that was never marketed. It is an androgen ester – specifically, the C17β acetate ester of nandrolone. See also List of androgen esters § Nandrolone esters References Abandoned drugs Acetate esters Anabolic–androgenic steroids Nandrolone esters Progestogens
Nandrolone acetate
Chemistry
143
23,335
https://en.wikipedia.org/wiki/Parsec
The parsec (symbol: pc) is a unit of length used to measure the large distances to astronomical objects outside the Solar System, approximately equal to or (AU), i.e. . The parsec unit is obtained by the use of parallax and trigonometry, and is defined as the distance at which 1 AU subtends an angle of one arcsecond ( of a degree). The nearest star, Proxima Centauri, is about from the Sun: from that distance, the gap between the Earth and the Sun spans slightly less than one arcsecond. Most stars visible to the naked eye are within a few hundred parsecs of the Sun, with the most distant at a few thousand parsecs, and the Andromeda Galaxy at over 700,000 parsecs. The word parsec is a shortened form of a distance corresponding to a parallax of one second, coined by the British astronomer Herbert Hall Turner in 1913. The unit was introduced to simplify the calculation of astronomical distances from raw observational data. Partly for this reason, it is the unit preferred in astronomy and astrophysics, though in popular science texts and common usage the light-year remains prominent. Although parsecs are used for the shorter distances within the Milky Way, multiples of parsecs are required for the larger scales in the universe, including kiloparsecs (kpc) for the more distant objects within and around the Milky Way, megaparsecs (Mpc) for mid-distance galaxies, and gigaparsecs (Gpc) for many quasars and the most distant galaxies. In August 2015, the International Astronomical Union (IAU) passed Resolution B2 which, as part of the definition of a standardized absolute and apparent bolometric magnitude scale, mentioned an existing explicit definition of the parsec as exactly  au, or exactly  metres, given the IAU 2012 exact definition of the astronomical unit in metres. This corresponds to the small-angle definition of the parsec found in many astronomical references. History and derivation Imagining an elongated right triangle in space, where the shorter leg measures one au (astronomical unit, the average Earth–Sun distance) and the subtended angle of the vertex opposite that leg measures one arcsecond ( of a degree), the parsec is defined as the length of the adjacent leg. The value of a parsec can be derived through the rules of trigonometry. The distance from Earth whereupon the radius of its solar orbit subtends one arcsecond. One of the oldest methods used by astronomers to calculate the distance to a star is to record the difference in angle between two measurements of the position of the star in the sky. The first measurement is taken from the Earth on one side of the Sun, and the second is taken approximately half a year later, when the Earth is on the opposite side of the Sun. The distance between the two positions of the Earth when the two measurements were taken is twice the distance between the Earth and the Sun. The difference in angle between the two measurements is twice the parallax angle, which is formed by lines from the Sun and Earth to the star at the distant vertex. Then the distance to the star could be calculated using trigonometry. The first successful published direct measurements of an object at interstellar distances were undertaken by German astronomer Friedrich Wilhelm Bessel in 1838, who used this approach to calculate the 3.5-parsec distance of 61 Cygni. The parallax of a star is defined as half of the angular distance that a star appears to move relative to the celestial sphere as Earth orbits the Sun. 
Equivalently, it is the subtended angle, from that star's perspective, of the semimajor axis of the Earth's orbit. Substituting the star's parallax for the one arcsecond angle in the imaginary right triangle, the long leg of the triangle will measure the distance from the Sun to the star. A parsec can be defined as the length of the right triangle side adjacent to the vertex occupied by a star whose parallax angle is one arcsecond. The use of the parsec as a unit of distance follows naturally from Bessel's method, because the distance in parsecs can be computed simply as the reciprocal of the parallax angle in arcseconds (i.e.: if the parallax angle is 1 arcsecond, the object is 1 pc from the Sun; if the parallax angle is 0.5 arcseconds, the object is 2 pc away; etc.). No trigonometric functions are required in this relationship because the very small angles involved mean that the approximate solution of the skinny triangle can be applied. Though it may have been used before, the term parsec was first mentioned in an astronomical publication in 1913. Astronomer Royal Frank Watson Dyson expressed his concern for the need of a name for that unit of distance. He proposed the name astron, but mentioned that Carl Charlier had suggested siriometer and Herbert Hall Turner had proposed parsec. It was Turner's proposal that stuck. Calculating the value of a parsec By the 2015 definition, of arc length subtends an angle of at the center of the circle of radius . That is, 1 pc = 1 au/tan() ≈ 206,264.8 au by definition. Converting from degree/minute/second units to radians, , and (exact by the 2012 definition of the au) Therefore, (exact by the 2015 definition) Therefore, (to the nearest metre). Approximately, In the diagram above (not to scale), S represents the Sun, and E the Earth at one point in its orbit (such as to form a right angle at S). Thus the distance ES is one astronomical unit (au). The angle SDE is one arcsecond ( of a degree) so by definition D is a point in space at a distance of one parsec from the Sun. Through trigonometry, the distance SD is calculated as follows: Because the astronomical unit is defined to be , the following can be calculated: Therefore, if ≈ , Then ≈ A corollary states that a parsec is also the distance from which a disc that is one au in diameter must be viewed for it to have an angular diameter of one arcsecond (by placing the observer at D and a disc spanning ES). Mathematically, to calculate distance, given obtained angular measurements from instruments in arcseconds, the formula would be: where θ is the measured angle in arcseconds, Distanceearth-sun is a constant ( or ). The calculated stellar distance will be in the same measurement unit as used in Distanceearth-sun (e.g. if Distanceearth-sun = , unit for Distancestar is in astronomical units; if Distanceearth-sun = , unit for Distancestar is in light-years). The length of the parsec used in IAU 2015 Resolution B2 (exactly astronomical units) corresponds exactly to that derived using the small-angle calculation. This differs from the classic inverse-tangent definition by about , i.e.: only after the 11th significant figure. As the astronomical unit was defined by the IAU (2012) as an exact length in metres, so now the parsec corresponds to an exact length in metres. To the nearest meter, the small-angle parsec corresponds to . 
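Several numeric values in the calculation above did not survive extraction. The short sketch below works through the same arithmetic: the IAU 2015 small-angle definition (exactly 648000/π astronomical units), the classic inverse-tangent definition, and the reciprocal-parallax distance rule described earlier. The astronomical unit value is the exact IAU 2012 definition; the 0.7687-arcsecond parallax used as an example is an approximate figure for Proxima Centauri chosen for illustration, not a value taken from this article.

```python
import math

AU_M = 149_597_870_700  # metres, exact by the IAU 2012 definition of the au

# Small-angle (IAU 2015) definition: 1 pc = (648000 / pi) au exactly.
parsec_au_small_angle = 648_000 / math.pi            # ~206,264.8 au
parsec_m_small_angle = parsec_au_small_angle * AU_M  # ~3.0857e16 m

# Classic inverse-tangent definition: the distance at which 1 au subtends 1 arcsecond.
one_arcsec_rad = math.radians(1 / 3600)
parsec_m_tan = AU_M / math.tan(one_arcsec_rad)

print(f"1 pc ≈ {parsec_au_small_angle:,.1f} au ≈ {parsec_m_small_angle:.4e} m")
print(f"Difference between the two definitions: {parsec_m_tan - parsec_m_small_angle:.2e} m")

# Distance from parallax: for such small angles, d [pc] = 1 / p [arcsec].
def distance_pc(parallax_arcsec: float) -> float:
    return 1.0 / parallax_arcsec

print(f"A star with a 0.7687\" parallax is about {distance_pc(0.7687):.2f} pc away")
```

Running this shows the two definitions agreeing to roughly the 11th significant figure, consistent with the statement above.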
Usage and measurement The parallax method is the fundamental calibration step for distance determination in astrophysics; however, the accuracy of ground-based telescope measurements of parallax angle is limited to about , and thus to stars no more than distant. This is because the Earth's atmosphere limits the sharpness of a star's image. Space-based telescopes are not limited by this effect and can accurately measure distances to objects beyond the limit of ground-based observations. Between 1989 and 1993, the Hipparcos satellite, launched by the European Space Agency (ESA), measured parallaxes for about stars with an astrometric precision of about , and obtained accurate measurements for stellar distances of stars up to away. ESA's Gaia satellite, which launched on 19 December 2013, is intended to measure one billion stellar distances to within s, producing errors of 10% in measurements as far as the Galactic Centre, about away in the constellation of Sagittarius. Distances in parsecs Distances less than a parsec Distances expressed in fractions of a parsec usually involve objects within a single star system. So, for example: One astronomical unit (au), the distance from the Sun to the Earth, is just under . The most distant space probe, Voyager 1, was from Earth . Voyager 1 took to cover that distance. The Oort cloud is estimated to be approximately in diameter Parsecs and kiloparsecs Distances expressed in parsecs (pc) include distances between nearby stars, such as those in the same spiral arm or globular cluster. A distance of is denoted by the kiloparsec (kpc). Astronomers typically use kiloparsecs to express distances between parts of a galaxy or within groups of galaxies. So, for example : Proxima Centauri, the nearest known star to Earth other than the Sun, is about away by direct parallax measurement. The distance to the open cluster Pleiades is () from us per Hipparcos parallax measurement. The centre of the Milky Way is more than from the Earth and the Milky Way is roughly across. ESO 383-76, one of the largest known galaxies, has a diameter of . The Andromeda Galaxy (M31) is about away from the Earth. Megaparsecs and gigaparsecs Astronomers typically express the distances between neighbouring galaxies and galaxy clusters in megaparsecs (Mpc). A megaparsec is one million parsecs, or about 3,260,000 light years. Sometimes, galactic distances are given in units of Mpc/h (as in "50/h Mpc", also written ""). h is a constant (the "dimensionless Hubble constant") in the range reflecting the uncertainty in the value of the Hubble constant H for the rate of expansion of the universe: . The Hubble constant becomes relevant when converting an observed redshift z into a distance d using the formula . One gigaparsec (Gpc) is one billion parsecs — one of the largest units of length commonly used. One gigaparsec is about , or roughly of the distance to the horizon of the observable universe (dictated by the cosmic microwave background radiation). Astronomers typically use gigaparsecs to express the sizes of large-scale structures such as the size of, and distance to, the CfA2 Great Wall; the distances between galaxy clusters; and the distance to quasars. For example: The Andromeda Galaxy is about from the Earth. The nearest large galaxy cluster, the Virgo Cluster, is about from the Earth. The galaxy RXJ1242-11, observed to have a supermassive black hole core similar to the Milky Way's, is about from the Earth. 
The galaxy filament Hercules–Corona Borealis Great Wall, currently the largest known structure in the universe, is about across. The particle horizon (the boundary of the observable universe) has a radius of about . Volume units To determine the number of stars in the Milky Way, volumes in cubic kiloparsecs (kpc3) are selected in various directions. All the stars in these volumes are counted and the total number of stars statistically determined. The number of globular clusters, dust clouds, and interstellar gas is determined in a similar fashion. To determine the number of galaxies in superclusters, volumes in cubic megaparsecs (Mpc3) are selected. All the galaxies in these volumes are classified and tallied. The total number of galaxies can then be determined statistically. The huge Boötes void is measured in cubic megaparsecs. In physical cosmology, volumes of cubic gigaparsecs (Gpc3) are selected to determine the distribution of matter in the visible universe and to determine the number of galaxies and quasars. The Sun is currently the only star in its cubic parsec, (pc3) but in globular clusters the stellar density could be from . The observational volume of gravitational wave interferometers (e.g., LIGO, Virgo) is stated in terms of cubic megaparsecs (Mpc3) and is essentially the value of the effective distance cubed. See also Attoparsec Distance measure In popular culture The parsec was used incorrectly as a measurement of time by Han Solo in the first Star Wars film, when he claimed his ship, the Millennium Falcon "made the Kessel Run in less than 12 parsecs", originally with the intention of presenting Solo as "something of a bull artist who didn't always know precisely what he was talking about". The claim was repeated in The Force Awakens, but this was retconned in Solo: A Star Wars Story, by stating the Millennium Falcon traveled a shorter distance (as opposed to a quicker time) due to a more dangerous route through the Kessel Run, enabled by its speed and maneuverability. It is also used incorrectly in The Mandalorian. Notes References External links Units of length Units of measurement in astronomy Concepts in astronomy Parallax 1913 in science
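The megaparsec section above mentions the "Mpc/h" convention and a formula for converting an observed redshift into a distance; that formula did not survive extraction. The sketch below assumes the standard low-redshift Hubble relation d ≈ cz/H0 and an illustrative Hubble constant of 70 km/s/Mpc (so h = 0.7); these are assumptions for demonstration, not values asserted by the article.

```python
# Illustrative redshift-to-distance conversion and Mpc/h bookkeeping.
# Assumes the low-redshift relation d ≈ c*z / H0 and H0 = 70 km/s/Mpc.

C_KM_S = 299_792.458          # speed of light in km/s
H0_KM_S_PER_MPC = 70.0        # assumed Hubble constant
h = H0_KM_S_PER_MPC / 100.0   # dimensionless Hubble constant

def distance_mpc(z: float) -> float:
    """Approximate distance in megaparsecs for a small redshift z."""
    return C_KM_S * z / H0_KM_S_PER_MPC

z = 0.02                      # a nearby galaxy, chosen for illustration
d_mpc = distance_mpc(z)
d_in_mpc_over_h = d_mpc * h   # the number quoted when distances are given "per h"
print(f"z = {z}: d ≈ {d_mpc:.1f} Mpc, i.e. about {d_in_mpc_over_h:.0f}/h Mpc (h = {h})")
```

Quoting the result as "60/h Mpc" keeps the uncertainty in H0 explicit: readers can divide by their preferred value of h to recover a physical distance.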
Parsec
Physics,Astronomy,Mathematics
2,824
147,909
https://en.wikipedia.org/wiki/Linearity%20of%20differentiation
In calculus, the derivative of any linear combination of functions equals the same linear combination of the derivatives of the functions; this property is known as linearity of differentiation, the rule of linearity, or the superposition rule for differentiation. It is a fundamental property of the derivative that encapsulates in a single rule two simpler rules of differentiation, the sum rule (the derivative of the sum of two functions is the sum of the derivatives) and the constant factor rule (the derivative of a constant multiple of a function is the same constant multiple of the derivative). Thus it can be said that differentiation is linear, or the differential operator is a linear operator. Statement and derivation Let and be functions, with and constants. Now consider By the sum rule in differentiation, this is and by the constant factor rule in differentiation, this reduces to Therefore, Omitting the brackets, this is often written as: Detailed proofs/derivations from definition We can prove the entire linearity principle at once, or, we can prove the individual steps (of constant factor and adding) individually. Here, both will be shown. Proving linearity directly also proves the constant factor rule, the sum rule, and the difference rule as special cases. The sum rule is obtained by setting both constant coefficients to . The difference rule is obtained by setting the first constant coefficient to and the second constant coefficient to . The constant factor rule is obtained by setting either the second constant coefficient or the second function to . (From a technical standpoint, the domain of the second function must also be considered - one way to avoid issues is setting the second function equal to the first function and the second constant coefficient equal to . One could also define both the second constant coefficient and the second function to be 0, where the domain of the second function is a superset of the first function, among other possibilities.) On the contrary, if we first prove the constant factor rule and the sum rule, we can prove linearity and the difference rule. Proving linearity is done by defining the first and second functions as being two other functions being multiplied by constant coefficients. Then, as shown in the derivation from the previous section, we can first use the sum law while differentiation, and then use the constant factor rule, which will reach our conclusion for linearity. In order to prove the difference rule, the second function can be redefined as another function multiplied by the constant coefficient of . This would, when simplified, give us the difference rule for differentiation. In the proofs/derivations below, the coefficients are used; they correspond to the coefficients above. Linearity (directly) Let . Let be functions. Let be a function, where is defined only where and are both defined. (In other words, the domain of is the intersection of the domains of and .) Let be in the domain of . Let . We want to prove that . By definition, we can see that In order to use the limits law for the sum of limits, we need to know that and both individually exist. For these smaller limits, we need to know that and both individually exist to use the coefficient law for limits. By definition, and . So, if we know that and both exist, we will know that and both individually exist. 
This allows us to use the coefficient law for limits to write and With this, we can go back to apply the limit law for the sum of limits, since we know that and both individually exist. From here, we can directly go back to the derivative we were working on.Finally, we have shown what we claimed in the beginning: . Sum Let be functions. Let be a function, where is defined only where and are both defined. (In other words, the domain of is the intersection of the domains of and .) Let be in the domain of . Let . We want to prove that . By definition, we can see that In order to use the law for the sum of limits here, we need to show that the individual limits, and both exist. By definition, and , so the limits exist whenever the derivatives and exist. So, assuming that the derivatives exist, we can continue the above derivation Thus, we have shown what we wanted to show, that: . Difference Let be functions. Let be a function, where is defined only where and are both defined. (In other words, the domain of is the intersection of the domains of and .) Let be in the domain of . Let . We want to prove that . By definition, we can see that: In order to use the law for the difference of limits here, we need to show that the individual limits, and both exist. By definition, and that , so these limits exist whenever the derivatives and exist. So, assuming that the derivatives exist, we can continue the above derivation Thus, we have shown what we wanted to show, that: . Constant coefficient Let be a function. Let ; will be the constant coefficient. Let be a function, where j is defined only where is defined. (In other words, the domain of is equal to the domain of .) Let be in the domain of . Let . We want to prove that . By definition, we can see that: Now, in order to use a limit law for constant coefficients to show that we need to show that exists. However, , by the definition of the derivative. So, if exists, then exists. Thus, if we assume that exists, we can use the limit law and continue our proof. Thus, we have proven that when , we have . See also References Articles containing proofs Differential calculus Differentiation rules Theorems in analysis Theorems in calculus
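The inline formulas in the statement and proofs above appear to have been lost in extraction. The following block restates the rule and its two-step derivation in standard notation, which is what the surrounding prose describes; the symbol choices (f and g for the functions, a and b for the constants) are mine, not the article's.

```latex
% Reconstruction of the linearity rule sketched above (amsmath assumed).
% f, g are differentiable functions; a, b are constants.
\begin{align*}
  \frac{d}{dx}\bigl(a f(x) + b g(x)\bigr)
    &= \frac{d}{dx}\bigl(a f(x)\bigr) + \frac{d}{dx}\bigl(b g(x)\bigr)
       && \text{(sum rule)}\\
    &= a\,\frac{d}{dx} f(x) + b\,\frac{d}{dx} g(x)
       && \text{(constant factor rule)}
\end{align*}
Omitting brackets, this is often written $(af + bg)' = af' + bg'$.
Directly from the limit definition, provided $f'(x)$ and $g'(x)$ both exist,
\[
  (a f + b g)'(x)
  \;=\; \lim_{h \to 0} \frac{\bigl(a f(x+h)+b g(x+h)\bigr)-\bigl(a f(x)+b g(x)\bigr)}{h}
  \;=\; a f'(x) + b g'(x).
\]
```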
Linearity of differentiation
Mathematics
1,150
372,090
https://en.wikipedia.org/wiki/Open%20and%20closed%20maps
In mathematics, more specifically in topology, an open map is a function between two topological spaces that maps open sets to open sets. That is, a function is open if for any open set in the image is open in Likewise, a closed map is a function that maps closed sets to closed sets. A map may be open, closed, both, or neither; in particular, an open map need not be closed and vice versa. Open and closed maps are not necessarily continuous. Further, continuity is independent of openness and closedness in the general case and a continuous function may have one, both, or neither property; this fact remains true even if one restricts oneself to metric spaces. Although their definitions seem more natural, open and closed maps are much less important than continuous maps. Recall that, by definition, a function is continuous if the preimage of every open set of is open in (Equivalently, if the preimage of every closed set of is closed in ). Early study of open maps was pioneered by Simion Stoilow and Gordon Thomas Whyburn. Definitions and characterizations If is a subset of a topological space then let and (resp. ) denote the closure (resp. interior) of in that space. Let be a function between topological spaces. If is any set then is called the image of under Competing definitions There are two different competing, but closely related, definitions of "" that are widely used, where both of these definitions can be summarized as: "it is a map that sends open sets to open sets." The following terminology is sometimes used to distinguish between the two definitions. A map is called a "" if whenever is an open subset of the domain then is an open subset of 's codomain "" if whenever is an open subset of the domain then is an open subset of 's image where as usual, this set is endowed with the subspace topology induced on it by 's codomain Every strongly open map is a relatively open map. However, these definitions are not equivalent in general. Warning: Many authors define "open map" to mean " open map" (for example, The Encyclopedia of Mathematics) while others define "open map" to mean " open map". In general, these definitions are equivalent so it is thus advisable to always check what definition of "open map" an author is using. A surjective map is relatively open if and only if it is strongly open; so for this important special case the definitions are equivalent. More generally, a map is relatively open if and only if the surjection is a strongly open map. Because is always an open subset of the image of a strongly open map must be an open subset of its codomain In fact, a relatively open map is a strongly open map if and only if its image is an open subset of its codomain. In summary, A map is strongly open if and only if it is relatively open and its image is an open subset of its codomain. By using this characterization, it is often straightforward to apply results involving one of these two definitions of "open map" to a situation involving the other definition. The discussion above will also apply to closed maps if each instance of the word "open" is replaced with the word "closed". Open maps A map is called an or a if it satisfies any of the following equivalent conditions: Definition: maps open subsets of its domain to open subsets of its codomain; that is, for any open subset of , is an open subset of is a relatively open map and its image is an open subset of its codomain For every and every neighborhood of (however small), is a neighborhood of . 
We can replace the first or both instances of the word "neighborhood" with "open neighborhood" in this condition and the result will still be an equivalent condition: For every and every open neighborhood of , is a neighborhood of . For every and every open neighborhood of , is an open neighborhood of . for all subsets of where denotes the topological interior of the set. Whenever is a closed subset of then the set is a closed subset of This is a consequence of the identity which holds for all subsets If is a basis for then the following can be appended to this list: maps basic open sets to open sets in its codomain (that is, for any basic open set is an open subset of ). Closed maps A map is called a if whenever is a closed subset of the domain then is a closed subset of 's image where as usual, this set is endowed with the subspace topology induced on it by 's codomain A map is called a or a if it satisfies any of the following equivalent conditions: <li>Definition: maps closed subsets of its domain to closed subsets of its codomain; that is, for any closed subset of is a closed subset of is a relatively closed map and its image is a closed subset of its codomain for every subset for every closed subset for every closed subset Whenever is an open subset of then the set is an open subset of If is a net in and is a point such that in then converges in to the set The convergence means that every open subset of that contains will contain for all sufficiently large indices A surjective map is strongly closed if and only if it is relatively closed. So for this important special case, the two definitions are equivalent. By definition, the map is a relatively closed map if and only if the surjection is a strongly closed map. If in the open set definition of "continuous map" (which is the statement: "every preimage of an open set is open"), both instances of the word "open" are replaced with "closed" then the statement of results ("every preimage of a closed set is closed") is to continuity. This does not happen with the definition of "open map" (which is: "every image of an open set is open") since the statement that results ("every image of a closed set is closed") is the definition of "closed map", which is in general equivalent to openness. There exist open maps that are not closed and there also exist closed maps that are not open. This difference between open/closed maps and continuous maps is ultimately due to the fact that for any set only is guaranteed in general, whereas for preimages, equality always holds. Examples The function defined by is continuous, closed, and relatively open, but not (strongly) open. This is because if is any open interval in 's domain that does contain then where this open interval is an open subset of both and However, if is any open interval in that contains then which is not an open subset of 's codomain but an open subset of Because the set of all open intervals in is a basis for the Euclidean topology on this shows that is relatively open but not (strongly) open. If has the discrete topology (that is, all subsets are open and closed) then every function is both open and closed (but not necessarily continuous). For example, the floor function from to is open and closed, but not continuous. This example shows that the image of a connected space under an open or closed map need not be connected. Whenever we have a product of topological spaces the natural projections are open (as well as continuous). 
Since the projections of fiber bundles and covering maps are locally natural projections of products, these are also open maps. Projections need not be closed however. Consider for instance the projection on the first component; then the set is closed in but is not closed in However, for a compact space the projection is closed. This is essentially the tube lemma. To every point on the unit circle we can associate the angle of the positive -axis with the ray connecting the point with the origin. This function from the unit circle to the half-open interval [0,2π) is bijective, open, and closed, but not continuous. It shows that the image of a compact space under an open or closed map need not be compact. Also note that if we consider this as a function from the unit circle to the real numbers, then it is neither open nor closed. Specifying the codomain is essential. Sufficient conditions Every homeomorphism is open, closed, and continuous. In fact, a bijective continuous map is a homeomorphism if and only if it is open, or equivalently, if and only if it is closed. The composition of two (strongly) open maps is an open map and the composition of two (strongly) closed maps is a closed map. However, the composition of two relatively open maps need not be relatively open and similarly, the composition of two relatively closed maps need not be relatively closed. If is strongly open (respectively, strongly closed) and is relatively open (respectively, relatively closed) then is relatively open (respectively, relatively closed). Let be a map. Given any subset if is a relatively open (respectively, relatively closed, strongly open, strongly closed, continuous, surjective) map then the same is true of its restriction to the -saturated subset The categorical sum of two open maps is open, or of two closed maps is closed. The categorical product of two open maps is open, however, the categorical product of two closed maps need not be closed. A bijective map is open if and only if it is closed. The inverse of a bijective continuous map is a bijective open/closed map (and vice versa). A surjective open map is not necessarily a closed map, and likewise, a surjective closed map is not necessarily an open map. All local homeomorphisms, including all coordinate charts on manifolds and all covering maps, are open maps. A variant of the closed map lemma states that if a continuous function between locally compact Hausdorff spaces is proper then it is also closed. In complex analysis, the identically named open mapping theorem states that every non-constant holomorphic function defined on a connected open subset of the complex plane is an open map. The invariance of domain theorem states that a continuous and locally injective function between two -dimensional topological manifolds must be open. In functional analysis, the open mapping theorem states that every surjective continuous linear operator between Banach spaces is an open map. This theorem has been generalized to topological vector spaces beyond just Banach spaces. A surjective map is called an if for every there exists some such that is a for which by definition means that for every open neighborhood of is a neighborhood of in (note that the neighborhood is not required to be an neighborhood). Every surjective open map is an almost open map but in general, the converse is not necessarily true. 
If a surjection is an almost open map then it will be an open map if it satisfies the following condition (a condition that does depend in any way on 's topology ): whenever belong to the same fiber of (that is, ) then for every neighborhood of there exists some neighborhood of such that If the map is continuous then the above condition is also necessary for the map to be open. That is, if is a continuous surjection then it is an open map if and only if it is almost open and it satisfies the above condition. Properties Open or closed maps that are continuous If is a continuous map that is also open closed then: if is a surjection then it is a quotient map and even a hereditarily quotient map, A surjective map is called if for every subset the restriction is a quotient map. if is an injection then it is a topological embedding. if is a bijection then it is a homeomorphism. In the first two cases, being open or closed is merely a sufficient condition for the conclusion that follows. In the third case, it is necessary as well. Open continuous maps If is a continuous (strongly) open map, and then: where denotes the boundary of a set. where denote the closure of a set. If where denotes the interior of a set, then where this set is also necessarily a regular closed set (in ). In particular, if is a regular closed set then so is And if is a regular open set then so is If the continuous open map is also surjective then and moreover, is a regular open (resp. a regular closed) subset of if and only if is a regular open (resp. a regular closed) subset of If a net converges in to a point and if the continuous open map is surjective, then for any there exists a net in (indexed by some directed set ) such that in and is a subnet of Moreover, the indexing set may be taken to be with the product order where is any neighbourhood basis of directed by See also Notes Citations References General topology Theory of continuous functions Lemmas
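Many of the symbols in the definitions and equivalent conditions above were stripped during extraction. As a compact reference, the block below restates the basic (strong) definitions in standard notation; the symbols f, X, Y, U, C, V are chosen here for illustration and are not necessarily the article's own notation.

```latex
% Restatement of the basic definitions discussed above, in standard notation.
\begin{itemize}
  \item $f : X \to Y$ is an \emph{open map} if $f(U)$ is open in $Y$
        for every open subset $U \subseteq X$.
  \item $f : X \to Y$ is a \emph{closed map} if $f(C)$ is closed in $Y$
        for every closed subset $C \subseteq X$.
  \item By contrast, $f$ is \emph{continuous} if $f^{-1}(V)$ is open in $X$
        for every open $V \subseteq Y$; in general, openness, closedness,
        and continuity are independent properties of a map.
\end{itemize}
```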
Open and closed maps
Mathematics
2,678
3,735,409
https://en.wikipedia.org/wiki/Psychological%20resilience
Psychological resilience, or mental resilience, is the ability to cope mentally and emotionally with a crisis, or to return to pre-crisis status quickly. The term was popularized in the 1970s and 1980s by psychologist Emmy Werner as she conducted a forty-year-long study of a cohort of Hawaiian children who came from low socioeconomic status backgrounds. Numerous factors influence a person's level of resilience. Internal factors include personal characteristics such as self-esteem, self-regulation, and a positive outlook on life. External factors include social support systems, including relationships with family, friends, and community, as well as access to resources and opportunities. People can leverage psychological interventions and other strategies to enhance their resilience and better cope with adversity. These include cognitive-behavioral techniques, mindfulness practices, building psychosocial factors, fostering positive emotions, and promoting self-compassion. Overview A resilient person uses "mental processes and behaviors in promoting personal assets and protecting self from the potential negative effects of stressors". Psychological resilience is an adaptation in a person's psychological traits and experiences that allows them to regain or remain in a healthy mental state during crises/chaos without long-term negative consequences. It is difficult to measure and test this psychological construct because resilience can be interpreted in a variety of ways. Most psychological paradigms (biomedical, cognitive-behavioral, sociocultural, etc.) have their own perspective of what resilience looks like, where it comes from, and how it can be developed. There are numerous definitions of psychological resilience, most of which center around two concepts: adversity and positive adaptation. Positive emotions, social support, and hardiness can influence a person to become more resilient. A psychologically resilient person can resist adverse mental conditions that are often associated with unfavorable life circumstances. This differs from psychological recovery which is associated with returning to those mental conditions that preceded a traumatic experience or personal loss. Research on psychological resilience has shown that it plays a crucial role in promoting mental health and well-being. Resilient people are better equipped to navigate life's challenges, maintain positive emotions, and recover from setbacks. They demonstrate higher levels of self-efficacy, optimism, and problem-solving skills, which contribute to their ability to adapt and thrive in adverse situations. Resilience is a "positive adaptation" after a stressful or adverse situation. When a person is "bombarded by daily stress, it disrupts their internal and external sense of balance, presenting challenges as well as opportunities." The routine stressors of daily life can have positive impacts which promote resilience. Some psychologists believe that it is not stress itself that promotes resilience but rather the person's perception of their stress and of their level of control. Stress allows people to practice resilience over time, different levels of stress vary among different individuals and the reason for that being is unknown. However, it is known that some people can handle stress better than others. 
Stress can be experienced in a person's life course at times of difficult life transitions, involving developmental and social change; traumatic life events, including grief and loss; and environmental pressures, encompassing poverty and community violence. Resilience is the integrated adaptation of physical, mental, and spiritual aspects to circumstances, and a coherent sense of self that is able to maintain normative developmental tasks that occur at various stages of life. The Children's Institute of the University of Rochester explains that "resilience research is focused on studying those who engage in life with hope and humor despite devastating losses". Resilience is not only about overcoming a deeply stressful situation, but also coming out of such a situation with "competent functioning". Resiliency allows a person to rebound from adversity as a strengthened and more resourceful person. Some characteristics associated with psychological resilience include: an easy temperament, good self-esteem, planning skills, and a supportive environment inside and outside of the family. When an event is appraised as comprehensible (predictable), manageable (controllable), and somehow meaningful (explainable) a resilient response is more likely. Process Psychological resilience is commonly understood as a process. It can also be characterized as a tool a person develops over time, or as a personal trait of the person ("resiliency"). Most research shows resilience as the result of people being able to interact with their environments and participate in processes that either promote well-being or protect them against the overwhelming influence of relative risk. This research supports the model in which psychological resilience is seen as a process rather than a trait—something to develop or pursue, rather than a static endowment or endpoint. When people are faced with an adverse condition, there are three ways in which they may approach the situation. respond with anger or aggression become overwhelmed and shut down feel the emotion about the situation and appropriately handle the emotion Resilience is promoted through the third approach, which is employed by individuals who adapt and change their current patterns to cope with disruptive states, thereby enhancing their well-being.In contrast, the first and second approaches lead individuals to adopt a victim mentality, blaming others and rejecting coping methods even after a crisis has passed. These individuals tend to react instinctively rather than respond thoughtfully, clinging to negative emotions such as fear, anger, anxiety, distress, helplessness, and hopelessness. Such emotions decrease problem-solving abilities and weaken resilience, making it harder to recover. Resilient people, on the other hand, actively cope, bounce back, and find solutions. Their resilience is further supported by protective environments, including good families, schools, communities, and social policies, which provide cumulative protective factors that bolster their ability to withstand and recover from exposure to risk factors. Resilience can be viewed as a developmental process (the process of developing resilience), or as indicated by a response process. In the latter approach, the effects of an event or stressor on a situationally relevant indicator variable are studied, distinguishing immediate responses, dynamic responses, and recovery patterns. In response to a stressor, more-resilient people show some (but less than less-resilient people) increase in stress. 
The speed with which this stress response returns to pre-stressor levels is also indicative of a person's resilience. Biological models From a scientific standpoint, resilience’s contested definition is multifaceted in relation to genetics, revealing a complex link between biological mechanisms and resilience: "Resilience, conceptualized as a positive bio-psychological adaptation, has proven to be a useful theoretical context for understanding variables for predicting long-term health and well-being". Three notable bases for resilience—self-confidence, self-esteem and self-concept—each have roots in a different nervous system—respectively, the somatic nervous system, the autonomic nervous system, and the central nervous system. Research indicates that, like trauma, resilience is influenced by epigenetic modifications. Increased DNA methylation of the growth factor GDNF in certain brain regions promotes stress resilience, as do molecular adaptations of the blood–brain barrier. The two neurotransmitters primarily responsible for stress buffering within the brain are dopamine and endogenous opioids, as evidenced by research showing that dopamine and opioid antagonists increased stress response in both humans and animals. Primary and secondary rewards reduce negative reactivity. Primary rewards are stimuli that are attributed to basic needs, such as water, food, and physical well-being. On the other hand, secondary rewards are accomplished by experiences or social interactions of stress in the brain in both humans and animals. The relationship between social support and stress resilience is thought to be mediated by the oxytocin system's impact on the hypothalamic-pituitary-adrenal axis. Alongside such neurotransmitters, stress-induced alterations in brain structures, such as the prefrontal cortex (PFC) and hippocampus have been linked to mental health issues like depression and anxiety. The increased activation of the medial prefrontal cortex and glutamatergic circuits has emerged as a potential factor in enhancing resilience as “environmental enrichment… increases the complexity of… pyramidal neurons in hippocampus and PFC, suggesting… a shared feature of resilience under these two distinct condition[s]." History The first research on resilience was published in 1973. The study used epidemiology—the study of disease prevalence—to uncover the risks and the protective factors that now help define resilience. A year later, the same group of researchers created tools to look at systems that support development of resilience. Emmy Werner was one of the early scientists to use the term resilience. She studied a cohort of children from Kauai, Hawaii. Kauai was quite poor and many of the children in the study grew up with alcoholic or mentally ill parents. Many of the parents were also out of work. Werner noted that of the children who grew up in these detrimental situations, two-thirds exhibited destructive behaviors in their later-teen years, such as chronic unemployment, substance abuse, and out-of-wedlock births (in girls). However, one-third of these youths did not exhibit destructive behaviors. Werner called the latter group resilient. Thus, resilient children and their families were those who, by definition, demonstrated traits that allowed them to be more successful than non-resilient children and families. Resilience also emerged as a major theoretical and research topic in the 1980s in studies of children with mothers diagnosed with schizophrenia. 
A 1989 study showed that children with a schizophrenic parent may not obtain an appropriate level of comforting caregiving—compared to children with healthy parents—and that such situations often had a detrimental impact on children's development. On the other hand, some children of ill parents thrived and were competent in academic achievement, which led researchers to make efforts to understand such responses to adversity. Since the onset of the research on resilience, researchers have been devoted to discovering protective factors that explain people's adaptation to adverse conditions, such as maltreatment, catastrophic life events, or urban poverty. Researchers endeavor to uncover how some factors (e.g. connection to family) may contribute to positive outcomes. Trait resilience Temperamental and constitutional disposition is a major factor in resilience. It is one of the necessary precursors of resilience along with warmth in family cohesion and accessibility of prosocial support systems. There are three kinds of temperamental systems that play part in resilience: the appetitive system, defensive system, and attentional system. Trait resilience is negatively correlated with the personality traits of neuroticism and negative emotionality, which represent tendencies to see and react to the world as threatening, problematic, and distressing, and to view oneself as vulnerable. Trait resilience is positively correlated with personality traits of openness and positive emotionality, which are associated with tendencies to approach and confront problems with confidence, while also maintaining autonomy and fostering adaptability to those life changes. Resilience traits are personal characteristics that express how people approach and react to events that they experience as negative. Trait resilience is generally considered via two methods: direct assessment of traits through resilience measures and proxy assessments of resilience in which existing cognate psychological constructs are used to explain resilient outcomes. Typically, trait resilience measures explore how individuals tend to react to and cope with adverse events. Proxy assessments of resilience, sometimes referred to as the buffering approach, view resilience as the antithesis of risk, focusing on how psychological processes interrelate with negative events to mitigate their effects. Possibly an individual perseverance trait, conceptually related to persistence and resilience, could also be measured behaviorally by means of arduous, difficult, or otherwise unpleasant tasks. Developing and sustaining resilience There are several theories or models that attempt to describe subcomponents, prerequisites, predictors, or correlates of resilience. Fletcher and Sarkar found five factors that develop and sustain a person's resilience: the ability to make realistic plans and being capable of taking the steps necessary to follow through with them confidence in one's strengths and abilities communication and problem-solving skills the ability to manage strong impulses and feelings having good self-esteem Among older adults, Kamalpour et al. found that the important factors are external connections, grit, independence, self-care, self-acceptance, altruism, hardship experience, health status, and positive perspective on life. 
Another study examined thirteen high-achieving professionals who seek challenging situations that require resilience, all of whom had experienced challenges in the workplace and negative life events over the course of their careers but who had also been recognized for their great achievements in their respective fields. Participants were interviewed about everyday life in the workplace as well as their experiences with resilience and thriving. The study found six main predictors of resilience: positive and proactive personality, experience and learning, sense of control, flexibility and adaptability, balance and perspective, and perceived social support. High achievers were also found to engage in many activities unrelated to their work such as engaging in hobbies, exercising, and organizing meetups with friends and loved ones. The American Psychological Association, in its popular psychology-oriented Psychology topics publication, suggests the following tactics people can use to build resilience: Prioritize relationships. Join a social group. Take care of your body. Practice mindfulness. Avoid negative coping outlets (like alcohol use). Help others. Be proactive; search for solutions. Make progress toward your goals. Look for opportunities for self-discovery. Keep things in perspective. Accept change. Maintain a hopeful outlook. Learn from your past. The idea that one can build one's resilience implies that resilience is a developable characteristic, and so is perhaps at odds with the theory that resilience is a process. Positive emotions The relationship between positive emotions and resilience has been extensively studied. People who maintain positive emotions while they face adversity are more flexibile in their thinking and problem solving. Positive emotions also help people recover from stressful experiences. People who maintain positive emotions are better-defended from the physiological effects of negative emotions, and are better-equipped to cope adaptively, to build enduring social resources, and to enhance their well-being. The ability to consciously monitor the factors that influence one's mood is correlated with a positive emotional state. This is not to say that positive emotions are merely a by-product of resilience, but rather that feeling positive emotions during stressful experiences may have adaptive benefits in the coping process. Resilient people who have a propensity for coping strategies that concretely elicit positive emotions—such as benefit-finding and cognitive reappraisal, humor, optimism, and goal-directed problem-focused coping—may strengthen their resistance to stress by allocating more access to these positive emotional resources. Social support from caring adults encouraged resilience among participants by providing them with access to conventional activities. Positive emotions have physiological consequences. For example, humor leads to improvements in immune system functioning and increases in levels of salivary immunoglobulin A, a vital system antibody, which serves as the body's first line of defense in respiratory illnesses. Other health outcomes include faster injury recovery rate and lower readmission rates to hospitals for the elderly, and reductions in the length of hospital stay. One study has found early indications that older adults who have increased levels of psychological resilience have decreased odds of death or inability to walk after recovering from hip fracture surgery. 
In another study, trait-resilient individuals experiencing positive emotions more quickly rebounded from cardiovascular activation that was initially generated by negative emotional arousal. Social support Social support is an important factor in the development of resilience. While many competing definitions of social support exist, they tend to concern one's degree of access to, and use of, strong ties to other people who are similar to oneself. Social support requires solidarity and trust, intimate communication, and mutual obligation both within and outside the family. Military studies have found that resilience is also dependent on group support: unit cohesion and morale is the best predictor of combat resiliency within a unit or organization. Resilience is highly correlated with peer support and group cohesion. Units with high cohesion tend to experience a lower rate of psychological breakdowns than units with low cohesion and morale. High cohesion and morale enhance adaptive stress reactions. War veterans who had more social support were less likely to develop post-traumatic stress disorder. Cognitive behavioral therapy A number of self-help approaches to resilience-building have been developed, drawing mainly on cognitive behavioral therapy (CBT) and rational emotive behavior therapy (REBT). For example, a group cognitive-behavioral intervention, called the Penn Resiliency Program (PRP), fosters aspects of resilience. A meta-analysis of 17 PRP studies showed that the intervention significantly reduces depressive symptoms over time. In CBT, building resilience is a matter of mindfully changing behaviors and thought patterns. The first step is to change the nature of self-talk—the internal monologue people have that reinforces beliefs about their self-efficacy and self-value. To build resilience, a person needs to replace negative self-talk, such as "I can't do this" and "I can't handle this", with positive self-talk. This helps to reduce psychological stress when a person faces a difficult challenge. The second step is to prepare for challenges, crises, and emergencies. Businesses prepare by creating emergency response plans, business continuity plans, and contingency plans. Similarly, an individual can create a financial cushion to help with economic stressors, maintain supportive social networks, and develop emergency response plans. Language learning and communication Language learning and communication help develop resilience in people who travel, study abroad, work internationally, or in those who find themselves as refugees in countries where their home language is not spoken. Research conducted by the British Council found a strong relationship between language and resilience in refugees. Providing adequate English-learning programs and support for Syrian refugees builds resilience not only in the individual, but also in the host community. Language builds resilience in five ways: home language and literacy development Development of home language and literacy helps create the foundation for a shared identity. By maintaining the home language, even when displaced, a person not only learns better in school, but enhances their ability to learn other languages. This improves resilience by providing a shared culture and sense of identity that allows refugees to maintain close relationships to others who share their identity and sets them up to possibly return one day. 
access to education, training, and employment This allows refugees to establish themselves in their host country and provides more ease when attempting to access information, apply to work or school, or obtain professional documentation. Securing access to education or employment is largely dependent on language competency, and both education and employment provide security and success that enhance resilience and confidence. learning together and social cohesion Learning together encourages resilience through social cohesion and networks. When refugees engage in language-learning activities with host communities, engagement and communication increases. Both refugee and host community are more likely to celebrate diversity, share their stories, build relationships, engage in the community, and provide each other with support. This creates a sense of belonging with the host communities alongside the sense of belonging established with other members of the refugee community through home language. addressing the effects of trauma on learning Additionally, language programs and language learning can help address the effects of trauma by providing a means to discuss and understand. Refugees are more capable of expressing their trauma, including the effects of loss, when they can effectively communicate with their host community. Especially in schools, language learning establishes safe spaces through storytelling, which further reinforces comfort with a new language, and can in turn lead to increased resilience. building inclusivity This is more focused on providing resources. By providing institutions or schools with more language-based learning and cultural material, the host community can learn how to better address the needs of the refugee community. This feeds back into the increased resilience of refugees by creating a sense of belonging and community. Another study shows the impacts of storytelling in building resilience. It aligns with many of the five factors identified by the study completed by the British Council, as it emphasizes the importance of sharing traumatic experiences through language. It showed that those who were exposed to more stories, from family or friends, had a more holistic view of life's struggles, and were thus more resilient, especially when surrounded by foreign languages or attempting to learn a new language. Development programs The Head Start program promotes resilience, as does the Big Brothers Big Sisters Programme, Centered Coaching & Consulting,, the Abecedarian Early Intervention Project, and social programs for youth with emotional or behavioral difficulties. The Positive Behavior Supports and Intervention program is a trauma-informed, resilience-based program for elementary age students. It has four components: positive reinforcements such as encouraging feedback; understanding that behavior is a response to unmet needs or a survival response; promoting belonging, mastery, and independence; and creating an environment to support the student through sensory tools, mental health breaks, and play. Tuesday's Children, a family service organization, works to build psychological resilience through programs such as Mentoring and Project Common Bond, an eight-day peace-building and leadership initiative for people aged 15–20, from around the world, who have been directly impacted by terrorism. Military organizations test personnel for the ability to function under stressful circumstances by deliberately subjecting them to stress during training. 
Those students who do not exhibit the necessary resilience can be screened out of the training. Those who remain can be given stress inoculation training. The process is repeated as personnel apply for increasingly demanding positions, such as special forces. Other factors Another protective factor involves external social support, which helps moderate the negative effects of environmental hazards or stressful situations and guides vulnerable individuals toward optimistic paths. One study distinguished three contexts for protective factors: Personal Attributes: Traits such as an outgoing personality, perceptiveness, and a positive self-concept. Family Environment: Close and supportive relationships with at least one family member or an emotionally stable parent. Community Support: Support and guidance from peers and community members. A study of the elderly in Zurich, Switzerland, illuminated the role humor plays in helping people remain happy in the face of age-related adversity. Research has also been conducted into individual differences in resilience. Self-esteem, ego-control, and ego-resiliency are related to behavioral adaptation. Maltreated children who feel good about themselves may process risk situations differently and thereby avoid negative internalized self-perceptions. Ego-control is "the threshold or operating characteristics of an individual with regard to the expression or containment" of their impulses, feelings, and desires. Ego-resilience refers to the "dynamic capacity to modify his or her modal level of ego-control, in either direction, as a function of the demand characteristics of the environmental context". Demographic information (e.g., gender) and resources (e.g., social support) also predict resilience. After disasters, women tend to show less resilience than men, and people who were less involved in affinity groups and organisations also showed less resilience. Certain aspects of religions, spirituality, or mindfulness could promote or hinder certain psychological virtues that increase resilience. However, "there has not yet been much direct empirical research looking specifically at the association of religion and ordinary strengths and virtues". In a review of the literature on the relationship between religiosity/spirituality and PTSD, about half of the studies showed a positive relationship and half showed a negative relationship between measures of religiosity/spirituality and resilience. The United States Army was criticized for promoting spirituality in its Comprehensive Soldier Fitness program as a way to prevent PTSD, due to the lack of conclusive supporting data. Forgiveness plays a role in resilience among patients with chronic pain (but not in the severity of the pain). Resilience is also enhanced in people who develop effective coping skills for stress. Coping skills help people reduce stress levels, so they remain functional. Coping skills include using meditation, exercise, socialization, and self-care practices to maintain a healthy level of stress. Bibliotherapy, positive tracking of events, and enhancing psychosocial protective factors with positive psychological resources are other methods for resilience building. Increasing a person's arsenal of coping skills builds resilience. A study of 230 adults diagnosed with depression and anxiety showed that emotional regulation contributed to resilience in patients. The emotional regulation strategies focused on planning, positively reappraising events, and reducing rumination.
Patients with improved resilience experienced better treatment outcomes than patients with non-resilience-focused treatment plans. This suggests psychotherapeutic interventions may better handle mental disorders by focusing on psychological resilience. Other factors associated with resilience include the capacity to make realistic plans, self-confidence and a positive self-image, communication skills, and the capacity to manage strong feelings and impulses. Children Adverse childhood experiences (ACEs) are events that occur in a child's life that could lead to maladaptive symptoms such as tension, low mood, and repetitive and recurring thoughts. Maltreated children who experience some risk factors (e.g., single parenting, limited maternal education, or family unemployment) show lower ego-resilience and intelligence than children who were not maltreated. Maltreated children are also more likely to withdraw and demonstrate behavior problems. Ego-resiliency and positive self-esteem predict competent adaptation in maltreated children. Psychological resilience which helps overcome adverse events does not solely explain why some children experience post-traumatic growth and some do not. Resilience is the product of a number of developmental processes over time that allow children to experience small exposures to adversity or age-appropriate challenges and develop skills to handle those challenges. This gives children a sense of pride and self-worth. Two "protective factors"—characteristics of children or situations that help children in the context of risk—are good cognitive functioning (like cognitive self-regulation and IQ) and positive relationships (especially with competent adults, like parents). Children who have protective factors in their lives tend to do better in some risky contexts. However, children do better when not exposed to high levels of risk or adversity. There are a few protective factors of young children that are consistent over differences in culture and stressors (poverty, war, divorce of parents, natural disasters, etc.): capable parenting, other close relationships, intelligence, self-control, motivation to succeed, self-confidence and self-efficacy, faith, hope, and a belief that life has meaning, effective schools, effective communities, and effective cultural practices. Ann Masten calls these protective factors "ordinary magic"—the ordinary human adaptive systems that are shaped by biological and cultural evolution. In her book, Ordinary Magic: Resilience in Development, she discusses the "immigrant paradox", the phenomenon that first-generation immigrant youth are more resilient than their children. Researchers hypothesize that "there may be culturally based resiliency that is lost with succeeding generations as they become distanced from their culture of origin." Another hypothesis is that those who choose to immigrate are more likely to be more resilient. Neurocognitive resilience Trauma is defined as an emotional response to a distressing event, and PTSD is a mental disorder that develops after a person has experienced a dangerous event, for instance a car accident or an environmental disaster. The findings of a study conducted on a sample of 226 individuals who had experienced trauma indicate a positive association between resilience and enhanced nonverbal memory, as well as a measure of emotional learning. The findings of the study indicate that individuals who exhibited resilience demonstrated a lower incidence of depression and post-traumatic stress disorder (PTSD) symptoms. 
Conversely, those who lacked resilience exhibited a higher likelihood of experiencing unemployment and having a history of suicide attempts. The research additionally revealed that the experience of severe childhood abuse or exposure to trauma was correlated with a lack of resilience. The results indicate that resilience could potentially serve as a substitute measure for emotional learning, a process that is frequently impaired in stress-related mental disorders. This finding has the potential to enhance our comprehension of resilience. Young adults Sports provide benefits such as social support or a boost in self confidence. The findings of a study investigating the correlation between resilience and symptom resolution in adolescents and young adults who have experienced sport-related concussions (SRC) indicate that individuals with lower initial resilience ratings tend to exhibit a higher number and severity of post-concussion symptoms (PCSS), elevated levels of anxiety and depression, and a delayed recovery process from SRC. Additionally, the research revealed that those who initially scored lower on resilience assessments were less inclined to describe a sense of returning to their pre-injury state and experienced more pronounced exacerbation of symptoms resulting from both physical and cognitive exertion, even after resuming sports or physical activity. This finding illustrates the significant impact that resilience can have on the process of physical and mental recovery. Role of the family Family environments that are caring and stable, hold high expectations for children's behavior, and encourage participation by children in the life of the family are environments that more successfully foster resilience in children. Most resilient children have a strong relationship with at least one adult (not always a parent), and this relationship helps to diminish risk associated with family discord. Parental resilience—the ability of parents to deliver competent high-quality parenting, despite the presence of risk factors—plays an important role in children's resilience. Understanding the characteristics of quality parenting is critical to the idea of parental resilience. However, resilience research has focused on the well-being of children, with limited academic attention paid to factors that may contribute to the resilience of parents. Even if divorce produces stress, the availability of social support from family and community can reduce this stress and yield positive outcomes. A family that emphasizes the value of assigned chores, caring for brothers or sisters, and the contribution of part-time work in supporting the family helps to foster resilience. Some practices that poor parents utilize help to promote resilience in families. These include frequent displays of warmth, affection, and emotional support; reasonable expectations for children combined with straightforward, not overly harsh discipline; family routines and celebrations; and the maintenance of common values regarding money and leisure. According to sociologist Christopher B. Doob, "Poor children growing up in resilient families have received significant support for doing well as they enter the social world—starting in daycare programs and then in schooling." 
The Besht model of natural resilience-building through parenting, in an ideal family with positive access and support from family and friends, has four key markers: a realistic upbringing, effective risk communications, positivity and restructuring of demanding situations, and building self-efficacy and hardiness. In this model, self-efficacy is the belief in one's ability to organize and execute the courses of action required to achieve goals, and hardiness is a composite of interrelated attitudes of commitment, control, and challenge. Role of school Resilient children in classroom environments work and play well, hold high expectations, and demonstrate locus of control, self-esteem, self-efficacy, and autonomy. These things work together to prevent the debilitating behaviors that are associated with learned helplessness. Research on Mexican–American high school students found that a sense of belonging to school was the only significant predictor of academic resilience, though a sense of belonging to family, a peer group, and a culture was also associated with higher academic resilience. "Although cultural loyalty overall was not a significant predictor of resilience, certain cultural influences nonetheless contribute to resilient outcomes, like familism and cultural pride and awareness." The results "indicate a negative relationship between cultural pride and the ethnic homogeneity of a school." The researchers hypothesize that "ethnicity becomes a salient and important characteristic in more ethnically diverse settings". A strong connection with one's cultural identity is an important protective factor against stress and is indicative of increased resilience. While classroom resources have been created to promote resilience in students, the most effective way to ensure resilience in children is to protect their natural adaptive systems from breaking down or being hijacked. At home, resilience can be promoted through a positive home environment and emphasizing cultural practices and values. In school, this can be done by ensuring that each student develops and maintains a sense of belonging to the school through positive relationships with classroom peers and a caring teacher. A sense of belonging—whether it be in a culture, family, or another group—predicts resiliency against any given stressor. Role of the community Communities play a role in fostering resilience. The clearest sign of a cohesive and supportive community is the presence of social organizations that provide healthy human development. Services are unlikely to be used unless there is good communication about them. Children who are repeatedly relocated do not benefit from these resources, as their opportunities for resilience-building community participation are disrupted with every relocation. Outcomes in adulthood Patients who show resilience to adverse events in childhood may have worse outcomes later in life. A study in the American Journal of Psychiatry interviewed 1,420 participants with a Child and Adolescent Psychiatric Assessment up to 8 times as children. Of those, 1,266 were interviewed as adults, and this group had higher risks for anxiety, depression, and problems with work or education. This was accompanied by worse physical health outcomes. The study authors posit that the goal of public health should be to reduce childhood trauma, and not promote resilience. Specific situations Divorce Cultivating resilience may be beneficial to all parties involved in divorce. 
The level of resilience a child will experience after their parents have split is dependent on both internal and external variables. Some of these variables include their psychological and physical state and the level of support they receive from their schools, friends, and family friends. Children differ by age, gender, and temperament in their capacity to cope with divorce. About 20–25% of children "demonstrate severe emotional and behavioral problems" when going through a divorce, compared to 10% of children exhibiting similar problems in married families. Despite this, approximately 75–80% of these children will "develop into well-adjusted adults with no lasting psychological or behavioral problems". This goes to show that most children have the resilience needed to endure their parents' divorce. The effects of the divorce extend past the separation of the parents. Residual conflict between parents, financial problems, and the re-partnering or remarriage of parents can cause stress. Studies have shown conflicting results about the effect of post-divorce conflict on a child's healthy adjustment. Divorce may reduce children's financial means and associated lifestyle. For example, economizing may mean a child cannot continue to participate in extracurricular activities such as sports and music lessons, which can be detrimental to their social lives. A parent's repartnering or remarrying can add conflict and anger to a child's home environment. One reason re-partnering causes additional stress is because of the lack of clarity in roles and relationships; the child may not know how to react and behave with this new quasi-parent figure in their life. Bringing in a new partner/spouse may be most stressful when done shortly after the divorce. Divorce is not a single event, but encompasses multiple changes and challenges. Internal factors promote resiliency in the child, as do external factors in the environment. Certain programs such as the 14-week Children's Support Group and the Children of Divorce Intervention Program may help a child cope with the changes that occur from a divorce. Bullying Beyond preventing bullying, it is also important to consider interventions based on emotional intelligence when bullying occurs. Emotional intelligence may foster resilience in victims. When a person faces stress and adversity, especially of a repetitive nature, their ability to adapt is an important factor in whether they have a more positive or negative outcome. One study examining adolescents who illustrated resilience to bullying found higher behavioral resilience in girls and higher emotional resilience in boys. The study's authors suggested the targeting of psychosocial skills as a form of intervention. Emotional intelligence promotes resilience to stress and the ability to manage stress and other negative emotions can restrain a victim from going on to perpetuate aggression. Emotion regulation is an important factor in resilience. Emotional perception significantly facilitates lower negative emotionality during stress, while emotional understanding facilitates resilience and correlates with positive affect. Natural disasters Resilience after a natural disaster can be gauged on an individual level (each person in the community), a community level (everyone collectively in the affected locality), and on a physical level (the locality's environment and infrastructure). 
UNESCAP-funded research on how communities show resiliency in the wake of natural disasters found that communities were more physically resilient if community members banded together and made resiliency a collective effort. Social support, especially the ability to pool resources, is key to resilience. Communities that pooled social, natural, and economic resources were more resilient and could overcome disasters more quickly than communities that took a more individualistic approach. The World Economic Forum met in 2014 to discuss resiliency after natural disasters. They concluded that countries that are more economically sound, and whose members can diversify their livelihoods, show higher levels of resiliency. At the time, this had not been studied in depth, but the ideas discussed in this forum appeared fairly consistent with existing research. Individual resilience in the wake of natural disasters can be predicted by the level of emotion the person experienced and was able to process during and following the disaster. Those who employed emotional styles of coping were able to grow from their experiences and to help others. In these instances, experiencing emotions was adaptive. Those who did not engage with their emotions and who employed avoidant and suppressive coping styles had poorer mental health outcomes following disaster. Death of a family member Little research has been done on the topic of family resilience in the wake of the death of a family member. Clinical attention to bereavement has focused on the individual mourning process rather than on the family unit as a whole. Resiliency in this context is the "ability to maintain a stable equilibrium" that is conducive to balance, harmony, and recovery. Families manage familial distortions caused by the death of the family member by reorganizing relationships and changing patterns of functioning to adapt to their new situation. People who exhibit resilience in the wake of trauma can successfully traverse the bereavement process without long-term negative consequences. One of the healthiest behaviors displayed by resilient families in the wake of a death is honest and open communication. This facilitates an understanding of the crisis. Sharing the experience of the death can promote immediate and long-term adaptation. Empathy is a crucial component in familial resilience because it allows mourners to understand others' positions, tolerate conflict, and grapple with differences that may arise. Another crucial component of resilience is the maintenance of a routine that binds the family together through regular contact and order. The continuation of education and a connection with peers and teachers at school is an important support for children struggling with the death of a family member. Professional settings Resilience has been examined in the context of failure and setbacks in workplace settings. Psychological resilience is one of the core constructs of positive organizational behavior and has captured scholars' and practitioners' attention. Research has highlighted certain personality traits, personal resources (e.g., self-efficacy, work-life balance, social competencies), personal attitudes (e.g., sense of purpose, job commitment), positive emotions, and work resources (e.g., social support, positive organizational context) as potential facilitators of workplace resilience. Attention has also been directed to the role of resilience in innovative contexts. 
Due to high degrees of uncertainty and complexity in the innovation process, failure and setbacks happen frequently in this context. These can harm affected individuals' motivation and willingness to take risks, so their resilience is essential for them to productively engage in future innovative activities. A resilience construct specifically aligned to the peculiarities of the innovation context was needed to diagnose and develop innovators' resilience: Innovator Resilience Potential (IRP). Based on Bandura's social cognitive theory, IRP has six components: self-efficacy, outcome expectancy, optimism, hope, self-esteem, and risk propensity. It reflects a process perspective on resilience: IRP can be interpreted either as an antecedent of how a setback affects an innovator, or as an outcome of the process that is influenced by the setback situation. A measurement scale of IRP was developed and validated in 2018. Cultural differences There is controversy about the indicators of good psychological and social development when resilience is studied across different cultures and contexts. The American Psychological Association's Task Force on Resilience and Strength in Black Children and Adolescents, for example, notes that there may be special skills that these young people and families have that help them cope, including the ability to resist racial prejudice. Researchers of indigenous health have shown the impact of culture, history, community values, and geographical settings on resilience in indigenous communities. People who cope may also show "hidden resilience" when they do not conform with society's expectations for how someone is supposed to behave (for example, in some contexts aggression may aid resilience, or less emotional engagement may be protective in situations of abuse). Resilience in individualist and collectivist communities Individualist cultures, such as those of the U.S., Austria, Spain, and Canada, emphasize personal goals, initiatives, and achievements. Independence, self-reliance, and individual rights are highly valued by members of individualistic cultures. The ideal person in individualist societies is assertive, strong, and innovative. People in this culture tend to describe themselves in terms of their unique traits—"I am analytical and curious". Economic, political, and social policies reflect the culture's interest in individualism. Collectivist cultures, such as those of Japan, Sweden, Turkey, and Guatemala, emphasize family and group work goals. The rules of these societies promote unity, brotherhood, and selflessness. Families and communities practice cohesion and cooperation. The ideal person in collectivist societies is trustworthy, honest, sensitive, and generous—emphasizing intrapersonal skills. Collectivists tend to describe themselves in terms of their roles—"I am a good husband and a loyal friend". In a study on the consequences of disaster on a culture's individualism, researchers operationalized these cultures by identifying indicative phrases in a society's literature. Words that showed the theme of individualism include, "able, achieve, differ, own, personal, prefer, and special." Words that indicated collectivism include, "belong, duty, give, harmony, obey, share, together." Differences in response to natural disasters Natural disasters threaten to destroy communities, displace families, degrade cultural integrity, and diminish an individual's level of functioning. 
Comparing individualist community reactions to collectivist community responses after natural disasters illustrates their differences and respective strengths as tools of resilience. Some suggest that because disasters strengthen the need to rely on other people and social structures, they reduce individual agency and the sense of autonomy, and so regions with heightened exposure to disaster should cultivate collectivism. However, interviews with and experiments on disaster survivors indicate that disaster-induced anxiety and stress decrease one's focus on social-contextual information—a key component of collectivism. So disasters may increase individualism. In a study into the association between socio-ecological indicators and cultural-level change in individualism, for each socio-ecological indicator, frequency of disasters was associated with greater (rather than less) individualism. Supplementary analyses indicated that the frequency of disasters was more strongly correlated with individualism-related shifts than was the magnitude of disasters or the frequency of disasters qualified by the number of deaths. Baby-naming is one indicator of change. Urbanization was linked to preference for uniqueness in baby-naming practices at a one-year lag, secularism was linked to individualist shifts in interpersonal structure at both lags, and disaster prevalence was linked to more unique naming practices at both lags. Secularism and disaster prevalence contributed to shifts in naming practices. Disaster recovery research focuses on psychology and social systems but does not adequately address interpersonal networking or relationship formation and maintenance. One disaster response theory holds that people who use existing communication networks fare better during and after disasters. Moreover, they can play important roles in disaster recovery by organizing and helping others use communication networks and by coordinating with institutions. Building strong, self-reliant communities whose members know each other, know each other's needs, and are aware of existing communication networks, is a possible source of resilience in disasters. Individualist societies promote individual responsibility for self-sufficiency; collectivist culture defines self-sufficiency within an interdependent communal context. Even where individualism is salient, a group thrives when its members choose social over personal goals and seek to maintain harmony, and where they value collectivist over individualist behavior. The concept of resilience in language While not all languages have a direct translation for the English word "resilience", nearly every culture has a word that relates to a similar concept, suggesting a common understanding of what resilience is. Even if a word does not directly translate to "resilience" in English, it relays a meaning similar enough to the concept and is used as such within the language. If a specific word for resilience does not exist in a language, speakers of that language typically assign a similar word that insinuates resilience based on context. Many languages use words that translate to "elasticity" or "bounce", which are used in context to capture the meaning of resilience. 
For example, one of the main words for "resilience" in Chinese literally translates to "rebound", one of the main words for "resilience" in Greek translates to "bounce" (another translates to "cheerfulness"), and one of the main words for "resilience" in Russian translates to "elasticity," just as it does in German. However, this is not the case for all languages. For example, if a Spanish speaker wanted to say "resilience", their main two options translate to "resistance" and "defense against adversity". Many languages have words that translate to "tenacity" or "grit" better than they do to "resilience". While these languages may not have a word that exactly translates to "resilience", English speakers often use the words tenacity or grit when referring to resilience. Arabic has a word solely for resilience, but also two other common expressions to relay the concept, which directly translate to "capacity on deflation" or "reactivity of the body", but are better translated as "impact strength" and "resilience of the body" respectively. A few languages, such as Finnish, have words that express resilience in a way that cannot be translated back to English. In Finnish, the word and concept "sisu" has recently been studied by means of a designated Sisu Scale, which is composed of both beneficial and harmful sides of sisu. Sisu, measured by the Sisu Scale, has correlations with English-language equivalents, but the harmful side of sisu does not seem to have any corresponding concept in English-language-based scales. Sisu has sometimes been translated to "grit" in English; it blends the concepts of resilience, tenacity, determination, perseverance, and courage into one word that has become a facet of Finnish culture. Measurement Direct measurement Resilience is measured by evaluating personal qualities that reflect people's approach and response to negative experiences. Trait resilience is typically assessed using two methods: direct evaluation of traits through resilience measures, and proxy assessment of resilience, in which related psychological constructs are used to explain resilient outcomes. There are more than 30 resilience measures that assess over 50 different variables related to resilience, but there is no universally accepted "gold standard" for measuring resilience. Five of the established self-report measures of psychological resilience are: Ego Resiliency Scale, which measures a person's ability to exercise control over their impulses or inhibition in response to environmental demands, with the aim of maintaining or enhancing their ego equilibrium. Hardiness Scale, which encompasses three main dimensions: (1) commitment (a conviction that life has purpose), (2) control (confidence in one's ability to navigate life), and (3) challenge (aptitude for and pleasure in adapting to change). Psychological Resilience Scale, which assesses a "resilience core" characterized by five traits (purposeful life, perseverance, self-reliance, equanimity, and existential aloneness) that reflect an individual's physical and mental resilience throughout their lifespan. Connor-Davidson Resilience Scale, which was developed in a clinical treatment setting and conceptualized resilience as arising from four factors. Brief Resilience Scale, which assesses resilience as the capacity to bounce back from unfavorable circumstances. The Resilience Systems Scales were produced to investigate and measure the underlying structure of the 115 items from these five most-commonly cited trait resilience scales in the literature. 
Three strong latent factors account for most of the variance accounted for by the five most popular resilience scales, and replicated ecological systems theory: Engineering resilience The capability of a system to quickly and effortlessly restore itself to a stable equilibrium state after a disruption, as measured by its speed and ease of recovery. Ecological resilience The capacity of a system to endure or resist disruptions while preserving a steady state and adapting to necessary changes in its functioning. Adaptive capacity The ability to continuously adjust functions and processes in order to be ready to adapt to any disruption. 'Proxy' measurement Resilience literature identifies five main trait domains that serve as stress-buffers and can be used as proxies to describe resilience outcomes: personality A resilient personality includes positive expressions of the five-factor personality traits such as high emotional stability, extraversion, conscientiousness, openness, and agreeableness. cognitive abilities and executive functions Resilience is identified through effective use of executive functions and processing of experiential demands, or through an overarching cognitive mapping system that integrates information from current situations, prior experience, and goal-driven processes. affective systems, which include emotional regulation systems Emotion regulation systems are based on the broaden-and-build theory, in which there is . eudaimonic well-being resilience emerges from natural well-being processes (e.g. autonomy, purpose in life, environmental mastery) and underlying genetic and neural substrates and acts as a protective resilient factor across life-span transitions. health systems This also reflects the broaden-and-build theory, where there is a reciprocal relationship between trait resilience and positive health functioning through the promotion of feeling capable to deal with adverse health situations. Mixed model A mixed model of resilience can be derived from direct and proxy measures of resilience. A search for latent factors among 61 direct and proxy resilience assessments, suggested four main factors: recovery Resilience scales that focus on recovery, such as engineering resilience, align with reports of stability in emotional and health systems. The most fitting theoretical framework for this is the broaden-and-build theory of positive emotions. This theory highlights how positive emotions can foster resilient health systems and enable individuals to recover from setbacks. sustainability Resilience scales that reflect "sustainability," such as engineering resilience, align with conscientiousness, lower levels of dysexecutive functioning, and five dimensions of eudaimonic well-being. Theoretically, resilience is the effective use of executive functions and processing of experiential demands (also known as resilient functioning), where an overarching cognitive mapping system integrates information from current situations, prior experience, and goal-driven processes (known as the cognitive model of resilience). adaptability resilience Resilience scales that assess adaptability, such as adaptive capacity, are associated with higher levels of extraversion (such as being enthusiastic, talkative, assertive, and gregarious) and openness-to-experience (such as being intellectually curious, creative, and imaginative). 
These personality factors are often reported to form a higher-order factor known as "beta" or "plasticity", which reflects a drive for growth, agency, and reduced inhibition by preferring new and diverse experiences while reducing fixed patterns of behavior. These findings suggest that adaptability can be seen as a complement to growth, agency, and reduced inhibition. social cohesion Several resilience measures converge to suggest an underlying social cohesion factor, in which social support, care, and cohesion among family and friends (as featured in various scales within the literature) form a single latent factor. These findings point to the possibility of adopting a "mixed model" of resilience in which direct assessments of resilience could be employed alongside cognate psychological measures to improve the evaluation of resilience. Criticism As with other psychological phenomena, there is controversy about how resilience should be defined. Its definition affects research focus; differing or imprecise definitions lead to inconsistent research. Research on resilience has become more heterogeneous in its outcomes and measures, convincing some researchers to abandon the term altogether because it has been attributed to almost any research outcome that was more positive than expected. There is also disagreement among researchers as to whether psychological resilience is a character trait or state of being. Psychological resilience has also been referred to . However, it is generally agreed upon that resilience is a buildable resource. There is also evidence that resilience can indicate a capacity . Adolescents who have a high level of adaptation (i.e. resilience) tend to struggle with other psychological problems later in life. This is due to an overload of their stress response systems. There is evidence that the higher one's resilience is, the lower one's vulnerability. Brad Evans and Julian Reid criticize resilience discourse and its rising popularity in their book, Resilient Life. The authors assert that resilience discourse can put the onus of disaster response on individuals rather than on publicly coordinated efforts. Tied to the emergence of neoliberalism, climate change, third-world development, and other discourses, Evans and Reid argue that promoting resilience draws attention away from governmental responsibility and towards self-responsibility and healthy psychological effects such as post-traumatic growth. See also References Further reading External links National Resilience Resource Center Research on resilience at Dalhousie University Motivation Psychological adjustment Psychological theories Self-sustainability
Psychological resilience
Biology
11,577
5,499,772
https://en.wikipedia.org/wiki/Vernier%20thruster
A vernier thruster is a rocket engine used on a spacecraft or launch vehicle for fine adjustments to the attitude or velocity. Depending on the design of a craft's maneuvering and stability systems, it may simply be a smaller thruster complementing the main propulsion system, or it may complement larger attitude control thrusters, or may be a part of the reaction control system. The name is derived from vernier calipers (named after Pierre Vernier) which have a primary scale for gross measurements, and a secondary scale for fine measurements. Vernier thrusters are used when a heavy spacecraft requires a wide range of different thrust levels for attitude or velocity control, as for maneuvering during docking with other spacecraft. On space vehicles with two sizes of attitude control thrusters, the main ACS (Attitude Control System) thrusters are used for larger movements, while the verniers are reserved for smaller adjustments. Due to their weight and the extra plumbing required for their operation, vernier rockets are seldom used in new designs. Instead, as modern rocket engines gained better control, larger thrusters could also be fired for very short pulses, resulting in the same change of momentum as a longer thrust from a smaller thruster. Vernier thrusters are used in rockets such as the R-7 for vehicle maneuvering because the main engine is fixed in place. For earlier versions of the Atlas rocket family (prior to the Atlas III), in addition to maneuvering, the verniers were used for roll control, although the booster engines could also perform this function. After main engine cutoff, the verniers would execute solo mode and fire for several seconds to make fine adjustments to the vehicle attitude. The Thor/Delta family also used verniers for roll control but were mounted on the base of the thrust section flanking the main engine. Examples The R-7 rocket family, with over 1700 successful launches to date, still depends on the S1.358000 vernier thrusters in its first and second stage. Most of the larger Soviet missiles and launch vehicles also used verniers. Examples include the RD-8 on the Zenit rocket family, the RD-855 and RD-856 on the R-36, and the RD-0214 on the Proton rocket family. On the SM-65 Atlas, the LR-101 verniers were used for roll control and to fine-tune the vehicle attitude after the main engine cutoff. The Delta II and Delta III rocket also used the LR-101 for vehicle roll maneuver since one engine cannot do a roll maneuver (although the later GEM 46 had thrust vectoring nozzle, only three were equipped with this feature and only ignited a brief moment during the ascent). The Space Shuttle had six vernier engines or thrusters in its "Vernier Reaction Control System" (VRCS). The VRCS could also deliver a gentle steady thrust. This was regularly used to boost the International Space Station while docked: for example during STS-130 commander George Zamka and pilot Terry Virts fired Endeavour's VRCS for a duration of 33 minutes to attain an orbit between . See also Space propulsion Cold gas thruster Vernier throttle References Spacecraft components Spacecraft attitude control
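The point above about momentum change (a larger thruster fired as a very short pulse can match the impulse of a longer burn from a small vernier) follows from the relation J = F·Δt. The sketch below illustrates this with made-up thrust and burn-time values; none of the numbers correspond to any real engine.

```python
# Illustrative only: impulse equivalence between a small vernier thruster and a
# larger thruster fired as a short pulse. All thrust values are hypothetical.

def impulse(thrust_newtons: float, burn_time_s: float) -> float:
    """Momentum change (N*s) delivered by a constant-thrust burn: J = F * t."""
    return thrust_newtons * burn_time_s

vernier_thrust = 100.0    # N, hypothetical small vernier
main_thrust = 25_000.0    # N, hypothetical larger attitude-control thruster

# Impulse from a 5-second vernier burn
target_impulse = impulse(vernier_thrust, burn_time_s=5.0)

# Pulse length the larger thruster would need for the same momentum change
equivalent_pulse_s = target_impulse / main_thrust

print(f"Vernier impulse: {target_impulse:.0f} N*s")
print(f"Equivalent main-thruster pulse: {equivalent_pulse_s * 1000:.0f} ms")
```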
Vernier thruster
Astronomy
658
1,237,777
https://en.wikipedia.org/wiki/Polarizability
Polarizability usually refers to the tendency of matter, when subjected to an electric field, to acquire an electric dipole moment in proportion to that applied field. It is a property of particles with an electric charge. When subject to an electric field, the negatively charged electrons and positively charged atomic nuclei are subject to opposite forces and undergo charge separation. Polarizability is responsible for a material's dielectric constant and, at high (optical) frequencies, its refractive index. The polarizability of an atom or molecule is defined as the ratio of its induced dipole moment to the local electric field; in a crystalline solid, one considers the dipole moment per unit cell. Note that the local electric field seen by a molecule is generally different from the macroscopic electric field that would be measured externally. This discrepancy is taken into account by the Clausius–Mossotti relation (below) which connects the bulk behaviour (polarization density due to an external electric field according to the electric susceptibility $\chi_e$) with the molecular polarizability $\alpha$ due to the local field. Magnetic polarizability likewise refers to the tendency for a magnetic dipole moment to appear in proportion to an external magnetic field. Electric and magnetic polarizabilities determine the dynamical response of a bound system (such as a molecule or crystal) to external fields, and provide insight into a molecule's internal structure. "Polarizability" should not be confused with the intrinsic magnetic or electric dipole moment of an atom, molecule, or bulk substance; these do not depend on the presence of an external field. Electric polarizability Definition Electric polarizability is the relative tendency of a charge distribution, like the electron cloud of an atom or molecule, to be distorted from its normal shape by an external electric field. The polarizability $\alpha$ in isotropic media is defined as the ratio of the induced dipole moment $p$ of an atom to the electric field $E$ that produces this dipole moment: $\alpha = p/E$. Polarizability has the SI units of C·m²·V⁻¹ = A²·s⁴·kg⁻¹ while its cgs unit is cm³. Usually it is expressed in cgs units as a so-called polarizability volume, sometimes expressed in Å³ = 10⁻²⁴ cm³. One can convert from SI units ($\alpha$) to cgs units ($\alpha'$) as follows: $\alpha'(\mathrm{cm^3}) = \frac{10^6}{4\pi\varepsilon_0}\,\alpha(\mathrm{C{\cdot}m^2{\cdot}V^{-1}}) \simeq 8.988\times10^{15}\,\alpha(\mathrm{C{\cdot}m^2{\cdot}V^{-1}})$, where $\varepsilon_0$, the vacuum permittivity, is ≈ 8.854 × 10⁻¹² F/m. If the polarizability volume in cgs units is denoted $\alpha'$, the relation can be expressed generally (in SI) as $\alpha = 4\pi\varepsilon_0\alpha'$. The polarizability of individual particles is related to the average electric susceptibility of the medium by the Clausius–Mossotti relation: $R = \frac{\varepsilon_r - 1}{\varepsilon_r + 2}\,\frac{M}{p} = \frac{N_A\,\alpha_c}{3\varepsilon_0}$, where R is the molar refractivity, $N_A$ is the Avogadro constant, $\alpha_c$ is the electronic polarizability, p is the density of molecules, M is the molar mass, and $\varepsilon_r$ is the material's relative permittivity or dielectric constant (or in optics, the square of the refractive index). Polarizability for anisotropic or non-spherical media cannot in general be represented as a scalar quantity. Defining $\alpha$ as a scalar implies both that applied electric fields can only induce polarization components parallel to the field and that the $x$, $y$ and $z$ directions respond in the same way to the applied electric field. For example, an electric field in the $x$-direction can only produce an $x$ component in $\mathbf{P}$, and if that same electric field were applied in the $y$-direction the induced polarization would be the same in magnitude but appear in the $y$ component of $\mathbf{P}$. 
Many crystalline materials have directions that are easier to polarize than others and some even become polarized in directions perpendicular to the applied electric field, and the same thing happens with non-spherical bodies. Some molecules and materials with this sort of anisotropy are optically active, or exhibit linear birefringence of light. Tensor To describe anisotropic media a polarizability rank two tensor or matrix $\boldsymbol{\alpha}$ is defined, so that: $\mathbf{p} = \boldsymbol{\alpha}\,\mathbf{E}$, i.e. $p_i = \sum_j \alpha_{ij} E_j$. The elements describing the response parallel to the applied electric field are those along the diagonal. A large value of $\alpha_{yx}$ here means that an electric field applied in the $x$-direction would strongly polarize the material in the $y$-direction. Explicit expressions for $\boldsymbol{\alpha}$ have been given for homogeneous anisotropic ellipsoidal bodies. Application in crystallography The matrix above can be used with the molar refractivity equation and other data to produce density data for crystallography. Each polarizability measurement along with the refractive index associated with its direction will yield a direction-specific density that can be used to develop an accurate three-dimensional assessment of molecular stacking in the crystal. This relationship was first observed by Linus Pauling. Polarizability, a molecular property, is related to refractive index, a bulk property. In crystalline structures, the interactions between molecules are considered by comparing a local field to the macroscopic field. Analyzing a cubic crystal lattice, we can imagine an isotropic spherical region to represent the entire sample. Giving the region the radius $a$, the dipole moment of this region is given by the volume of the sphere times the dipole moment per unit volume $\mathbf{P}$: $\mathbf{p} = \frac{4}{3}\pi a^3\,\mathbf{P}$, and the field due to the matter within the sphere is $\mathbf{E}_{\text{in}} = -\mathbf{P}/(3\varepsilon_0)$. We can call our local field $\mathbf{F}$, our macroscopic field $\mathbf{E}$, and the field due to matter within the sphere $\mathbf{E}_{\text{in}}$. We can then define the local field as the macroscopic field without the contribution of the internal field: $\mathbf{F} = \mathbf{E} - \mathbf{E}_{\text{in}} = \mathbf{E} + \frac{\mathbf{P}}{3\varepsilon_0}$. The polarization is proportional to the macroscopic field by $\mathbf{P} = \varepsilon_0\chi\mathbf{E} = \varepsilon_0(\varepsilon_r - 1)\mathbf{E}$, where $\varepsilon_0$ is the electric permittivity constant and $\chi$ is the electric susceptibility. Using this proportionality, we find the local field as $\mathbf{F} = \frac{\varepsilon_r + 2}{3}\mathbf{E}$, which can be used in the definition of polarization $\mathbf{P} = N\alpha\mathbf{F}$ and simplified with $\mathbf{P} = \varepsilon_0(\varepsilon_r - 1)\mathbf{E}$ to get $\varepsilon_0(\varepsilon_r - 1)\mathbf{E} = N\alpha\,\frac{\varepsilon_r + 2}{3}\mathbf{E}$. These two terms can both be set equal to the other, eliminating the $\mathbf{E}$ term, giving us $\frac{\varepsilon_r - 1}{\varepsilon_r + 2} = \frac{N\alpha}{3\varepsilon_0}$. We can replace the relative permittivity $\varepsilon_r$ with the refractive index $n$, since $\varepsilon_r \approx n^2$ for a low-pressure gas. The number density can be related to the molecular weight $M$ and mass density $\rho$ through $N = \rho N_A/M$, adjusting the final form of our equation to include molar refractivity: $R_M = \frac{n^2 - 1}{n^2 + 2}\,\frac{M}{\rho} = \frac{N_A\,\alpha}{3\varepsilon_0}$. This equation allows us to relate a bulk property (refractive index) to a molecular property (polarizability) as a function of frequency. Atomic and molecular polarizability Generally, polarizability increases as the volume occupied by electrons increases. In atoms, this occurs because larger atoms have more loosely held electrons in contrast to smaller atoms with tightly bound electrons. On rows of the periodic table, polarizability therefore decreases from left to right. Polarizability increases down columns of the periodic table. Likewise, larger molecules are generally more polarizable than smaller ones. Water is a very polar molecule, but alkanes and other hydrophobic molecules are more polarizable. Water with its permanent dipole is less likely to change shape due to an external electric field. Alkanes are the most polarizable molecules. Although alkenes and arenes are expected to have larger polarizability than alkanes because of their higher reactivity compared to alkanes, alkanes are in fact more polarizable. 
This results from alkenes' and arenes' more electronegative sp² carbons compared to alkanes' less electronegative sp³ carbons. Ground state electron configuration models often describe molecular or bond polarization during chemical reactions poorly, because reactive intermediates may be excited, or be the minor, alternate structures in a chemical equilibrium with the initial reactant. Magnetic polarizability Magnetic polarizability defined by spin interactions of nucleons is an important parameter of deuterons and hadrons. In particular, measurement of tensor polarizabilities of nucleons yields important information about spin-dependent nuclear forces. The method of spin amplitudes uses quantum mechanics formalism to more easily describe spin dynamics. Vector and tensor polarization of particles/nuclei with spin are specified by the unit polarization vector and the polarization tensor P. Additional tensors composed of products of three or more spin matrices are needed only for the exhaustive description of polarization of particles/nuclei with spin . See also Dielectric Electric susceptibility Hyperpolarizability Polarization density MOSCED, an estimation method for activity coefficients which uses polarizability as one of its parameters References Atomic physics Chemical physics Electric and magnetic fields in matter Polarization (waves)
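As a numerical illustration of the molar-refractivity form of the Clausius–Mossotti (Lorentz–Lorenz) relation used above, the sketch below estimates a molecular polarizability from a bulk refractive index and converts it to a cgs polarizability volume. The physical constants are standard; the sample inputs (roughly those of liquid water) are for illustration only, and applying the low-pressure-gas approximation ε_r ≈ n² to a liquid is a simplification.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
N_A  = 6.02214076e23      # Avogadro constant, 1/mol

def polarizability_si(n_refr: float, molar_mass_kg: float, density_kg_m3: float) -> float:
    """Molecular polarizability (C*m^2/V) from (n^2 - 1)/(n^2 + 2) = N*alpha/(3*eps0)."""
    number_density = density_kg_m3 * N_A / molar_mass_kg            # molecules per m^3
    return 3.0 * EPS0 * (n_refr**2 - 1.0) / (number_density * (n_refr**2 + 2.0))

def polarizability_volume_cgs(alpha_si: float) -> float:
    """Convert an SI polarizability to a cgs polarizability volume in cm^3."""
    return alpha_si / (4.0 * math.pi * EPS0) * 1e6                  # m^3 -> cm^3

# Sample inputs, roughly liquid water; illustrative, not measured reference data
alpha = polarizability_si(n_refr=1.33, molar_mass_kg=0.018, density_kg_m3=1000.0)
print(f"alpha  ~ {alpha:.3e} C*m^2/V")                             # on the order of 1.6e-40
print(f"alpha' ~ {polarizability_volume_cgs(alpha):.3e} cm^3")     # on the order of 1.5e-24
```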
Polarizability
Physics,Chemistry,Materials_science,Engineering
1,691
2,986,559
https://en.wikipedia.org/wiki/SEQUAL%20framework
The SEQUAL framework is a systems modelling reference model for evaluating the quality of models. The SEQUAL framework, which stands for "semiotic quality framework", has been developed by John Krogstie and others since the 1990s. The SEQUAL framework is a so-called "top-down quality framework", which is based on semiotic theory, such as the works of Charles W. Morris. Building on this theory, it "defines several quality aspects based on relationships between a model, a body of knowledge, a domain, a modeling language, and the activities of learning, taking action, and modeling". Its usefulness, according to Mendling et al. (2006), was confirmed in a 2002 experiment by Moody et al. History The basic idea behind the SEQUAL framework is that "conceptual models can be considered as sets of statements in a language, and therefore can be evaluated in semiotic/linguistic terms". A first semiotic framework for evaluating conceptual models was originally proposed by Lindland et al. in the 1994 article "Understanding quality in conceptual modeling". In its initial version, it considered three quality levels: syntactic, semantic, and pragmatic quality. The framework was later extended, and called the SEQUAL framework, by Krogstie et al. in the 1995 article "Defining quality aspects for conceptual models". In the 2002 article "Quality of interactive models", Krogstie & Jørgensen extended the initial framework, adding more levels of Stamper's semiotic ladder. SEQUAL framework topics Modeling is an integral part of many technical fields, including engineering, economics, and software engineering. In this context, a model is a formal representation of an organizational system, such as a business model or a formal description of software in UML. Model activation Model activation, according to John Krogstie (2006), is the process by which a model affects reality. Model activation involves actors interpreting the model and to some extent adjusting their behaviour accordingly. This process can be: automated, where a software component interprets the model; manual, where the model guides the actions of human actors; or interactive, where prescribed aspects of the model are automatically interpreted and ambiguous parts are left to the users to resolve. Sets in the Quality Framework The Quality Framework works with a set of eight items: A: Actors that develop or have to relate to (parts of) the model. Can be persons or tools. L: What can be expressed in the modeling language. M: What is expressed in the model. D: What can be expressed about the domain (area of interest). K: The explicit knowledge of the participating persons. I: What the persons in the audience interpret the model to say. T: What relevant tools interpret the model to say. G: The goals of the modeling. Physical quality The three main aspects of physical quality are: externalization, or the question "Is it possible to externalize knowledge by using the model language?"; internalizability, concerning model persistence and availability; and, basically, the question "Is the model language able to express the model domain?" Externalization is presenting the modeller's concept in some model form for others to make sense of it. Other people can look at it and discuss it. How other people perceive the model is a matter of internalization. After perceiving the model in their own way they can discuss and change their mind accordingly. To make sense to others, it is better to have some model language in common. 
Physical quality refers to the possibility of externalizing models by using a model language; the externalized model should be available, and persistent, so that it can be internalized by the audience. How available is the model to the audience? Availability depends on distributability, especially when members of the audience are geographically dispersed. Then, a model which is in an electronically distributable format will be more easily distributed than one which must be printed on paper and sent by ordinary mail or fax. It may also matter exactly what is distributed, e.g. the model in an editable form or merely in an output format. How persistent is the model, how protected is it against loss or damage? This also includes previous versions of the model, if these are relevant. E.g. for a model on disk, the physical quality will be higher if there is a backup copy, or even higher if this backup is on another disk whose failure is independent of the original. Similarly, for models on paper, the amount and security of backup copies will be essential. Empirical quality To evaluate empirical quality, the model should be well externalized. The main aspects are: ergonomics, readability, layout, and information theory. Basically, empirical quality is about the question "Is the model easily readable?". Empirical quality deals with the variety of elements distinguished, error frequencies when the model is being written or read, coding (shapes of boxes), and ergonomics of computer-human interaction for documentation and modeling tools. Ergonomics is the study of workplace design and the physical and psychological impact it has on workers. This quality is related to readability and layout. There are different factors that have an important impact on visual emphasis, like size, solidity, foreground/background differences, colour (red attracts the eye more than other colours), change (blinking or moving symbols attract attention), position, and so on. For graph aesthetics there are different considerations (Battista, 1994; Tamassia, 1988), such as: angles between edges should not be too small; the number of bends along edges and the number of crossings between edges should be minimized; nodes with a high degree should be placed in the centre of the drawing; there should be symmetry of sons in hierarchies, uniform density of nodes in the drawing, and verticality of hierarchical structures; and so on. Syntactical quality Syntactic quality is the correspondence between the model M and the language extension L of the language in which the model is written. Three aspects here are: Error detection: During a modeling session, some syntactical errors (syntactic incompleteness) should be allowed on a temporary basis. For instance, although the DFD language requires that all processes are linked to a flow, it is difficult to draw a process and a flow simultaneously. Syntactical completeness has to be checked upon the user's request. So, in contrast to implicit checks, where the tool forces the user to follow the language syntax, explicit checks can only detect and report existing errors. The user has to make the corrections. Error correction: to replace a detected error with a correct statement. Semantic quality What is expressed in the model? The semantic goals of this framework are: Validity; all the statements in the model are correct and related to the problem: M\D = Ø. Completeness; the model contains all relevant and correct statements needed to solve the problem: 
D\M = Ø. Perceived semantic quality Perceived semantic quality is the relation between an actor's interpretation of a model and his/her knowledge of the domain. Perceived validity: I\K = Ø. Perceived completeness: K\I = Ø. Pragmatic quality Pragmatic quality is the correspondence between the model and people's interpretation of it. Comprehension is the only pragmatic goal in the framework. It is very important that people who read the model understand it. No solution is good if no-one understands it. Pragmatic quality relates to the effect the model has on the participants and the world. Four aspects are treated specifically: that the human interpretation of the model is correct relative to what is meant; that the tool interpretation is correct relative to what is meant to be expressed in the model; that the participants learn based on the model; and that the domain is changed (preferably in a positive direction relative to the goal of modeling). Social quality The goal for social quality is agreement: agreement about knowledge, interpretation, and model. Agreement is achieved if perceived semantic quality and comprehension are achieved. There is relative agreement and absolute agreement. For the three agreement parts (knowledge, interpretation and model) we can define: Relative agreement in the three above agreement types; all Knowledge, Interpretation and Model are consistent. Absolute agreement in the three above agreement types; all Knowledge, Interpretation and Model are equal. Knowledge quality Degree of internalization of existing organizational reality. Knowledge in the domain is "complete": D\K = Ø. Knowledge in the domain is "valid": K\D = Ø. Activities for improvement: stakeholder identification, knowledge source identification, research and investigation, participant selection, participant training, and problem definition. Language quality To achieve good language quality it is important that: the language is appropriate to the domain; the language is appropriate to the participants' knowledge of modeling languages; and the language is appropriate for expressing the knowledge of the participants. If the language quality is good, it will improve the participants' interpretation and other technical actors' interpretation. For additional detail, see the quality of modelling languages. Organizational quality The organizational quality of the model relates to: that all statements in the model contribute to fulfilling the goals of modeling (organizational goal validity), and that all the goals of modeling are addressed through the model (organizational goal completeness). Alternative quality framework An alternative quality framework is the Guidelines of Modeling (GoM), based on general accounting principles. The framework "includes the six principles of correctness, clarity, relevance, comparability, economic efficiency, and systematic design". It was operationalized for Event-driven Process Chains and also tested in experiments. Another alternative modelling process quality framework, actually based on SEQUAL, is the "Quality of Modelling" framework (QoMo). QoMo is still preliminary; it is a modelling-process-oriented framework "based on knowledge state transitions, cost of the activities bringing such transitions about, and a goal structure for activities-for-modelling. Such goals are directly linked to concepts of SEQUAL". References Further reading John Krogstie (2012). "Model-Based Development and Evolution of Information Systems: A Quality Approach" John Krogstie (2001). 
"A semiotic approach to quality in requirements specifications" Conceptual modelling Enterprise modelling
SEQUAL framework
Engineering
2,042
27,555,250
https://en.wikipedia.org/wiki/Institute%20for%20Astronomy%20and%20Astrophysics
The Institute for Astronomy and Astrophysics (IAA) in Brussels is a part of the physics department of the Université Libre de Bruxelles. It is an international center of excellence in the field of nuclear astrophysics. The institute's director is currently Prof. Alain Jorissen. The institute is composed of one full professor, five tenured FNRS senior researchers, nine postdoctoral fellows from various countries (Belgium, France, Japan, United Kingdom, Slovakia), and three Ph.D. students (Belgium, Italy). Field of research Its research interests involve nuclear astrophysics, stellar evolution, the chemical composition of stars, binary stars, neutron stars, and modified Newtonian dynamics. Achievements The institute has been a leading laboratory in many national and international collaborations. One of these collaborations has led to a world first: the direct measurement of a nuclear reaction rate on an unstable target at energies of interest in astrophysics. References External links Official webpage of the astronomical institute Physics Department of the ULB Research institutes in Belgium Astronomy institutes and departments Astrophysics research institutes Université libre de Bruxelles
Institute for Astronomy and Astrophysics
Physics,Astronomy
219
201,611
https://en.wikipedia.org/wiki/Orthogonalization
In linear algebra, orthogonalization is the process of finding a set of orthogonal vectors that span a particular subspace. Formally, starting with a linearly independent set of vectors {v1, ... , vk} in an inner product space (most commonly the Euclidean space Rn), orthogonalization results in a set of orthogonal vectors {u1, ... , uk} that generate the same subspace as the vectors v1, ... , vk. Every vector in the new set is orthogonal to every other vector in the new set; and the new set and the old set have the same linear span. In addition, if we want the resulting vectors to all be unit vectors, then we normalize each vector and the procedure is called orthonormalization. Orthogonalization is also possible with respect to any symmetric bilinear form (not necessarily an inner product, not necessarily over real numbers), but standard algorithms may encounter division by zero in this more general setting. Orthogonalization algorithms Methods for performing orthogonalization include: Gram–Schmidt process, which uses projection Householder transformation, which uses reflection Givens rotation Symmetric orthogonalization, which uses the singular value decomposition When performing orthogonalization on a computer, the Householder transformation is usually preferred over the Gram–Schmidt process since it is more numerically stable, i.e. rounding errors tend to have less serious effects. On the other hand, the Gram–Schmidt process produces the jth orthogonalized vector after the jth iteration, while orthogonalization using Householder reflections produces all the vectors only at the end. This makes only the Gram–Schmidt process applicable for iterative methods like the Arnoldi iteration. The Givens rotation is more easily parallelized than Householder transformations. Symmetric orthogonalization was formulated by Per-Olov Löwdin. Local orthogonalization To compensate for the loss of useful signal in traditional noise attenuation approaches because of incorrect parameter selection or inadequacy of denoising assumptions, a weighting operator can be applied to the initially denoised section for the retrieval of useful signal from the initial noise section. The new denoising process is referred to as the local orthogonalization of signal and noise. It has a wide range of applications in signal processing and seismic exploration. See also Orthogonality Biorthogonal system Orthogonal basis References Linear algebra
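As an illustration of the Gram–Schmidt process named above, here is a minimal NumPy sketch; the input vectors are arbitrary examples and are assumed linearly independent, and the classical (not modified) variant is shown for clarity even though Householder reflections or the modified variant are preferred numerically:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram–Schmidt: orthonormalize the rows of `vectors` (assumed linearly independent)."""
    basis = []
    for v in vectors:
        # Subtract the projections of v onto the orthonormal vectors found so far.
        w = v - sum(np.dot(v, u) * u for u in basis)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

V = np.array([[3.0, 1.0], [2.0, 2.0]])
U = gram_schmidt(V)
print(np.round(U @ U.T, 10))  # approximately the identity matrix, confirming orthonormality
```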
Orthogonalization
Mathematics
474
126,844
https://en.wikipedia.org/wiki/Internationalization%20and%20localization
In computing, internationalization and localization (American) or internationalisation and localisation (British), often abbreviated i18n and l10n respectively, are means of adapting to different languages, regional peculiarities and technical requirements of a target locale. Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes. Localization is the process of adapting internationalized software for a specific region or language by translating text and adding locale-specific components. Localization (which is potentially performed multiple times, for different locales) uses the infrastructure or flexibility provided by internationalization (which is ideally performed only once before localization, or as an integral part of ongoing development). Naming The terms are frequently abbreviated to the numeronyms i18n (where 18 stands for the number of letters between the first i and the last n in the word internationalization, a usage coined at Digital Equipment Corporation in the 1970s or 1980s) and l10n for localization, due to the length of the words. Some writers have the latter term capitalized (L10n) to help distinguish the two. Some companies, like IBM and Oracle, use the term globalization, g11n, for the combination of internationalization and localization. Microsoft defines internationalization as a combination of world-readiness and localization. World-readiness is a developer task, which enables a product to be used with multiple scripts and cultures (globalization) and separates user interface resources in a localizable format (localizability, abbreviated to L12y). Hewlett-Packard and HP-UX created a system called "National Language Support" or "Native Language Support" (NLS) to produce localizable software. Some vendors, including IBM use the term National Language Version (NLV) for localized versions of software products supporting only one specific locale. The term implies the existence of other alike NLV versions of the software for different markets; this terminology is not used where no internationalization and localization was undertaken and a software product only supports one language and locale in any version. Scope According to Software without frontiers, the design aspects to consider when internationalizing a product are "data encoding, data and documentation, software construction, hardware device support, and user interaction"; while the key design areas to consider when making a fully internationalized product from scratch are "user interaction, algorithm design and data formats, software services, and documentation". Translation is typically the most time-consuming component of language localization. This may involve: For film, video, and audio, translation of spoken words or music lyrics, often using either dubbing or subtitles Text translation for printed materials, and digital media (possibly including error messages and documentation) Potentially altering images and logos containing text to contain translations or generic icons Different translation lengths and differences in character sizes (e.g. between Latin alphabet letters and Chinese characters) can cause layouts that work well in one language to work poorly in others Consideration of differences in dialect, register or variety Writing conventions like: Formatting of numbers (especially decimal separator and digit grouping) Date and time format, possibly including the use of different calendars (e.g. 
the Islamic or the Japanese calendar) Standard locale data Computer software can encounter differences above and beyond straightforward translation of words and phrases, because computer programs can generate content dynamically. These differences may need to be taken into account by the internationalization process in preparation for translation. Many of these differences are so regular that a conversion between languages can be easily automated. The Common Locale Data Repository by Unicode provides a collection of such differences. Its data is used by major operating systems, including Microsoft Windows, macOS and Debian, and by major Internet companies or projects such as Google and the Wikimedia Foundation. Examples of such differences include: Different "scripts" in different writing systems use different characters – a different set of letters, syllograms, logograms, or symbols. Modern systems use the Unicode standard to represent many different languages with a single character encoding. Writing direction is left to right in most European languages, right-to-left in Hebrew and Arabic, or both in boustrophedon scripts, and optionally vertical in some Asian languages. Complex text layout, for languages where characters change shape depending on context Capitalization exists in some scripts and not in others Different languages and writing systems have different text sorting rules Different languages have different numeral systems, which might need to be supported if Western Arabic numerals are not used Different languages have different pluralization rules, which can complicate programs that dynamically display numerical content. Other grammar rules might also vary, e.g. genitive. Different languages use different punctuation (e.g. quoting text using double-quotes (" ") as in English, or guillemets (« ») as in French) Keyboard shortcuts can only make use of buttons on the keyboard layout which is being localized for. If a shortcut corresponds to a word in a particular language (e.g. Ctrl-s stands for "save" in English), it may need to be changed. National conventions Different countries have different economic conventions, including variations in: Paper sizes Broadcast television systems and popular storage media Telephone number formats Postal address formats, postal codes, and choice of delivery services Currency (symbols, positions of currency markers, and reasonable amounts due to different inflation histories) – ISO 4217 codes are often used for internationalization System of measurement Battery sizes Voltage and current standards In particular, the United States and Europe differ in most of these cases. Other areas often follow one of these. Specific third-party services, such as online maps, weather reports, or payment service providers, might not be available worldwide from the same carriers, or at all. Time zones vary across the world, and this must be taken into account if a product originally only interacted with people in a single time zone. For internationalization, UTC is often used internally and then converted into a local time zone for display purposes. 
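As a small illustration of the convention just described of keeping timestamps in UTC and converting them only for display, here is a Python sketch; the chosen zones are arbitrary examples, and the zoneinfo module (standard library since Python 3.9) may require the tzdata package on some platforms:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Keep the canonical timestamp in UTC; convert to a user's zone only when rendering it.
stored_utc = datetime.now(timezone.utc)
for tz_name in ("Europe/Paris", "Asia/Tokyo"):  # illustrative zones
    local = stored_utc.astimezone(ZoneInfo(tz_name))
    print(tz_name, local.strftime("%Y-%m-%d %H:%M %Z"))
```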
Different countries have different legal requirements, meaning for example: Regulatory compliance may require customization for a particular jurisdiction, or a change to the product as a whole, such as: Privacy law compliance Additional disclaimers on a website or packaging Different consumer labelling requirements Compliance with export restrictions and regulations on encryption Compliance with an Internet censorship regime or subpoena procedures Requirements for accessibility Collecting different taxes, such as sales tax, value added tax, or customs duties Sensitivity to different political issues, like geographical naming disputes and disputed borders shown on maps (e.g., India has proposed a bill that would make failing to show Kashmir and other areas as intended by the government a crime) Government-assigned numbers have different formats (such as passports, Social Security Numbers and other national identification numbers) Localization also may take into account differences in culture, such as: Local holidays Personal name and title conventions Aesthetics Comprehensibility and cultural appropriateness of images and color symbolism Ethnicity, clothing, and socioeconomic status of people and architecture of locations pictured Local customs and conventions, such as social taboos, popular local religions, or superstitions such as blood types in Japanese culture vs. astrological signs in other cultures Business process for internationalizing software To internationalize a product, it is important to look at a variety of markets that the product will foreseeably enter. Details such as field length for street addresses, unique format for the address, ability to make the postal code field optional to address countries that do not have postal codes or the state field for countries that do not have states, plus the introduction of new registration flows that adhere to local laws are just some of the examples that make internationalization a complex project. A broader approach takes into account cultural factors regarding for example the adaptation of the business process logic or the inclusion of individual cultural (behavioral) aspects. Already in the 1990s, companies such as Bull used machine translation (Systran) on a large scale, for all their translation activity: human translators handled pre-editing (making the input machine-readable) and post-editing. Engineering Both in re-engineering an existing software or designing a new internationalized software, the first step of internationalization is to split each potentially locale-dependent part (whether code, text or data) into a separate module. Each module can then either rely on a standard library/dependency or be independently replaced as needed for each locale. The current prevailing practice is for applications to place text in resource files which are loaded during program execution as needed. These strings, stored in resource files, are relatively easy to translate. Programs are often built to reference resource libraries depending on the selected locale data. The storage for translatable and translated strings is sometimes called a message catalog as the strings are called messages. The catalog generally comprises a set of files in a specific localization format and a standard library to handle said format. One software library and format that aids this is gettext. Thus to get an application to support multiple languages one would design the application to select the relevant language resource file at runtime. 
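A minimal sketch of the message-catalog approach described above, using Python's standard gettext module; the text domain "myapp", the locale directory layout, and the requested language are illustrative assumptions:

```python
import gettext

# Look for ./locale/de/LC_MESSAGES/myapp.mo; fall back to the untranslated strings if it is absent.
translation = gettext.translation("myapp", localedir="locale", languages=["de"], fallback=True)
_ = translation.gettext

print(_("Welcome"))                                         # translated if the catalog provides it
print(translation.ngettext("%d file", "%d files", 3) % 3)   # plural-aware lookup
```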
The code required to manage data entry verification and many other locale-sensitive data types must also support differing locale requirements. Modern development systems and operating systems include sophisticated libraries for international support of these types; see also Standard locale data above. Many localization issues (e.g. writing direction, text sorting) require more profound changes in the software than text translation. For example, OpenOffice.org achieves this with compilation switches. Process A globalization method includes, after planning, three implementation steps: internationalization, localization and quality assurance. To some degree (e.g. for quality assurance), development teams include someone who handles the basic/central stages of the process, which then enables all the others. Such persons typically understand foreign languages and cultures and have some technical background. Specialized technical writers are required to construct a culturally appropriate syntax for potentially complicated concepts, coupled with engineering resources to deploy and test the localization elements. Once properly internationalized, software can rely on more decentralized models for localization: free and open source software usually relies on self-localization by end-users and volunteers, sometimes organized in teams. The GNOME project, for example, has volunteer translation teams for over 100 languages. MediaWiki supports over 500 languages, of which about 100 are mostly complete. When translating existing text to other languages, it is difficult to maintain the parallel versions of texts throughout the life of the product. For instance, if a message displayed to the user is modified, all of the translated versions must be changed. Independent software vendors such as Microsoft may provide reference software localization guidelines for developers. The software localization language may differ from the local written language. Commercial considerations In a commercial setting, the benefit of localization is access to more markets. In the early 1980s, Lotus 1-2-3 took two years to separate program code and text and lost the market lead in Europe over Microsoft Multiplan. MicroPro found that using an Austrian translator for the West German market caused its WordStar documentation to, an executive said, not "have the tone it should have had". When Tandy Corporation needed French and German translations of English error messages for the TRS-80 Model 4, the company's Belgium office and five translators in the US produced six different versions that varied on the gender of computer components. However, there are considerable costs involved, which go far beyond engineering. Further, business operations must adapt to manage the production, storage and distribution of multiple discrete localized products, which are often being sold in completely different currencies, regulatory environments and tax regimes. Finally, sales, marketing and technical support must also facilitate their operations in the new languages, to support customers for the localized products. Particularly for relatively small language populations, it may never be economically viable to offer a localized product. Even where large language populations could justify localization for a given product, and a product's internal structure already permits localization, a given software developer or publisher may lack the size and sophistication to manage the ancillary functions associated with operating in multiple locales.
See also Subcomponents and standards: Bidirectional script support International Components for Unicode Language code Language localization Website localization Related concepts: Computer accessibility Computer Russification, localization into Russian language Separation of concerns Methods and examples: Game localization Globalization Management System Pseudolocalization, a software testing method for testing a software product's readiness for localization. Other: Input method editor Language industry References Further reading External links Localization vs. Internationalization by The World Wide Web Consortium Business terms Globalization Information and communication technologies for development International trade Natural language and computing Technical communication Translation Transliteration Word coinage
Internationalization and localization
Technology
2,610
48,886,529
https://en.wikipedia.org/wiki/Carpogonium
The carpogonium (plural carpogonia) is the female organ in the red algae (Rhodophyta), which have a highly specialized type of reproduction. It contains the reproductive nucleus. It may contain a number of cells, usually without chloroplasts. It shows an elongated process which is the receptive organ for the male gametes. It gives rise to the carpospores. It may also have hairlike structures called trichogynes which receive sperm before fertilization takes place. References Red algae
Carpogonium
Biology
109
5,060,723
https://en.wikipedia.org/wiki/Signal%20conditioning
In electronics and signal processing, signal conditioning is the manipulation of an analog signal in such a way that it meets the requirements of the next stage for further processing. In an analog-to-digital converter (ADC) application, signal conditioning includes voltage or current limiting and anti-aliasing filtering. In control engineering applications, it is common to have a sensing stage (which consists of a sensor), a signal conditioning stage (where usually amplification of the signal is done) and a processing stage (often carried out by an ADC and a micro-controller). Operational amplifiers (op-amps) are commonly employed to carry out the amplification of the signal in the signal conditioning stage. In some transducers, signal conditioning is integrated with the sensor, for example in Hall effect sensors. In power electronics, before processing input signals sensed by devices such as voltage and current sensors, signal conditioning scales the signals to a level acceptable to the microprocessor. Inputs Signal inputs accepted by signal conditioners include DC voltage and current, AC voltage and current, frequency and electric charge. Sensor inputs can come from an accelerometer, thermocouple, thermistor, resistance thermometer, strain gauge or bridge, or an LVDT or RVDT. Specialized inputs include encoder, counter or tachometer, timer or clock, relay or switch, and other specialized inputs. Outputs for signal conditioning equipment can be voltage, current, frequency, timer or counter, relay, resistance or potentiometer, and other specialized outputs. Processes Signal conditioning can include amplification, filtering, converting, range matching, isolation and any other processes required to make sensor output suitable for processing after conditioning. Input Coupling Use AC coupling when the signal contains a large DC component. If you enable AC coupling, you remove the large DC offset for the input amplifier and amplify only the AC component. This configuration makes effective use of the ADC dynamic range. Filtering Filtering is the most common signal conditioning function, as usually not all of the signal frequency spectrum contains valid data. For example, the 50 or 60 Hz AC power lines present in most environments induce noise on signals that can cause interference if amplified. Amplification Signal amplification performs two important functions: it increases the resolution of the input signal, and it increases its signal-to-noise ratio. For example, the output of an electronic temperature sensor, which is typically in the millivolt range, may be too low for an analog-to-digital converter (ADC) to process directly. In this case it is necessary to bring the voltage level up to that required by the ADC. Amplifiers commonly used for signal conditioning include sample and hold amplifiers, peak detectors, log amplifiers, antilog amplifiers, instrumentation amplifiers and programmable gain amplifiers. Attenuation Attenuation, the opposite of amplification, is necessary when voltages to be digitized are beyond the ADC range. This form of signal conditioning decreases the input signal amplitude so that the conditioned signal is within ADC range. Attenuation is typically necessary when measuring voltages that are more than 10 V. (A brief numeric illustration of this gain, attenuation and filter sizing appears below.) Excitation Some sensors require an external voltage or current source for excitation; these sensors are called active sensors (e.g. a temperature sensor such as a thermistor or RTD, a pressure sensor (piezo-resistive or capacitive), etc.).
The stability and precision of the excitation signal directly relate to the sensor accuracy and stability. Linearization Linearization is necessary when sensors produce voltage signals that are not linearly related to the physical measurement. Linearization is the process of interpreting the signal from the sensor and can be done either with signal conditioning or through software. Electrical isolation Signal isolation may be used to pass the signal from the source to the measuring device without a physical connection. It is often used to isolate possible sources of signal perturbations that could otherwise follow the electrical path from the sensor to the processing circuitry. In some situations, it may be important to isolate the potentially expensive equipment used to process the signal after conditioning from the sensor. Magnetic or optical isolation can be used. Magnetic isolation transforms the signal from a voltage to a magnetic field so the signal can be transmitted without a physical connection (for example, using a transformer). Optical isolation works by using an electronic signal to modulate a signal encoded by light transmission (optical encoding). The decoded light transmission is then used as input for the next stage of processing. Surge protection A surge protector absorbs voltage spikes to protect the next stage from damage. References Electrical engineering
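As referenced above, the basic gain, attenuation and anti-aliasing choices reduce to simple arithmetic. A minimal Python sketch, in which the sensor span, ADC full scale, sampling rate and component values are all illustrative assumptions:

```python
import math

# Gain needed to map a +/-50 mV sensor swing (100 mV span) onto a 0-3.3 V ADC input (offsetting ignored).
adc_full_scale_v = 3.3
sensor_span_v = 0.100
print(f"required gain ~ {adc_full_scale_v / sensor_span_v:.0f}")        # ~ 33

# Attenuation (divider ratio) to bring a 0-24 V signal inside the same ADC range.
print(f"required attenuation ~ {24.0 / adc_full_scale_v:.1f} : 1")      # ~ 7.3 : 1

# First-order RC anti-aliasing filter: put the -3 dB cutoff well below the ADC Nyquist frequency.
sample_rate_hz = 1000.0
cutoff_hz = sample_rate_hz / 10            # arbitrary design margin below Nyquist (500 Hz)
r_ohms = 10_000.0                          # chosen resistor value
c_farads = 1.0 / (2 * math.pi * r_ohms * cutoff_hz)
print(f"C ~ {c_farads * 1e9:.0f} nF for a {cutoff_hz:.0f} Hz cutoff")    # ~ 159 nF
```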
Signal conditioning
Engineering
939
56,325,455
https://en.wikipedia.org/wiki/High%20performance%20organization
The high performance organization (HPO) is a conceptual framework for organizations that leads to improved, sustainable organizational performance. It is an alternative model to the bureaucratic model known as Taylorism. There is not a clear definition of the high performance organization, but research shows that organizations that fit this model all hold a common set of characteristics. Chief among these is the ability to recognize the need to adapt to the surroundings that the organization operates in. High performance organizations can quickly and efficiently change their operating structure and practices to meet needs. These organizations focus on long term success while delivering on actionable short term goals. These organizations are flexible, customer focused, and able to work highly effectively in teams. The culture and management of these organizations support flatter hierarchies, teamwork, diversity, and adaptability to the environment which are all of paramount success to this type of organization. Compared to other organizations, high performance organizations spend much more time on continuously improving their core capabilities and invest in their workforce, leading to increased growth and performance. High performance organizations are sometimes labeled as high commitment organizations. History World War II ushered in a great amount of increased manufacturing and industrial production. With this came an increased concern over the human impact on work. The Hawthorne studies were part of the reason why more importance was placed on considering the human impact of work. During this period, industrial manufacturers followed the standardized large scale production method, characterized by mass production, scientific management, and stringent division of labor. This led to increased boredom among blue collar workers who would do the same repetitive job on a daily basis. Management in this period was characterized by careful and calculated monitoring which would cause workers to feel a sense of distrust. By the 1960s management for the industrial manufacturing industries had difficulty attracting and retaining its workforce. During the 1960s there was a push for job enrichment. This grew out of the sociotechnical systems approach to work, which was pioneered by the Tavistock Institute. This system is characterized by the open systems model and self-directed work team, which are also key to the success of a high performance organization. Research on the sociotechnical systems approach to work has shown that this approach is related to increased employee satisfaction and motivation. Another important step towards the high performance organization was the Japanese Revolution in manufacturing, which pointed out another flaw to the scientific model of production. Because workers were so focused on only doing one monotonous task, they were not aware of the bigger picture. Most employees were completely unaware of the quality of the products that they were producing. The focus that Japanese manufacturing companies put on quality, through their early quality circles, eventually led to the implementation of total quality management which is a key factor of producing quality products that meet consumer demands at low price points. Another reason for the move away from the older, highly bureaucratic approach towards the high performance organization was the rapid change in the business environment since the 1980s. 
The 1980s were characterized by a difficulty in American production due to increased competition from foreign firms, increased inflation in oil prices, and a decrease in productivity. This change was characterized by increased globalization, an increase in diversity in the workplace, large technological advances, and increased competition. To better meet the demands of the changing marketplace, organizations first tried to implement increased technological innovation in their production facilities in order to regain the competitive advantage. These companies soon came to realize that the human factor was also necessary in regaining their competitive footing. The realization of the importance of the human factors in work has led organizations to rely on the high performance organization to drive production and increase their employees' quality of work. Characteristics Organizational design High performance organizations value teamwork and collaboration as priorities in their organizational design. These organizations flatten organizational hierarchies and make it easier for cross-functional collaboration to occur. They do this by reducing barriers between functional units and getting rid of complex organizational bureaucracies. In an HPO, relationships are strengthened among employees who perform distinct functions or who work only within a given business silo, which improves organizational performance. This is particularly evident in organizations that exhibit highly interdependent work, such as hospitals. High performance organizations value sharing of information at all levels by incentivizing information sharing in both bottom-up and top-down processes. The design is also very malleable and can adjust to both external and internal concerns. Teams The most apparent difference in the organizational design of HPOs is their reliance on teamwork. Teams operate semi-autonomously to set schedules, manage quality, and solve problems. These self-directed work teams thrive on information sharing from all levels of the organization and are multi-skilled, with the flexibility to solve problems without the need for direct supervision. Members of self-directed work teams have been shown to have greater job satisfaction, more autonomy and idea input, and improved work variety. These teams are often small in number, typically ranging from 7–15 members. Members of these teams share complementary skills and membership is often cross-functional. In order for these teams to truly operate at high performance, they must buy into the teamwork framework. Team members who are part of high performance teams tend to have strong personal commitment to one another's growth and success, and to the organization's growth and success. The high sense of commitment exhibited by teams in a high performance organization allows these teams to have a better sense of purpose, more accountability, and more actionable goals, which allows them greater productivity. High performance teams move through the same stage development framework, popularized by Tuckman, as other teams. They must be guided by a competent leader through the stages until they are ready to truly operate at high performance. Individuals HPOs foster a learning organization where they invest heavily in their workforce. They do this typically through leadership development and competency management. HPOs will develop a clear set of core competencies that they want the organization's employees to master.
They will invest in keeping these competencies prominent through training and development. These organizations also reinvent the way they refer to their employees in order to place value on the team concept. Employee titles will reflect this. They will often be referred to as team members or associates as opposed to employees or staff. This again increases employee involvement and makes employees more committed to the larger goals and competencies that the organization places value in. Leaders The roles of managers in an HPO are also reinvented. Traditional models for organizations would have leaders closely monitor or supervise their teams. Team leaders in HPOs are more concerned with long term strategic planning and direction. They take a more hands off approach and their titles reflect this change in responsibility. Leaders in HPOs trust in their employees to make the right decisions. They act as a coach to their team members by giving them support and keeping them focused on the project at hand. These leaders are able to lead depending on the situation and have the capability to adjust their leadership style based upon the needs of their team members. They know when to inspire people with direct communication and also have the ability to read when a more hands off approach is necessary. Although these leaders act with a hands-off approach, they hold non-performers accountable for not reaching their goals. Leadership practices are also in line with the company's vision, values, and goals. Leaders of these organizations make all of their decisions with the organization's values in mind. Leadership behavior that is consistent with the organization's vision involves setting clear expectations, promoting a sense of belonging, fostering employee involvement in decision making, and encouraging learning and development. Leaders in an HPO also have the responsibility of understanding and being able to quickly make important decisions about the always changing marketplace in which their organization operates. Leaders should have the ability to anticipate changes in competition, technology, and economics within their market. Organization strategy and vision HPOs create strong vision, value, and mission statements which guide their organizations and align them with the outside environment. The mission, vision, and values of the organization act as foundations on which the organization is built. They inform employees what is rewarded and also what is not. HPOs implement vision statements that are specific, strategic, and carefully crafted. Leaders propagate the vision at all levels by ensuring that activities are aligned with vision and strategy of the organization. HPOs also set lofty, but measurable and achievable goals for their organization in order to guide their vision. The vision and strategy of the organization is made clear to employees at all levels. A common understanding of the organizations strategy and direction creates a strategic mind-set among employees that helps the organization achieve its goals. Innovative practices HPOs reward and incentivize behavior that is in line with the organizations goals. They implement reward programs that aim to benefit employees who follow the values of the organization. HPOs streamline information sharing across all levels of the organization. Information sharing is streamlined via communications channels set up with state of the art information technology. Internal communication is interactive and open exchange is rewarded. 
Typically, HPOs implement innovative ICT networks within their organization. While HPOs do streamline their information, they also share information across all levels of the organization to make sure that everyone is sharing in the same vision. An HPO is constantly improving its products, manufacturing processes, or services in order to gain a competitive advantage. These organizations focus on the efficiency of all aspects of their product. They implement various process and quality optimization models such as total quality management, Lean Six Sigma, quality circles, process re-engineering, and lean manufacturing. HPOs have innovative human resources practices. For example, employees may be involved in the hiring process. All team members may be involved when hiring a new member to join that team. Human resources may also implement pay-for-knowledge or pay-for-skill programs where employees are monetarily rewarded for attending training sessions that further their skills and abilities. Generally, there is a more focused approach to training where specific skills are targeted by the organization through data collection and needs assessment. These skills are the focus of the training and development programs that are implemented by human resources. It is typical for these organizations to have an internal learning and organizational development team which dedicates its time to conducting skill- and competency-based needs assessments and then training employees where it is needed. Flexibility and adaptability The success of HPOs is due to their ability to have structures in place that allow them to quickly adjust to the environment that they operate within. HPOs have the ability to reconfigure themselves to meet the demands of the marketplace and avoid its threats. HPOs constantly survey and monitor the environment to understand the context of their business, identify trends, and seek out any competitors. An HPO's growth is facilitated by creating partnerships and networks with other organizations after careful examination of the value added by entering into these relationships. They have a high external orientation and strive to meet customer demands. They meet and exceed customer demands by fostering close relationships with customers, understanding their customers' values, and being responsive to their customers' needs. HPOs maintain relationships with their stakeholders by creating mutually beneficial relationships. References Organizational behavior
High performance organization
Biology
2,235
62,205,700
https://en.wikipedia.org/wiki/Androgen%20conjugate
An androgen conjugate is a conjugate of an androgen, such as testosterone. They occur naturally in the body as metabolites of androgens. Androgen conjugates include sulfate esters and glucuronide conjugates and are formed by sulfotransferase and glucuronosyltransferase enzymes, respectively. In contrast to androgens, conjugates of androgens do not bind to the androgen receptor and are hormonally inactive. However, androgen conjugates can be converted back into active androgens through enzymes like steroid sulfatase. Examples of androgen conjugates include the sulfates testosterone sulfate, dehydroepiandrosterone sulfate, androstenediol sulfate, dihydrotestosterone sulfate, and androsterone sulfate, and the glucuronides testosterone glucuronide, dihydrotestosterone glucuronide, androsterone glucuronide, and androstanediol glucuronide. Androgen conjugates are conjugated at the C3 and/or C17β positions, where hydroxyl groups are available. See also Androgen ester Estrogen conjugate References Androstanes Human metabolites Steroid hormones Testosterone
Androgen conjugate
Chemistry,Biology
290
63,847
https://en.wikipedia.org/wiki/Formaldehyde
Formaldehyde (systematic name methanal) is an organic compound with the chemical formula CH2O and structure H2C=O. The compound is a pungent, colourless gas that polymerises spontaneously into paraformaldehyde. It is stored as aqueous solutions (formalin), which consist mainly of the hydrate CH2(OH)2. It is the simplest of the aldehydes (R−CHO). As a precursor to many other materials and chemical compounds, in 2006 the global production of formaldehyde was estimated at 12 million tons per year. It is mainly used in the production of industrial resins, e.g., for particle board and coatings. Small amounts also occur naturally. Formaldehyde is classified as a carcinogen and can cause respiratory and skin irritation upon exposure. Forms Formaldehyde is more complicated than many simple carbon compounds in that it adopts several diverse forms. These compounds can often be used interchangeably and can be interconverted. Molecular formaldehyde. A colorless gas with a characteristic pungent, irritating odor. It is stable at about 150 °C, but it polymerizes when condensed to a liquid. 1,3,5-Trioxane, with the formula (CH2O)3. It is a white solid that dissolves without degradation in organic solvents. It is a trimer of molecular formaldehyde. Paraformaldehyde, with the formula HO(CH2O)nH. It is a white solid that is insoluble in most solvents. Methanediol, with the formula CH2(OH)2. This compound also exists in equilibrium with various oligomers (short polymers), depending on the concentration and temperature. A saturated water solution, of about 40% formaldehyde by volume or 37% by mass, is called "100% formalin". A small amount of stabilizer, such as methanol, is usually added to suppress oxidation and polymerization. A typical commercial-grade formalin may contain 10–12% methanol in addition to various metallic impurities. "Formaldehyde" was first used as a generic trademark in 1893 following a previous trade name, "formalin". Structure and bonding Molecular formaldehyde contains a central carbon atom with a double bond to the oxygen atom and a single bond to each hydrogen atom. This structure is summarised by the condensed formula H2C=O. The molecule is planar, Y-shaped, and its molecular symmetry belongs to the C2v point group. The precise molecular geometry of gaseous formaldehyde has been determined by gas electron diffraction and microwave spectroscopy. The bond lengths are 1.21 Å for the carbon–oxygen bond and around 1.11 Å for the carbon–hydrogen bond, while the H–C–H bond angle is 117°, close to the 120° angle found in an ideal trigonal planar molecule. Some excited electronic states of formaldehyde are pyramidal rather than planar as in the ground state. Occurrence Processes in the upper atmosphere contribute more than 80% of the total formaldehyde in the environment. Formaldehyde is an intermediate in the oxidation (or combustion) of methane, as well as of other carbon compounds, e.g. in forest fires, automobile exhaust, and tobacco smoke. When produced in the atmosphere by the action of sunlight and oxygen on atmospheric methane and other hydrocarbons, it becomes part of smog. Formaldehyde has also been detected in outer space. Formaldehyde and its adducts are ubiquitous in nature. Food may contain formaldehyde at levels of 1–100 mg/kg. Formaldehyde, formed in the metabolism of the amino acids serine and threonine, is found in the bloodstream of humans and other primates at concentrations of approximately 50 micromolar.
Experiments in which animals are exposed to an atmosphere containing isotopically labeled formaldehyde have demonstrated that even in deliberately exposed animals, the majority of formaldehyde-DNA adducts found in non-respiratory tissues are derived from endogenously produced formaldehyde. Formaldehyde does not accumulate in the environment, because it is broken down within a few hours by sunlight or by bacteria present in soil or water. Humans metabolize formaldehyde quickly, converting it to formic acid. It nonetheless presents significant health concerns as a contaminant. Interstellar formaldehyde Formaldehyde appears to be a useful probe in astrochemistry due to the prominence of the 1₁₀←1₁₁ and 2₁₁←2₁₂ K-doublet transitions. It was the first polyatomic organic molecule detected in the interstellar medium. Since its initial detection in 1969, it has been observed in many regions of the galaxy. Because of the widespread interest in interstellar formaldehyde, it has been extensively studied, yielding new extragalactic sources. A proposed mechanism for its formation is the hydrogenation of CO ice: H + CO → HCO, followed by HCO + H → CH2O. HCN, HNC, H2CO, and dust have also been observed inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON). Synthesis and industrial production Laboratory synthesis Formaldehyde was discovered in 1859 by the Russian chemist Aleksandr Butlerov (1828–1886) when he attempted to synthesize methanediol ("methylene glycol") from iodomethane and silver oxalate. In his paper, Butlerov referred to formaldehyde as "dioxymethylen" (methylene dioxide) because his empirical formula for it was incorrect, as atomic weights were not precisely determined until the Karlsruhe Congress. The compound was identified as an aldehyde by August Wilhelm von Hofmann, who first announced its production by passing methanol vapor in air over hot platinum wire. With modifications, Hofmann's method remains the basis of the present-day industrial route. Solution routes to formaldehyde also entail oxidation of methanol or iodomethane. Industry Formaldehyde is produced industrially by the catalytic oxidation of methanol. The most common catalysts are silver metal, iron(III) oxide, iron molybdenum oxides (e.g. iron(III) molybdate) with a molybdenum-enriched surface, or vanadium oxides. In the commonly used formox process, methanol and oxygen react at c. 250–400 °C in the presence of iron oxide in combination with molybdenum and/or vanadium to produce formaldehyde according to the chemical equation: 2CH3OH + O2 → 2CH2O + 2H2O The silver-based catalyst usually operates at a higher temperature, about 650 °C. Two chemical reactions on it simultaneously produce formaldehyde: that shown above and the dehydrogenation reaction: CH3OH → CH2O + H2 In principle, formaldehyde could be generated by oxidation of methane, but this route is not industrially viable because methanol is more easily oxidized than methane. Biochemistry Formaldehyde is produced via several enzyme-catalyzed routes. Living beings, including humans, produce formaldehyde as part of their metabolism. Formaldehyde is key to several bodily functions (e.g. epigenetics), but its amount must also be tightly controlled to avoid self-poisoning. Serine hydroxymethyltransferase can decompose serine into formaldehyde and glycine, according to this reaction: HOCH2CH(NH2)CO2H → CH2O + H2C(NH2)CO2H.
Methylotrophic microbes convert methanol into formaldehyde and energy via methanol dehydrogenase: CH3OH → CH2O + 2e− + 2H+ Other routes to formaldehyde include oxidative demethylations, semicarbazide-sensitive amine oxidases, dimethylglycine dehydrogenases, lipid peroxidases, P450 oxidases, and N-methyl group demethylases. Formaldehyde is catabolized by alcohol dehydrogenase ADH5 and aldehyde dehydrogenase ALDH2. Organic chemistry Formaldehyde is a building block in the synthesis of many other compounds of specialised and industrial significance. It exhibits most of the chemical properties of other aldehydes but is more reactive. Polymerization and hydration Monomeric CH2O is a gas and is rarely encountered in the laboratory. Aqueous formaldehyde, unlike some other small aldehydes (which need specific conditions to oligomerize through aldol condensation), oligomerizes spontaneously under ordinary conditions. The trimer 1,3,5-trioxane, (CH2O)3, is a typical oligomer. Many cyclic oligomers of other sizes have been isolated. Similarly, formaldehyde hydrates to give the geminal diol methanediol, which condenses further to form hydroxy-terminated oligomers HO(CH2O)nH. The polymer is called paraformaldehyde. The higher the concentration of formaldehyde, the more the equilibrium shifts towards polymerization. Diluting with water or increasing the solution temperature, as well as adding alcohols (such as methanol or ethanol), lowers that tendency. Gaseous formaldehyde polymerizes at active sites on vessel walls, but the mechanism of the reaction is unknown. Small amounts of hydrogen chloride, boron trifluoride, or stannic chloride present in gaseous formaldehyde provide a catalytic effect and make the polymerization rapid. Cross-linking reactions Formaldehyde forms cross-links by first combining with a protein to form methylol, which loses a water molecule to form a Schiff base. The Schiff base can then react with DNA or protein to create a cross-linked product. This reaction is the basis for the most common process of chemical fixation. Oxidation and reduction Formaldehyde is readily oxidized by atmospheric oxygen into formic acid. For this reason, commercial formaldehyde is typically contaminated with formic acid. Formaldehyde can be hydrogenated into methanol. In the Cannizzaro reaction, formaldehyde and base react to produce formic acid and methanol, a disproportionation reaction. Hydroxymethylation and chloromethylation Formaldehyde reacts with many compounds, resulting in hydroxymethylation: X−H + CH2O → X−CH2OH (X = R2N, RC(O)NR', SH). The resulting hydroxymethyl derivatives typically react further. Thus, amines give hexahydro-1,3,5-triazines: 3RNH2 + 3CH2O → (RNCH2)3 + 3H2O Similarly, when combined with hydrogen sulfide, it forms trithiane: 3CH2O + 3H2S → (CH2S)3 + 3H2O In the presence of acids, it participates in electrophilic aromatic substitution reactions with aromatic compounds, resulting in hydroxymethylated derivatives: ArH + CH2O → ArCH2OH When conducted in the presence of hydrogen chloride, the product is the chloromethyl compound, as described in the Blanc chloromethylation. If the arene is electron-rich, as in phenols, elaborate condensations ensue. With 4-substituted phenols one obtains calixarenes. Phenol results in polymers. Other reactions Many amino acids react with formaldehyde. Cysteine converts to thioproline. Uses Industrial applications Formaldehyde is a common precursor to more complex compounds and materials.
In approximate order of decreasing consumption, products generated from formaldehyde include urea formaldehyde resin, melamine resin, phenol formaldehyde resin, polyoxymethylene plastics, 1,4-butanediol, and methylene diphenyl diisocyanate. The textile industry uses formaldehyde-based resins as finishers to make fabrics crease-resistant. When condensed with phenol, urea, or melamine, formaldehyde produces, respectively, hard thermoset phenol formaldehyde resin, urea formaldehyde resin, and melamine resin. These polymers are permanent adhesives used in plywood and carpeting. They are also foamed to make insulation, or cast into moulded products. Production of formaldehyde resins accounts for more than half of formaldehyde consumption. Formaldehyde is also a precursor to polyfunctional alcohols such as pentaerythritol, which is used to make paints and explosives. Other formaldehyde derivatives include methylene diphenyl diisocyanate, an important component in polyurethane paints and foams, and hexamine, which is used in phenol-formaldehyde resins as well as the explosive RDX. Condensation with acetaldehyde affords pentaerythritol, a chemical necessary in synthesizing PETN, a high explosive. Niche uses Disinfectant and biocide An aqueous solution of formaldehyde can be useful as a disinfectant as it kills most bacteria and fungi (including their spores). It is used as an additive in vaccine manufacturing to inactivate toxins and pathogens. Formaldehyde releasers are used as biocides in personal care products such as cosmetics. Although present at levels not normally considered harmful, they are known to cause allergic contact dermatitis in certain sensitised individuals. Aquarists use formaldehyde as a treatment for the parasites Ichthyophthirius multifiliis and Cryptocaryon irritans. Formaldehyde is one of the main disinfectants recommended for destroying anthrax. Formaldehyde is also approved for use in the manufacture of animal feeds in the US. It is an antimicrobial agent used to maintain complete animal feeds or feed ingredients Salmonella-negative for up to 21 days. Tissue fixative and embalming agent Formaldehyde preserves or fixes tissue or cells. The process involves cross-linking of primary amino groups. The European Union has banned the use of formaldehyde as a biocide (including embalming) under the Biocidal Products Directive (98/8/EC) due to its carcinogenic properties. Countries with a strong tradition of embalming corpses, such as Ireland and other colder-weather countries, have raised concerns. Despite reports to the contrary, no decision on the inclusion of formaldehyde on Annex I of the Biocidal Products Directive for product-type 22 (embalming and taxidermist fluids) had been made. Formaldehyde-based crosslinking is exploited in ChIP-on-chip or ChIP-sequencing genomics experiments, where DNA-binding proteins are cross-linked to their cognate binding sites on the chromosome and analyzed to determine what genes are regulated by the proteins. Formaldehyde is also used as a denaturing agent in RNA gel electrophoresis, preventing RNA from forming secondary structures. A solution of 4% formaldehyde fixes pathology tissue specimens at about one mm per hour at room temperature. Drug testing Formaldehyde and 18 M (concentrated) sulfuric acid make Marquis reagent, which can identify alkaloids and other compounds.
Photography In photography, formaldehyde is used in low concentrations for the process C-41 (color negative film) stabilizer in the final wash step, as well as in the process E-6 pre-bleach step, to make it unnecessary in the final wash. Due to improvements in dye coupler chemistry, more modern (2006 or later) E-6 and C-41 films do not need formaldehyde, as their dyes are already stable. Safety In view of its widespread use, toxicity, and volatility, formaldehyde poses a significant danger to human health. In 2011, the US National Toxicology Program described formaldehyde as "known to be a human carcinogen". Chronic inhalation Concerns are associated with chronic (long-term) exposure by inhalation as may happen from thermal or chemical decomposition of formaldehyde-based resins and the production of formaldehyde resulting from the combustion of a variety of organic compounds (for example, exhaust gases). As formaldehyde resins are used in many construction materials, it is one of the more common indoor air pollutants. At concentrations above 0.1 ppm in air, formaldehyde can irritate the eyes and mucous membranes. Formaldehyde inhaled at this concentration may cause headaches, a burning sensation in the throat, and difficulty breathing, and can trigger or aggravate asthma symptoms. The CDC considers formaldehyde as a systemic poison. Formaldehyde poisoning can cause permanent changes in the nervous system's functions. A 1988 Canadian study of houses with urea-formaldehyde foam insulation found that formaldehyde levels as low as 0.046 ppm were positively correlated with eye and nasal irritation. A 2009 review of studies has shown a strong association between exposure to formaldehyde and the development of childhood asthma. A theory was proposed for the carcinogenesis of formaldehyde in 1978. In 1987 the United States Environmental Protection Agency (EPA) classified it as a probable human carcinogen, and after more studies the WHO International Agency for Research on Cancer (IARC) in 1995 also classified it as a probable human carcinogen. Further information and evaluation of all known data led the IARC to reclassify formaldehyde as a known human carcinogen associated with nasal sinus cancer and nasopharyngeal cancer. Studies in 2009 and 2010 have also shown a positive correlation between exposure to formaldehyde and the development of leukemia, particularly myeloid leukemia. Nasopharyngeal and sinonasal cancers are relatively rare, with a combined annual incidence in the United States of < 4,000 cases. About 30,000 cases of myeloid leukemia occur in the United States each year. Some evidence suggests that workplace exposure to formaldehyde contributes to sinonasal cancers. Professionals exposed to formaldehyde in their occupation, such as funeral industry workers and embalmers, showed an increased risk of leukemia and brain cancer compared with the general population. Other factors are important in determining individual risk for the development of leukemia or nasopharyngeal cancer. In yeast, formaldehyde is found to perturb pathways for DNA repair and cell cycle. In the residential environment, formaldehyde exposure comes from a number of routes; formaldehyde can be emitted by treated wood products, such as plywood or particle board, but it is produced by paints, varnishes, floor finishes, and cigarette smoking as well. In July 2016, the U.S. EPA released a prepublication version of its final rule on Formaldehyde Emission Standards for Composite Wood Products. 
These new rules impact manufacturers, importers, distributors, and retailers of products containing composite wood, including fiberboard, particleboard, and various laminated products, who must comply with more stringent record-keeping and labeling requirements. The U.S. EPA allows no more than 0.016 ppm formaldehyde in the air in new buildings constructed for that agency. A U.S. EPA study found a new home measured 0.076 ppm when brand new and 0.045 ppm after 30 days. The Federal Emergency Management Agency (FEMA) has also announced limits on the formaldehyde levels in trailers purchased by that agency. The EPA recommends the use of "exterior-grade" pressed-wood products with phenol instead of urea resin to limit formaldehyde exposure, since pressed-wood products containing formaldehyde resins are often a significant source of formaldehyde in homes. The eyes are most sensitive to formaldehyde exposure: The lowest level at which many people can begin to smell formaldehyde ranges between 0.05 and 1 ppm. The maximum concentration value at the workplace is 0.3 ppm. In controlled chamber studies, individuals begin to sense eye irritation at about 0.5 ppm; 5 to 20 percent report eye irritation at 0.5 to 1 ppm; and greater certainty for sensory irritation occurred at 1 ppm and above. While some agencies have used a level as low as 0.1 ppm as a threshold for irritation, the expert panel found that a level of 0.3 ppm would protect against nearly all irritation. In fact, the expert panel found that a level of 1.0 ppm would avoid eye irritation—the most sensitive endpoint—in 75–95% of all people exposed. Formaldehyde levels in building environments are affected by a number of factors. These include the potency of formaldehyde-emitting products present, the ratio of the surface area of emitting materials to volume of space, environmental factors, product age, interactions with other materials, and ventilation conditions. Formaldehyde emits from a variety of construction materials, furnishings, and consumer products. The three products that emit the highest concentrations are medium density fiberboard, hardwood plywood, and particle board. Environmental factors such as temperature and relative humidity can elevate levels because formaldehyde has a high vapor pressure. Formaldehyde levels from building materials are the highest when a building first opens because materials would have less time to off-gas. Formaldehyde levels decrease over time as the sources suppress. In operating rooms, formaldehyde is produced as a byproduct of electrosurgery and is present in surgical smoke, exposing surgeons and healthcare workers to potentially unsafe concentrations. Formaldehyde levels in air can be sampled and tested in several ways, including impinger, treated sorbent, and passive monitors. The National Institute for Occupational Safety and Health (NIOSH) has measurement methods numbered 2016, 2541, 3500, and 3800. In June 2011, the twelfth edition of the National Toxicology Program (NTP) Report on Carcinogens (RoC) changed the listing status of formaldehyde from "reasonably anticipated to be a human carcinogen" to "known to be a human carcinogen." Concurrently, a National Academy of Sciences (NAS) committee was convened and issued an independent review of the draft U.S. EPA IRIS assessment of formaldehyde, providing a comprehensive health effects assessment and quantitative estimates of human risks of adverse effects. 
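The exposure thresholds above are quoted in ppm; converting them to mass concentrations, as some regulations state them, uses the molar mass of formaldehyde and the molar volume of an ideal gas. A small Python sketch of that standard conversion at 25 °C and 1 atm, with the example inputs taken from the thresholds mentioned above:

```python
# Convert a gas-phase concentration in ppm (by volume) to mg/m^3 at 25 °C and 1 atm.
MOLAR_MASS_FORMALDEHYDE_G_PER_MOL = 30.03
MOLAR_VOLUME_25C_L_PER_MOL = 24.45

def ppm_to_mg_per_m3(ppm, molar_mass=MOLAR_MASS_FORMALDEHYDE_G_PER_MOL):
    return ppm * molar_mass / MOLAR_VOLUME_25C_L_PER_MOL

for ppm in (0.1, 0.3, 1.0):  # example thresholds discussed in the text
    print(f"{ppm} ppm ~ {ppm_to_mg_per_m3(ppm):.2f} mg/m3")
```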
Acute irritation and allergic reaction For most people, irritation from formaldehyde is temporary and reversible, although formaldehyde can cause allergies and is part of the standard patch test series. In 2005–06, it was the seventh-most-prevalent allergen in patch tests (9.0%). People with formaldehyde allergy are advised to avoid formaldehyde releasers as well (e.g., Quaternium-15, imidazolidinyl urea, and diazolidinyl urea). People who suffer allergic reactions to formaldehyde tend to display lesions on the skin in the areas that have had direct contact with the substance, such as the neck or thighs (often due to formaldehyde released from permanent press finished clothing) or dermatitis on the face (typically from cosmetics). Formaldehyde has been banned in cosmetics in both Sweden and Japan. Other routes Formaldehyde occurs naturally, and is "an essential intermediate in cellular metabolism in mammals and humans." According to the American Chemistry Council, "Formaldehyde is found in every living system—from plants to animals to humans. It metabolizes quickly in the body, breaks down rapidly, is not persistent and does not accumulate in the body." The twelfth edition of the NTP Report on Carcinogens notes that "food and water contain measureable concentrations of formaldehyde, but the significance of ingestion as a source of formaldehyde exposure for the general population is questionable." Food formaldehyde generally occurs in a bound form, and formaldehyde is unstable in aqueous solution. In humans, ingestion of even a small amount of 37% formaldehyde solution can cause death. Other symptoms associated with ingesting such a solution include gastrointestinal damage (vomiting, abdominal pain) and systemic damage (dizziness). Formaldehyde exposure can be tested for in blood and/or urine by gas chromatography–mass spectrometry. Other methods include infrared detection, gas detector tubes, etc., of which high-performance liquid chromatography is the most sensitive. Regulation Several web articles claim that formaldehyde has been banned from manufacture or import into the European Union (EU) under REACH (Registration, Evaluation, Authorization, and restriction of Chemical substances) legislation. That is a misconception, as formaldehyde is not listed in Annex I of Regulation (EC) No 689/2008 (export and import of dangerous chemicals regulation), nor on a priority list for risk assessment. However, formaldehyde is banned from use in certain applications (preservatives for liquid-cooling and processing systems, slimicides, metalworking-fluid preservatives, and antifouling products) under the Biocidal Products Directive. In the EU, the maximum allowed concentration of formaldehyde in finished products is 0.2%, and any product that exceeds 0.05% has to include a warning that the product contains formaldehyde. In the United States, Congress passed a bill on July 7, 2010, regarding the use of formaldehyde in hardwood plywood, particle board, and medium density fiberboard. The bill limited the allowable amount of formaldehyde emissions from these wood products to 0.09 ppm, and required companies to meet this standard by January 2013. The final U.S. EPA rule specified maximum emissions of "0.05 ppm formaldehyde for hardwood plywood, 0.09 ppm formaldehyde for particleboard, 0.11 ppm formaldehyde for medium-density fiberboard, and 0.13 ppm formaldehyde for thin medium-density fiberboard." Formaldehyde was declared a toxic substance by the 1999 Canadian Environmental Protection Act.
The FDA is proposing a ban on hair relaxers with formaldehyde due to cancer concerns. Contaminant in food Scandals have broken in both the 2005 Indonesia food scare and 2007 Vietnam food scare regarding the addition of formaldehyde to foods to extend shelf life. In 2011, after a four-year absence, Indonesian authorities found foods with formaldehyde being sold in markets in a number of regions across the country. In August 2011, at least at two Carrefour supermarkets, the Central Jakarta Livestock and Fishery Sub-Department found cendol containing 10 parts per million of formaldehyde. In 2014, the owner of two noodle factories in Bogor, Indonesia, was arrested for using formaldehyde in noodles. 50 kg of formaldehyde was confiscated. Foods known to be contaminated included noodles, salted fish, and tofu. Chicken and beer were also rumored to be contaminated. In some places, such as China, manufacturers still use formaldehyde illegally as a preservative in foods, which exposes people to formaldehyde ingestion. In the early 1900s, it was frequently added by US milk plants to milk bottles as a method of pasteurization due to the lack of knowledge and concern regarding formaldehyde's toxicity. In 2011 in Nakhon Ratchasima, Thailand, truckloads of rotten chicken were treated with formaldehyde for sale in which "a large network", including 11 slaughterhouses run by a criminal gang, were implicated. In 2012, 1 billion rupiah (almost US$100,000) of fish imported from Pakistan to Batam, Indonesia, were found laced with formaldehyde. Formalin contamination of foods has been reported in Bangladesh, with stores and supermarkets selling fruits, fishes, and vegetables that have been treated with formalin to keep them fresh. However, in 2015, a Formalin Control Bill was passed in the Parliament of Bangladesh with a provision of life-term imprisonment as the maximum punishment as well as a maximum fine of 2,000,000 BDT but not less than 500,000 BDT for importing, producing, or hoarding formalin without a license. Formaldehyde was one of the chemicals used in 19th century industrialised food production that was investigated by Dr. Harvey W. Wiley with his famous 'Poison Squad' as part of the US Department of Agriculture. This led to the 1906 Pure Food and Drug Act, a landmark event in the early history of food regulation in the United States. See also 1,3-Dioxetane DMDM hydantoin Sawdust | Health impacts of sawdust Sulphobes Transition metal complexes of aldehydes and ketones Wood glue Wood preservation References Notes External links (gas) (solution) Formaldehyde from ChemSub Online Prevention guide—Formaldehyde in the Workplace (PDF) from the IRSST Formaldehyde from the National Institute for Occupational Safety and Health "Formaldehyde Added to 'Known Carcinogens' List Despite Lobbying by Chemical Industry"—video report by Democracy Now! Do you own a post-Katrina FEMA trailer? Check your VIN# So you're living in one of FEMA's Katrina trailers... What can you do? Alkanals Endogenous aldehydes Anatomical preservation Hazardous air pollutants IARC Group 1 carcinogens Chemical hazards Organic compounds with 1 carbon atom Aldehydes Indoor air pollution
Formaldehyde
Chemistry
6,180
1,743,941
https://en.wikipedia.org/wiki/Starch%20gelatinization
Starch gelatinization is the process of breaking down the intermolecular bonds of starch molecules in the presence of water and heat, allowing the hydrogen bonding sites (the hydroxyl hydrogen and oxygen) to engage more water. This irreversibly dissolves the starch granule in water. Water acts as a plasticizer. Process Three main processes happen to the starch granule: granule swelling, crystallite and double-helical melting, and amylose leaching. Granule swelling: During heating, water is first absorbed in the amorphous space of starch, which leads to a swelling phenomenon. Melting of double helical structures: Water then enters via amorphous regions into the tightly bound areas of double helical structures of amylopectin. At ambient temperatures these crystalline regions do not allow water to enter. Heat causes such regions to become diffuse: the amylose chains begin to dissolve and separate into an amorphous form, and the number and size of crystalline regions decrease. Under the microscope in polarized light, starch loses its birefringence and its extinction cross. Amylose leaching: Penetration of water thus increases the randomness in the starch granule structure, and causes swelling; eventually amylose molecules leach into the surrounding water and the granule structure disintegrates. The gelatinization temperature of starch depends upon plant type, the amount of water present, pH, the types and concentration of salt, sugar, fat and protein in the recipe, as well as the starch derivatisation technology used. Some types of unmodified native starches start swelling at 55 °C, other types at 85 °C. The gelatinization temperature of modified starch depends on, for example, the degree of cross-linking, acid treatment, or acetylation. Gel temperature can also be modified by genetic manipulation of starch synthase genes. Gelatinization temperature also depends on the amount of damaged starch granules; these will swell faster. Damaged starch can be produced, for example, during the wheat milling process, or when drying the starch cake in a starch plant. There is an inverse correlation between gelatinization temperature and glycemic index. High amylose starches require more energy to break up their bonds and gelatinize. Gelatinization improves the availability of starch for amylase hydrolysis. So gelatinization of starch is used constantly in cooking to make the starch digestible or to thicken/bind water in roux, sauce, or soup. Retrogradation Gelatinized starch, when cooled for a long enough period (hours or days), will thicken (or gel) and rearrange itself again to a more crystalline structure; this process is called retrogradation. During cooling, starch molecules gradually aggregate to form a gel. The following molecular associations can occur: amylose-amylose, amylose-amylopectin, and amylopectin-amylopectin. A mild association forms amongst the chains, with water still embedded in the molecular network. Due to strong associations of hydrogen bonding, longer amylose molecules (and starch which has a higher amylose content) will form a stiff gel. Amylopectin molecules with a longer branched structure, which makes them more similar to amylose, increase the tendency to form strong gels. High amylopectin starches will have a stable gel, but will be softer than high amylose gels. Retrogradation restricts the availability of the starch for amylase hydrolysis, which reduces the digestibility of the starch.
Pregelatinized starch Pregelatinized starch (Dextrin) is starch which has been cooked and then dried in the starch factory on a drum dryer or in an extruder making the starch cold-water-soluble. Spray dryers are used to obtain dry starch sugars and low viscous pregelatinized starch powder. Determination A simple technique to study starch gelation is by using a Brabender Viscoamylograph. It is a common technique used by food industries to determine the pasting temperature, swelling capacity, shear/thermal stability, and the extent of retrogradation. Under controlled conditions, starch and distilled water is heated at a constant heating rate in a rotating bowl and then cooled down. The viscosity of the mixture deflects a measuring sensor in the bowl. This deflection is measured as viscosity in torque over time vs. temperature and recorded on the computer. The viscoamylograph allows us to observe: the beginning of gelatinization, gelatinization maximum, gelatinization temperature, viscosity during holding, and viscosity at the end of cooling. Differential scanning calorimetry (DSC) is another method industries use to examine properties of gelatinized starch. As water is heated with starch granules, gelatinization occurs, involving an endothermic reaction. The initiation of gelatinization is called the T-onset. T-peak is the position where the endothermic reaction occurs at the maximum. T-conclusion is when all the starch granules are fully gelatinized and the curve remains stable. See also Dextrin References External links Food Resource, Starch, Oregon State University Corn starch gelatinization, filmed with microscope, Youtube Starch Food science Chemical bonding
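As a rough illustration of how the DSC parameters described above might be extracted from a recorded thermogram, the sketch below locates T-onset, T-peak and T-conclusion from arrays of temperature and baseline-corrected heat flow. The threshold criterion and the synthetic data are assumptions made for illustration, not the procedure of any particular instrument's software.

```python
import numpy as np

def dsc_gelatinization_params(temperature, heat_flow, threshold=0.05):
    """Estimate T-onset, T-peak and T-conclusion from a DSC endotherm.

    temperature and heat_flow are equal-length 1-D arrays; heat_flow is
    assumed baseline-corrected with the endotherm taken as positive.
    threshold is the fraction of peak height used to mark onset/conclusion
    (an arbitrary choice for this sketch).
    """
    temperature = np.asarray(temperature, dtype=float)
    heat_flow = np.asarray(heat_flow, dtype=float)

    peak_idx = int(np.argmax(heat_flow))
    t_peak = temperature[peak_idx]

    cutoff = threshold * heat_flow[peak_idx]
    above = np.nonzero(heat_flow >= cutoff)[0]
    t_onset = temperature[above[0]]        # first point above the cutoff
    t_conclusion = temperature[above[-1]]  # last point above the cutoff
    return t_onset, t_peak, t_conclusion

# Synthetic endotherm centred near 65 degrees C, just to exercise the function.
temps = np.linspace(40, 90, 501)
signal = np.exp(-((temps - 65.0) / 4.0) ** 2)
print(dsc_gelatinization_params(temps, signal))
```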
Starch gelatinization
Physics,Chemistry,Materials_science
1,146
45,643,075
https://en.wikipedia.org/wiki/Spock%20%28testing%20framework%29
Spock is a testing and specification framework for Java and Groovy applications, capable of handling the complete life cycle of a computer program. It was initially created in 2008 by Peter Niederwieser, a software engineer with Gradleware. A second Spock committer is Luke Daley (also with Gradleware), the creator of the popular Geb functional testing framework. See also JUnit, unit testing framework for the Java programming language Mockito, mocking extensions to JUnit TestNG, test framework for Java References Cross-platform software Java development tools Java platform Unit testing frameworks Software using the Apache license
Spock (testing framework)
Technology
122
196,971
https://en.wikipedia.org/wiki/Ecological%20classification
Ecological classification or ecological typology is the classification of land or water into geographical units that represent variation in one or more ecological features. Traditional approaches focus on geology, topography, biogeography, soils, vegetation, climate conditions, living species, habitats, water resources, and sometimes also anthropic factors. Most approaches pursue the cartographical delineation or regionalisation of distinct areas for mapping and planning. Approaches to classifications Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines. Traditionally these approaches have focused on biotic components (vegetation classification), abiotic components (environmental approaches) or implied ecological and evolutionary processes (biogeographical approaches). Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy (ecotope). Vegetation classification Vegetation is often used to classify terrestrial ecological units. Vegetation classification can be based on vegetation structure and floristic composition. Classifications based entirely on vegetation structure overlap with land cover mapping categories. Many schemes of vegetation classification are in use by the land, resource and environmental management agencies of different national and state jurisdictions. The International Vegetation Classification (IVC or EcoVeg) has been recently proposed but has not been yet widely adopted. Vegetation classifications have limited use in aquatic systems, since only a handful of freshwater or marine habitats are dominated by plants (e.g. kelp forests or seagrass meadows). Also, some extreme terrestrial environments, like subterranean or cryogenic ecosystems, are not properly described in vegetation classifications. Biogeographical approach The disciplines of phytogeography and biogeography study the geographic distribution of plant communities and faunal communities. Common patterns of distribution of several taxonomic groups are generalised into bioregions, floristic provinces or zoogeographic regions. Environmental approach Climate classifications are used in terrestrial disciplines due to the major influence of climate on biological life in a region. The most popular classification scheme is probably the Köppen climate classification scheme. Similarly geological and soil properties can affect terrestrial vegetation. In marine disciplines, the stratification of water layers discriminate types based on the availability of light and nutrient, or changes in biogeochemical properties. Ecosystem classifications American geographer Robert Bailey defined a hierarchy of ecosystem units ranging from micro-ecosystems (individual homogeneous sites, in the order of in area), through meso-ecosystems (landscape mosaics, in the order of ) to macro-ecosystems (ecoregions, in the order of ). 
Bailey outlined five different methods for identifying ecosystems: gestalt ("a whole that is not derived through consideration of its parts"), in which regions are recognized and boundaries drawn intuitively; a map overlay system where different layers like geology, landforms and soil types are overlain to identify ecosystems; multivariate clustering of site attributes; digital image processing of remotely sensed data grouping areas based on their appearance or other spectral properties; or by a "controlling factors method" where a subset of factors (like soils, climate, vegetation physiognomy or the distribution of plant or animal species) is selected from a large array of possible ones and used to delineate ecosystems. In contrast with Bailey's methodology, Puerto Rico ecologist Ariel Lugo and coauthors identified ten characteristics of an effective classification system. For example, that it be based on georeferenced, quantitative data; that it should minimize subjectivity and explicitly identify criteria and assumptions; that it should be structured around the factors that drive ecosystem processes; that it should reflect the hierarchical nature of ecosystems; that it should be flexible enough to conform to the various scales at which ecosystem management operates. The International Union for Conservation of Nature (IUCN) developed a global ecosystem typology that conforms to the definition of ecosystems as ecological units that comprise a biotic component, an abiotic complex, the interactions between and within them, and occupy a finite physical space or ecotope. This typology is based on six design principles: representation of ecological processes, representation of biota, conceptual consistency throughout the biosphere, scalable structure, spatially explicit units, parsimony and utility. This approach has led to a dual representation of ecosystem functionality and composition within a flexible hierarchical structure that can be built from a top-down approach (subdivision of upper units by function) and a bottom-up approach (representation of compositional variation within functional units). See also Land use Landscape ecology References Bibliography Gregorich, E. G., et al. "Soil and Environmental Science Dictionary." Canadian ecological land classification system, pp 111 (2001). Canadian Society of Soil Science. CRC Press LLC. Klijn, F., and H. A. Udo De Haes. 1994. "A hierarchical approach to ecosystems and its implications for ecological land classification." In: Landscape Ecology vol. 9 no. 2 pp 89–104 (1994). The Hague, SPB Academic Publishing bv. External links Example of ecological land classification in British Columbia (Canada) EcoSim Software Inc ELC eTool International Association for Vegetation Scientists (IAVS) – Vegetation Classification Methods Ecology terminology Environmental terminology Habitat Geographic classifications
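Bailey's "multivariate clustering of site attributes" can be illustrated with a minimal sketch: sites described by quantitative attributes are standardised and grouped into candidate ecological units with a generic clustering algorithm. The attribute names, the synthetic data, the use of k-means and the choice of four clusters are all illustrative assumptions rather than a prescribed methodology.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical site attributes: mean annual temperature (deg C),
# annual precipitation (mm) and elevation (m) for 200 synthetic sites.
rng = np.random.default_rng(0)
sites = rng.normal(loc=[10.0, 800.0, 300.0], scale=[5.0, 300.0, 200.0], size=(200, 3))

# Standardise so no single attribute dominates the distance metric,
# then group the sites into candidate ecological units.
features = StandardScaler().fit_transform(sites)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for unit in range(4):
    members = sites[labels == unit]
    print(f"unit {unit}: {len(members)} sites, "
          f"mean T {members[:, 0].mean():.1f} C, "
          f"precipitation {members[:, 1].mean():.0f} mm, "
          f"elevation {members[:, 2].mean():.0f} m")
```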
Ecological classification
Biology
1,086
12,580,483
https://en.wikipedia.org/wiki/Tricastin%20Nuclear%20Power%20Plant
The Tricastin Nuclear Power Plant is a nuclear power plant consisting of 4 pressurized water reactors (PWRs) of CP1 type with 915 MW electrical power output each. The power plant is located in the south of France (Drôme and Vaucluse Department) at the Canal de Donzère-Mondragon near the Donzère-Mondragon Dam and the commune Pierrelatte. The power plant is part of the widespread Tricastin Nuclear Site (see below), which was named after the historic Tricastin region. Three out of the four reactors on the site had been used until 2012 to power the Eurodif Uranium enrichment plant, which had been located on the site. Tricastin Nuclear Site The Tricastin Nuclear Site (Site Nucléaire du Tricastin) is a collection of facilities run by Areva and EDF located on the right bank of the Channel of Donzère-Mondragon (diversion canal of the Rhône River) south of the city of Valence (70 km upstream) and north of Avignon (65 km downstream). The site straddles the border between the departments Drôme (26) and Vaucluse (84), not far from the Gard (30) and Ardèche (07) departments, and lies near the communes of Saint-Paul-Trois-Châteaux, Pierrelatte (both Drôme department), Bollène and Lapalud (both Vaucluse Department). Tricastin is one of the most important nuclear technology sites in the world, along with the COGEMA La Hague site. It is spread out over 600 hectares with over 5000 employees. Some of the involved companies are: Commissariat à l'énergie atomique (CEA) de Pierrelatte (A nuclear weapons research facility) The EDF Nuclear Power Plant Tricastin (3,660 MWe total, 915 MWe each) Comurhex: A Uranium hexafluoride conversion facility Eurodif: Georges Besse plant for gaseous diffusion uranium enrichment, which operated from 1979 to 2012, replaced by the Georges Besse II plant for gas centrifuge enrichment with a capacity of 7.5M SWU per year A small number of facilities in Pierrelatte belong to the Marcoule Nuclear Site. Nuclear power plant The site houses 4 pressurized water reactors (PWR) of 915 MW each, which were built mostly in the 1970s and brought online in the early 1980s. These reactors produce about 25 TWh/year, or 6% of France's electricity. Three out of four reactors were used for powering the Eurodif Uranium enrichment factory until 2012, the year that Eurodif was closed. The close proximity of the power source to its point of use reduced transmission losses; the power was transmitted at 225 kV. The replacement of the Eurodif gas-diffusion plant by the new SET gas-centrifuge plant (also located at the Tricastin site) reduced the energy consumption of the uranium enrichment process by a factor of 50, freeing up approximately 2700 MWe for the French national grid. Spent fuel is transported by train to the reprocessing plant, just as the new fuel is transported to the plant by train. Safety Fire response Tests on 2 July 2004 by the Autorité de sûreté nucléaire (Nuclear Safety Authority) found that it would take 37 minutes to respond to a fire. Flood In its initial report following the 1999 Blayais Nuclear Power Plant flood, the Institute for Nuclear Protection and Safety (now part of the Radioprotection and Nuclear Safety Institute) called for the risk of flooding at Tricastin to be re-examined due to the presence of the canal. From 27 September 2017 to December 2017 the reactors were temporarily shut down while repairs to the canal embankment were made.
The regulator Autorité de sûreté nucléaire (ASN) had ordered the temporary shutdown because of the risk of embankment failure in the event of an earthquake. Incidents Cooling water During the 2003 European heat wave from 12 to 22 July, the 27 °C limit on the temperature of waste heat water piped into the canal was exceeded on several occasions, totalling about 44 hours. Uranium solution release In July 2008, 18,000 litres (4,755 gallons) of uranium solution containing natural uranium were accidentally released on the Tricastin Nuclear Site. Due to cleaning and repair work, the containment system for a uranium solution holding tank was not functional when the tank filled. The inflow exceeded the tank's capacity and 30 cubic metres of uranium solution leaked, with 18 cubic metres spilled to the ground. Testing found elevated uranium levels in the nearby rivers Gaffière and Lauzon. The liquid that escaped to the ground contained about 75 kg of unenriched uranium, which is toxic as a heavy metal while possessing only slight radioactivity. Estimates for the releases were initially higher, up to 360 kg of natural uranium, but were later lowered. Ground and surface water tests indicated that levels of radioactivity were 5% higher than the maximum rate allowed. In the near vicinity and above ground, the local watchdog group CRIIRAD has detected unusually high levels of radiation. French authorities banned the use of water from the Gaffière and Lauzon for drinking and watering of crops. Swimming, water sports and fishing were also banned. This incident has been classified as Level 1 on the International Nuclear Event Scale. In July 2008, approximately 100 employees were exposed to radioactive particles that escaped from a pipe in a reactor that had been shut down. Additionally, a nuclear waste leak that apparently had remained undiscovered since 2005 spilled into a concrete protective shell in Romans-sur-Isere. Areva, which owns the site, asserted that the leak had not caused harm to the environment, but the issue sparked discussion about an old French army site, where nuclear waste was deposited in shielded dumps. The layer of dirt covering the waste is reported to have been thinned due to wind and rain erosion, directly exposing nuclear waste material to open air. Also, the speed with which the Tricastin incident was reported to the Autorité de sûreté nucléaire (8 hours) and subsequently to local authorities (another 6 hours) is the subject of ongoing discussion. The European Commissioner Andris Piebalgs may send inspectors to the sites to investigate recent events further. The incidents also led to a drop in the sale of wines from the Tricastin area. Acting on the wine growers' wish to remove "Tricastin" from the name of the appellation, to avoid association with the nuclear power plant, INAO signalled in June 2010 its intention to allow a name change from Coteaux du Tricastin AOC to Grignan-Les Adhemar, effective from the 2010 vintage. EPR project On 15 February 2007 the Le Soir newspaper announced that Suez was considering building a new European Pressurized Reactor at the Tricastin site, but the claim was denied by the SUEZ group. Naming The Tricastin region, where the plant is located, is named after the ancient Ligurian tribe the Tricastini. Their capital Augusta Tricastinorum was mentioned by Pliny the Elder in his Natural History book III in 74 C.E.
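As a quick consistency check on the figures quoted above (four 915 MW reactors producing about 25 TWh per year), the snippet below computes the theoretical maximum annual output and the implied capacity factor. It is back-of-the-envelope arithmetic using only the numbers in the text, not operational data.

```python
reactors = 4
power_mw = 915            # net electrical output per reactor, from the text
reported_twh = 25         # reported annual production, from the text

hours_per_year = 365 * 24
max_twh = reactors * power_mw * hours_per_year / 1e6   # MW·h converted to TWh
capacity_factor = reported_twh / max_twh

print(f"theoretical maximum output: {max_twh:.1f} TWh/year")
print(f"implied capacity factor: {capacity_factor:.0%}")
```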
Photo gallery References Nuclear power stations in France Civilian nuclear power accidents
Tricastin Nuclear Power Plant
Technology
1,519
20,362,116
https://en.wikipedia.org/wiki/Iodoxamic%20acid
Iodoxamic acid (trade name Endobil) is an organoiodine compound used as a radiocontrast agent. It features both a high iodine content as well as several hydrophilic groups. See also Iodinated contrast References Radiocontrast agents Iodobenzene derivatives Benzoic acids Anilides Propionamides
Iodoxamic acid
Chemistry
75
6,141,469
https://en.wikipedia.org/wiki/Weather%20house
A weather house is a folk art device in the shape of a small German or Alpine chalet that indicates the weather. A typical weather house has two doors side by side. The left side has a girl or woman, the right side a boy or man. The female figure comes out of the house when the weather is sunny and dry, while the male (often carrying an umbrella) comes out to indicate rain. Description In fact, a weather house functions as a hygrometer embellished by folk art. The male and female figures ride on a balance bar, which is suspended by a piece of catgut or hair. The gut relaxes or shrinks based on the humidity in the surrounding air, relaxing when the air is wet and tensing when the air is dry. This action swings one figure or the other out of the house depending on the humidity. Some variants function as a barometer: low pressure indicates bad (rainy) weather, high pressure good (sunny) weather. History The first written mention is in 1726, Theatrum Aerostaticum by Jacob Leupold, who describes a (hygrometer type) weather house he constructed many years before. An encyclopedia entry by 1735 (Zedlersches Lexikon) mentions weather houses as commonly available on markets. Cultural background and influence Weather houses are associated in the popular mind with Austria, Germany or Switzerland, and are often decorated in the style of a cuckoo clock. They are often sold as "typical German" souvenirs to travellers from other countries. Many weather houses also bear a small thermometer on the part between the two doors that conceals the gut suspension, and many also contain a piggy bank. In contrast, the term "weather house" in the United States referred to buildings built by the U. S. Signal Service and then the U. S. Weather Bureau to house the instruments and Chief Weather Observers so that they could do their job. The National Weather Service, the successor to the U. S. Weather Bureau, now uses the term "shelter". A one-act English comic opera called Weather or No, about the male and female figures in a weather house falling in love, became popular when it was played as a companion piece to The Mikado in 1896-97. The Brollys is an animated television series about a boy who is magically transported every night into the weather house on the wall of his bedroom. Weather houses are mentioned in the novel Wintersmith by Terry Pratchett. References Weather lore Meteorological instrumentation and equipment
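A weather house is in effect a torsion hygrometer: the gut or hair twists as humidity changes, rotating the balance bar so that one figure or the other emerges. The toy model below assumes, purely for illustration, a linear twist between two calibration humidities; none of the numbers correspond to a real instrument.

```python
def weather_house(relative_humidity, dry_rh=30.0, wet_rh=80.0, max_angle_deg=45.0):
    """Toy model of a weather-house hygrometer.

    The suspension is assumed (for illustration only) to twist linearly with
    relative humidity between two calibration points, swinging the balance
    bar between the 'fair weather' and 'rain' figures.
    """
    rh = min(max(relative_humidity, dry_rh), wet_rh)      # clamp to calibration range
    fraction = (rh - dry_rh) / (wet_rh - dry_rh)          # 0 = dry, 1 = humid
    angle = (2.0 * fraction - 1.0) * max_angle_deg        # negative = dry side
    figure = "man with umbrella (rain)" if angle > 0 else "woman (fair weather)"
    return angle, figure

for rh in (25, 50, 75, 95):
    angle, figure = weather_house(rh)
    print(f"RH {rh}%: bar at {angle:+.0f} degrees, showing the {figure}")
```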
Weather house
Physics,Technology,Engineering
517
25,542,298
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20spectroscopy%20of%20stereoisomers
Nuclear magnetic resonance spectroscopy of stereoisomers most commonly known as NMR spectroscopy of stereoisomers is a chemical analysis method that uses NMR spectroscopy to determine the absolute configuration of stereoisomers. For example, the cis or trans alkenes, R or S enantiomers, and R,R or R,S diastereomers. In a mixture of enantiomers, these methods can help quantify the optical purity by integrating the area under the NMR peak corresponding to each stereoisomer. Accuracy of integration can be improved by inserting a chiral derivatizing agent with a nucleus other than hydrogen or carbon, then reading the heteronuclear NMR spectrum: for example fluorine-19 NMR or phosphorus-31 NMR. Mosher's acid contains a -CF3 group, so if the adduct has no other fluorine atoms, the 19F NMR of a racemic mixture shows just two peaks, one for each stereoisomer. As with NMR spectroscopy in general, good resolution requires a high signal-to-noise ratio, clear separation between peaks for each stereoisomer, and narrow line width for each peak. Chiral lanthanide shift reagents cause a clear separation of chemical shift, but they must be used in low concentrations to avoid line broadening. Methods Karplus equation Chiral derivatizing agent Mosher's acid Chiral solvating agent Chiral lanthanide shift reagent (e.g. Eufod) NMR database method References See also Ultraviolet-visible spectroscopy of stereoisomers Nuclear magnetic resonance spectroscopy
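The quantification described above, optical purity from the integrated areas of the two stereoisomer signals, reduces to a simple ratio. The sketch below computes the fraction of each stereoisomer and the enantiomeric excess from two peak integrals, for example from a 19F spectrum of a Mosher-ester derivative; the integral values are hypothetical.

```python
def optical_purity(integral_major: float, integral_minor: float):
    """Return (fraction_major, fraction_minor, ee_percent) from the
    integrated areas of the two stereoisomer peaks."""
    total = integral_major + integral_minor
    frac_major = integral_major / total
    frac_minor = integral_minor / total
    ee_percent = 100.0 * (integral_major - integral_minor) / total
    return frac_major, frac_minor, ee_percent

# Hypothetical 19F integrals for the two Mosher-ester diastereomers.
major, minor, ee = optical_purity(97.1, 2.9)
print(f"major: {major:.1%}, minor: {minor:.1%}, excess: {ee:.1f}%")
```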
Nuclear magnetic resonance spectroscopy of stereoisomers
Physics,Chemistry,Astronomy
337
74,936,885
https://en.wikipedia.org/wiki/Dry%20shipper
A dry shipper, or cryoshipper, is a container specifically engineered to transport biological specimens at cryogenic temperatures utilizing the vapor phase of liquid nitrogen. Function The architecture of a dry shipper encompasses two primary components: an internal canister and an external protective shell. The inner canister, designed to hold biological specimens, is positioned within the vapor phase of the liquid nitrogen. This configuration ensures that the specimens are maintained at temperatures below -150 °C for prolonged durations. A distinctive feature of dry shippers is their ability to avert direct contact between samples and liquid nitrogen, reducing risks of contamination and ensuring consistent cryogenic conditions during transit. Applications Dry shippers serve various sectors in both the scientific and medical arenas. In the realm of reproductive medicine, these containers facilitate the transportation of delicate biological entities, including human ova and embryos. Within the research landscape, they are employed to carry materials such as spermatozoa or preimplantation embryos of genetically modified mouse strains, safeguarding the integrity and viability of these research assets during their journey. Moreover, biobanks, which archive diverse biological specimens for subsequent scientific exploration, utilize dry shippers to dispatch and acquire samples from researchers worldwide. Alternative for specimen transport One common alternative to dry shippers is using dry ice. This method reduces package weight and costs since there's no need for return shipping, unlike with dry shippers. However, at -80 °C, dry ice might not provide a temperature low enough for all specimens. For instance, while cryopreserved mouse spermatozoa can handle this temperature for short periods without losing their fertilization capacity, cryopreserved mouse embryos require colder environments, such as those below -150 °C in dry shippers, to maintain their quality. Another method is shipping freeze-dried samples at ambient temperatures, as seen with freeze-dried mouse spermatozoa. This can be more cost-effective, but many samples, when freeze-dried, experience a notable decline in quality, limiting its applicability. See also Cryoconservation of animal genetic resources References External links Capacities of different dry shippers illustrated by the Florida State University Laboratory mouse strains Transport Cryogenics Genetically modified organisms
Dry shipper
Physics,Engineering,Biology
451
3,655,598
https://en.wikipedia.org/wiki/Choquet%20theory
In mathematics, Choquet theory, named after Gustave Choquet, is an area of functional analysis and convex analysis concerned with measures which have support on the extreme points of a convex set C. Roughly speaking, every vector of C should appear as a weighted average of extreme points, a concept made more precise by generalizing the notion of weighted average from a convex combination to an integral taken over the set E of extreme points. Here C is a subset of a real vector space V, and the main thrust of the theory is to treat the cases where V is an infinite-dimensional (locally convex Hausdorff) topological vector space along lines similar to the finite-dimensional case. The main concerns of Gustave Choquet were in potential theory. Choquet theory has become a general paradigm, particularly for treating convex cones as determined by their extreme rays, and so for many different notions of positivity in mathematics. The two ends of a line segment determine the points in between: in vector terms the segment from v to w consists of the λv + (1 − λ)w with 0 ≤ λ ≤ 1. The classical result of Hermann Minkowski says that in Euclidean space, a bounded, closed convex set C is the convex hull of its extreme point set E, so that any c in C is a (finite) convex combination of points e of E. Here E may be a finite or an infinite set. In vector terms, by assigning non-negative weights w(e) to the e in E, almost all 0, we can represent any c in C as $c = \sum_{e \in E} w(e)\, e$ with $\sum_{e \in E} w(e) = 1$. In any case the w(e) give a probability measure supported on a finite subset of E. For any affine function f on C, its value at the point c is $f(c) = \sum_{e \in E} w(e)\, f(e)$. In the infinite dimensional setting, one would like to make a similar statement. Choquet's theorem Choquet's theorem states that for a compact convex subset C of a normed space V, given c in C there exists a probability measure w supported on the set E of extreme points of C such that, for any affine function f on C, $f(c) = \int_E f(e)\, dw(e)$. In practice V will be a Banach space. The original Krein–Milman theorem follows from Choquet's result. Another corollary is the Riesz representation theorem for states on the continuous functions on a metrizable compact Hausdorff space. More generally, for V a locally convex topological vector space, the Choquet–Bishop–de Leeuw theorem gives the same formal statement. In addition to the existence of a probability measure supported on the extreme boundary that represents a given point c, one might also consider the uniqueness of such measures. It is easy to see that uniqueness does not hold even in the finite dimensional setting. One can take, for counterexamples, the convex set to be a cube or a ball in R3. Uniqueness does hold, however, when the convex set is a finite dimensional simplex. A finite dimensional simplex is a special case of a Choquet simplex. Any point in a Choquet simplex is represented by a unique probability measure on the extreme points. See also Notes References Convex hulls Functional analysis Integral representations
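A concrete instance of the non-uniqueness discussed above, worked out for the unit square in the plane (the two-dimensional analogue of the cube mentioned as a counterexample):

```latex
% Extreme points of the unit square C = [0,1]^2:
%   e_1 = (0,0), e_2 = (1,0), e_3 = (1,1), e_4 = (0,1).
% The centre admits two different representing measures on the extreme points:
\[
  c = \left(\tfrac12,\tfrac12\right)
    = \tfrac14 e_1 + \tfrac14 e_2 + \tfrac14 e_3 + \tfrac14 e_4
    = \tfrac12 e_1 + \tfrac12 e_3 ,
\]
% so the weights w(e) are not unique; the square is not a simplex.
```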
Choquet theory
Mathematics
649
27,935,351
https://en.wikipedia.org/wiki/Schoenoplectus%20americanus
Schoenoplectus americanus (syn. Scirpus americanus) is an American species of flowering plant in the sedge family known by the common names chairmaker's bulrush and Olney's three-square bulrush. Description This perennial herb easily exceeds in height. The stiff stems are sharply three-angled and usually very concave between the edges. Each plant has three or fewer leaves which are short and narrow. The inflorescence is a small head of several spikelets which may be brown to bright orange, red, purplish, or pale and translucent. They have hairy edges. The fruit is a brown achene. The plant reproduces sexually by seed and colonies spread via vegetative reproduction, sprouting from the rhizomes. Distribution and habitat It is native to the Americas, where it is known from Alaska to Nova Scotia and all the way into southern South America; it is most common along the East and Gulf Coasts of the United States and in parts of the western states. It grows in many types of coastal and inland wetland habitat, as well as sagebrush, desert scrub, chaparral, and plains. Ecology This plant, particularly the rhizomes, are a food source of muskrat, nutria, and other animals; it is strongly favored by the snow goose in its wintering grounds. Uses Native American groups used this plant for many purposes, including food, basketry, and hatmaking. It is used for revegetation projects in salt marsh habitat in its native range. It is a model organism in the study of salt marsh ecology and its response to climate change (currently global warming). References External links Jepson Manual Treatment USDA Plants Profile Flora of North America Photo gallery americanus Plant models Flora of Northern America Plants described in 1805
Schoenoplectus americanus
Biology
374
27,317,016
https://en.wikipedia.org/wiki/Mirrorless%20camera
A mirrorless camera (sometimes referred to as a mirrorless interchangeable-lens camera (MILC) or digital single-lens mirrorless (DSLM)) is a digital camera which, in contrast to DSLRs, does not use a mirror in order to ensure that the image presented to the photographer through the viewfinder is identical to that taken by the camera. They have come to replace DSLRs, which have historically dominated interchangeable lens cameras. Other terms include electronic viewfinder interchangeable lens (EVIL) and compact system camera (CSC). Lacking a mirror system allows the camera to be smaller, quieter, and lighter. In cameras with mirrors, light from the lens is directed to either the image sensor or the viewfinder. This is done using a mechanical movable mirror which sits behind the lens. By contrast, in a mirrorless camera, the lens always shines light onto the image sensor, and what the camera sees is displayed on a screen for the photographer. Some mirrorless cameras also simulate a traditional viewfinder using a small screen, known as an electronic viewfinder (EVF). DSLRs can act like mirrorless cameras if they have a "live view" mode, in which the mirror moves out of the way so the lens can always shine onto the image sensor. Many mirrorless cameras retain a mechanical shutter. Like a DSLR, a mirrorless camera accepts interchangeable lenses. Mirrorless cameras necessarily have shorter battery life because they need to power the screen and sensor at all times. Design Mirrorless cameras are mechanically simpler than DSLR cameras, and are smaller, lighter, and quieter due to the elimination of the moving mirror. While nearly all mirrorless cameras have a mechanical shutter, many also have an electronic shutter, allowing completely silent operation. As the image from the lens is always projected onto the image sensor, features can be available which are only possible in DSLRs when the mirror is locked up into "live view" mode. This includes the ability to show a focus-peaking display, zebra patterning, and face or eye tracking. The electronic viewfinder can provide live previews of depth of field, exposure, white balance and picture style settings, as well as offer a real time view of camera settings even in extremely low or bright light levels, making it easier to view the results. With the latest phase-detect autofocus available on some mirrorless cameras, the autofocus speed and accuracy of some models has been shown to be as good as DSLRs. But mirrorless cameras have shorter battery life than DSLRs due to prolonged use of LCD and/or OLED viewfinder displays, and often smaller buffers (to save battery). On-sensor autofocus is free of the adjustment requirements of the indirect focusing system of the DSLR (which relies on a separate autofocus sensor located below the reflex mirror), and as of 2018 mirrorless cameras could shoot with phase-detect autofocus at up to 20 frames per second using up to 693 focus points—a number far exceeding what was available on any DSLR. However, on-sensor phase detection autofocus (except for Canon's Dual Pixel Autofocus) repurposes pixel sites for autofocus acquisition, so that image data is partially or entirely missing for the autofocus "pixels", which can cause banding artifacts in the final image. History Early 2000s: Digital rangefinder origins The first digital rangefinder camera commercially marketed was the Epson R-D1 (released in 2004), followed by the Leica M8 in 2006. 
They were some of the first digital lens-interchangeable cameras without a reflex mirror, but they are not considered mirrorless cameras because they did not use an electronic viewfinder for live preview, but, rather, an optical viewfinder. Compact cameras with large sensors, technically akin to the current mirrorless cameras, were also marketed in this period. Cameras like Sony Cyber-shot DSC-R1 and Sigma DP1 proved that live preview operation is possible, and useful with APS-C sized sensors. Late 2000s: Micro Four Thirds system The first mirrorless camera commercially marketed was the Panasonic Lumix DMC-G1, released in Japan in October 2008. It was also the first camera of Micro Four Thirds system, developed exclusively for the mirrorless ILC system. The term mirrorless came into use in order to describe Micro Four Thirds cameras when they were announced in 2008, especially as the first Micro Four Thirds camera, the Lumix G1, was designed to be as similar to a DSLR as possible. There are other terms that were created, too, but mirrorless became the most popular. The Ricoh GXR (November 2009) had a radically different design. The mirrorless camera featured interchangeable lens units – a sealed unit of a lens and sensor, instead of the lens only being interchangeable. This design was different from other mirrorless cameras, and received mixed reviews, primarily due to its higher cost. 2010s: Pocket mirrorless cameras Following the introduction of the Micro Four Thirds system, several other cameras were released by Panasonic and Olympus, with the Olympus PEN E-P1 (announced June 2009) being the first mirrorless camera in a compact size (pocketable with a small lens). The Samsung NX10 (announced January 2010) was the first camera in this class not using the Micro Four Thirds system, instead utilizing a new, proprietary lens mount (Samsung NX-mount). The Sony Alpha NEX-3 and NEX-5 (announced May 14, 2010, and released in July 2010) saw Sony enter the market with a new, proprietary lens mount (the Sony E-mount), though the camera included LA-EA1 and LA-EA2 adapters for the legacy Minolta A-mount.In June 2011, Pentax announced the 'Q' mirrorless interchangeable lens camera and the 'Q-mount' lens system. The original Q series featured a smaller 1/2.3 inch 12.4 megapixel CMOS sensor. The Q7, introduced in 2013, has a slightly larger 1/1.7 inch CMOS sensor with the same megapixel count. In September 2011, Nikon announced their Nikon 1 system which consists of the Nikon 1 J1 and Nikon 1 V1 cameras and lenses. The V1 features an electronic viewfinder. The series includes high-speed mirrorless cameras which, according to Nikon, had the fastest autofocus and the fastest continuous shooting speed (60 fps) of any camera with interchangeable lenses, including DSLRs. The Fujifilm X-Pro1, announced in January 2012, was the first non-rangefinder mirrorless with a built-in optical viewfinder. Its hybrid viewfinder overlaid electronic information, including shifting frame-lines, to compensate for the parallax effect. Its 2016 successor, the X-Pro2, had an updated version of this viewfinder.Beyond just consumer interest, mirrorless lens systems created significant interest from camera manufacturers as a possible alternative to high-end camera manufacturing. 
Mirrorless cameras have fewer moving parts than DSLRs, and are more electronic, which is an advantage to electronic manufacturers (such as Panasonic, and Samsung), while reducing the advantage that dedicated camera manufacturers have in precision mechanical engineering. Sony's entry level full frame mirrorless α7 II camera has a 24-megapixel 5-axis stabilised sensor, but is more compact and less expensive than any full-frame sensor DSLR. Canon was the last of the major manufacturer of DSLRs to announce their own mirrorless camera, announcing the Canon EOS M in 2012 with an APS-C sensor and 18 mm registration distance similar to the one used by NEX. In the longer term Olympus decided that mirrorless may replace DSLRs entirely in some categories; Olympus America's DSLR product manager speculated that by 2012, Olympus DSLRs (the Olympus E system) might become mirrorless, though still using the Four Thirds System (not Micro Four Thirds). Panasonic UK's Lumix G product manager John Mitchell, speaking to the Press at the 2011 "Focus on Imaging" show in Birmingham, reported that Panasonic "G" camera market share was almost doubling each year, and that the UK Panasonic "G" captured over 11% of all interchangeable camera sales in the UK in 2010, and that the UK "CSC" sales made up 23% of the interchangeable lens market in the UK, and 40% in Japan. Sony announced their 2011 sales statistics in September 2012, which showed that mirrorless lenses had 50% of the interchangeable lens market in Japan, 18% in Europe, and 23% worldwide. Since then, Nikon and others entered the mirrorless market. Due to the downward trend of the world camera market, mirrorless camera sales suffered, but not as drastically and was compensated with increase by about 12 percent in the Japanese mirrorless camera market. However, mirrorless cameras took longer to catch on in Europe and North America. According to Japanese photo industry sources, mirrorless made up only 11.2% of interchangeable-lens cameras shipped to Europe in the first nine months of 2013, and 10.5% of those shipped to the U.S. in the same period. An industry researcher found that mirrorless camera sales in the U.S. fell by about 20% in the three weeks leading up to December 14, 2013—which included the key Black Friday shopping week; in the same period, DSLR sales went up 1%. In 2013, mirrorless system cameras constituted about five percent of total camera shipments. In 2015, they accounted for 26 percent of system camera sales outside of the Americas, and 16 percent within the United States. As of 2023, mirrorless cameras have come to overtake DSLRs as the dominant kind of interchangeable lens camera, with them gaining market share over DSLRs, and nearly all camera manufactures have switched entirely and exclusively to making mirrorless cameras and lenses. Until the mid 2010s, mirrorless cameras were dismissed by many photographers, because of their laggy and low resolution screens, when compared with the clarity and responsiveness of the optical viewfinders used on DSLRs, especially under strong sunlight or when photographing the sky at night. In addition, mirrorless cameras were known for having worse autofocus performance compared to DSLRs, and much worse battery life. This negative perception of mirrorless cameras began to change around 2013, when the Sony α7 was released. 
It was the first professional, full-frame mirrorless camera, and, although not the first with depth aware autofocus, included small additional sensors on the main sensor to detect depth in the scene, for fast autofocus ("phase-detect"). 2015 sales statistics showed that overall camera sales have fallen to one third of those of 2010, due to compact cameras being substituted by camera-capable mobile phones. Within camera sales, mirrorless ILCs have seen their market share increasing, with ILCs being 30% of overall camera sales, of which DSLRs were 77% and mirrorless cameras were 23%. In the Americas in 2015, DSLR annual sales fell by 16% per annum, while mirrorless sales over the same 12-month period have increased by 17%. In Japan, mirrorless cameras outsold DSLRs during some parts of the year. In 2015, mirrorless-cameras accounted for 26 percent of interchangeable-lens camera sales outside the Americas, although a lesser share of 26 percent was in the U.S. In late 2016, Olympus announced their OM-D E-M1 Mark II camera, a successor to the earlier and successful Mark I. The Mark II model retains a Micro Four Thirds image sensor of 17.3x13 mm and features a 20.4 megapixel resolution sensor, representing a new generation of mirrorless cameras competitive with and in many respects superior to DSLR cameras. Late 2010s: Full-frame mirrorless success In early 2017, Sony announced the Alpha 9 mirrorless camera, offering 693 autofocus points, and 20 frame-per-second shooting. In October Sony announces the A7RIII, offering 10 FPS shooting at 42 megapixels. In early 2018, Sony announced the a7 III mirrorless camera, bringing the 693 autofocus points of the A9 at a much lower cost. In August, Nikon announced its new full-frame mirrorless Z6 and Z7 cameras, both using a new lens mount. Canon announced its first full-frame mirrorless model, the EOS R, and its own new lens mount the next month. At the NAB Show in April 2018, Blackmagic Design announced and demonstrated the Pocket Cinema Camera 4K at a price of $1,295 USD. In early 2019, Canon officially announced their second full-frame mirrorless camera following the EOS R introduced in 2018. That said camera is the EOS RP as it was made to be entry-level for a full-frame mirrorless. In July 2019, Sony announced the a7R IV with a groundbreaking 61-megapixel full-frame sensor, making it the highest-resolution full-frame camera at the time. Other improvements included 15 stops of dynamic range and 576-point phase-detection autofocus for exceptional detail and precision. In October 2019, Panasonic's Lumix S1H became the first hybrid full-frame mirrorless camera certified by Netflix for use in its Original productions. This became a giant milestone in the camera industry by showing that it is possible to have a camera that is highly compact and relatively affordable (compared to traditional cinema cameras) while still meeting the high standards of a professional camera for film making. 2020s: End of DSLRs In July 2020, Canon announced both the EOS R5 and R6 to bring more mirrorless cameras to their line up. The EOS R5 was significant at the time as it was the first camera to be capable of 8K RAW video recording at up to 30 fps, positioning it as a leader in hybrid photo-video equipment. The EOS R6 was viewed as the affordable sibling, offering 20 MP stills, 4K 60 fps video, and 8 stops of image stabilization, appealing to enthusiasts and professionals alike. 
Also in July 2020, Sony announced the a7S III, which was a much-anticipated camera as it was aimed at professionals, especially videographers, as it retained a focus on low-light performance and video features. Throughout 2020 there had been major improvements with achieving the ability to shoot in low light to being able to record in 8K RAW, 2020 was one of the most impactful years as it introduced new things to both photography and videography. The year 2021 marked a turning point for mirrorless cameras, as they surpassed DSLRs in shipments, accounting for over 67% of total camera sales. On January 26, 2021, Sony announced the Alpha 1 and it had set the benchmark as it competed with the Canon EOS R5, being able to shoot video in 8K at 30 fps and 4K at 120 fps modes, as well as showcasing a 9.44-million-dot OLED EVF with a 240 Hz refresh rate. The Alpha 1 aimed to unify both photography and videography at a professional level. A day later, Fujifilm released the GFX 100s, featuring a smaller and lighter body holding a 102 MP medium-format sensor. This camera is compared to the original GFX 100 with 6 stops of in-body image stablization (IBIS) and 4K video recording at 30 fps. Nikon, towards the end of the year in October, released the Z9, its flagship full frame mirrorless model. It featured Nikon's best autofocus performance with 3D tracking, 8K video, 60 fps RAW shooting as well as offered cutting-edge speed and reliability with its stacked CMOS sensor. For the year 2021, Sony led the market with a 32% share, followed closely by Canon at 28.2%, reflecting the growing preference for compact, versatile, and professional-grade systems. These cameras catered to both photographers and filmmakers, pushing the boundaries of image quality, autofocus, and video capabilities. Some expansions to the mirrorless camera is that hybrid cameras had gained a dominance as cameras like the Canon EOS R5 C and Fujifilm X-H2S are catered to professionals going for high-resolution photo performance alongside 8K recording and internal ProRes capabilities. Some more improvements are how the APS-C format had become a focus of innovation—cameras like the Canon EOS R7 or the R10, alongside Fujifilm's X-H2 and X-H2S, highlighted how APS-C cameras could offer professional-grade specs like high burst rates and advanced autofocus, all while remaining compact and more affordable than full-frame systems. These systems have leaned more on an emphasis on content creators such as vloggers and YouTubers. In 2022, mirrorless systems continued to dominate the digital camera market, accounting for 69% of interchangeable lens camera shipments—a 31% increase from the previous year, as reported by Camera & Imaging Products Association (CIPA). Among the most impactful releases of the year were the Canon EOS R7 (June 2022), and the Sony FX30 (September 2022), both of which offer 4K video up to 60 fps but the FX30 to it further to 120 fps. The R7 featured one of the highest performing APS-C sensors of the year, with 32 megapixels only slightly surpassed by the Fujifilm X-T5 (November 2022) with 40 megapixels. The X-T5 was also a top performer in video quality with 6.2K at 30 fps and 7 stops of IBIS. Nikon's Z30 catered to content creators with its user-friendly video-centric design. The OM System OM-1 pushed the boundaries of Micro Four Thirds with a stacked BSI Live MOS sensor and up to 50 fps of continuous shooting, appealing to wildlife and action photographers. 
In March of 2022, the Panasonic Lumix BS1H earned Netflix approval, further highlighting the growing acceptance of mirrorless cameras in high-end filmmaking. Additionally, the Nikon Z9 was honored with the prestigious "Camera of the Year Award" and "Readers Award" at the Camera Grand Prix 2022, recognizing its groundbreaking performance and impact on the camera industry. In 2023, according to CIPA, global shipments of mirrorless cameras reached approximately $17 billion in the first half of 2023, marking a 20% year-over-year increase and setting a record high for the third consecutive year. In April 2023, Canon released the EOS R8, and it contains various autofocus capabilities for recognizing subjects and 4K video from a 6K capture at up to 60p. Nikon then released the Z8 in May 2023, which has the same CMOS sensor, 8K video, and 20–30 fps shooting rate modes as the Z9 in a smaller body. Meanwhile, Sony announced the a9 III in November 2023, the first modern mirrorless camera with a global shutter. The shutter allows for distortion-free motion capture by reading out each pixel of the image at the same time. The 24-megapixel CMOS full-frame shutter can also shoot at 120 frames per second, while being able to capture at shutter speeds as fast as 1/80000 second. Regionally, China led the surge with a 44% increase in sales, followed by Japan at 30%, and Europe at 9%. This uptick is closely linked to the revival of international travel, with the United Nations World Tourism Organization reporting that the number of international travelers in the first quarter of 2023 more than doubled compared to the previous year, reaching about 80% of pre-pandemic levels. Sensor size A full-frame camera is a digital camera with a digital sensor the same size as 35 mm format () film. Cameras that have a smaller sensor than full-frame (such as APS-C and Micro Four Thirds) differ in having a crop factor. Digital cameras with a larger sensor than full-frame are called medium format, named after medium format film cameras that use the 120 and 220 film formats (although their sensors are generally much smaller than the frame size of medium format film cameras). Sony was the first to introduce a full-frame mirrorless camera, the α7, in 2013. It was followed by the Leica SL (Typ 601) in 2015. Nikon and Canon each launched full-frame mirrorless cameras in September 2018. Panasonic and Sigma, under the L-Mount Alliance, announced that they would be using the Leica L-Mount for their own full-frame mirrorless cameras. Panasonic announced its S1R and S1 cameras, and Sigma announced a then-unnamed camera, later called the fp, all to be launched in 2019 along with lenses from Panasonic and Sigma. Systems comparison See also List of smallest mirrorless cameras Notes References Japanese inventions de:Systemkamera#Digitale Systemkameras
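The crop factor mentioned in the sensor-size discussion above is normally computed as the ratio of the full-frame sensor diagonal (about 43.3 mm for a 36 mm x 24 mm sensor) to the diagonal of the smaller sensor; multiplying a lens's focal length by this factor gives its approximate full-frame-equivalent field of view. The sensor dimensions below are nominal values chosen for illustration, and exact dimensions vary slightly between manufacturers.

```python
import math

def diagonal(width_mm: float, height_mm: float) -> float:
    return math.hypot(width_mm, height_mm)

FULL_FRAME = (36.0, 24.0)          # 35 mm format, diagonal about 43.3 mm

# Nominal sensor sizes in mm (illustrative values).
sensors = {
    "APS-C (typical)": (23.5, 15.6),
    "Micro Four Thirds": (17.3, 13.0),
    "1-inch": (13.2, 8.8),
}

ff_diag = diagonal(*FULL_FRAME)
for name, dims in sensors.items():
    crop = ff_diag / diagonal(*dims)
    print(f"{name}: crop factor about {crop:.2f}; "
          f"a 50 mm lens frames roughly like a {50 * crop:.0f} mm lens on full frame")
```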
Mirrorless camera
Technology
4,418
55,112,880
https://en.wikipedia.org/wiki/Cute%20aggression
Cute aggression, or playful aggression, is the urge to squeeze or bite things perceived as being cute without the desire to cause any harm. It is a common type of dimorphous display, where a person experiences positive and negative expressions simultaneously in a disorganised manner. Individuals experiencing cute aggression may find themselves clenching their jaw or fists, with the urge to squish, pinch or bite an adorable baby, animal, or object. Terminology Social psychologist Oriana Aragón and colleagues defined the phenomenon of cute aggression in a research paper published in 2015. They also referred to these experiences with the alternative term "playful aggression", defining it as follows: "Playful aggression is in reference to the expressions that people show sometimes when interacting with babies. Sometimes we say things and appear to be more angry than happy, even though we are happy. For example some people grit their teeth, clench their hands, pinch cheeks, or say things like "I want to eat you up!" It would be difficult to ask about every possible behaviour of playful aggression, so we ask generally about things of this kind—calling them playful aggressions." In other languages The concept of cute aggression is reflected in various terms across many languages. The word gigil in Tagalog describes an overwhelming feeling of joy in reference to something cute and wanting to squeeze it. The Indonesian word gemas describes the feeling of wanting to choke something cute you see. Gigil and gemas have alternative meanings of expressing severe frustration and anger towards something. The word geram in Malay is also polysemous: its meanings include expressing a love-hate anger toward something cute, evoking urges to squeeze it affectionately, and describing a feeling of dissatisfaction. Man Khiaao, or มัน-เขี้ยว, in Thai is an expression meaning that an individual wants to 'eat them up' because they are 'so cute', often in relation to people or animals. The verb man directly translates to 'to enjoy', and khiaao translates to 'fang' or 'canine'. The concept of cute aggression also exists among the Chamorros, the indigenous people of the Mariana Islands. Their vernacular language, Chamorro, contains the term ma'goddai, which describes the strong feelings one gets when admiring someone's poki (pleasantly chubby) appearance, causing an urge to pinch, squeeze or smother the person in kisses. The presence of cute aggression is evident in the array of languages worldwide that incorporate expressions related to this phenomenon. Neurological response Brain structure Upon encountering something cute, activity increases in the orbitofrontal cortex, an area at the front of the brain associated with emotion and pleasure. Neuroimaging research found that the orbitofrontal cortex in adults became active within one seventh of a second of seeing a baby's face. This helps explain how babies attract attention and elicit care and protection from the moment they are born. Research using EEG scans discovered that both the emotion centre and reward centre lit up in the brain when participants viewed images of baby animals. Hormones The interaction between the neurohormones oxytocin and vasopressin offers proximate explanations for why cute stimuli can elicit contradictory responses of affection and aggression. They are distinct molecules and are evolved components of an adaptive system humans have for long-term attachment. 
Explanation 1 The hormone oxytocin, sometimes called the cuddle or love hormone, is produced in the hypothalamus in the brain and released into the bloodstream by the pituitary gland during childbirth, sex, breastfeeding, and exercise. Oxytocin pathways are activated upon seeing something cute, and surges of the neuropeptide contribute to feelings of affection. Vasopressin is produced in the hypothalamus and released from the posterior pituitary in the brain. When released, it compels the individual to protect and defend what is considered vulnerable. For example, many mammals, such as grizzly bears, will display aggressive behaviour to protect their young. Explanation 2 Cute aggression is experienced because portions of the brain corresponding to emotions and rewards are triggered, which can essentially overload an individual's mental faculties. To compensate, the body develops an aggressive response, which can drag down some of the overwhelmingly positive responses. This response triggers an impulse to squeeze the cute person or thing in question, or some other similarly aggressive behavior, such as biting. Evolutionary explanation Evolution serves as the ultimate explanation for understanding cute aggression, as it suggests that this seemingly paradoxical response may have provided adaptive advantages in human ancestors, aiding in the care and protection of vulnerable offspring. As a species, humans rely heavily upon parental care in order for their offspring to survive. Humans have low reproductive rates relative to other species, amplifying the importance of parental care for the survival of their few offspring. These feelings tend to lie on a continuous scale rather than above a particular threshold value. The gradient is most intense with objects perceived as more cute, although less cute objects still generate a response. Infantile traits like big eyes, round faces, and small size evoke perceptions of cuteness, and trigger innate caregiving instincts in humans. Psychoanalyst John Bowlby (1907–1990), in his Evolutionary Theory of Attachment, suggested that babies are pre-programmed to elicit attachment from caregivers to increase their chances of survival. He explained how babies use social releasers, including smiling, crying, and making eye contact, to attract the attention of caregivers. The biological response of oxytocin attaches adults to infants, while vasopressin appears to be associated with aggressive, protective feelings. Cute aggression, such as biting, squeezing, and tickling, is related to the intersection of emotional responses and reward centers. Some have postulated that this impulse serves an evolutionary purpose; if a parent were to stare continually at their children, in awe of how adorable they are while neglecting the environment and immediate surroundings, the children could be harmed by a wild animal in the vicinity while the parent was unaware. These cute behaviours highlight the child's vulnerability, to which adults are receptive. The same adoration that humans are compelled to feel for their young may carry over to other animals with similar physiological traits that require care, such as puppies and kittens. Research Psychological reactions A study conducted in 2015 by Aragón and colleagues sought to explain whether cute aggression, as a dimorphous expression, serves as a regulatory mechanism during overwhelming emotional experiences. 
They outline how dimorphous expressions of emotion feature the distinct pattern of one stimulus event, one appraisal, and one emotional experience, but two expressive behaviours. Their survey of 143 participants found that more-infantile babies received higher positive appraisals (M = 66.88) than less-infantile babies (M = 56.68). Participants reported feeling more overwhelmed with positive feelings towards the more-infantile babies (M = 42.74) while expressing more aggressive urges towards them compared to less-infantile babies (M = 33.35). Physiological reactions A more recent study, conducted by Stavropoulos and colleagues in 2018, used electroencephalography (EEG) to investigate brain activity during cute aggression experiences. Fifty-four participants rated their reactions to images of baby animals, comparing these to adult animals. Higher ratings were given after viewing baby animals, and the EEG analysis found that, in the N200 component, emotional responses peaked around 200 ms after stimulus onset. Participants who reported higher levels of cute aggression showed a stronger reward-processing response in the mesolimbic system. The involvement of emotional and reward processing in the brain provides insight into the underlying mechanisms of cute aggression. Citations Aggression Emotions Play (activity) Concepts in aesthetics
Cute aggression
Biology
1,570
24,463,486
https://en.wikipedia.org/wiki/Galileo%27s%20Dream
Galileo's Dream (2009) is a science fiction novel with elements of historical fiction written by Kim Stanley Robinson. In the book, 17th-century scientist Galileo Galilei is visited by far-future time travellers living on the Galilean moons of Jupiter. Italicised portions of the text within the novel are actually translations of Galileo and his contemporaries' own recorded writings. It was published in hardcover on August 6, 2009, in the United Kingdom and on December 29, 2009, in the United States. Synopsis The novel's action moves back and forth between Renaissance Italy and the Jovian moons of the 32nd century, a utopian society where humans live for centuries and violence is virtually unknown. It is narrated by Cartophilus, a Jovian time-traveller who has assumed an identity as one of Galileo's servants. Galileo is visited by Ganymede, a time traveler who  transports him to 32nd century Europa. Ganymede hopes that Galileo will aid his campaign to stop the Europans from entering the moon's subsurface ocean and communicating with the intelligent entity that inhabits it. Hera, another Jovian, warns Galileo that Ganymede does not have his best interests at heart. Ganymede gives Galileo a drug that makes him forget what has happened, before returning him to his own time. On further trips, Galileo learns more about the Jovians' culture, science, and history. Hera warns Galileo that he will be burnt at the stake unless he comes to understand the events of his life better--in particular, his interactions with women and the privileged position he has occupied in a patriarchal society. Through futuristic technology, Galileo relives his relationship with Marina Gamba and other events of his life.  It is revealed that Ganymede hopes to manipulate Galileo into being martyred for science, believing that this will increase the power of science and reduce the suffering that humanity endured in the centuries after Galileo's life. Ganymede injures the Europan intelligence, believing that contact with a vastly superior entity will throw humanity into existential despair. It is revealed that Jupiter itself is an intelligent entity, as are the sun and stars. Galileo and Hera share an experience of transcendental oneness with the universe. They decide to travel back in time once more, to undo Ganymede's assault on the Europan alien. In between these dimly-remembered trips to the future, Galileo conducts scientific investigations and tries to find a way to publish his heliocentric findings without running afoul of the inquisition. Cartophilus and a few other time-travellers do their best to aid him behind the scenes. He sends his daughters to live in a convent of the Poor Clares, where they are poorly fed despite his best efforts to supply the convent with food. Galileo is eventually brought to trial for heresy, found guilty, and sentenced to house arrest--a humiliating punishment, but far lighter than the sentence of death he could have faced. For a time he finds joy in a domestic life shared with his beloved daughter Maria Celeste, but she dies of dysentery (aggravated by her poor diet) in the last years of his life. Cartophilus eulogizes Galileo and urges the reader to emulate his dedication to describing reality as he saw it: "Push like Galileo pushed! And together we may crab sideways toward the good." Reception Robinson was praised for his depiction of Galileo in both his greatness and his weaknesses, and for the handling of themes such as the relation between our perception of time and memory. 
Adam Roberts described the book as an homage to Johannes Kepler's Somnium, sometimes identified as the first science fiction novel. References External links Galileo's Dream at KimStanleyRobinson.info 2009 American novels American historical novels Fiction set on Jupiter's moons Novels by Kim Stanley Robinson American science fiction novels Fiction about trans-Neptunian objects Cultural depictions of Galileo Galilei HarperCollins books Novels set in the 17th century Novels set in Italy Novels set in the 32nd century
Galileo's Dream
Astronomy
831
23,166,477
https://en.wikipedia.org/wiki/YouTomb
YouTomb was a website built to track videos removed by the popular American video-sharing website YouTube. The site operated a searchable database of recent video removals on YouTube. It tracked not only DMCA takedowns but also terms of use violations and user removals. Those videos removed due to DMCA takedowns were sortable by alleged copyright holder. The database was generated by software that repeatedly scanned YouTube for unavailable videos. The site was operated by the MIT chapter of Students for Free Culture and its source code is licensed under the GNU Affero General Public License. Although YouTomb only tracked YouTube, a future goal was to cover more video websites; the site had become unavailable as of November 2014. See also Lumen References External links YouTomb (archived) Massachusetts Institute of Technology YouTube Software using the GNU Affero General Public License
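The article notes only that YouTomb's database was generated by software that repeatedly scanned YouTube for videos that had become unavailable. The snippet below is a hypothetical sketch of that general polling-and-recording approach, not YouTomb's actual code; the availability check is passed in as a function because the article does not describe how the real system performed it.

```python
import time
from datetime import datetime, timezone
from typing import Callable

def scan(tracked_ids: set[str],
         is_available: Callable[[str], bool],
         removals: dict[str, str]) -> None:
    """One polling pass: record newly unavailable videos with a UTC timestamp."""
    for video_id in sorted(tracked_ids):
        if video_id not in removals and not is_available(video_id):
            removals[video_id] = datetime.now(timezone.utc).isoformat()

def track(tracked_ids: set[str],
          is_available: Callable[[str], bool],
          passes: int = 3,
          interval_seconds: float = 1.0) -> dict[str, str]:
    """Repeatedly scan, building a searchable record of removals."""
    removals: dict[str, str] = {}
    for _ in range(passes):
        scan(tracked_ids, is_available, removals)
        time.sleep(interval_seconds)
    return removals

if __name__ == "__main__":
    # Toy stand-in for a real availability check: pretend "video-b" was removed.
    still_up = {"video-a", "video-c"}
    print(track({"video-a", "video-b", "video-c"}, lambda vid: vid in still_up))
```

A real tracker would additionally record why a video disappeared (DMCA takedown, terms-of-use violation, or user removal) so that the resulting database could be filtered in the way the article describes.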
YouTomb
Technology
175
71,865,071
https://en.wikipedia.org/wiki/1%2C2%2C4%2C5-Tetrachloro-3-nitrobenzene
1,2,4,5-Tetrachloro-3-nitrobenzene (tecnazene) is an organic compound with the formula . It is a colorless solid. A related isomer is 1,2,3,4-tetrachloro-5-nitrobenzene. It is used as a standard for quantitative analysis by nuclear magnetic resonance. 1,2,4,5-Tetrachloro-3-nitrobenzene is also a fungicide used to prevent dry rot and sprouting on potatoes during storage. References Fungicides Analytical standards Nitrobenzene derivatives Chlorobenzene derivatives
1,2,4,5-Tetrachloro-3-nitrobenzene
Chemistry,Biology
144
41,242,993
https://en.wikipedia.org/wiki/Dynamic%20kinetic%20resolution%20in%20asymmetric%20synthesis
Dynamic kinetic resolution in chemistry is a type of kinetic resolution where 100% of a racemic compound can be converted into an enantiopure compound. It is applied in asymmetric synthesis. Asymmetric synthesis has become a much explored field due to the challenge of creating a compound with a single 3D structure. Even more challenging is the ability to take a racemic mixture and have only one chiral product left after a reaction. One method that has become an exceedingly useful tool is dynamic kinetic resolution (DKR). DKR utilizes a center of a particular molecule that can be easily epimerized so that the (R) and (S) enantiomers can interconvert throughout the reaction process. At this point the catalyst can selectively lower the transition state energy of a single enantiomer, leading to almost 100% yield of one reaction pathway over the other. The figure below is an example of an energy diagram for a compound with an (R) and (S) isomer. If a catalyst is able to increase ΔΔG‡ to a sufficient degree, then one pathway will dominate over the other, leading to a single chiral product. Manipulating kinetics therefore becomes a powerful way to achieve asymmetric products from racemic starting materials. There have been numerous uses of DKR in the literature that have provided new methods in pharmaceuticals as well as routes to natural products. Applications Noyori's asymmetric hydrogenation One of the more classic applications of DKR is Noyori's asymmetric hydrogenation. The presence of an acidic center between two carbonyl groups allows for easy epimerization at the chiral center under basic conditions. To select for one of the four possible stereoisomers, a BINAP-Ru catalyst is used to control the outcome of the reaction through the steric bulk of the phosphorus ligand. Some of the early transformations are shown below. To further understand the stereochemical outcome, one must look at the transition state geometry. The steric bulk of the BINAP ligand coupled with the coordination of ruthenium to the carbonyl oxygen atoms results in high selectivity for hydrogen insertion on one face. This resulting stereochemistry of (R,S) and (R,R) is obtained in 94.5% yield while the other three stereoisomers range from 0.5-3% yield. Noyori's accomplishments of 1990 paved the way for even more useful applications of DKR. Asymmetric conjugate reduction About a decade later, Jurkauskas and Buchwald also utilized dynamic kinetic resolution towards the hydrogenation of conjugated systems. 1,4 addition to cyclic enones is quite common in many reaction schemes, however asymmetric reductions in the presence of an easily epimerizable center adds to the complexity when trying to modify only one center. Through the use of a copper catalyzed reaction however, Buchwald was able to obtain 1,4 reduction in great enantiomeric excess (ee). In order to achieve a high rate of epimerization, a strong bulky base like sodium t-butoxide was used to ensure rapid equilibrium. Copper proved to be an excellent metal in this reaction due to its ability to complex with the oxygen when the hydrogen was added. Being a soft metal, copper greatly prefers 1,4 addition over 1,2 addition, with the alkene being a softer more polarizable electrophile. Again, BINAP became the ligand of choice due to its steric selectivity, lowering the transition state energy of starting material in the left column. In addition, PMHS was used as a relatively less reactive silane. 
This prevented loss of ee before deprotection with tetra-n-butylammonium fluoride (TBAF). Asymmetric aldol reaction In addition to hydrogenation reactions, DKR has been used to form other bonds with great success. The aldol reaction has been extensively researched primarily because of the inherent challenge of forming a carbon-carbon bond. Ward and colleagues have been able to use the proline-catalyzed aldol reaction in tandem with dynamic kinetic resolution to obtain a highly enantioselective reaction. Here, proline catalyzes the reaction through the creation of a highly nucleophilic enamine intermediate. The acid group on the catalyst helps facilitate the carbon-carbon bond formation by coordinating with the aldehyde oxygen. This greatly improves stereoselectivity and yield. Ward and his associates also found that adding trace amounts of water to the DMSO solvent greatly increased the yield of the reaction, most likely by aiding proton transfer from proline to the newly forming alcohol. The selectivity for this product can best be explained by the Felkin model. The cyclic (E)-enamine is able to undergo a favorable transition state where the aldehyde adopts an anti relationship relative to the incoming nucleophile, as well as a 1,2 syn relationship between the aldehyde and its adjacent ring system. The transition state is shown above. Enzyme-metal reactions More recently, many research groups have tried to incorporate enzymes into DKR synthetic routes. Because of their generally high substrate specificity, enzymes are valuable catalysts that bind to only one stereoisomer in the racemic mixture. In 2007 Bäckvall discovered an enzyme-metal coupled reaction that converts allylic acetates to allylic alcohols with excellent stereospecificity. In this reaction, a Pd(0) complex is used to interconvert the chirality of the acetate center at a rate fast enough to ensure complete racemization. When this is achieved, the CALB enzyme selectively hydrolyzes the (R) substrate because of the low binding affinity for the (S) substrate. This gives almost exclusively the (R) allylic alcohol in 98% ee. To expand on this chemistry, Bäckvall designed a one-pot, two-reaction system that utilizes the stereochemical outcome of a DKR reaction to undergo a second energetically favorable reaction with high enantioselectivity. This time a ruthenium complex is used to racemize the allylic alcohol in much the same way as the previous example. The addition of CALB catalyzes the reaction between the (R) isomer and the ester reagent to form a product with a diene and a dienophile. This intermediate can then undergo a tandem Diels-Alder reaction to achieve a decent yield with 97% ee. Natural product synthesis Dynamic kinetic resolution has also been applied to the total synthesis of a variety of natural products. After Bäckvall's discoveries in 2007, he employed another enzyme-metal coupled reaction to synthesize the natural product (R)-Bufuralol. The key step highlighted in the literature uses DKR to convert the chlorohydrin into the (S)-acetate by means of a lipase and a ruthenium catalyst. The lipase PS-C "Amano" II has been reported in the literature to be particularly enantioselective for the 1-phenyl-2-chloroethanol motif. The enzyme, along with the ruthenium catalyst, allows for rapid racemization of the chlorohydrin with selective binding of the (S) isomer for the acetylation reaction. Here isopropenyl acetate is used as the acyl donor. 
The product is achieved in excellent yield (96%) and near-perfect enantiomeric excess (>99%). Conclusion With the number of asymmetric synthetic challenges increasing as new targets for pharmaceuticals and materials grow, methods development becomes critical. Dynamic kinetic resolution is one solution to this ever growing demand, as one can take inexpensive racemic starting materials and come out with products in high yield and stereoselectivity. As the scope and application of this powerful concept increases, its utilization in the industrial and academic settings is likely to expand in the years to come. References Stereochemistry Asymmetry
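A numerical footnote to the energy-gap argument in the article's introduction: transition state theory relates the activation-energy difference ΔΔG‡ between two competing pathways to their rate ratio as exp(ΔΔG‡/RT). The short sketch below is an illustrative calculation only, not part of the source article; the 2.0 kcal/mol value is an arbitrary example, and under ideal DKR (Curtin–Hammett) conditions the rate ratio approximates the product ratio.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def selectivity(ddg_kcal_per_mol: float, temp_kelvin: float = 298.15):
    """Rate ratio and enantiomeric excess implied by an activation-energy gap."""
    ratio = math.exp(ddg_kcal_per_mol / (R * temp_kelvin))  # k_fast / k_slow
    ee = (ratio - 1.0) / (ratio + 1.0)                      # fraction between 0 and 1
    return ratio, ee

ratio, ee = selectivity(2.0)  # example gap of 2.0 kcal/mol at room temperature
print(f"rate ratio ~ {ratio:.0f}:1, ee ~ {100 * ee:.0f}%")  # roughly 29:1 and 93%
```

This is why modest energy differences of only a few kcal/mol, of the kind a chiral catalyst can impose, are enough to funnel a rapidly interconverting racemate almost entirely down a single pathway.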
Dynamic kinetic resolution in asymmetric synthesis
Physics,Chemistry
1,662
4,499,573
https://en.wikipedia.org/wiki/Adduct
In chemistry, an adduct (; alternatively, a contraction of "addition product") is a product of a direct addition of two or more distinct molecules, resulting in a single reaction product containing all atoms of all components. The resultant is considered a distinct molecular species. Examples include the addition of sodium bisulfite to an aldehyde to give a sulfonate. It can be considered as a single product resulting from the direct combination of different molecules which comprises all atoms of the reactant molecules. Adducts often form between Lewis acids and Lewis bases. A good example is the formation of adducts between the Lewis acid borane and the oxygen atom in the Lewis bases, tetrahydrofuran (THF): or diethyl ether: . Many Lewis acids and Lewis bases reacting in the gas phase or in non-aqueous solvents to form adducts have been examined in the ECW model. Trimethylborane, trimethyltin chloride and bis(hexafluoroacetylacetonato)copper(II) are examples of Lewis acids that form adducts which exhibit steric effects. For example: trimethyltin chloride, when reacting with diethyl ether, exhibits steric repulsion between the methyl groups on the tin and the ethyl groups on oxygen. But when the Lewis base is tetrahydrofuran, steric repulsion is reduced. The ECW model can provide a measure of these steric effects. Compounds or mixtures that cannot form an adduct because of steric hindrance are called frustrated Lewis pairs. Adducts are not necessarily molecular in nature. A good example from solid-state chemistry is the adducts of ethylene or carbon monoxide of . The latter is a solid with an extended lattice structure. Upon formation of the adduct, a new extended phase is formed in which the gas molecules are incorporated (inserted) as ligands of the copper atoms within the structure. This reaction can also be considered a reaction between a base and a Lewis acid where the copper atom plays the electron-receiving role and the pi electrons of the gas molecule play the electron-donating role. Adduct ions An adduct ion is formed from a precursor ion and contains all of the constituent atoms of that ion as well as additional atoms or molecules. Adduct ions are often formed in a mass spectrometer ion source. See also Adductomics DNA adduct References Chemical reactions Solid-state chemistry General chemistry
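As a small numerical footnote to the adduct ion paragraph above, the sketch below estimates mass-to-charge values for a few common singly charged adducts of a neutral molecule. It is an illustrative example, not drawn from the source article; the masses are rounded monoisotopic values and the caffeine example is an assumed test case.

```python
# Rounded monoisotopic mass shifts (daltons) for common singly charged adducts.
ADDUCT_SHIFT = {
    "[M+H]+": 1.00728,    # addition of a proton
    "[M+Na]+": 22.98922,  # addition of a sodium cation
    "[M+K]+": 38.96316,   # addition of a potassium cation
}

def adduct_mz(neutral_monoisotopic_mass: float) -> dict[str, float]:
    """m/z of common singly charged adduct ions for a given neutral mass."""
    return {label: neutral_monoisotopic_mass + shift
            for label, shift in ADDUCT_SHIFT.items()}

# Example: caffeine (C8H10N4O2), monoisotopic mass ~194.0804 Da.
for label, mz in adduct_mz(194.0804).items():
    print(f"{label}: {mz:.4f}")   # [M+H]+ ~195.0877, [M+Na]+ ~217.0696, ...
```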
Adduct
Physics,Chemistry,Materials_science
518
2,611,670
https://en.wikipedia.org/wiki/Post%20Office%20Research%20Station
The Post Office Research Station was first established as a separate section of the General Post Office in 1909. In 1921, the Research Station moved to Dollis Hill, north west London, initially in ex-army huts. The main permanent buildings at Dollis Hill were opened in 1933 by Prime Minister Ramsay MacDonald. In 1968 it was announced that the station would be relocated to a new centre to be built at Martlesham Heath in Suffolk. This was formally opened on 21 November 1975 by Queen Elizabeth and is today known as Adastral Park. The old Dollis Hill site was released for housing, with the main building converted into a block of luxury flats and an access road named Flowers Close, in honour of Tommy Flowers. Much of the rest of the site contains affordable housing administered by Network Housing. World War II In 1943 the world's first programmable electronic computer, Colossus Mark 1, was built by Tommy Flowers and his team, followed in 1944 and 1945 by nine Colossus Mark 2s. These were used at Bletchley Park in Cryptanalysis of the Lorenz cipher. Dollis Hill also built the predecessor of Colossus the Heath Robinson (codebreaking machine). The Director, Gordon Radley, was also told of the secret Bletchley Park establishment. Members of Flowers' team included Sydney Broadhurst, William W. Chandler, Harry Fensom; and Allen Coombs (who took over for the Mark II version of Colossus). Paddock, a World War II concrete two-level underground bunker, was built in secret in 1939 as an alternative Cabinet War Room underneath a corner of the Dollis Hill site. Its surface building was demolished after the war. Research The first transatlantic radio telephone service (in the 1940s). In 1957 ERNIE (Electronic Random Number Indicator Equipment) was built for the government's Premium Bond lottery, by Sidney Broadhurst's team. In 1971 Samuel Fedida conceived Viewdata and the Prestel service was launched in 1979. Notable staff John Bray William W. Chandler Allen Coombs Dick Dyott James H. Ellis Samuel Fedida Harry Fensom Tommy Flowers Gil Hayward Ralph Archibald Jones. Developed espionage and counter equipment, helped invent the listening devices used for locating buried bomb victims in London and helped devise the standard for telephone systems in Europe. Arnold Lynch Frank Morrell Gordon Radley Stephanie Shirley Haakon Sørbye Eric Speight Henry John Josephs (H. J. Josephs). Entered the Research Station as a draughtsman but eventually rose to a senior research position being known for his mathematical skills. He was a great admirer of Oliver Heaviside and his work, of which Josephs wrote a monograph on the Heaviside Operational calculus. Josephs was also involved with the IEE (now Institution of Engineering and Technology) in which he presented a number of papers at the Heaviside Centenary Meeting in 1950 and went on to examine, repair and study papers of Oliver Heaviside found under the floorboards of a house in Paignton, Devon, where Oliver Heaviside had once lived. References Former buildings and structures in the London Borough of Brent General Post Office History of computing History of telecommunications in the United Kingdom 20th century in London 1921 establishments in the United Kingdom 1921 in London
Post Office Research Station
Technology
673
18,367,636
https://en.wikipedia.org/wiki/KCNA7
Potassium voltage-gated channel subfamily A member 7 also known as Kv1.7 is a protein that in humans is encoded by the KCNA7 gene. The protein encoded by this gene is a voltage-gated potassium channel subunit. It may contribute to the cardiac transient outward potassium current (Ito1), the main contributing current to the repolarizing phase 1 of the cardiac action potential. References Further reading External links Ion channels
KCNA7
Chemistry
89
11,819,940
https://en.wikipedia.org/wiki/Septoria%20liquidambaris
Septoria liquidambaris is a fungal plant pathogen infecting sweetgum trees. References External links Index Fungorum USDA ARS Fungal Database Fungal tree pathogens and diseases Fungus species liquidambaris
Septoria liquidambaris
Biology
44
24,508,918
https://en.wikipedia.org/wiki/Gymnopilus%20jalapensis
Gymnopilus jalapensis is a species of mushroom in the family Hymenogastraceae. See also List of Gymnopilus species External links Gymnopilus jalapensis at Index Fungorum jalapensis Fungi of North America Taxa named by William Alphonso Murrill Fungus species
Gymnopilus jalapensis
Biology
69
32,122,780
https://en.wikipedia.org/wiki/4-Hydroxy-TEMPO
4-Hydroxy-TEMPO or TEMPOL, formally 4-hydroxy-2,2,6,6-tetramethylpiperidin-1-oxyl, is a heterocyclic compound. Like the related TEMPO, it is used as a catalyst and chemical oxidant by virtue of being a stable aminoxyl radical. Its major appeal over TEMPO is that it is less expensive, being produced from triacetone amine, which is itself made via the condensation of acetone and ammonia. This makes it economically viable on an industrial scale. In biochemical research, 4-hydroxy-TEMPO has been investigated as an agent for limiting reactive oxygen species. It catalyzes the disproportionation of superoxide, facilitates hydrogen peroxide metabolism, and inhibits Fenton chemistry. 4-Hydroxy-TEMPO, along with related nitroxides, are being studied for their potential antioxidant properties. On an industrial-scale 4-hydroxy-TEMPO is often present as a structural element in hindered amine light stabilizers, which are commonly used stabilizers in plastics, it is also used as a polymerisation inhibitor, particularly during the purification of styrene. It is a promising model substance to inhibit SARS-CoV-2 RNA-dependent RNA polymerase. See also Bobbitt's salt pH neutral aqueous organic flow batteries References Free radicals Amine oxides Piperidines
4-Hydroxy-TEMPO
Chemistry,Biology
301
1,673,941
https://en.wikipedia.org/wiki/Institute%20for%20Systems%20Biology
Institute for Systems Biology (ISB) is a non-profit research institution located in Seattle, Washington, United States. ISB concentrates on systems biology, the study of relationships and interactions between various parts of biological systems, and advocates an interdisciplinary approach to biological research. Goals Systems biology is the study of biological systems in a holistic manner by integrating data at all levels of the biological information hierarchy, from global down to the individual organism, and below down to the molecular level. The vision of ISB is to integrate these concepts using a cross-disciplinary approach combining the efforts of biologists, chemists, computer scientists, engineers, mathematicians, physicists, and physicians. On its website, ISB has defined four areas of focus: P4 Medicine – This acronym refers to predictive, preventive, personalized and participatory medicine, which focuses on wellness rather than mere treatment of disease. Global Health – Use of the systems approach towards the study of infectious diseases, vaccine development, emergence of chronic diseases, and maternal and child health. Sustainable Environment – Applying systems biology for a better understanding of the role of microbes in the environment and their relation to human health. Education & Outreach – Knowledge transfer to society through a variety of educational programs and partnerships, including the spin out of new companies. Early history Leroy Hood co-founded the Institute with Alan Aderem and Ruedi Aebersold in 2000. However, the story of how ISB got started actually begins in 1990. Lee Hood was the director of a large molecular biotechnology lab at the California Institute of Technology in Pasadena, and was a key advisor in the Human Genome Project, having overseen development of machines that were instrumental to its later success. The University of Washington (UW), like many other universities, was eager to recruit Hood, but had neither the space nor the money to accommodate Hood's large laboratory. Lee Huntsman, director of University of Washington's Center for Bioengineering, was attending a University of Washington football game, sharing a luxury box with Bill Gates, the former CEO and current chairman of Microsoft. Huntsman took the opportunity to tell Gates about Hood. Bill Gates already had a considerable interest in biotechnology, both as a philanthropist and as an investor, and after meeting Hood, donated $12 million to UW to enable him to head a new department of molecular biotechnology, where Hood continues to hold a faculty position as the Gates Professor of Molecular Biotechnology. ISB represents a spin-off of Hood's labs at UW. Achievements ISB is in the top ranks of scientific institutions worldwide. In 2012, the SCImago Research Group, based in Spain, ranked ISB 4th worldwide on its Excellence Rate scale. ISB currently hosts 12 research groups with expertise ranging across genetics, microbial genetics, complex molecular machines, macromolecular complexes, gene regulatory networks, immunology, molecular and cell biology, cancer biology, genomics, proteomics, protein chemistry, computational biology and biotechnology. The ISB website lists 985 peer-reviewed publications for the years 2000 through early 2012. In late 2005, ISB began to emphasize the application of systems biology to P4 medicine (predictive, preventive, personalized, participatory), i.e. 
the development of techniques for predicting and preventing disease, possibly before patients even know they are sick. The P4 Medicine institute was co-founded in 2010 by ISB and Ohio State University. On December 21, 2012, President Obama awarded 12 scientists, including Dr. Leroy Hood, the National Medal of Science, which is the highest honor given by the U.S. government to scientists, engineers and inventors. The Education and Outreach efforts of ISB include creating the Logan Center for Education, whose mission is to enable educators to produce STEM-literate students. ISB offers paid research internships for high school and undergraduate students, and offers advanced systems science courses throughout the year. ISB has partnered in several high-profile research projects, the most significant one thus far being with the Grand Duchy of Luxembourg to create the Center for Systems Biology Luxembourg and the Seattle Proteome Center. ISB faculty members have launched several companies, including Cytopeia (acquired by BD in 2008), Integrated Diagnostics, Macrogenics, NanoString Technologies, and Accelerator Corporation. Accelerator Corporation, in particular, is an investment company that provides venture capital funding and management for biotech startup companies. Its portfolio companies and graduates have focused on improved biotherapeutics, vaccines, biomarkers and other such products. See also List of systems sciences organizations References External links ISB website Seattle Proteome Center Bioinformatics organizations Biological research institutes in the United States Systems biology Systems science institutes Research institutes in Seattle Research institutes established in 2000 2000 establishments in Washington (state)
Institute for Systems Biology
Biology
978
43,285,428
https://en.wikipedia.org/wiki/Rhizopogon%20truncatus
Rhizopogon truncatus is an ectomycorrhizal fungus in the family Rhizopogonaceae. It was described by American mycologist David Hunt Linder in 1924. References External links Rhizopogonaceae Fungi described in 1924 Fungi of North America Fungus species
Rhizopogon truncatus
Biology
65
1,240,291
https://en.wikipedia.org/wiki/Return%20on%20assets
The return on assets (ROA) shows the percentage of how profitable a company's assets are in generating revenue. ROA can be computed as ROA = Net Income / Average Total Assets. The phrase return on average assets (ROAA) is also used, to emphasize that average assets are used in the above formula. This number tells you what the company can do with what it has, i.e. how many dollars of earnings they derive from each dollar of assets they control. It's a useful number for comparing competing companies in the same industry. The number will vary widely across different industries. Return on assets gives an indication of the capital intensity of the company, which will depend on the industry; companies that require large initial investments will generally have lower return on assets. ROAs over 5% are generally considered good. Usage Return on assets is one of the elements used in financial analysis using the Du Pont Identity. See also Return on equity (ROE) List of business and finance abbreviations Rate of return on a portfolio Return on brand (ROB) Return on capital (ROC) Return on investment (ROI) Weighted average return on assets (WARA) References External links Return On Assets - ROA Financial ratios Investment indicators
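A brief worked example of the formula above; the company and its figures are hypothetical, chosen only to illustrate the arithmetic.

```python
def return_on_assets(net_income: float,
                     assets_start: float,
                     assets_end: float) -> float:
    """ROA = net income / average total assets, returned as a fraction."""
    average_total_assets = (assets_start + assets_end) / 2
    return net_income / average_total_assets

# Hypothetical firm: $10M net income, assets growing from $190M to $210M.
roa = return_on_assets(10e6, 190e6, 210e6)
print(f"ROA = {roa:.1%}")  # 5.0%, i.e. 5 cents of earnings per dollar of assets
```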
Return on assets
Mathematics
242
31,130,713
https://en.wikipedia.org/wiki/Warm-glow%20giving
Warm-glow giving is an economic theory describing the emotional reward of giving to others. According to the original warm-glow model developed by James Andreoni (1989, 1990), people experience a sense of joy and satisfaction for "doing their part" to help others. This satisfaction - or "warm glow" - represents the selfish pleasure derived from "doing good", regardless of the actual impact of one's generosity. Within the warm-glow framework, people may be "impurely altruistic", meaning they simultaneously maintain both altruistic and egoistic (selfish) motivations for giving. This may be partially due to the fact that "warm glow" sometimes gives people credit for the contributions they make, such as a plaque with their name or a system where they can make donations publicly so other people know the "good" they are doing for the community. Whereas "pure altruists" (sometimes referred to as "perfect altruists") are motivated solely by the desire to provide for a recipient, impure altruists are also motivated by the joy of giving (warm glow). Importantly, warm glow is distinctly non-pecuniary, meaning it arises independent of the possibility of financial reward. Therefore, the warm glow phenomenon is distinct from reciprocal altruism, which may imply a direct financial incentive. Warm-glow giving is a useful economic framework to consider public good provision, collective action problems, charitable giving, and gifting behavior. The existence of a warm glow helps explain the absence of complete crowding-out of private giving by public grants, as predicted by classical economic models under the neutrality hypothesis. Beyond economics, warm glow has been applied to sociology, political science, environmental policy, healthcare, and business. Conceptually, warm-glow giving is related to the notion of a "helper's high" and appears to be resilient across cultures. Background in moral philosophy Warm glow is built upon the idea of impure altruism: the blend of both altruistic and egoistic desires to help others. Philosophers have debated this idea since the time of the ancient Greeks. In the Socratic dialogues, motivation may be traced to an egoistic concern for one's own welfare, thus denying the plausibility of pure altruism. Similarly, Plato's organization of motivations as responses to hunger-based desires highlights the foundational importance of egoism in all social interactions. However, in Nicomachean Ethics and Eudemian Ethics, Aristotle considers both the possibility and necessity of altruism to fulfill high-order eudaimonic goals, thus setting the stage for an ongoing philosophical debate. Hobbes, Kant, Nietzsche, Bentham, J.S. Mill argued against the possibility of pure altruism and advanced the doctrine of psychological egoism, while others (Butler, Hume, Rousseau, Adam Smith, Nagel) argued for the existence of altruistic motives. Conceptually, the warm-glow model represents a stylized compromise between these two perspectives, allowing for individuals to be purely altruistic, purely egoistic, or impurely altruistic. Warm glow is at least tangentially related to the topic of free will, as people should only reap the psychological reward of helping if they freely choose to do so. Background in economics Departure from Classical theory The normative theory of Ricardian equivalence suggests private spending should be unresponsive to fiscal policy because forward-looking individuals smooth their consumption, consistent with Modigliani's life-cycle hypothesis. 
Applied to the provision of charities or public goods, Ricardian equivalence and the classical assumption of pure altruism together support the neutrality hypothesis, implying perfect substitutability between private and public contributions. The neutrality hypothesis assumes rational economic agents are indifferent to whether a cause is funded by the private or public sector; only the level of funding is relevant. A consequence of neutrality under perfect altruism is that government grants should completely crowd out private donations. That is, a dollar given by the government takes the place of a dollar that would have been given by a private citizen. To illustrate, economic agents operating under the neutrality hypothesis would give to a cause until complete provision, beyond which they would contribute nothing. This is consistent with Andreoni's conceptualization of "pure altruism"; however, it is inconsistent with impure altruism or pure egoism. Thus, warm glow and the failure to act as purely altruistic present fundamental challenges to the neutrality hypothesis and Ricardian equivalence. In economics, violations of the neutrality hypothesis pose serious concerns for macroeconomic policies involving taxation and redistribution, and for microeconomic theories of collective action and public good provision. Several of Andreoni's contemporaries simultaneously provided evidence against neutrality-driven crowding-out effects, including Kingma (1989) and Khanna et al. (1995). Taken together, these findings offered a strong rebuke of the assumption that public grants crowd out private donations to public goods. Original model Andreoni's economic model of impure altruism considers a simplistic world with only two goods: a private good and a public good. A given individual i, endowed with wealth w_i, faces the budget constraint w_i = x_i + g_i, where x_i represents consumption of a private good, and g_i represents the contribution to the public good. To the extent that g_i positively contributes to utility, it may be interpreted as the degree of warm glow. It follows that the total provision of the public good, G, is simply the sum of all individual contributions, G = g_1 + g_2 + ... + g_n, and the total contributions to the public good from all other individuals is denoted as G_-i = G - g_i. Thus, the public good is the sum of the person's contribution along with the total contributions of all other individuals: G = g_i + G_-i (1). All individuals in this naïve economy face the same utility functions, given by U_i = u(x_i) + α u(G) + β u(g_i) (2), where u(x_i) represents the utility from private, egoistic consumption, u(G) the utility derived from the public good, and u(g_i) the warm-glow utility of the contribution towards the public good. An altruist should derive no additional utility from the act of giving, so β = 0, whereas a pure egoist derives pleasure only from the warm glow of giving, without care for the public good itself, hence α = 0. From the budget constraint and utility function, one can derive the utility maximization function, which is the original utility function (2) transformed using the definition of the public good (1): the individual chooses g_i to maximize U_i = u(w_i - g_i) + α u(g_i + G_-i) + β u(g_i). This utility maximization function serves as the foundation for warm-glow model development. Implications Assuming a strategy of utility maximization, the model of warm glow offers many important economic predictions. Specifically, it presents three contrarian insights to those of classical economics under Ricardian equivalence. First, warm-glow theory predicts that income transfers will increase net giving only when income is transferred to more altruistic individuals. 
Second, it suggests that the provision of a public good is dependent upon the distribution of income within a population. Third, it suggests that public funding of public goods through lump-sum taxes will be more effective than relying upon altruism in the private sector. Individually and collectively, these propositions are in stark contrast to the laissez-faire doctrine of Ricardian economics. Following this original model, warm glow has conceptually evolved with new applications across disciplines to explain and encourage prosocial behavior. Background in psychology Many of the advances in warm glow research stem not from economics, but from psychology. In particular, research on motivations and affect has played a key role in defining and operationalizing warm glow for broad application. A motivational perspective "...a millionaire does not really care whether his money does good or not, provided he finds his conscience eased and his social status improved by giving it away..." -George Bernard Shaw. As illustrated in Shaw's quote, both intrinsic desires of conscience and extrinsic desires of social status may motivate giving. Warm glow has traditionally been restricted to intrinsic motivation; however, this distinction is often murky. There has been considerable inconsistency in the literature as to whether a warm glow refers to intrinsic or extrinsic motivation. According to Andreoni (2006), "putting warm-glow into the model is, while intuitively appealing, an admittedly ad hoc fix". Further elaborating on the topic, he and colleagues wrote that the concept was "originally a placeholder for more specific models of individual and social motivations". From this initial ambiguity, different authors have at times referred to the phenomenon as solely intrinsic, both intrinsic and extrinsic, or solely extrinsic. Some authors have made deliberate distinctions between prestige-seeking (extrinsic) and the intrinsic components of warm glow, but many have not. Conceptualization of warm glow as either intrinsic or extrinsic has implications for motivational crowding out, satiation effects, and expected magnitude. Intrinsic warm glow The most common and classically "correct" interpretation of warm glow is as a solely intrinsic phenomenon. Language referring to the "joy of giving", "the positive emotional experience from the act of helping others", "the moral satisfaction of helping others" and the "internal satisfaction of giving" suggests an intrinsic drive. The intrinsic component of warm glow is the private emotional benefit of giving. Extrinsic warm glow Much of the ambiguity surrounding the motivational processes of warm glow has arisen from the misclassification of extrinsic rewards as intrinsic processes. While intrinsic desires center upon emotional gains, extrinsic rewards may include recognition, identity signaling, and prestige. Extrinsic motivation may also take the form of punishment (negative warm glow), in the form of censure or blame. Some research has explicitly focused on extrinsic warm glow, such as "relational warm glow". One area that has been frequently confused in the literature involves the classification of guilt, which is an introjected form of extrinsic motivation. Importance of motivational classification The classification of warm glow as either intrinsic or extrinsic has important ramifications for policy makers. 
The extensive body of literature on motivational crowding out suggests the efficacy of policies promoting altruistic behavior may be a function of whether pre-existing behavior is intrinsically or extrinsically motivated. The extent to which extrinsic incentives may be substitutes for intrinsic motivations depends upon the motivational classification of the warm glow model. Furthermore, intrinsic warm glow may be more resilient to satiation effects than extrinsic warm glow. Finally, the expected magnitude of a warm glow will be a function of how it is characterized. Models assuming a purely intrinsic warm glow should report lesser warm glows than models also including extrinsic components. Empathy and the psychological determinants of warm glow The phenomenon of warm-glow giving was originally introduced as an economic model. It its original form, the warm-glow model lacked a satisfactory explanation for the underlying psychological processes. Early studies of warm glow were deliberately vague in attributing the experience to a cause. A more recent body of research has identified several important determinants of warm glow, including social distance, vividness to the beneficiary, and guilt avoidance. Taken together, these observations suggest the warm glow may be best described as the visceral manifestation of empathy. This is consistent with the moral psychological literature of empathy, most notably as advanced by Batson. In his "empathy-altruism hypothesis", Batson claims that empathy ("feeling sympathetic, compassionate, warm, softhearted, tender") evokes a desire for other-regarding behavior. Social distance Social distance is an important determinant of warm glow, particularly in the framework of empathy. Prior research has examined the link between emotional arousal and social distance, finding that mutual suffering and shared joy both increase as a function of social similarity. Consistent with the "identifiable victim effect", research has shown that people express a greater willingness to help when others are known, as opposed to statistical. Vividness to beneficiary While the vividness of the beneficiary is captured in social distance, the vividness to the beneficiary refers to a beneficiary's ability to perceive that kindness has been done upon them. As a determinant of warm glow, vividness to the beneficiary operates on two levels. The primary level concerns whether a beneficiary is aware that kindness has been given to them, absent any attribution of the source. The secondary level involves the identifiability of the benefactor. Warm glow should be positively impacted by both levels for vividness. Guilt Recent work has identified guilt avoidance as an important component of warm glow. Some have even compared guilt as the "flip side" of warm glow. Parameterizing guilt as a component of warm glow allows for deficit values of warm glow, which was originally constrained to strictly positive values in Andreoni (1989, 1990). In a recent publication, Andreoni and colleagues explain this by writing: "Psychologists posit that giving is initiated by a stimulus that elevates sympathy or empathy in the mind of the potential giver, much as the smell of freshly baked bread can pique appetite. Resolving this feeling comes either by giving and feeling good or by not giving and feeling guilt." In other notable overviews of warm glow, this phenomenon has been characterized as "personal distress". 
In surveys of self-reported guilt, people experience roughly as much interpersonal and societal guilt as they do personal guilt. Furthermore, half of the survey respondents prefer to directly address and resolve their feelings of guilt. Taken together, these findings suggest a substantial component of guilt aversion. Neurobiological evidence Evidence from neural imaging supports the warm-glow effect. A meta-analysis of 36 studies using functional magnetic resonance imaging demonstrated that the brain's reward networks are consistently activated when choices to give are made. This includes the ventromedial prefrontal cortex (vmPFC). Strategic decisions for which something is hoped for in return activate more anterior regions of vmPFC but decisions where nothing is expected in return activate posterior regions of vmPFC. This provides a biological distinction of decisions to help that depends on the expectation of external rewards. Applications Voting One of the earliest attempts to formally model the warm glow phenomenon can be found in "A Theory of the Calculus of Voting" by Riker and Ordeshook (1968). Resolving the paradox by which rational individuals would never expend the effort to vote due to the statistical near-improbability of "having their vote count" (casting the decisive vote), Riker and Ordeshook highlighted the psychological utility of voting for one's preferred candidate. Just as an economic warm glow motivates people to willingly forego their scarce resources, the psychological utility described in early voting models serves to explain otherwise irrational behavior. The warm glow of voting continues to be an important consideration in ethical voter models. Environmental policy In efforts to design effective, enduring, and efficient environmental interventions, many scholars and policy makers have focused on warm-glow effects. Because many forms of extrinsic rewards and punishments have failed to promote long-term improvements in environmentally conscious behavior, there is a growing emphasis on intrinsic warm glow. Intervention experiments offer promising results in areas such as supporting green energy, recycling and waste reduction, energy consumption, carpooling initiatives. Business Corporate social responsibility Supporting businesses engaged in corporate social responsibility (CSR) initiatives may give consumers a vicarious warm glow. However, recent research suggests that consumers may expect to overpay when companies engage in CSR due to perceptions of price fairness. The implication that "doing good" carries a financial burden for businesses leads consumers to infer general price markups. This body of research cautions that corporate warm glows may be coupled with "cold prickles" of extra costs. Product advertising Warm glow can be a central element of cause marketing, in which products are paired with donations. When consumers are exposed to products with a direct cause marketing association, their appraisal of both the product and the company may improve due to warm glow. There is also evidence that product warm glows may play a role in a process called "hedonic licensing", in which consumers who perceive a moral surplus subsequently allow themselves more leeway to make selfish purchases. Capital Markets Inefficiency in sustainable investments Warm glow in the context of sustainable investments involves investors deriving a sense of satisfaction from their responsible investment decision-making rather than from the actual impact. 
Private investors who engage in sustainable investments tend to rely on their emotions rather than adopting a calculated approach to evaluate the impact of their investment. Hence, utility stems from the prosocial act itself and, therefore, does not increase linearly with the level of impact. This implies that investors' willingness to pay is insensitive to the level of impact of an investment. The concept of warm glow stands in contrast to the conventional behavior outlined in decision theory, often referred to as "consequentialism", where the utility of prosocial investors is directly linked to the level of impact generated by their investments. Private investors showing warm glow behavior typically seek opportunities to prevent climate change, leading to a higher willingness to pay for investments with a sustainable impact than investments with no impact. Leveraging warm glow becomes important in attracting funds for sustainable investments, encouraging investors to integrate sustainability considerations into their financial decisions. However, there are drawbacks when investors prioritize optimizing their warm glow over maximizing impact. Companies are incentivized to engage in greenwashing or "impact-washing", promoting "light green" financial products that provide emotional satisfaction but lack substantial impact. Warm glow is independent of investors' sustainability experience, implying that enhanced sustainability accounting will not contribute to realigning investors' willingness to pay with the actual level of impact. To address this concern, positive externalities can be quantified in monetary terms so that investors adjust their willingness to pay coherently with the level of impact of the sustainable investment. Additionally, labels could realign investors' emotional preferences with the quantitative level of impact of a financial product and incentivize firms to offer real "green" products. Philanthropy Avoidance behaviors Common phenomena such as avoiding eye contact with beggars or adjusting one's route to avoid a solicitor may be explained using the warm glow model. One behavioral consequence of warm glow is strategic avoidance of giving opportunities. According to this hypothesis, individuals anticipate their warm glow upon identifying a future giving opportunity. Assuming a functional form that allows warm glow to be negative (driven by the guilt of not giving), people may strategically and effortfully avoid giving situations. The strategic incentive is easily understood through a utility function in which the warm-glow term is positive for a donation (the joy of giving) and negative for not giving (guilt). For an agent who would suffer a disutility of giving at their desired level, because the marginal utility of private expenditure exceeds the marginal utility of warm-glow giving, the preferred choice is to give nothing. Because giving nothing may be associated with guilt, the warm-glow utility of giving nothing will be negative. Therefore, a rational agent who cannot justify giving can maximize their utility by avoiding the giving situation altogether, effectively dropping the warm-glow argument from their utility function. Thus, the model suggests that avoidance of giving opportunities is a preferred strategy for individuals who experience guilt as a negative warm glow. Economic models assign a cost of effort to avoidance, and predict that people will incur such effort whenever u_0 - c > u_g, where u_0 is the utility of not giving, c is the cost of avoidance, and u_g is the utility of giving to a solicitor, conditional upon not avoiding. 
Through this lens, avoidance can be viewed as an economic commitment device, where a person commits to avoiding a situation (being asked to give) in which they are likely to surrender to temptation (giving). Central to this avoidance hypothesis is that individuals, while in low-empathy "cold states", can anticipate their behavior in high-empathy "hot states". While this model assumes a high degree of sophistication on the part of the individual, research by Andreoni, Rao, and Trachtman explores this very phenomenon by observing avoidance and donation behavior of customers entering a supermarket during the holidays. Customers often walked to a further entrance to avoid solicitors for the Salvation Army. According to their model, "empathetically vulnerable" individuals who are not able to give (for budgetary reasons) faced the greatest incentive to avoid collectors because of the guilt they would experience upon saying "no". Grouping behaviors Charities may strategically employ categorical donor recognition. For example, a charitable organization may distinguish any gift between $500 and $999.99 by a title distinct from that awarded for gifts above $1,000. As a consequence, the social signaling component of the warm-glow effect (in extrinsic operationalizations of warm glow) suggests individuals should be motivated to make the minimum donation to acquire their desired categorical status. Consistent with this hypothesis, research has indicated significant grouping behavior of donors around category minimums. Inefficiency in charitable allocation A majority of those who choose to give some portion of their wealth to charity support multiple different causes. Rather than giving 100% of their cumulative donations to the same source, there exists a widespread preference to distribute funds across charities. The warm glow model explains this by recognizing that givers receive multiple warm glows through giving to multiple causes, thus supporting the preference to make multiple small contributions. As a consequence, some scholars suggest an efficiency loss due to high volumes of small donations (which are less efficient to process) rather than fewer large donations. Moral philosopher Peter Singer mentions warm-glow givers in his 2015 book, The Most Good You Can Do. Singer states that these types of donors "give small amounts to many charities [and] are not so interested in whether what they are doing helps others." He references "empathetic concern" and "personal distress" as two distinct components of warm-glow givers. Inefficiency in charitable selection Warm glow may offer an explanation for some of the observed inefficiencies in charitable giving. For example, United States citizens directed more than 60% of their total charitable contributions to religious groups, educational institutions, arts organizations, and foundations in 2017, compared with under 7% to foreign aid. According to models of social justice and economic QALYs, in which human lives are treated with equal dignity and equal respect, regardless of race, gender, or place of origin, the goal of charity should be to fight global poverty. Similarly, economic models, which attempt to place a monetary value on a human life, highlight the inefficiency of all philanthropy not used to combat global poverty, which offers the highest marginal return. The warm-glow model accounts for such inefficiency because impure altruists may be insensitive to the actual cause, and more sensitive to the act of giving or the size of the gift.
Thus, warm-glow may generate philanthropic inefficiencies to the extent that it desensitizes potential donors to the marginal impact of a given charity. In response to this concern, William MacAskill and colleagues have advanced a process of philanthropic allocation called "effective altruism". This methodology seeks to leverage logic and responsibility to identify effective charitable opportunities, thus minimizing the effect of warm-glow in the decision-making process. Technology Warm-glow has been found to influence user behavioral intention to adopt a technology. Criticisms Ad-hoc A common criticism of the warm-glow paradigm is that it seems ad-hoc. Indeed, Andreoni, the father of the original model, stated that "putting warm-glow into the model is, while intuitively appealing, an admittedly ad hoc fix." As the body of research has evolved over nearly 30 years — incorporating philosophical, psychological, and physiological insights – it has become a better descriptive model of behavior. Self-delusion An obscure criticism of the warm-glow paradigm is that it necessitates self-deception. This argument states that in order to reap the emotional reward of helping others, one must believe his actions to be motivated altruistically. Yet, the mere existence of a warm glow should then contradict the belief of pure altruism. A question arises as to whether prolonged self-delusion is sustainable and impervious to learning through self-perception. Extensions Some research has investigated the link between warm glow and the phenomenon of mere exposure, leading researchers to consider warm glow as a heuristic. References Further reading Altruism Behavioral economics Moral psychology
Warm-glow giving
Biology
5,010
67,097,838
https://en.wikipedia.org/wiki/Rc-o319
Rc-o319 is a bat-derived strain of severe acute respiratory syndrome–related coronavirus collected in little Japanese horseshoe bats (Rhinolophus cornutus) from sites in Iwate, Japan. It has 81% similarity to SARS-CoV-2 and is the earliest-branching strain among the SARS-CoV-2–related coronaviruses. References SARS-CoV-2 Bat virome Coronaviridae Animal virology Sarbecovirus Zoonoses
Rc-o319
Biology
102
12,366,559
https://en.wikipedia.org/wiki/Crevice%20corrosion
Crevice corrosion refers to corrosion occurring in occluded spaces such as interstices in which a stagnant solution is trapped and not renewed. These spaces are generally called crevices. Examples of crevices are gaps and contact areas between parts, under gaskets or seals, inside cracks and seams, spaces filled with deposits and under sludge piles. Mechanism The corrosion resistance of a stainless steel is dependent on the presence of an ultra-thin protective oxide film (passive film) on its surface, but it is possible under certain conditions for this oxide film to break down, for example in halide solutions or reducing acids. Areas where the oxide film can break down can also sometimes be the result of the way components are designed, for example under gaskets, in sharp re-entrant corners or associated with incomplete weld penetration or overlapping surfaces. These can all form crevices which can promote corrosion. To function as a corrosion site, a crevice has to be of sufficient width to permit entry of the corrodent, but narrow enough to ensure that the corrodent remains stagnant. Accordingly crevice corrosion usually occurs in gaps a few micrometres wide, and is not found in grooves or slots in which circulation of the corrodent is possible. This problem can often be overcome by paying attention to the design of the component, in particular to avoiding formation of crevices or at least keeping them as open as possible. Crevice corrosion is a very similar mechanism to pitting corrosion; alloys resistant to one are generally resistant to both. Crevice corrosion can be viewed as a less severe form of localized corrosion when compared with pitting. The depth of penetration and the rate of propagation in pitting corrosion are significantly greater than in crevice corrosion. Crevices can develop a local chemistry which is very different from that of the bulk fluid. For example, in boilers, concentration of non-volatile impurities may occur in crevices near heat-transfer surfaces because of the continuous water vaporization. "Concentration factors" of many millions are not uncommon for common water impurities like sodium, sulfate or chloride ions. The concentration process is often referred to as "hideout" (HO), whereas the opposite process, whereby the concentrations tend to even out (e.g., during shutdown) is called "hideout return" (HOR). In a neutral pH solution, the pH inside the crevice can drop to 2, a highly acidic condition that accelerates the corrosion of most metals and alloys. For a given crevice type, two factors are important in the initiation of crevice corrosion: the chemical composition of the electrolyte in the crevice and the electrical potential drop into the crevice. Researchers had previously claimed that either one or the other of the two factors was responsible for initiating crevice corrosion, but recently it has been shown that it is a combination of the two that causes active crevice corrosion. Both the drop of potential and the change in composition of the crevice electrolyte are produced by the oxygen depletion of the solution inside the crevice (oxygen consumption caused by the metal oxidation at the inner surface of the occluded cavity) and the separation of electroactive areas, with net anodic reactions (oxidation) occurring within the crevice and net cathodic reactions (reduction) occurring at the exterior of the crevice (on the bold surface). The ratio of the surface areas between the cathodic and anodic region is significant. 
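As a rough illustration of why the separation of anodic and cathodic areas matters, the sketch below applies simple charge conservation in a galvanic couple (total anodic current equals total cathodic current). The numbers are hypothetical and the relation is a textbook simplification, not a statement about any specific alloy or crevice geometry.

```python
def anodic_current_density(cathodic_current_density, cathode_area, anode_area):
    """Charge conservation: i_a * A_a = i_c * A_c, so i_a = i_c * (A_c / A_a)."""
    return cathodic_current_density * (cathode_area / anode_area)

# A small occluded crevice (anode) coupled to a large external surface (cathode):
i_c = 1e-6   # A/cm^2, hypothetical cathodic current density on the bold external surface
A_c = 100.0  # cm^2, external (cathodic) area
A_a = 0.01   # cm^2, crevice (anodic) area

print(f"anodic current density ~ {anodic_current_density(i_c, A_c, A_a):.2e} A/cm^2")
# ~1e-2 A/cm^2: a 10,000-fold amplification, illustrating why metal inside a
# small crevice can dissolve rapidly even when overall material loss is minimal.
```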
Some of the phenomena occurring within the crevice may be somewhat reminiscent of galvanic corrosion: galvanic corrosion involves two connected metals in a single environment, whereas crevice corrosion involves a single metal part exposed to two connected environments. The mechanism of crevice corrosion can be (but is not always) similar to that of pitting corrosion. However, there are sufficient differences to warrant a separate treatment. For example, in crevice corrosion, one has to consider the geometry of the crevice and the nature of the concentration process leading to the development of the differential local chemistry. The extreme and often unexpected local chemistry conditions inside the crevice need to be considered. Galvanic effects can play a role in crevice degradation. Mode of attack Depending on the environment developed in the crevice and the nature of the metal, crevice corrosion can take the form of: pitting (i.e., formation of pits), although pitting and crevice corrosion are not the same phenomenon; filiform corrosion (a type of crevice corrosion that may occur on a metallic surface underneath an organic coating); intergranular attack; or stress corrosion cracking. Stress corrosion cracking A common form of crevice failure occurs due to stress corrosion cracking, where a crack or cracks develop from the base of the crevice where the stress concentration is greatest. This was the root cause of the 1967 collapse of the Silver Bridge over the Ohio River in West Virginia, where a single critical crack only about 3 mm long suddenly grew and fractured a tie bar joint. The rest of the bridge fell in less than a minute. The disaster was caused by a single point of failure (SPOF). The eyebars in the Silver Bridge were not redundant: the links were composed of only two bars each, of high-strength steel (more than twice as strong as common mild steel), rather than a thick stack of thinner bars of modest material strength "combed" together, as is usual for redundancy. With only two bars, the failure of one could impose excessive loading on the second, causing total failure, which is unlikely when more bars are used. While a low-redundancy chain can be engineered to the design requirements, its safety is completely dependent upon correct, high-quality manufacturing and assembly. Significance The susceptibility to crevice corrosion varies widely from one material-environment system to another. In general, crevice corrosion is of greatest concern for materials which are normally passive metals, like stainless steel or aluminum. Crevice corrosion tends to be of greatest significance to components built of highly corrosion-resistant superalloys and operating with the purest-available water chemistry. For example, steam generators in nuclear power plants degrade largely by crevice corrosion. Crevice corrosion is extremely dangerous because it is localized and can lead to component failure while the overall material loss is minimal. The initiation and progress of crevice corrosion can be difficult to detect. See also Corrosion engineering References External links Crevice Corrosion of Stainless Steels Corrosion Fouling Engineering failures Materials degradation
Crevice corrosion
Chemistry,Materials_science,Technology,Engineering
1,366
20,221,174
https://en.wikipedia.org/wiki/All%20Nightmare%20Long
"All Nightmare Long" is a song by American heavy metal band Metallica, released as the third single from their album Death Magnetic. The single was released on December 15, 2008. The song is in drop D tuning. It was nominated for the Kerrang! Award for Best Single. Music video The music video, directed by Roboshobo (Robert Schober), debuted on December 7, 2008, on Metallica's official website and Yahoo! Video. The video, which does not feature the band, is an alternate history narrative done in grainy mockumentary style, depicting a sequence of fictional events following the historic 1908 Tunguska event, at which Soviet scientists discover spores of an extraterrestrial organism, a small harmless thing resembling an armored worm. However, it turns out the incredibly hardy spores are able to reanimate dead tissue, and subjects turn violent sometime after exposure to the spores; a cartoon then shows the USSR adapting them as a bioweapon and scatters them from balloons in a preemptive strike against the U.S., causing a localized zombie apocalypse before intervening militarily to distribute humanitarian aid. At the end of the cartoon, a hybrid U.S.–USSR flag is raised in the now-Soviet-ruled America, and in 1972, a headless corpse is shown breaching containment and escaping from a Soviet biowarfare lab. Video origin Initially, in a video on the website Metclub.com, Kirk Hammett explained the origins of the video. He claimed to have bought the film from a fan for $5 in Russia and soon forgot about it. After digging it up and watching the animated film, he said that he was fascinated by it, researched about its background, and asked a friend's Russian girlfriend to translate parts of it. Following this, Hammett had supposedly been trying to incorporate the film into one of the band's music videos. However, as it was later revealed, Hammett's story was a fake to produce hype about the video: the film was not made in Russia and Hammett did not actually buy it there. Rather, as the video's director Roboshobo stated in an interview, the live action segments (including the ending) were specially shot to look like excerpts of old Russian documentary footage. The video bears similarities to the underground documentary Experiments in the Revival of Organisms, where animal experimentation to produce life extension is depicted. The subtitles and everything else included in the video are part of its concept. The word "Тунгусский" ("Tunguska") appears several times with different typos ("тунгузский", "тунзский", "тчнгзский"). Lyrical meaning In an interview, James Hetfield commented on the song's lyrical meaning: Release versions The single is available in a three-disc collectors set. The first disc was released as a digipack to store the remaining two discs with the album version of "All Nightmare Long", along with the songs "Wherever I May Roam" and "Master of Puppets", recorded live in Berlin at the Death Magnetic release bash at the O2 Arena in September 2008. The second disc also has the studio version of "All Nightmare Long", along with the songs "Blackened" and "Seek & Destroy", also recorded at the Berlin O2 Arena. The third disc is a DVD, which, along with the album version of the song as audio, includes a ten-minute-long mini-documentary about the bands' day in Berlin, along with twenty minutes' worth of live tracks from that night's album release party, as well a fifteen-minute-long movie from the tuning room at the Rock im Park. 
In pop culture The song first appeared as one of the Death Magnetic songs made available as downloadable content for Guitar Hero III: Legends of Rock. In addition, "All Nightmare Long" can also be imported into several Guitar Hero titles, as well as the stand-alone game focused on the band itself, Guitar Hero: Metallica. "All Nightmare Long" appeared in the documentary McConkey. WWE used the song as the official theme for the 2008 pay-per-view event No Mercy, and in the video package to promote the Winner Takes All match for the WWE and Universal Championship at WrestleMania 38. American Dad! uses the song in the season 12 episode "The Life Aquatic with Steve Smith". Track listing Personnel Metallica James Hetfield – vocals, rhythm guitar Lars Ulrich – drums Kirk Hammett – lead guitar Robert Trujillo – bass Production Rick Rubin – producing Ted Jensen – mastering Greg Fidelman – mixing Charts References 2008 singles Metallica songs Song recordings produced by Rick Rubin Songs written by James Hetfield Songs written by Kirk Hammett Songs written by Lars Ulrich Songs written by Robert Trujillo Zombies in popular culture 2008 songs Warner Records singles Songs about nightmares Cthulhu Mythos music Tunguska event
All Nightmare Long
Physics
1,040
8,334,805
https://en.wikipedia.org/wiki/Steam%20jet%20cooling
Steam jet cooling uses a high-pressure jet of steam to cool water or other fluid media. Typical uses include industrial sites where a suitable steam supply already exists for other purposes and, historically, air conditioning on passenger trains that used steam for heating. Steam jet cooling experienced a wave of popularity during the early 1930s for air conditioning large buildings. Steam ejector refrigeration cycles were later supplanted by systems using mechanical compressors. Principle Steam is passed through a vacuum ejector of high efficiency to exhaust a separate, closed vessel which forms part of a cooling water circuit. The partial vacuum in the vessel causes some of the water to evaporate, thus giving up heat through evaporative cooling. The chilled water is pumped through the circuit to air coolers, while the evaporated water from the ejector is recovered in separate condensers and returned to the cooling circuit. Usage The AT&SF railroad (Santa Fe) used this method, which they called "Steam Ejector Air Conditioning", on both heavyweight and lightweight passenger cars, built until the mid-1950s. See also Steam generator (railroad) Steam jet ejector Injector or Ejector References Steam jet refrigeration plant United States patent 4204410 Steam Ejector System The Air Conditioning system preferred by the Santa Fe, By C. M. Drennan, The Brotherhood Railway Carmen of America Rail technologies Passenger rail rolling stock Heating, ventilation, and air conditioning
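To illustrate the evaporative-cooling principle described in the article above, here is a back-of-the-envelope energy balance; standard property values for water are assumed, and this is a sketch rather than a design calculation.

```python
cp = 4.18      # kJ/(kg*K), specific heat of liquid water
h_fg = 2450.0  # kJ/kg, approximate latent heat of vaporization near room temperature

def evaporated_fraction(delta_t_kelvin):
    """Fraction of circulating water that must flash off to chill the rest by delta_t.

    Energy balance: (mass evaporated) * h_fg = (mass remaining) * cp * delta_t,
    ignoring the small sensible-heat change of the vapour itself.
    """
    return cp * delta_t_kelvin / (h_fg + cp * delta_t_kelvin)

print(f"{evaporated_fraction(10.0):.1%}")  # roughly 1.7% of the water evaporates for a 10 K drop
```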
Steam jet cooling
Engineering
303
11,057,402
https://en.wikipedia.org/wiki/Sulfolene
Sulfolene, or butadiene sulfone, is a cyclic organic chemical with a sulfone functional group. It is a white, odorless, crystalline, indefinitely storable solid, which dissolves in water and many organic solvents. The compound is used as a source of butadiene. Production Sulfolene is formed by the cheletropic reaction between butadiene and sulfur dioxide. The reaction is typically conducted in an autoclave. Small amounts of hydroquinone or pyrogallol are added to inhibit polymerization of the diene. The reaction proceeds at room temperature over the course of days. At 130 °C, only 30 minutes are required. An analogous procedure gives the isoprene-derived sulfone. Reactions Acid-base reactivity The compound is unaffected by acids. It can even be recrystallized from concentrated HNO3. The protons in the 2- and 5-positions rapidly exchange with deuterium oxide under alkaline conditions. Sodium cyanide catalyzes this reaction. Isomerization to 2-sulfolene In the presence of base or cyanide, 3-sulfolene isomerizes to a mixture of 2-sulfolene and 3-sulfolene. At 50 °C, an equilibrium mixture is obtained containing 42% 3-sulfolene and 58% 2-sulfolene. The thermodynamically more stable 2-sulfolene can be isolated from the mixture of isomers as a pure substance in the form of white plates (m.p. 48–49 °C) by heating for several days at 100 °C, because the 3-sulfolene thermally decomposes at temperatures above 80 °C. Hydrogenation Catalytic hydrogenation yields sulfolane, a solvent used in the petrochemical industry for the extraction of aromatics from hydrocarbon streams. The hydrogenation of 3-sulfolene over Raney nickel at approx. 20 bar and 60 °C gives sulfolane in yields of only up to 65% because of the poisoning of the catalyst by sulfur compounds. Halogenation 3-Sulfolene reacts in aqueous solution with bromine to give 3,4-dibromotetrahydrothiophene-1,1-dioxide, which can be dehydrobrominated to thiophene-1,1-dioxide with silver carbonate. Thiophene-1,1-dioxide, a highly reactive species, is also accessible via the formation of 3,4-bis(dimethylamino)tetrahydrothiophene-1,1-dioxide and successive double quaternization with methyl iodide and Hofmann elimination with silver hydroxide. A less cumbersome two-step synthesis is the two-fold dehydrobromination of 3,4-dibromotetrahydrothiophene-1,1-dioxide with either powdered sodium hydroxide in tetrahydrofuran (THF) or with ultrasonically dispersed metallic potassium. Diels-Alder reactions 3-Sulfolene is mainly valued as a stand-in for butadiene. The in situ production and immediate consumption of 1,3-butadiene largely avoids contact with the diene, which is a gas at room temperature. One potential drawback, aside from expense, is that the evolved sulfur dioxide can cause side reactions with acid-sensitive substrates. The Diels-Alder reaction between 1,3-butadiene and dienophiles of low reactivity usually requires prolonged heating above 100 °C. Such procedures are rather dangerous. If neat butadiene is used, special equipment for work under elevated pressure is required. With sulfolene, no buildup of butadiene pressure is expected, as the liberated diene is consumed in the cycloaddition; the equilibrium of the reversible extrusion reaction therefore acts as an internal "safety valve". 3-Sulfolene reacts with maleic anhydride in boiling xylene to give cis-4-cyclohexene-1,2-dicarboxylic anhydride in yields of up to 90%.
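As a small worked example tied to the isomerization equilibrium quoted above (58% 2-sulfolene versus 42% 3-sulfolene at 50 °C), the free-energy difference between the isomers can be estimated from ΔG = −RT ln K. Treating the reported ratio as an ideal equilibrium constant is an assumption made here purely for illustration.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 323.15   # 50 degrees Celsius in kelvin

# Equilibrium mixture reported above: 58% 2-sulfolene, 42% 3-sulfolene
K = 0.58 / 0.42   # 3-sulfolene <-> 2-sulfolene, treated as an ideal equilibrium

delta_G = -R * T * math.log(K)   # standard free-energy difference, J/mol
print(f"K = {K:.2f}, delta_G = {delta_G / 1000:.2f} kJ/mol")
# about -0.9 kJ/mol: 2-sulfolene is only marginally more stable than 3-sulfolene
```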
3-Sulfolene also reacts with dienophiles in the trans configuration (such as diethyl fumarate) at 110 °C, with SO2 elimination, to give the trans-4-cyclohexene-1,2-dicarboxylic acid diethyl ester in 66–73% yield. 6,7-Dibromo-1,4-epoxy-1,4-dihydronaphthalene (6,7-dibromonaphthalene-1,4-endoxide, accessible in 70% yield from 1,2,4,5-tetrabromobenzene by debromination with one equivalent of n-butyllithium and Diels-Alder reaction with furan) reacts with 3-sulfolene in boiling xylene to give a tricyclic adduct. This precursor yields, after treatment with perchloric acid, a dibromo dihydroanthracene which is dehydrogenated in the last step with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) to 2,3-dibromoanthracene. 1,3-Butadiene (formed in the retro-cheletropic reaction of 3-sulfolene) reacts with dehydrobenzene (benzyne, obtained by thermal decomposition of benzenediazonium-2-carboxylate) in a Diels-Alder reaction in 9% yield to give 1,4-dihydronaphthalene. 2- and 3-Sulfolenes as a dienophile In the presence of very reactive dienes (for example, 1,3-diphenylisobenzofuran), butadiene sulfone behaves as a dienophile and forms the corresponding Diels-Alder adduct. As early as 1938, Kurt Alder and co-workers reported Diels-Alder adducts from the isomeric 2-sulfolene with 1,3-butadiene and 2-sulfolene with cyclopentadiene. Other cycloadditions The base-catalyzed reaction of 3-sulfolene with carbon dioxide at 3 bar pressure produces 3-sulfolene-3-carboxylic acid in 45% yield. With diazomethane, 3-sulfolene forms a 1,3-dipolar cycloadduct. Polymerization In 1935, H. Staudinger and co-workers found that the reaction of butadiene and SO2 at room temperature gives a second product in addition to 3-sulfolene. This second product is an amorphous solid polymer. By free-radical polymerization of 3-sulfolene in peroxide-containing diethyl ether, up to 50% insoluble high-molecular-weight poly-sulfolene was obtained. The polymer resists degradation by sulfuric and nitric acids. In subsequent investigations, polymerization of 3-sulfolene was initiated above 100 °C with the radical initiator azobis(isobutyronitrile) (AIBN). 3-Sulfolene does not copolymerize with vinyl compounds, however. On the other hand, 2-sulfolene does not homopolymerize, but forms copolymers with vinyl compounds, e.g., acrylonitrile and vinyl acetate. 3-Sulfolene as a recyclable solvent The reversibility of the interconversion of 3-sulfolene with buta-1,3-diene and sulfur dioxide suggests the use of sulfolene as a recyclable dipolar aprotic solvent, as a replacement for dimethyl sulfoxide (DMSO), which is often used but is difficult to separate and poorly reusable. As a model reaction, the reaction of benzyl azide with 4-toluenesulfonyl cyanide to form 1-benzyl-5-(4-toluenesulfonyl)tetrazole was investigated. The formation of the tetrazole can also be carried out as a one-pot reaction, without isolation of the benzyl azide, in 72% overall yield. After the reaction, the solvent 3-sulfolene is decomposed at 135 °C and the volatile butadiene (b.p. −4.4 °C) and sulfur dioxide (b.p. −10.1 °C) are collected in a cooling trap at −76 °C charged with excess sulfur dioxide. After the addition of hydroquinone as a polymerization inhibitor, 3-sulfolene is formed again quantitatively upon warming to room temperature. It appears questionable, though, whether 3-sulfolene, with a useful liquid-phase range of only 64 °C to a maximum of about 100 °C, can be used as a DMSO substitute (easy handling, low cost, environmental compatibility) in industrial practice.
Uses Aside from its synthetic versatility (see above), sulfolene is used as an additive in electrochemical fluorination. It can increase the yield of perfluorooctanesulfonyl fluoride by about 70%. It is "highly soluble in anhydrous HF and increases the conductivity of the electrolyte solution". In this application, it undergoes a ring opening and is fluorinated to form perfluorobutanesulfonyl fluoride. Further reading References Reagents for organic chemistry Sulfones
Sulfolene
Chemistry
2,031
3,188,361
https://en.wikipedia.org/wiki/Gobe%20Software
Gobe Software, Inc. was a software company founded in 1997 by members of the ClarisWorks development team; it developed and published an integrated desktop software suite for BeOS. In later years, it was the distributor of BeOS itself. History Gobe was founded in 1997 by members of the ClarisWorks development team and some of the authors of the original StyleWare application for the Apple II. After leaving StyleWare and creating the product later known as ClarisWorks and AppleWorks, Bob Hearn and Scott Holdaway joined Tom Hoke, Scott Lindsey, Bruce Q. Hammond, and Carl Grice, who had also worked at Apple Computer's Claris subsidiary, and formed Gobe Software, Inc. with the notion of creating a next-generation integrated office suite similar to ClarisWorks, but for the BeOS platform. It released Gobe Productive in 1998. When Be Inc. outsourced publication of BeOS in 2000, Gobe became the publisher of BeOS in North America, Australia, and sections of Asia. Only weeks after signing up other publishers around the globe, Be, Inc. halted development for the BeOS platform and publicly announced that all of its corporate focus would be on "Internet Appliances"; these announcements hampered the forward momentum of the BeOS platform. In addition, the publishers in general and Gobe in particular did not have source code access to the BeOS and were not able to continue its development or add drivers that the platform needed to be a viable alternative to Windows or Linux. Gobe also published Hicom Entertainment/Next Generation Entertainment's "Corum III" role-playing game for BeOS during this period. The failure of Be, Inc. and BeOS meant that ports had to be undertaken, and Windows and Linux variants were developed. Although the company shipped a Windows version of its software in December 2001, it was unable to obtain sufficient operating capital after the 2000 stock market crash and suspended operations in 2002. In 2008, Gobe management began to work with distribution and development teams in Greater Asia and had plans to ship a new version of the product for the Indian market in early 2010. Later, in August 2010, Gobe Productive's website was disabled and then sold to an Indian movie producer called ErosNow. Gobe Productive The main product, Gobe Productive, was by far the most polished of the word processor, spreadsheet, and vector graphics applications for BeOS, offered not as separate programs but as an integrated package à la ClarisWorks and Microsoft Works. Gobe Productive v1.0 for BeOS was released in August 1998, v2.0 in August 1999, and v2.0.1 on 29 February 2000. After the failure of Be, Inc., Windows and Linux variants were developed. The company shipped a Windows version of Gobe Productive 3 in December 2001. Other Gobe employees Dave Johnson Ben Chang Joël Spaltenstein Kurt von Finck Daniel Maia Alves Cheyenne Tuller Tomy Hudson See also Comparison of office suites Notes References Further reading External links Be, Inc. article (archived) BeOS Discontinued software Defunct software companies of the United States
Gobe Software
Technology
621
2,862,625
https://en.wikipedia.org/wiki/Fouling
Fouling is the accumulation of unwanted material on solid surfaces. The fouling materials can consist of either living organisms (biofouling, organic) or a non-living substance (inorganic). Fouling is usually distinguished from other surface-growth phenomena in that it occurs on a surface of a component, system, or plant performing a defined and useful function and that the fouling process impedes or interferes with this function. Other terms used in the literature to describe fouling include deposit formation, encrustation, crudding, deposition, scaling, scale formation, slagging, and sludge formation. The last six terms have a more narrow meaning than fouling within the scope of the fouling science and technology, and they also have meanings outside of this scope; therefore, they should be used with caution. Fouling phenomena are common and diverse, ranging from fouling of ship hulls, natural surfaces in the marine environment (marine fouling), fouling of heat-transfer components through ingredients contained in cooling water or gases, and even the development of plaque or calculus on teeth or deposits on solar panels on Mars, among other examples. This article is primarily devoted to the fouling of industrial heat exchangers, although the same theory is generally applicable to other varieties of fouling. In cooling technology and other technical fields, a distinction is made between macro fouling and micro fouling. Of the two, micro fouling is the one that is usually more difficult to prevent and therefore more important. Components subject to fouling Examples of components that may be subject to fouling and the corresponding effects of fouling: Heat exchanger surfaces – reduces thermal efficiency, decreases heat flux, increases temperature on the hot side, decreases temperature on the cold side, induces under-deposit corrosion, increases use of cooling water; Piping, flow channels – reduces flow, increases pressure drop, increases upstream pressure, increases energy expenditure, may cause flow oscillations, slugging in two-phase flow, cavitation; may increase flow velocity elsewhere, may induce vibrations, may cause flow blockage; Ship hulls – creates additional drag, increases fuel usage, reduces maximum speed; Turbines – reduces efficiency, increases probability of failure; Solar panels – decreases the electrical power generated; Reverse osmosis membranes – increases pressure drop, increases energy expenditure, reduces flux, membrane failure (in severe cases); Electrical heating elements – increases temperature of the element, increases corrosion, reduces lifespan; Firearm barrels - increases chamber pressure; hampers loading for muzzleloaders Nuclear fuel in pressurized water reactors – axial offset anomaly, may need to de-rate the power plant; Injection/spray nozzles (e.g., a nozzle spraying a fuel into a furnace) – incorrect amount injected, malformed jet, component inefficiency, component failure; Venturi tubes, orifice plates – inaccurate or incorrect measurement of flow rate; Pitot tubes in airplanes – inaccurate or incorrect indication of airplane speed; Spark plug electrodes in cars – engine misfiring; Production zone of petroleum reservoirs and oil wells – decreased petroleum production with time; plugging; in some cases complete stoppage of flow in a matter of days; Teeth – promotes tooth or gum disease, decreases aesthetics; Living organisms – deposition of excess minerals (e.g., calcium, iron, copper) in tissues is (sometimes controversially) linked to aging/senescence. 
Macro fouling Macro fouling is caused by coarse matter of either biological or inorganic origin, for example industrially produced refuse. Such matter enters the cooling water circuit through the cooling water pumps from sources like the open sea, rivers or lakes. In closed circuits, like cooling towers, the ingress of macro fouling into the cooling tower basin is possible through open canals or by the wind. Sometimes, parts of the cooling tower internals detach themselves and are carried into the cooling water circuit. Such substances can foul the surfaces of heat exchangers and may cause deterioration of the relevant heat transfer coefficient. They may also create flow blockages, redistribute the flow inside the components, or cause fretting damage. Examples Manmade refuse; Detached internal parts of components; Tools and other "foreign objects" accidentally left after maintenance; Algae; Mussels; Leaves, parts of plants up to entire trunks. Micro fouling As to micro fouling, distinctions are made between: Scaling or precipitation fouling, as crystallization of solid salts, oxides, and hydroxides from water solutions (e.g., calcium carbonate or calcium sulfate) Particulate fouling, i.e., accumulation of particles, typically colloidal particles, on a surface Corrosion fouling, i.e., in-situ growth of corrosion deposits, for example, magnetite on carbon steel surfaces Chemical reaction fouling, for example, decomposition or polymerization of organic matter on heating surfaces Solidification fouling - when components of the flowing fluid with a high melting point freeze onto a subcooled surface Biofouling, like settlements of bacteria and algae Composite fouling, whereby fouling involves more than one foulant or fouling mechanism Precipitation fouling Scaling or precipitation fouling involves crystallization of solid salts, oxides, and hydroxides from solutions. These are most often water solutions, but non-aqueous precipitation fouling is also known. Precipitation fouling is a very common problem in boilers and heat exchangers operating with hard water and often results in limescale. Through changes in temperature, or solvent evaporation or degasification, the concentration of salts may exceed saturation, leading to a precipitation of solids (usually crystals). As an example, for the equilibrium between the readily soluble calcium bicarbonate (always prevailing in natural water) and the poorly soluble calcium carbonate, the following chemical equation may be written:
Ca(HCO3)2 (aqueous) → CaCO3↓ + CO2↑ + H2O
The calcium carbonate that forms through this reaction precipitates. Due to the temperature dependence of the reaction, and the increasing volatility of CO2 with increasing temperature, the scaling is higher at the hotter outlet of the heat exchanger than at the cooler inlet. In general, the dependence of the salt solubility on temperature or the presence of evaporation will often be the driving force for precipitation fouling. The important distinction is between salts with "normal" or "retrograde" dependence of solubility on temperature. Salts with "normal" solubility increase their solubility with increasing temperature and thus will foul the cooling surfaces. Salts with "inverse" or "retrograde" solubility will foul the heating surfaces. An example of the temperature dependence of solubility is shown in the figure. Calcium sulfate is a common precipitation foulant of heating surfaces due to its retrograde solubility.
Precipitation fouling can also occur in the absence of heating or vaporization. For example, calcium sulfate decreases its solubility with decreasing pressure. This can lead to precipitation fouling of reservoirs and wells in oil fields, decreasing their productivity with time. Fouling of membranes in reverse osmosis systems can occur due to differential solubility of barium sulfate in solutions of different ionic strength. Similarly, precipitation fouling can occur because of solubility changes induced by other factors, e.g., liquid flashing, liquid degassing, redox potential changes, or mixing of incompatible fluid streams. The following lists some of the industrially common phases of precipitation fouling deposits observed in practice to form from aqueous solutions: Calcium carbonate (calcite, aragonite usually at t > ~50 °C, or rarely vaterite); Calcium sulfate (anhydrite, hemihydrate, gypsum); Calcium oxalate (e.g., beerstone); Barium sulfate (barite); Magnesium hydroxide (brucite); magnesium oxide (periclase); Silicates (serpentine, acmite, gyrolite, gehlenite, amorphous silica, quartz, cristobalite, pectolite, xonotlite); Aluminium oxide hydroxides (boehmite, gibbsite, diaspore, corundum); Aluminosilicates (analcite, cancrinite, noselite); Copper (metallic copper, cuprite, tenorite); Phosphates (hydroxyapatite); Magnetite or nickel ferrite (NiFe2O4) from extremely pure, low-iron water. The deposition rate by precipitation is often described by the following equations:
Transport: dm/dt = kt (cb − ci)
Surface crystallisation: dm/dt = kr (ci − ce)^n
Overall: dm/dt = kd (cb − ce)^n'
where: m - mass of the material (per unit surface area), kg/m2; t - time, s; cb - concentration of the substance in the bulk of the fluid, kg/m3; ci - concentration of the substance at the interface, kg/m3; ce - equilibrium concentration of the substance at the conditions of the interface, kg/m3; n, n' - order of reaction for the crystallization reaction and the overall deposition process, respectively, dimensionless; kt, kr, kd - kinetic rate constants for the transport, the surface reaction, and the overall deposition reaction, respectively; with the dimension of m/s (when n = n' = 1). Particulate fouling Fouling by particles suspended in water ("crud") or in gas progresses by a mechanism different from precipitation fouling. This process is usually most important for colloidal particles, i.e., particles smaller than about 1 μm in at least one dimension (but which are much larger than atomic dimensions). Particles are transported to the surface by a number of mechanisms and there they can attach themselves, e.g., by flocculation or coagulation. Note that the attachment of colloidal particles typically involves electrical forces and thus the particle behaviour defies the experience from the macroscopic world. The probability of attachment is sometimes referred to as the "sticking probability" P: P = kd / kt, where kd and kt are the kinetic rate constants for deposition and transport, respectively. The value of P for colloidal particles is a function of the surface chemistry, the geometry, and the local thermohydraulic conditions. An alternative to using the sticking probability is to use a kinetic attachment rate constant, assuming a first-order reaction: dm/dt = ka ci, and then the transport and attachment kinetic coefficients are combined as two processes occurring in series: dm/dt = kd cb, with 1/kd = 1/kt + 1/ka, where: dm/dt is the rate of the deposition by particles, kg m−2 s−1; ka, kt and kd are the kinetic rate constants for attachment, transport, and overall deposition, respectively, m/s; and ci and cb are the concentrations of the particle foulant at the interface and in the bulk fluid, respectively, kg m−3.
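A minimal numerical sketch of the series combination of transport and attachment described above follows; all rate constants and the bulk concentration are hypothetical values chosen only to illustrate how the formulas combine, not measured data.

```python
def overall_deposition_rate(k_t, k_a, c_bulk):
    """Transport and attachment in series: 1/k_d = 1/k_t + 1/k_a, rate = k_d * c_bulk.

    k_t, k_a : transport and attachment rate constants, m/s
    c_bulk   : foulant concentration in the bulk fluid, kg/m^3
    returns  : deposition rate, kg/(m^2*s)
    """
    k_d = 1.0 / (1.0 / k_t + 1.0 / k_a)
    return k_d * c_bulk

# Hypothetical values: attachment much faster than transport (transport-limited case)
print(overall_deposition_rate(k_t=1e-5, k_a=1e-3, c_bulk=0.05))
# Slow attachment (low sticking probability) lowers the overall deposition rate
print(overall_deposition_rate(k_t=1e-5, k_a=1e-6, c_bulk=0.05))
```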
Being essentially a surface chemistry phenomenon, this fouling mechanism can be very sensitive to factors that affect colloidal stability, e.g., zeta potential. A maximum fouling rate is usually observed when the fouling particles and the substrate exhibit opposite electrical charge, or near the point of zero charge of either of them. Particles larger than those of colloidal dimensions may also foul e.g., by sedimentation ("sedimentation fouling") or straining in small-size openings. With time, the resulting surface deposit may harden through processes collectively known as "deposit consolidation" or, colloquially, "aging". The common particulate fouling deposits formed from aqueous suspensions include: iron oxides and iron oxyhydroxides (magnetite, hematite, lepidocrocite, maghemite, goethite); Sedimentation fouling by silt and other relatively coarse suspended matter. Fouling by particles from gas aerosols is also of industrial significance. The particles can be either solid or liquid. The common examples can be fouling by flue gases, or fouling of air-cooled components by dust in air. The mechanisms are discussed in article on aerosol deposition. Corrosion fouling Corrosion deposits are created in-situ by the corrosion of the substrate. They are distinguished from fouling deposits, which form from material originating ex-situ. Corrosion deposits should not be confused with fouling deposits formed by ex-situ generated corrosion products. Corrosion deposits will normally have composition related to the composition of the substrate. Also, the geometry of the metal-oxide and oxide-fluid interfaces may allow practical distinction between the corrosion and fouling deposits. An example of corrosion fouling can be formation of an iron oxide or oxyhydroxide deposit from corrosion of the carbon steel underneath. Corrosion fouling should not be confused with fouling corrosion, i.e., any of the types of corrosion that may be induced by fouling. Chemical reaction fouling Chemical reactions may occur on contact of the chemical species in the process fluid with heat transfer surfaces. In such cases, the metallic surface sometimes acts as a catalyst. For example, corrosion and polymerization occurs in cooling water for the chemical industry which has a minor content of hydrocarbons. Systems in petroleum processing are prone to polymerization of olefins or deposition of heavy fractions (asphaltenes, waxes, etc.). High tube wall temperatures may lead to carbonizing of organic matter. The food industry, for example milk processing, also experiences fouling problems by chemical reactions. Fouling through an ionic reaction with an evolution of an inorganic solid is commonly classified as precipitation fouling (not chemical reaction fouling). Solidification fouling Solidification fouling occurs when a component of the flowing fluid "freezes" onto a surface forming a solid fouling deposit. Examples may include solidification of wax (with a high melting point) from a hydrocarbon solution, or of molten ash (carried in a furnace exhaust gas) onto a heat exchanger surface. The surface needs to have a temperature below a certain threshold; therefore, it is said to be subcooled in respect to the solidification point of the foulant. Biofouling Biofouling or biological fouling is the undesirable accumulation of micro-organisms, algae and diatoms, plants, and animals on surfaces, such as ships and submarine hulls, or piping and reservoirs with untreated water. 
This can be accompanied by microbiologically influenced corrosion (MIC). Bacteria can form biofilms or slimes. Thus the organisms can aggregate on surfaces using colloidal hydrogels of water and extracellular polymeric substances (EPS) (polysaccharides, lipids, nucleic acids, etc.). The biofilm structure is usually complex. Bacterial fouling can occur under either aerobic (with oxygen dissolved in water) or anaerobic (no oxygen) conditions. In practice, aerobic bacteria prefer open systems, when both oxygen and nutrients are constantly delivered, often in warm and sunlit environments. Anaerobic fouling more often occurs in closed systems when sufficient nutrients are present. Examples may include sulfate-reducing bacteria (or sulfur-reducing bacteria), which produce sulfide and often cause corrosion of ferrous metals (and other alloys). Sulfide-oxidizing bacteria (e.g., Acidithiobacillus), on the other hand, can produce sulfuric acid, and can be involved in corrosion of concrete. Zebra mussels serve as an example of larger animals that have caused widespread fouling in North America. Composite fouling Composite fouling is common. This type of fouling involves more than one foulant or more than one fouling mechanism working simultaneously. The multiple foulants or mechanisms may interact with each other resulting in a synergistic fouling which is not a simple arithmetic sum of the individual components. Fouling on Mars NASA Mars Exploration Rovers (Spirit and Opportunity) experienced (presumably) abiotic fouling of solar panels by dust particles from the Martian atmosphere. Some of the deposits subsequently spontaneously cleaned off. This illustrates the universal nature of the fouling phenomena. Quantification of fouling The most straightforward way to quantify fairly uniform fouling is by stating the average deposit surface loading, i.e., kg of deposit per m2 of surface area. The fouling rate will then be expressed in kg/m2s, and it is obtained by dividing the deposit surface loading by the effective operating time. The normalized fouling rate (also in kg/m2s) will additionally account for the concentration of the foulant in the process fluid (kg/kg) during preceding operations, and is useful for comparison of fouling rates between different systems. It is obtained by dividing the fouling rate by the foulant concentration. The fouling rate constant (m/s) can be obtained by dividing the normalized fouling rate by the mass density of the process fluid (kg/m3). Deposit thickness (μm) and porosity (%) are also often used for description of fouling amount. The relative reduction of diameter of piping or increase of the surface roughness can be of particular interest when the impact of fouling on pressure drop is of interest. In heat transfer equipment, where the primary concern is often the effect of fouling on heat transfer, fouling can be quantified by the increase of the resistance to the flow of heat (m2K/W) due to fouling (termed "fouling resistance"), or by development of heat transfer coefficient (W/m2K) with time. If under-deposit or crevice corrosion is of primary concern, it is important to note non-uniformity of deposit thickness (e.g., deposit waviness), localized fouling, packing of confined regions with deposits, creation of occlusions, "crevices", "deposit tubercles", or sludge piles. Such deposit structures can create environment for underdeposit corrosion of the substrate material, e.g., intergranular attack, pitting, stress corrosion cracking, or localized wastage. 
Porosity and permeability of the deposits will likely influence the probability of underdeposit corrosion. Deposit composition can also be important: even minor components of the deposits can sometimes cause severe corrosion of the underlying metal (e.g., vanadium in deposits of fired boilers causing hot corrosion). There is no general rule on how much deposit can be tolerated; it depends on the system. In many cases, even a deposit a few micrometers thick can be troublesome. A deposit of millimeter-range thickness will be of concern in almost any application. Progress of fouling with time A deposit on a surface does not always develop steadily with time. The following fouling scenarios can be distinguished, depending on the nature of the system and the local thermohydraulic conditions at the surface: Induction period - Sometimes, a near-nil fouling rate is observed when the surface is new or very clean. This is often observed in biofouling and precipitation fouling. After the "induction period", the fouling rate increases. "Negative" fouling - This can occur when the fouling rate is quantified by monitoring heat transfer. Relatively small amounts of deposit can improve heat transfer, relative to a clean surface, and give an appearance of a "negative" fouling rate and negative total fouling amount. Negative fouling is often observed under nucleate-boiling heat-transfer conditions (the deposit improves bubble nucleation) or forced convection (if the deposit increases the surface roughness and the surface is no longer "hydraulically smooth"). After the initial period of "surface roughness control", the fouling rate usually becomes strongly positive. Linear fouling - The fouling rate can be steady with time. This is a common case. Falling fouling - In this scenario, the fouling rate decreases with time, but never drops to zero. The deposit thickness does not achieve a constant value. The progress of fouling can often be described by two numbers: the initial fouling rate (a tangent to the fouling curve at zero deposit loading or zero time) and the fouling rate after a long period of time (an oblique asymptote to the fouling curve). Asymptotic fouling - Here, the fouling rate decreases with time, until it finally reaches zero. At this point, the deposit thickness remains constant with time (a horizontal asymptote). This is often the case for relatively soft or poorly adherent deposits in areas of fast flow. The asymptote is usually interpreted as the deposit loading at which the deposition rate equals the deposit removal rate. Accelerating fouling - In this scenario, the fouling rate increases with time; the rate of deposit buildup accelerates with time (perhaps until it becomes transport limited). Mechanistically, this scenario can develop when fouling increases the surface roughness, or when the deposit surface exhibits a higher chemical propensity to fouling than the pure underlying metal. Seesaw fouling - Here, the fouling loading generally increases with time (often assuming a generally linear or falling rate), but, when looked at in more detail, the fouling progress is periodically interrupted and takes the form of a sawtooth curve. The periodic sharp variations in the apparent fouling amount often correspond to the moments of system shutdowns, startups or other transients in operation. The periodic variations are often interpreted as periodic removal of some of the deposit (perhaps deposit re-suspension due to pressure pulses, spalling due to thermal stresses, or exfoliation due to redox transients).
Steam blanketing has been postulated to occur between the partially spalled deposits and the heat transfer surface. However, other reasons are possible, e.g., trapping of air inside the surface deposits during shutdowns, or inaccuracy of temperature measurements during transients ("temperature streaming"). Fouling modelling Fouling of a system can be modelled as consisting of several steps: Generation or ingress of the species that causes fouling ("foulant sourcing"); Foulant transport with the stream of the process fluid (most often by advection); Foulant transport from the bulk of the process fluid to the fouling surface. This transport is often by molecular or turbulent-eddy diffusion, but may also occur by inertial coasting/impaction, particle interception by the surface (for particles with finite sizes), electrophoresis, thermophoresis, diffusiophoresis, Stefan flow (in condensation and evaporation), sedimentation, Magnus force (acting on rotating particles), thermoelectric effect, and other mechanisms. Induction period, i.e., a near-nil fouling rate at the initial period of fouling (observed only for some fouling mechanisms); Foulant crystallisation on the surface (or attachment of the colloidal particle, or chemical reaction, or bacterial growth); Sometimes fouling autoretardation, i.e., reduction (or potentially enhancement) of crystallisation/attachment rate due to changes in the surface conditions caused by the fouling deposit; Deposit dissolution (or re-entrainment of loosely attached particles); Deposit consolidation on the surface (e.g., through Ostwald ripening or differential solubility in temperature gradient) or cementation, which account for deposit losing its porosity and becoming more tenacious with time; Deposit spalling, erosion wear, or exfoliation. Deposition consists of transport to the surface and subsequent attachment. Deposit removal is either through deposit dissolution, particle re-entrainment, or deposit spalling, erosive wear, or exfoliation. Fouling results from foulant generation, foulant deposition, deposit removal, and deposit consolidation. For the modern model of fouling involving deposition with simultaneous deposit re-entrainment and consolidation, the fouling process can be represented by the following scheme:
[ rate of deposit accumulation ] = [ rate of deposition ] - [ rate of re-entrainment of unconsolidated deposit ]
[ rate of accumulation of unconsolidated deposit ] = [ rate of deposition ] - [ rate of re-entrainment of unconsolidated deposit ] - [ rate of consolidation of unconsolidated deposit ]
Following the above scheme, the basic fouling equations can be written as follows (for steady-state conditions with flow, when concentration remains constant with time):
dm/dt = kd ρ Cm − λr mr
dmr/dt = kd ρ Cm − λr mr − λc mr
where: m is the mass loading of the deposit (consolidated and unconsolidated) on the surface (kg/m2); t is time (s); kd is the deposition rate constant (m/s); ρ is the fluid density (kg/m3); Cm is the mass fraction of foulant in the fluid (kg/kg); λr is the re-entrainment rate constant (1/s); mr is the mass loading of the removable (i.e., unconsolidated) fraction of the surface deposit (kg/m2); and λc is the consolidation rate constant (1/s). This system of equations can be integrated (taking that m = 0 and mr = 0 at t = 0) to the form:
mr = (kd ρ Cm / λ) [1 − exp(−λ t)]
m = kd ρ Cm { (λc / λ) t + (λr / λ^2) [1 − exp(−λ t)] }
where λ = λr + λc. This model reproduces either linear, falling, or asymptotic fouling, depending on the relative values of kd, λr, and λc.
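The integrated expression given above can be evaluated directly, which also makes the linear, falling, and asymptotic regimes easy to see. In the short sketch below, the parameter values are arbitrary illustrations, not measured data.

```python
import math

def deposit_loading(t, k_d, rho, C_m, lam_r, lam_c):
    """Closed-form mass loading m(t) for the deposition / re-entrainment / consolidation model.

    Obtained by integrating dm/dt = k_d*rho*C_m - lam_r*m_r together with
    dm_r/dt = k_d*rho*C_m - (lam_r + lam_c)*m_r, with m = m_r = 0 at t = 0.
    """
    lam = lam_r + lam_c
    phi = k_d * rho * C_m                      # deposition flux, kg/(m^2*s)
    return phi * ((lam_c / lam) * t + (lam_r / lam**2) * (1.0 - math.exp(-lam * t)))

# Arbitrary illustrative parameters
k_d, rho, C_m = 1e-6, 1000.0, 1e-5             # m/s, kg/m^3, kg/kg
for label, lam_r, lam_c in [("asymptotic", 1e-4, 1e-12),
                            ("falling", 5e-5, 5e-5),
                            ("near-linear", 1e-12, 1e-4)]:
    values = [f"{deposit_loading(t, k_d, rho, C_m, lam_r, lam_c):.2e}"
              for t in (1e4, 5e4, 1e5)]
    print(label, values)   # mass loading in kg/m^2 at three times
```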
The underlying physical picture for this model is that of a two-layer deposit consisting of a consolidated inner layer and a loose unconsolidated outer layer. Such a bi-layer deposit is often observed in practice. The above model simplifies readily to the older model of simultaneous deposition and re-entrainment (which neglects consolidation) when λc = 0. In the absence of consolidation, asymptotic fouling is always anticipated by this older model and the fouling progress can be described as:
m = m* [1 − exp(−λr t)]
where m* is the maximum (asymptotic) mass loading of the deposit on the surface (kg/m2). Economic and environmental importance of fouling Fouling is ubiquitous and generates tremendous operational losses, not unlike corrosion. For example, one estimate puts the losses due to fouling of heat exchangers in industrialized nations at about 0.25% of their GDP. Another analysis estimated (for 2006) the economic loss due to boiler and turbine fouling in Chinese utilities at 4.68 billion dollars, which is about 0.169% of the country's GDP. The losses initially result from impaired heat transfer, corrosion damage (in particular under-deposit and crevice corrosion), increased pressure drop, flow blockages, flow redistribution inside components, flow instabilities, induced vibrations (possibly leading to other problems, e.g., fatigue), fretting, premature failure of electrical heating elements, and a large number of other often unanticipated problems. In addition, the ecological costs should be (but typically are not) considered. The ecological costs arise from the use of biocides for the avoidance of biofouling, from the increased fuel input to compensate for the reduced output caused by fouling, and from an increased use of cooling water in once-through cooling systems. For example, "normal" fouling at a conventionally fired 500 MW (net electrical power) power station unit accounts for output losses of the steam turbine of 5 MW and more. In a 1,300 MW nuclear power station, typical losses could be 20 MW and up (up to 100% if the station shuts down due to fouling-induced component degradation). In seawater desalination plants, fouling may reduce the gained output ratio by double-digit percentages (the gained output ratio is an equivalent that puts the mass of generated distillate in relation to the steam used in the process). The extra electrical consumption in compressor-operated coolers is also easily in the double-digit percentage range. In addition to the operational costs, the capital cost also increases because heat exchangers have to be designed in larger sizes to compensate for the heat-transfer loss due to fouling. To the output losses listed above, one needs to add the cost of down-time required to inspect, clean, and repair the components (millions of dollars per day of shutdown in lost revenue in a typical power plant), and the cost of actually doing this maintenance. Finally, fouling is often a root cause of serious degradation problems that may limit the life of components or entire plants. Fouling control The most fundamental and usually preferred method of controlling fouling is to prevent the ingress of the fouling species into the cooling water circuit. In steam power stations and other major industrial installations of water technology, macro fouling is avoided by way of pre-filtration and cooling water debris filters. Some plants employ a foreign-object exclusion program (to eliminate the possibility of accidental introduction of unwanted materials, e.g., tools forgotten during maintenance).
Acoustic monitoring is sometimes employed to monitor for fretting by detached parts. In the case of micro fouling, water purification is achieved with extensive methods of water treatment, microfiltration, membrane technology (reverse osmosis, electrodeionization) or ion-exchange resins. The generation of corrosion products in water piping systems is often minimized by controlling the pH of the process fluid (typically alkalinization with ammonia, morpholine, ethanolamine or sodium phosphate), control of oxygen dissolved in water (for example, by addition of hydrazine), or addition of corrosion inhibitors. For water systems at relatively low temperatures, the applied biocides may be classified as follows: inorganic chlorine and bromide compounds, chlorine and bromide cleavers, ozone and oxygen cleavers, and non-oxidizing biocides. One of the most important non-oxidizing biocides is a mixture of chloromethyl-isothiazolinone and methyl-isothiazolinone. 2,2-Dibromo-3-nitrilopropionamide (DBNPA) and quaternary ammonium compounds are also applied. For underwater ship hulls, antifouling bottom paints are applied. Chemical fouling inhibitors can reduce fouling in many systems, mainly by interfering with the crystallization, attachment, or consolidation steps of the fouling process. Examples for water systems are: chelating agents (for example, EDTA), long-chain aliphatic amines or polyamines (for example, octadecylamine, helamin, and other "film-forming" amines), organic phosphonic acids (for example, etidronic acid), or polyelectrolytes (for example, polyacrylic acid, polymethacrylic acid, usually with a molecular weight lower than 10,000). For fired boilers, aluminum or magnesium additives can lower the melting point of ash and promote the creation of deposits which are easier to remove. See also process chemicals. Magnetic water treatment has been a subject of controversy as to its effectiveness for fouling control since the 1950s. The prevailing opinion is that it simply "does not work". Nevertheless, some studies suggest that it may be effective under some conditions in reducing the buildup of calcium carbonate deposits. On the component design level, fouling can often (but not always) be minimized by maintaining a relatively high (for example, 2 m/s) and uniform fluid velocity throughout the component. Stagnant regions need to be eliminated. Components are normally overdesigned to accommodate the fouling anticipated between cleanings. However, a significant overdesign can be a design error, because it may lead to increased fouling due to reduced velocities. Periodic on-line pressure pulses or backflow can be effective if the capability is carefully incorporated at design time. Blowdown capability is always incorporated into steam generators or evaporators to control the accumulation of non-volatile impurities that cause or aggravate fouling. Low-fouling surfaces (for example, very smooth, implanted with ions, or of low surface energy like Teflon) are an option for some applications. Modern components are typically required to be designed for ease of inspection of internals and periodic cleaning. On-line fouling monitoring systems are used in some applications so that blowing or cleaning can be applied before an unplanned shutdown becomes necessary or damage occurs. Chemical or mechanical cleaning processes for the removal of deposits and scales are recommended when fouling reaches the point of impairing system performance or when significant fouling-induced degradation (e.g., by corrosion) sets in. 
These processes comprise pickling with acids and complexing agents, cleaning with high-velocity water jets ("water lancing"), recirculating ("blasting") with metal, sponge, or other balls, or propelling offline mechanical "bullet-type" tube cleaners. Whereas chemical cleaning causes environmental problems through the handling, application, storage and disposal of chemicals, mechanical cleaning by means of circulating cleaning balls or offline "bullet-type" cleaning can be an environmentally friendlier alternative. In some heat-transfer applications, mechanical mitigation with dynamic scraped surface heat exchangers is an option. Ultrasonic or abrasive cleaning methods are also available for many specific applications. See also International Convention on the Control of Harmful Anti-fouling Systems on Ships Oilfield scale inhibition Particle deposition Steam generator (nuclear power) Tube cleaning References External links Crude Oil Fouling research Filters Hydraulic engineering Transport phenomena Water technology Water treatment
Fouling
Physics,Chemistry,Materials_science,Engineering,Environmental_science
6,905
39,130,672
https://en.wikipedia.org/wiki/Artificial%20precision
In numerical mathematics, artificial precision is a source of error that occurs when a numerical or semantic value is expressed with more precision than was initially provided by measurement or user input. For example, a person enters their birthday as the date 1984-01-01, but it is stored in a database as 1984-01-01T00:00:00Z, which introduces artificial precision for the hour, minute, and second they were born; because the stored timestamp is in UTC (the Z suffix), it may even affect the date itself, depending on the user's actual place of birth. This is also an example of false precision, which is artificial precision specifically of numerical quantities or measures. See also false precision accuracy and precision significant figures References Computational statistics Numerical analysis
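A minimal Python sketch of the birthday example above (the database field is simulated by a timezone-aware timestamp; the UTC-10 offset is just an illustrative place of birth): storing a date-only input in a timestamp column silently adds midnight UTC, and reading the value back in a local time zone can even shift the calendar date.

```python
from datetime import date, datetime, timezone, timedelta

entered = date(1984, 1, 1)                            # what the user actually provided
stored = datetime(1984, 1, 1, tzinfo=timezone.utc)    # what the database keeps: 1984-01-01T00:00:00Z

# Reading the timestamp back in a UTC-10 locale shifts the apparent birthday.
local = stored.astimezone(timezone(timedelta(hours=-10)))
print(entered)        # 1984-01-01
print(local.date())   # 1983-12-31  (the artificial precision has altered the date)
```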
Artificial precision
Mathematics
141
1,389,320
https://en.wikipedia.org/wiki/Gas%20in%20a%20harmonic%20trap
The results of the quantum harmonic oscillator can be used to look at the equilibrium situation for a quantum ideal gas in a harmonic trap, which is a harmonic potential containing a large number of particles that do not interact with each other except for instantaneous thermalizing collisions. This situation is of great practical importance since many experimental studies of Bose gases are conducted in such harmonic traps. Using the results from either Maxwell–Boltzmann statistics, Bose–Einstein statistics or Fermi–Dirac statistics we use the Thomas–Fermi approximation (gas in a box) and go to the limit of a very large trap, and express the degeneracy of the energy states () as a differential, and summations over states as integrals. We will then be in a position to calculate the thermodynamic properties of the gas using the partition function or the grand partition function. Only the case of massive particles will be considered, although the results can be extended to massless particles as well, much as was done in the case of the ideal gas in a box. More complete calculations will be left to separate articles, but some simple examples will be given in this article. Thomas–Fermi approximation for the degeneracy of states For massive particles in a harmonic well, the states of the particle are enumerated by a set of quantum numbers . The energy of a particular state is given by: Suppose each set of quantum numbers specify states where is the number of internal degrees of freedom of the particle that can be altered by collision. For example, a spin-1/2 particle would have , one for each spin state. We can think of each possible state of a particle as a point on a 3-dimensional grid of positive integers. The Thomas–Fermi approximation assumes that the quantum numbers are so large that they may be considered to be a continuum. For large values of , we can estimate the number of states with energy less than or equal to from the above equation as: which is just times the volume of the tetrahedron formed by the plane described by the energy equation and the bounding planes of the positive octant. The number of states with energy between and is therefore: Notice that in using this continuum approximation, we have lost the ability to characterize the low-energy states, including the ground state where . For most cases this will not be a problem, but when considering Bose–Einstein condensation, in which a large portion of the gas is in or near the ground state, we will need to recover the ability to deal with low energy states. Without using the continuum approximation, the number of particles with energy is given by: where {| |- | |   for particles obeying Maxwell–Boltzmann statistics |- | |   for particles obeying Bose–Einstein statistics |- | |   for particles obeying Fermi–Dirac statistics |} with , with being the Boltzmann constant, being temperature, and being the chemical potential. Using the continuum approximation, the number of particles with energy between and is now written: Energy distribution function We are now in a position to determine some distribution functions for the "gas in a harmonic trap." The distribution function for any variable is and is equal to the fraction of particles which have values for between and : It follows that: Using these relationships we obtain the energy distribution function: Specific examples The following sections give an example of results for some specific cases. 
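For reference, the relations this section relies on can be stated compactly. The LaTeX summary below is a reconstruction from the definitions given in the text (isotropic trap of angular frequency ω, internal degeneracy f, β = 1/kBT, chemical potential μ), not a quotation of the article's own formulas.

```latex
% Energy of the state (n_x, n_y, n_z) of the 3-D isotropic oscillator
E = \hbar\omega\left(n_x + n_y + n_z + \tfrac{3}{2}\right)

% Thomas-Fermi counting at large quantum numbers (zero-point term neglected):
% number of states below E and the corresponding density of states
N(E) \approx \frac{f}{6}\left(\frac{E}{\hbar\omega}\right)^{3},
\qquad
dg = \frac{f\,E^{2}}{2\,(\hbar\omega)^{3}}\,dE

% Mean occupation of a state of energy E
% (upper sign: Bose-Einstein, lower sign: Fermi-Dirac; omitting the -/+ 1 gives Maxwell-Boltzmann)
\langle n(E)\rangle = \frac{1}{e^{\beta(E-\mu)} \mp 1},
\qquad
dN = \frac{f\,E^{2}}{2\,(\hbar\omega)^{3}}\,
     \frac{dE}{e^{\beta(E-\mu)} \mp 1}
```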
Massive Maxwell–Boltzmann particles For this case: Integrating the energy distribution function and solving for gives: Substituting into the original energy distribution function gives: Massive Bose–Einstein particles For this case: where is defined as: Integrating the energy distribution function and solving for gives: where is the polylogarithm function. The polylogarithm term must always be positive and real, which means its value will go from 0 to as goes from 0 to 1. As the temperature goes to zero, will become larger and larger, until finally will reach a critical value , where and The temperature at which is the critical temperature at which a Bose–Einstein condensate begins to form. The problem is, as mentioned above, the ground state has been ignored in the continuum approximation. It turns out that the above expression expresses the number of bosons in excited states rather well, and so we may write: where the added term is the number of particles in the ground state. (The ground state energy has been ignored.) This equation will hold down to zero temperature. Further results can be found in the article on the ideal Bose gas. Massive Fermi–Dirac particles (e.g. electrons in a metal) For this case: Integrating the energy distribution function gives: where again, is the polylogarithm function. Further results can be found in the article on the ideal Fermi gas. References Huang, Kerson, "Statistical Mechanics", John Wiley and Sons, New York, 1967 A. Isihara, "Statistical Physics", Academic Press, New York, 1971 L. D. Landau and E. M. Lifshitz, "Statistical Physics, 3rd Edition Part 1", Butterworth-Heinemann, Oxford, 1996 C. J. Pethick and H. Smith, "Bose–Einstein Condensation in Dilute Gases", Cambridge University Press, Cambridge, 2004 Statistical mechanics
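The Bose–Einstein results above lend themselves to a quick numerical check. The sketch below assumes the standard ideal-gas, harmonic-trap expressions Tc = (ħω/kB)(N/ζ(3))^(1/3) and N0/N = 1 − (T/Tc)^3 (unit internal degeneracy); the particle number and trap frequency are illustrative values, not taken from the article.

```python
import numpy as np
from scipy.special import zeta

HBAR = 1.054571817e-34   # J s
K_B = 1.380649e-23       # J/K

def critical_temperature(n_atoms, omega):
    """Tc of an ideal Bose gas of n_atoms particles in an isotropic trap of angular frequency omega."""
    return HBAR * omega / K_B * (n_atoms / zeta(3)) ** (1.0 / 3.0)

def condensate_fraction(temperature, t_c):
    """Ground-state fraction N0/N below Tc (zero above Tc)."""
    return max(0.0, 1.0 - (temperature / t_c) ** 3)

# Illustrative numbers: one million atoms in a trap with omega = 2*pi*100 rad/s
t_c = critical_temperature(1.0e6, 2.0 * np.pi * 100.0)
print(f"Tc = {t_c * 1e9:.0f} nK")           # a few hundred nanokelvin
print(condensate_fraction(0.5 * t_c, t_c))  # 0.875
```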
Gas in a harmonic trap
Physics
1,090
31,608,684
https://en.wikipedia.org/wiki/Istanbul%20Canal
The Istanbul Canal ( ) is a project for an artificial sea-level waterway planned by Turkey in East Thrace, connecting the Black Sea to the Sea of Marmara, and thus to the Aegean and Mediterranean seas. The Istanbul Canal would bisect the current European side of Istanbul and thus form an island between Asia and Europe (the island would have a shoreline with the Black Sea, Sea of Marmara, the new canal and the Bosporus). The new waterway would bypass the current Bosporus. The canal aims to minimise shipping traffic in the Bosporus. It is projected to have a capacity of 160 vessel transits a day – similar to the current volume of traffic through the Bosporus, where traffic congestion leaves ships queuing for days to transit the strait. Some analysts have speculated that the main reason for construction of the canal is to bypass the Montreux Convention, which limits the number and tonnage of warships from non-Black Sea powers that could enter the sea via the Bosporus, as well as prohibiting tolls on traffic passing through it. Indeed, in January 2018, the Turkish Prime Minister Binali Yıldırım announced that the Istanbul Canal would not be subject to the Montreux Convention. The Istanbul Canal project also includes the construction of ports (a large container terminal in the Black Sea, close to the Istanbul Airport), logistic centres and artificial islands to be integrated with the canal, as well as new earthquake-resistant residential areas along the channel. The artificial islands are to be built using soil dug for the canal. Transport projects to be integrated with the canal project include the Halkali-Kapikule high-speed train, the Turkish State Railways project, the Yenikapi-Sefakoy-Beylikduzu and Mahmutbey-Esenyurt metro lines in Istanbul and the D-100 highway crossing, Tem highway and Sazlibosna highway. Financing the canal is expected to be via a build-operate-transfer model, but could also be funded through public-private partnerships. The government is expecting to generate in revenue per year from the Istanbul Canal, in part from a service fee for transits. Critics, such as Korkut Boratav, have questioned this number and said that the net revenues could be negative. Other criticisms include the need to direct resources for focusing on earthquake readiness and addressing economic issues, and potential negative environmental impacts. History A canal linking the Black Sea with the Sea of Marmara has been proposed at least seven times. Early proposals The first proposal was made by the Ottoman sultan Suleiman the Magnificent (reigned 1520–1566). His architect, Mimar Sinan, was said to have devised plans for the project. The project was abandoned for unknown reasons. On March 6, 1591, during the reign of Sultan Murad III, an imperial ferman (order) was issued and work on the project recommenced, but, again for unknown reasons, the project was stopped. In 1654, during the reign of Sultan Mehmed IV, pressure for the recommencement of the canal was applied, but to no avail. Sultan Mustafa III (reigned 1757–1774) attempted twice in 1760, but the project could not go ahead because of a lack of funds. During the reign of Sultan Mahmud II (reigned 1808–1839), an Imperial Ottoman Committee was established to examine the project once again. A report was prepared in 1813, but no concrete steps were taken. 
Modern proposals The Energy Ministry's Consultant Yüksel Önem suggested constructing an alternative waterway to the Bosporus in the 1985 magazine of the Turkish Standards Institution and in the Science and Technical Magazine of the Scientific and Technological Research Council of Turkey in 1990. In 1991, Nusret Avcı, head of the Istanbul Metropolitan Municipality Environment Commission, proposed that a canal long be constructed between Silivri and Karacaköy. He suggested that this channel would significantly reduce hazards of maritime traffic and pollution in the Bosporus. Finally, on January 17, 1994, shortly before the local elections, the leader of the Democratic Left Party (DSP) Bülent Ecevit proposed a canal connecting the Black Sea with the Sea of Marmara. Canal Istanbul was announced by President Recep Tayyip Erdoğan in June 2021. It would be large enough to accommodate VLCC class vessels. Project Purpose The stated purpose of the project is to reduce the large marine traffic through the Bosporus and minimise the risks and dangers associated particularly with tankers. About 41,000 vessels of all sizes pass yearly through the Istanbul Strait, among them 8,000 tankers carrying 145 million tons of crude oil. International pressure is growing to increase the marine traffic tonnage through the Turkish straits, which brings risks for the security of marine navigation during the passage. The Bosporus sees nearly three times the traffic of the Suez Canal. The canal will further help prevent the pollution caused by cargo vessels passing through or mooring in the Sea of Marmara before the southern entrance of the Bosporus. According to data of Ministry of Transport and Infrastructure, there was a decrease in total amount of vessels, but an increase in the amount of very large vessels and the total gross tonnage. The data are shown in the following table: Layout On January 15, 2018, the route of the project was declared. The final route for Istanbul Canal was selected after studies on five alternative routes. The Ministry of Transport announced that the project will pass through Lake Küçükçekmece near the Marmara Sea. It will pass through the districts of Avcılar and Başakşehir before reaching the Black Sea in the Arnavutköy district north of the city. Seven kilometers of the route passes through Küçükçekmece, 3.1 kilometers goes through Avcılar, 6.5 kilometers goes through Başakşehir, and the major 28.6-kilometer part of the route goes through Arnavutköy. The waterway will have a length of , with a depth of . Its width will be on the surface and wide at the bottom. The largest ship sizes that can pass through the canal were determined as 275–350 meters long, 49 meters wide, draft of 17 meters and an air draft of 58 meters. Project preparations On September 23, 2010, Hıncal Uluç, a columnist with the daily Sabah, wrote an article named "A Crazy Project from the Prime Minister" without mentioning the content of the project. In this article, Uluç wrote his reaction to his phone call with Prime Minister Erdogan, stating that, "I had the phone in my hand and froze. This is the most crazy project I've ever heard about Istanbul. If anyone would have asked me to come up with thousand projects, it still wouldn't have crossed my mind. It's that crazy." This article led to creating hype around the project, dubbing it the "Crazy Project" (). 
It appeared that the Justice and Development Party (AKP) government had started discreet studies on the project earlier and that concrete steps were taken for the revival of this project. The project was mentioned by Minister of Transport Binali Yıldırım in May 2009 at the parliament. On April 27, 2011, the then-prime minister Recep Tayyip Erdoğan officially announced the Kanal İstanbul project during a rally held in connection with the upcoming 2011 general elections Studies relating to the project were completed within two years. The canal was initially planned to be in service at the latest in 2023, the 100th anniversary of the foundation of the Republic. On 22 January 2013, the Turkish Government announced that research studies about the canal would commence in May 2013. In April 2013, the first stage of the Kanal İstanbul project, which includes the construction of various network bridges and highways, commenced. By December 2019, construction had not yet commenced. President Erdoğan indicated that a request for tender for the project would be published in early 2020. Meanwhile, Ekrem İmamoğlu, elected as the mayor of Istanbul in 2019 from the opposition party CHP, is opposed to the project. In January 2020, the Environment and Urbanization Ministry approved the final version of the Environmental Impact Assessment (EIA) report of the Istanbul Canal project. Construction work is scheduled to begin in mid-2021. The project is expected to take seven years to complete. Construction On June 26, 2021, construction started on Sazlıdere Bridge, which Erdogan also stated was the start of the canal construction. Cost Turkish President Recep Tayyip Erdogan and Istanbul Metropolitan Municipality officials have stated that Istanbul Canal will cost an estimated ₺75 billion (US$10 billion) to build. The central government has put forward a build-operate-transfer model as its main preference, but will use funds from the national budget if needed. Approximately 8,000–10,000 people are expected to be employed during the construction phase of the project, while 500-800 are to be employed during the operational phase. Environmental impact The Black Sea is 50 cm higher than the Marmara and less salty. Simulations predict that, unlike the Bosphorus, which flows both ways, water would rarely flow north through the canal but almost always south, which would make the top 25m of the Marmara less salty. However the ecosystems of both seas could be affected. The project has been criticized for destroying agricultural and forest land and a walking trail, and potentially contaminating groundwater with salt and increasing flooding. Other environmental criticism includes potential changes to the salinity of Marmara Sea, leading to Istanbul smelling of hydrogen sulfide. Criticism Some critics have stated that Turkey aims to bypass the Montreux Convention Regarding the Regime of the Straits, in order to attain greater autonomy with respect to the passage of military ships (which are limited in number, tonnage, and weaponry) from the Black Sea to the Sea of Marmara. In 2013, Stratfor characterized the announced US$12 billion construction budget and initial operating date of 2023 as being "not realistic for a project of this magnitude." The city government of Istanbul and local groups are opposed to the project because it would eliminate Lake Durusu, which is used for a fifth of the city's drinking water, and because they expect it will cause overcrowding as the local population increases. 
Observers said the plan to charge transit fees to oil and gas tankers is unrealistic as long as free passage is guaranteed through the Bosporus. However, Article 3 of the Montreux Convention does allow sanitary inspections before transiting the Bosphorus, leading to speculation that Turkey might apply lengthy sanitary inspections, making the canal a faster alternative. Along with members of the royal family of Qatar, Berat Albayrak, the former Turkish Minister of Finance and son-in-law of President Erdoğan, purchased property along the route, meaning he would personally benefit financially from the resulting real-estate development. Ekrem Imamoglu, Istanbul's mayor, said that limited financial resources should be used for getting Istanbul ready for an earthquake and solving economic problems, and that all buildings in Istanbul that carry an earthquake risk could be rebuilt with Istanbul Canal's budget. According to a survey in Istanbul by MAK, 80.4% of the respondents were against the Istanbul Canal project, while only 7.9% supported it. In April 2021, ten retired Turkish navy admirals were arrested over public criticism of the Istanbul Canal project. The arrests came a day after a group of 104 senior former navy officials signed an open letter warning that the proposed canal could, by invalidating the Montreux Convention, harm Turkish security. Implications for anticipatory policy-making The project has been described as a 'testing ground for anticipatory policy-making'. Should the world move decisively away from fossil fuels in the coming decades, the problem of traffic congestion in the Bosporus Strait would dissipate, removing one of the justifications for the canal. See also Black Sea trade and economy Yavuz Sultan Selim Bridge Istanbul Airport References External links Official website by Turkey's Directorate of Communications Canals in Turkey Proposed canals Macro-engineering Cuts (earthmoving) Transport in Istanbul Province Recep Tayyip Erdoğan controversies Water transport in Turkey
Istanbul Canal
Engineering
2,531
7,948,184
https://en.wikipedia.org/wiki/Computational%20engineering
Computational Engineering is an emerging discipline that deals with the development and application of computational models for engineering, known as Computational Engineering Models or CEM. Computational engineering uses computers to solve engineering design problems important to a variety of industries. At this time, various approaches are summarized under the term Computational Engineering, including the use of computational geometry and virtual design for engineering tasks, often coupled with a simulation-driven approach. In Computational Engineering, algorithms solve mathematical and logical models that describe engineering challenges, sometimes coupled with aspects of AI, specifically reinforcement learning. In Computational Engineering, the engineer encodes their knowledge using logical structuring. The result is an algorithm, the Computational Engineering Model, that can produce many different variants of engineering designs based on varied input requirements. The results can then be analyzed through additional mathematical models to create algorithmic feedback loops; a small sketch of such a model appears below. Simulations of physical behaviors relevant to the field, often coupled with high-performance computing, are used to solve complex physical problems arising in engineering analysis and design, as well as natural phenomena (computational science). It is therefore related to Computational Science and Engineering, which has been described as the "third mode of discovery" (next to theory and experimentation). In Computational Engineering, computer simulation provides the capability to create feedback that would be inaccessible to traditional experimentation, or where carrying out traditional empirical inquiries is prohibitively expensive. Computational Engineering should neither be confused with pure computer science, nor with computer engineering, although a wide domain in the former is used in Computational Engineering (e.g., certain algorithms, data structures, parallel programming, high performance computing) and some problems in the latter can be modeled and solved with Computational Engineering methods (as an application area). It is typically offered as a master's or doctoral program. Methods Computational Engineering methods and frameworks include: High performance computing and techniques to gain efficiency (through changes in computer architecture, parallel algorithms etc.) Modeling and simulation Algorithms for solving discrete and continuous problems Analysis and visualization of data Mathematical foundations: Numerical and applied linear algebra, initial & boundary value problems, Fourier analysis, optimization Data Science for developing methods and algorithms to handle and extract knowledge from large scientific data sets With regard to computing, computer programming, algorithms, and parallel computing play a major role in Computational Engineering. The most widely used programming language in the scientific community is FORTRAN. Recently, C++ and C have increased in popularity over FORTRAN. Due to the wealth of legacy code in FORTRAN and its simpler syntax, the scientific computing community has been slow in completely adopting C++ as the lingua franca. Because of its very natural way of expressing mathematical computations, and its built-in visualization capacities, the proprietary language/environment MATLAB is also widely used, especially for rapid application development and model verification. 
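The sketch referred to above is deliberately simple Python; every name, dimension, and the acceptance criterion is hypothetical. A parametric step generates design variants of a cantilever beam, an analysis step evaluates each one with the standard tip-deflection formula delta = P*L^3/(3*E*I) with I = b*h^3/12, and the feedback loop keeps only the variants that satisfy the stated requirement, which is the essential shape of a Computational Engineering Model as described above.

```python
from dataclasses import dataclass

@dataclass
class BeamDesign:
    width_m: float
    height_m: float

def generate_variants(widths, heights):
    """Parametric generation step: enumerate candidate designs."""
    return [BeamDesign(w, h) for w in widths for h in heights]

def tip_deflection(design, load_n, length_m, youngs_pa):
    """Analysis step: cantilever tip deflection, delta = P L^3 / (3 E I), I = b h^3 / 12."""
    inertia = design.width_m * design.height_m ** 3 / 12.0
    return load_n * length_m ** 3 / (3.0 * youngs_pa * inertia)

# Feedback loop: keep only variants whose deflection stays below an illustrative limit of 5 mm.
variants = generate_variants(widths=[0.02, 0.04], heights=[0.05, 0.10])
accepted = [d for d in variants
            if tip_deflection(d, load_n=1000.0, length_m=1.0, youngs_pa=210e9) < 0.005]
print(accepted)
```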
Python along with external libraries (such as NumPy, SciPy, Matplotlib) has gained some popularity as a free and copycenter alternative to MATLAB. Open Source Movement There are a number of Free and Open-Source Software (FOSS) tools that support Computational Engineering. OpenSCAD was released in 2010 and allows the scripted generation of CAD models, which can form the basis for Computational Engineering Models. CadQuery uses Python to generate CAD models and is based on the OpenCascade framework. It is released under the Apache 2.0 Open-Source License. PicoGK is an open-source framework for Computational Engineering which was released under the Apache 2.0 Open-Source License in 2023 by LEAP 71, a Dubai-based company. Applications Computational Engineering finds diverse applications, including in: Aerospace Engineering and Mechanical Engineering: combustion simulations, structural dynamics, computational fluid dynamics, computational thermodynamics, computational solid mechanics, vehicle crash simulation, biomechanics, trajectory calculation of satellites Astrophysical systems Battlefield simulations and military gaming, homeland security, emergency response Biology and Medicine: protein folding simulations (and other macromolecules), bioinformatics, genomics, computational neurological modeling, modeling of biological systems (e.g., ecological systems), 3D CT ultrasound, MRI imaging, molecular bionetworks, cancer and seizure control Chemistry: calculating the structures and properties of chemical compounds/molecules and solids, computational chemistry/cheminformatics, molecular mechanics simulations, computational chemical methods in solid state physics, chemical pollution transport Civil Engineering: finite element analysis, structures with random loads, construction engineering, water supply systems, transportation/vehicle modeling Computer Engineering, Electrical Engineering, and Telecommunications: VLSI, computational electromagnetics, semiconductor modeling, simulation of microelectronics, energy infrastructure, RF simulation, networks Epidemiology: influenza spread Environmental Engineering and Numerical weather prediction: climate research, Computational geophysics (seismic processing), modeling of natural disasters Finance: derivative pricing, risk management Industrial Engineering: discrete event and Monte-Carlo simulations (for logistics and manufacturing systems for example), queueing networks, mathematical optimization Material Science: glass manufacturing, polymers, and crystals Nuclear Engineering: nuclear reactor modeling, radiation shielding simulations, fusion simulations Petroleum engineering: petroleum reservoir modeling, oil and gas exploration Physics: Computational particle physics, automatic calculation of particle interaction or decay, plasma modeling, cosmological simulations Transportation See also Modeling and simulation Applied mathematics Computational science Computational mathematics Computational fluid dynamics Computational electromagnetics High-performance computing Engineering mathematics Grand Challenges Numerical analysis Multiphysics References External links Oden Institute for Computational Engineering and Sciences Scope of Computational engineering Society for Industrial and Applied Mathematics International Centre for Computational Engineering (IC2E) Georgia Institute of Technology, USA, MS/PhD Programme Computational Science & Engineering The graduate program for the University of Tennessee at Chattanooga Master and PhD Program in Computational 
Modeling at Rio de Janeiro State University Computational Science and Engineering with Scilab Internacional Center for Numerical Methods in Engineering (CIMNE) Computational science Computational fields of study
Computational engineering
Mathematics,Technology
1,169
43,724,378
https://en.wikipedia.org/wiki/Collective%20effects%20%28accelerator%20physics%29
Charged particle beams in a particle accelerator or a storage ring undergo a variety of processes. Typically, the beam dynamics is broken down into single-particle dynamics and collective effects. Sources of collective effects include single or multiple inter-particle scattering and interaction with the vacuum chamber and other surroundings, formalized in terms of impedance. The collective effects of charged particle beams in particle accelerators share some similarity with the dynamics of plasmas. In particular, a charged particle beam may be considered as a non-neutral plasma, and one may find mathematical methods in common with the study of stability or instabilities. One may also find commonality with the field of fluid mechanics, since the density of charged particles is often sufficient for the beam to be considered a flowing continuum. Another important topic is the attempt to mitigate collective effects by use of single-bunch or multi-bunch feedback systems. Types of collective effects Collective effects can include emittance growth, bunch length or energy spread growth, instabilities, or particle losses. There are also multi-bunch effects. Formalisms for treating collective effects The collective beam motion may be modeled in a variety of ways. One may use macroparticle models, or else a continuum model. The evolution equation in the latter case is typically called the Vlasov equation, and requires one to write down the Hamiltonian function including the external magnetic fields and the self-interaction. Stochastic effects may be added by generalizing to the Fokker–Planck equation. Software for computation of collective effects Depending on the effects considered and the modeling formalism used, different software is available for simulation. The collective effects must typically be added in addition to the single particle dynamics, which may be modeled using a tracking code. See article on Accelerator physics codes. References Accelerator physics
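For orientation, the continuum description mentioned above has the following generic form; this is a standard textbook statement written here for convenience (one degree of freedom, phase-space density f(q, p, t), Hamiltonian H including external fields and the beam's self-fields, drift and diffusion coefficients A and D), not a formula quoted from the article.

```latex
% Vlasov equation: the phase-space density is conserved along the Hamiltonian flow
\frac{\partial f}{\partial t}
  + \frac{\partial H}{\partial p}\,\frac{\partial f}{\partial q}
  - \frac{\partial H}{\partial q}\,\frac{\partial f}{\partial p} = 0

% Adding stochastic effects (e.g. damping and diffusion) gives a Fokker-Planck form
\frac{\partial f}{\partial t} + \{f, H\}
  = \frac{\partial}{\partial p}\bigl(A\,f\bigr)
  + \frac{1}{2}\,\frac{\partial^{2}}{\partial p^{2}}\bigl(D\,f\bigr)
```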
Collective effects (accelerator physics)
Physics
357
36,323,341
https://en.wikipedia.org/wiki/Y%20Y%20Y
In molecular biology, the YYY domain is a protein domain that is considered to be important in bacterial signal transduction; however, the exact function of this protein domain is not known. It is named after the three conserved tyrosines found in the alignment. The domain forms part of the periplasmic sensor domain, which binds to unsaturated disaccharides. Structure This region is mostly found C-terminal to the beta propellers (INTERPRO) in a family of two-component regulators. However, it is also found tandemly repeated in SWISSPROT entries without other signal transduction domains being present. The structure of this domain contains eight beta strands organised into a beta-sandwich, immunoglobulin-like fold. References Protein domains
Y Y Y
Biology
154
69,005,591
https://en.wikipedia.org/wiki/TOI-4138%20b
TOI-4138 b is a transiting exoplanet orbiting the G-type subgiant TOI-4138, located 1,674 light years away in the northern circumpolar constellation Ursa Minor. Discovery The planet was discovered by TESS using the transit method, which involves measuring the periodic dips in a star's light curve as a planet passes in front of it. The discovery paper states that the planet is inflated due to heating from its host star, which has a high luminosity. Its discovery was announced in October 2021. Properties Orbit and mass TOI-4138 b has an orbital period of 3.6 days, typical for a hot Jupiter. This corresponds to a separation from its host close to one eighth of the distance of Mercury from the Sun. Since the inclination is known, Doppler spectroscopy measurements give the planet a mass only 67% that of Jupiter. Its separation is comparable with that of HD 209458 b, but the heating it receives is much larger because of the evolved state of the host star. Radius and density TOI-4138 b's transit gives it a radius 1.49 times that of Jupiter; this, combined with its low mass of , gives it a density only 25% that of water. Host star TOI-4138 b orbits TOI-4138, a subgiant star located in the constellation Ursa Minor. The star has an enlarged radius of , a luminosity of and an effective temperature of . It has 1.32 times the Sun's mass, and it has an intermediate age of around 3.5 billion years. The apparent magnitude of the star is 11.8, meaning it is not visible to the naked eye. References Exoplanets discovered in 2021 Hot Jupiters Ursa Minor Transiting exoplanets Exoplanets discovered by TESS
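The quoted density follows directly from the mass and radius figures given above. A quick sanity check in Python (Jupiter's mass and radius are standard reference values; the result is rounded):

```python
import math

M_JUPITER = 1.898e27   # kg
R_JUPITER = 6.9911e7   # m

mass = 0.67 * M_JUPITER        # 67% of Jupiter's mass, as quoted above
radius = 1.49 * R_JUPITER      # 1.49 Jupiter radii, as quoted above

density = mass / (4.0 / 3.0 * math.pi * radius ** 3)
print(f"{density:.0f} kg/m^3")              # roughly 270 kg/m^3
print(f"{density / 1000.0:.0%} of water")   # about a quarter of water's density
```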
TOI-4138 b
Astronomy
361