Dataset columns: id (int64, 580 to 79M), url (string, 31–175 chars), text (string, 9–245k chars), source (string, 1–109 chars), categories (string, 160 classes), token_count (int64, 3 to 51.8k).
76,325,708
https://en.wikipedia.org/wiki/Symmetries%20of%20Culture%3A%20Theory%20and%20Practice%20of%20Plane%20Pattern%20Analysis
Symmetries of Culture: Theory and Practice of Plane Pattern Analysis is a book by anthropologist Dorothy K. Washburn and mathematician Donald W. Crowe published in 1988 by the University of Washington Press. The book is about the identification of patterns on cultural objects. Structure and topics The book is divided into seven chapters. Chapter 1 reviews the historical application of symmetry analysis to the discovery and enumeration of patterns in the plane, otherwise known as tessellations or tilings, and the application of geometry to design and the decorative arts. Chapters 2 to 6 describe how to identify and classify patterns on cultural objects such as ceramics, textiles and surface designs. Chapter 2 establishes the mathematical tools required to perform the symmetry analysis of patterns. Chapter 3 introduces the concept of color symmetry, for two-colored and multicolored patterns. Chapters 4 and 5 describe the one-dimensional (frieze) designs and the two-dimensional (plane) designs respectively; flow charts are used to help the reader identify patterns. Chapter 6 describes finite designs, for example circular designs, which are those without translations or glide reflections. Chapter 7 discusses problems that may arise in symmetry classification, for example pattern irregularities. The benefit of the flow charts is that they allow the reader to analyse the design of any cultural object in order to assign it to a specific pattern (a sketch of such a decision procedure appears at the end of this entry). The number of distinct patterns in one or two dimensions, with one or two colors, is shown in a table: there are 7 one-color and 17 two-color one-dimensional patterns, and 17 one-color and 46 two-color two-dimensional patterns. The book, which was 10 years in development, has over 500 illustrations, and includes a mathematical appendix, a 270-entry bibliography, and an index. Audience The authors describe their book as a "handbook for the non-mathematician" of the theory and practice of plane pattern analysis. Reviewers of the book identified the audience for the book in various ways. Roger Neich writing in Man said "[The authors'] aim is to make symmetry analysis accessible to all researchers, regardless of any mathematical training, and in this aim they succeed admirably, provided the reader is prepared to invest some considerable effort." Doris Schattschneider writing in The American Mathematical Monthly commented: "[The book] was written for archaeologists, anthropologists, and art historians, but the authors have taken care in their presentation of the geometry of symmetry and color symmetry analysis." H.C. Williams reviewing the book for The Mathematical Gazette said: "This interesting book is written by a mathematician and an anthropologist and is aimed primarily at the non-mathematician. That said, it is well worth the attention of mathematicians, particularly teachers, who have an interest in pattern." Reception Contemporary reviews of the book were mostly positive. The book was reviewed by journals in the fields of anthropology, archaeology, the arts, and mathematics. Mary Frame in African Arts said: "a solid and attractive book that takes the reader in logical stages toward an understanding of the symmetrical basis of pattern repeats." [...] "I believe that Symmetries of Culture is a landmark work that will furnish the impetus and method for many studies in this fertile area." Owen Lindauer in American Anthropologist commented: "Question-answer flowcharts enable the reader to correctly classify designs using a standard notation. 
The book is extensively illustrated with carvings, textiles, basketry, tiles, and pottery, which are used as examples of various symmetry patterns." Dwight W. Read in Antiquity: "Symmetries of Culture is an impressive book - both in terms of its physical appearance and its content. [...] will undoubtedly become the major reference on the analysis of patterns in terms of symmetry properties." Jon Muller writing in American Antiquity: " ... a fine book that achieves its goals in a straight-forward and clear fashion. It presents a set of methods that can be applied consistently and usefully in looking at symmetrical plane designs." and Roger Neich in Man: "... wide use of this book will certainly contribute to a great improvement in the systematic study of material culture." Criticism The reviewer in African Arts pointed out the existence of cultural patterns, such as in ancient Peruvian art, that are not included in the crystallographic symmetry approach to patterns used in the book. This criticism was echoed by the reviewer in American Antiquity who had some reservations about the potential dangers of limiting design analysis to certain convenient classes of design. George Kubler, an art historian writing in Winterthur Portfolio criticised the book: "The authors' present method is non-historical. The objects illustrated are mostly undatable, and nowhere is concern shown for their seriation or place in time." Kubler criticises the authors' entire approach as being non-historical, because it analyses each object individually rather than considering them in chronological order. Influence In 2021 the book was praised by Palaguta and Starkova in Terra Artis. Art and Design. In their review, they stated that the problem of creating a basis for systematizing patterns on the principles of symmetry was solved in Symmetries of Culture. They give three reasons for continuing to value the book: firstly, despite the passage of time, the book is still valid and useful; secondly, since the release of the book, the authors have done a great deal to attract new workers into the field; and thirdly, in recent years, interdisciplinary research on symmetry and ornamentation has increased, and the interest in this topic has grown among both anthropologists and art historians, which greatly broadens the readership of the book. Editions The hardback original Symmetries of Culture: Theory and Practice of Plane Pattern Analysis was published in 1988. A paperback reprint was published in 1992. In 2020 a paperback reprint of the full text was published by Dover. References External links at the Internet Archive Patterns Symmetry Mathematics and art 1988 non-fiction books
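A sketch of the flow-chart idea described above: the following minimal Python decision procedure (an editorial illustration, not the book's own charts; the predicate names and question order are assumptions) assigns a one-color one-dimensional design to one of the seven frieze patterns from yes/no answers about its symmetries, using the standard crystallographic symbols.

def classify_frieze(vertical: bool, horizontal: bool, rotation_180: bool, glide: bool) -> str:
    # Answers describe the design's symmetries besides translation: vertical
    # mirror lines, a horizontal mirror line, half-turn rotations, and
    # glide reflection.
    if vertical:
        if rotation_180:
            return "pmm2" if horizontal else "pma2"
        return "pm11"
    if rotation_180:
        return "p112"
    if horizontal:
        return "p1m1"
    if glide:
        return "p1a1"
    return "p111"

print(classify_frieze(vertical=False, horizontal=False, rotation_180=False, glide=True))  # p1a1

The book's actual flow charts pose similar questions in sequence, covering one- and two-color patterns in both one and two dimensions.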
Symmetries of Culture: Theory and Practice of Plane Pattern Analysis
Physics,Mathematics
1,178
264,361
https://en.wikipedia.org/wiki/Dating%20the%20Bible
The oldest surviving Hebrew Bible manuscripts, the Dead Sea Scrolls, date to roughly the 3rd century BCE to the 1st century CE. Some of these scrolls are presently stored at the Shrine of the Book in Jerusalem. The oldest text of the entire Christian Bible, including the New Testament, is the Codex Sinaiticus dating from the 4th century CE, with its Old Testament a copy of a Greek translation known as the Septuagint. The oldest extant manuscripts of the vocalized Masoretic Text date to the 9th century CE. With the exception of a few biblical sections in the Nevi'im, virtually no biblical text is contemporaneous with the events it describes. Internal evidence within the texts of the 27-book New Testament canon suggests that most of these books were written in the 1st century CE. The first book written is thought to be either the Epistle to the Galatians (written around 48 CE) or 1 Thessalonians, written around 50 CE. The final book in the ordering of the canon, the Book of Revelation, is generally accepted by traditional scholarship to have been written during the reign of Domitian (81–96) before the writing of 1 and 2 Timothy, Titus and the Epistles of John. Dating the composition of the texts relies primarily on internal evidence, including direct references to historical events; textual criticism of philological and linguistic evidence provides more subjective indications. Table I: Chronological overview This table summarises the chronology of the main tables and serves as a guide to the historical periods mentioned. Much of the Hebrew Bible/Old Testament may have been assembled in the 5th century BCE. The New Testament books were composed largely in the second half of the 1st century CE. The deuterocanonical books fall largely in between. Table II: Hebrew Bible/Christian Old Testament Table III: Deuterocanonical Old Testament Table IV: New Testament See also Apocalyptic literature Authorship of the Bible Authorship of the Johannine works Authorship of the Pauline epistles Authorship of the Petrine epistles Biblical apocrypha Biblical canon Categories of New Testament manuscripts Deuterocanonical books Development of the Hebrew Bible canon Development of the New Testament canon Development of the Old Testament canon Historical criticism Historicity of the Bible Jewish apocrypha List of Old Testament pseudepigrapha Mosaic authorship New Testament apocrypha Protocanonical books Pseudepigrapha References Citations Bibliography Further reading Biblical criticism Biblical studies Chronology
Dating the Bible
Physics
485
4,885,509
https://en.wikipedia.org/wiki/Substrate%20%28aquatic%20environment%29
Substrate is the earthy material that forms or collects at the bottom of an aquatic habitat. It is made of sediments that may consist of: Silt – A loose, granular material with mineral particles 0.06 mm or less in diameter. Clay – A smooth, fine-grained material made of fine particles of hydrous aluminium phyllosilicate minerals (such as kaolinite). Mud – A mixture of water with silt, clay, or loam. Sand – Mineral particles between 0.06 and 2 mm in diameter. Granule – Between 2 and 4 mm in diameter. Pebble – Between 4 and 64 mm in diameter. Cobble – Between 6.4 and 25.6 cm in diameter. Boulder – More than 25.6 cm in diameter. Other assorted organic matter and detritus. (These grain-size classes are encoded in a sketch at the end of this entry.) Stream substrate can affect the life found within the stream habitat. Muddy streams generally have more sediment in the water, reducing clarity; clarity is one guide to stream health. Marine substrate can also be classified geologically; see Green et al. (1999) for a reference. Mollusks such as clams that live on the substrate, and need it to survive, use their silky byssal threads to cling to it; see Ctenoides ales for an example. See also Grain size Substrate (biology) References Bibliography Gordon, McMahon, Finlayson, Gippel and Nathan. "Substrate". Stream Hydrology: An Introduction for Ecologists. 2nd ed. John Wiley and Sons. 2004. pp. 13–14. Baker, Ffolliott, DeBano and Neary (eds). "Stream Substrate". Riparian Areas of the Southwestern United States: Hydrology, Ecology, and Management. Lewis Publishers. 2004. Taylor and Francis e-Library. 2005. pp. 285–286. "Stream Substrate Particle Size". Eldorado and Tahoe National Forests (N.F.), Range Standards & Guidelines to Amend the Land & Resource Management Plans of the Eldorado and Tahoe National Forests. Draft Environmental Impact Statement. July 1999. p. A-6. Aquatic ecology Marine biology
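As a worked illustration of the grain-size classes listed above, here is a minimal Python sketch (an editorial addition, not part of the article) that maps a particle diameter to its class. All boundaries are in millimetres; cobble spans 64–256 mm, i.e. 6.4–25.6 cm.

SEDIMENT_CLASSES = [
    (0.06, "silt/clay"),  # finer than sand
    (2.0, "sand"),
    (4.0, "granule"),
    (64.0, "pebble"),
    (256.0, "cobble"),
]

def classify_particle(diameter_mm: float) -> str:
    # Return the sediment class for a particle of the given diameter (mm).
    for upper_bound, name in SEDIMENT_CLASSES:
        if diameter_mm <= upper_bound:
            return name
    return "boulder"  # more than 25.6 cm (256 mm) in diameter

print(classify_particle(10.0))  # pebble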
Substrate (aquatic environment)
Biology
436
32,475,185
https://en.wikipedia.org/wiki/SIGNAL%20%28programming%20language%29
SIGNAL is a programming language based on synchronized dataflow (flows + synchronization): a process is a set of equations on elementary flows describing both data and control (a toy illustration of flow equations appears at the end of this section). The SIGNAL formal model provides the capability to describe systems with several clocks (polychronous systems) as relational specifications. Relations are useful as partial specifications and as specifications of non-deterministic devices (for instance a non-deterministic bus) or external processes (for instance an unsafe car driver). Using SIGNAL, one can specify an application, design an architecture, and refine detailed components down to the RTOS or hardware level. The SIGNAL model supports a design methodology which goes from specification to implementation, from abstraction to concretization, and from synchrony to asynchrony. SIGNAL has been developed mainly by the INRIA Espresso team since the 1980s, at the same time as the similar programming languages Esterel and Lustre. A brief history The SIGNAL language was first designed for signal processing applications in the early 1980s. It was proposed to meet the demand for a domain-specific language for the design of signal processing applications, adopting a dataflow and block-diagram style with array and sliding-window operators. P. Le Guernic, A. Benveniste, and T. Gautier were in charge of the language definition. The first paper on SIGNAL was published in 1982, while the first complete description of SIGNAL appeared in the PhD thesis of T. Gautier. The symbolic representation of SIGNAL via Z/3Z (over {−1, 0, 1}) was introduced in 1986. A full compiler of SIGNAL, based on a clock calculus over a hierarchy of Boolean clocks, was described by L. Besnard in his PhD thesis in 1992. The clock calculus was later improved by T. Amagbegnon, who proposed arborescent canonical forms. During the 1990s, the application domain of the SIGNAL language was extended to general embedded and real-time systems. The relation-oriented specification style enabled the incremental construction of systems, and also led to designs that consider multi-clocked systems, in contrast to the original single-clock-based implementations of Esterel and Lustre. Moreover, the design and implementation of distributed embedded systems were also taken into account in SIGNAL. The corresponding research includes the optimization methods proposed by B. Chéron, the clustering models defined by B. Le Goff, the abstraction and separate compilation formalized by O. Maffeïs, and the implementation of distributed programs developed by P. Aubry. The Polychrony toolset The Polychrony toolset is an open-source development environment for critical/embedded systems based on SIGNAL, a real-time polychronous dataflow language. It provides a unified model-driven environment to perform design exploration using top-down and bottom-up design methodologies, formally supported by design model transformations from specification to implementation and from synchrony to asynchrony. It can be included in heterogeneous design systems with various input formalisms and output languages. Polychrony is a set of tools composed of: A SIGNAL batch compiler A graphical user interface (editor + interactive access to compiling functionalities) The Sigali tool, an associated formal system for formal verification and controller synthesis. Sigali is developed together with the INRIA Vertecs project. 
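To make the "equations on flows" idea concrete, here is a toy Python sketch of two core operators in the spirit of SIGNAL's when and default. This is a simplified editorial illustration, not SIGNAL syntax or semantics: real SIGNAL programs are relational, and flows need not share a common index set. ABSENT marks instants at which a flow carries no value, which is how signals with different clocks (polychrony) coexist.

ABSENT = None

def when(xs, bs):
    # x when b: x's value at instants where b is present and true, else absent.
    return [x if (x is not ABSENT and b is True) else ABSENT for x, b in zip(xs, bs)]

def default(xs, ys):
    # x default y: x's value where x is present, otherwise y's.
    return [x if x is not ABSENT else y for x, y in zip(xs, ys)]

x = [1, 2, ABSENT, 4, 5]
b = [True, False, True, ABSENT, True]
filtered = when(x, b)          # [1, ABSENT, ABSENT, ABSENT, 5]
merged = default(filtered, x)  # [1, 2, ABSENT, 4, 5]
print(filtered, merged)

In SIGNAL itself, such equations are written relationally, and the compiler's clock calculus checks that the clocks of the operands are mutually consistent.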
The SME environment The SME (SIGNAL Meta under Eclipse) environment is a front-end of Polychrony in the Eclipse environment based on Model-Driven Engineering (MDE) technologies. It consists of a set of Eclipse plug-ins which rely on the Eclipse Modeling Framework (EMF). The environment is built around SME, a metamodel of the SIGNAL language extended with mode automata concepts. The SME environment is composed of several plug-ins which correspond to: A reflexive editor: a tree view allowing the user to manipulate models that conform to the SME metamodel. A graphical modeler based on the TopCased modeling facilities. A reflexive editor and an Eclipse view to create compilation scenarios. A direct connection to the Polychrony services (compilation, formal verification, etc.). Documentation and model examples. See also Synchronous programming language Dataflow programming Globally asynchronous locally synchronous Formal verification Model checking Formal semantics of programming languages AADL Simulink Avionics System design Asynchrony (computer programming) Notes and references External links The INRIA/IRISA Espresso team The Polychrony toolset dedicated to SIGNAL (official website of Polychrony) backup link Synchrone Lab (the synchronous language Lustre) Esterel (the synchronous language Esterel) Declarative programming languages Synchronous programming languages Hardware description languages Formal methods Software modeling language
SIGNAL (programming language)
Technology,Engineering
1,006
17,185,722
https://en.wikipedia.org/wiki/Marden%27s%20theorem
[Figure: A triangle and its Steiner inellipse. The zeroes of p(z) are the black dots, the zeroes of p'(z) are the red dots, and the green dot at the centre is the zero of p''(z). Marden's theorem states that the red dots are the foci of the ellipse.] In mathematics, Marden's theorem, named after Morris Marden but proved about 100 years earlier by Jörg Siebeck, gives a geometric relationship between the zeroes of a third-degree polynomial with complex coefficients and the zeroes of its derivative. See also geometrical properties of polynomial roots. Statement A cubic polynomial has three zeroes in the complex number plane, which in general form a triangle, and the Gauss–Lucas theorem states that the roots of its derivative lie within this triangle. Marden's theorem states their location within this triangle more precisely: Suppose the zeroes z1, z2, and z3 of a third-degree polynomial p(z) are non-collinear. There is a unique ellipse inscribed in the triangle with vertices z1, z2, and z3 and tangent to the sides at their midpoints: the Steiner inellipse. The foci of that ellipse are the zeroes of the derivative p'(z). Proof This proof comes from an exercise in Fritz Carlson's book "Geometri" (in Swedish, 1943). Additional relations between root locations and the Steiner inellipse By the Gauss–Lucas theorem, the root of the second derivative p''(z) must be the average of the two foci, which is the center point of the ellipse and the centroid of the triangle (a numerical check of this relation appears at the end of this entry). In the special case that the triangle is equilateral (as happens, for instance, for the polynomial p(z) = z^3 - 1) the inscribed ellipse becomes a circle, and the derivative of p(z) has a double root at the center of the circle. Conversely, if the derivative has a double root, then the triangle must be equilateral. Generalizations A more general version of the theorem applies to polynomials whose degree may be higher than three, but that have only three distinct roots z1, z2, and z3. For such polynomials, the roots of the derivative may be found at the multiple roots of the given polynomial (the roots whose exponent is greater than one) and at the foci of an ellipse whose points of tangency to the triangle divide its sides in ratios determined by the multiplicities of the roots. Another generalization is to n-gons: some n-gons have an interior ellipse that is tangent to each side at the side's midpoint. Marden's theorem still applies: the foci of this midpoint-tangent inellipse are zeroes of the derivative of the polynomial whose zeroes are the vertices of the n-gon. History Jörg Siebeck discovered this theorem 81 years before Marden wrote about it. However, Dan Kalman titled his American Mathematical Monthly paper "Marden's theorem" because, as he writes, "I call this Marden's Theorem because I first read it in M. Marden's wonderful book". Marden himself attributes what is now known as Marden's theorem to Siebeck and cites nine papers that included a version of the theorem. Dan Kalman won the 2009 Lester R. Ford Award of the Mathematical Association of America for his 2008 paper in the American Mathematical Monthly describing the theorem. See also Bôcher's theorem for rational functions References Theorems about triangles Theorems about polynomials Conic sections Theorems in complex geometry
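Because the theorem is easy to test numerically, here is a short Python check (an editorial illustration with arbitrarily chosen example roots, not from the article): it computes the foci as the roots of p'(z) and verifies the centroid relation noted above.

import numpy as np

# Three non-collinear zeroes of a cubic p(z); arbitrary example values.
z1, z2, z3 = 0 + 0j, 4 + 0j, 1 + 3j

p = np.poly([z1, z2, z3])   # coefficients of p(z) = (z - z1)(z - z2)(z - z3)
dp = np.polyder(p)          # coefficients of p'(z)

foci = np.roots(dp)         # by Marden's theorem: foci of the Steiner inellipse
centroid = (z1 + z2 + z3) / 3

print(foci)
# The midpoint of the two foci is the zero of p''(z), i.e. the centroid:
print(np.isclose(foci.sum() / 2, centroid))  # True

Verifying that the two printed points really are the foci of the midpoint-tangent inellipse takes more geometry; the centroid identity is the quick consistency check.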
Marden's theorem
Mathematics
766
4,157,430
https://en.wikipedia.org/wiki/Truncated%20great%20dodecahedron
In geometry, the truncated great dodecahedron is a nonconvex uniform polyhedron, indexed as U37. It has 24 faces (12 pentagrams and 12 decagons), 90 edges, and 60 vertices. It is given the Schläfli symbol t{5,5/2}. Related polyhedra It shares its vertex arrangement with three other uniform polyhedra: the nonconvex great rhombicosidodecahedron, the great dodecicosidodecahedron, and the great rhombidodecahedron; and with the uniform compounds of 6 or 12 pentagonal prisms. This polyhedron is the truncation of the great dodecahedron. The truncated small stellated dodecahedron looks like a dodecahedron on the surface, but it has 24 faces: 12 pentagons from the truncated vertices and 12 overlapping faces that are truncated pentagrams. Small stellapentakis dodecahedron The small stellapentakis dodecahedron (or small astropentakis dodecahedron) is a nonconvex isohedral polyhedron. It is the dual of the truncated great dodecahedron. It has 60 intersecting triangular faces. See also List of uniform polyhedra References External links Uniform polyhedra and duals Nonconvex polyhedra Uniform polyhedra
Truncated great dodecahedron
Physics
278
478,706
https://en.wikipedia.org/wiki/Threatened%20species
A threatened species is any species (including animals, plants and fungi) which is vulnerable to extinction in the near future. Species that are threatened are sometimes characterised by the population dynamics measure of critical depensation, a mathematical measure of biomass related to population growth rate (a model sketch of critical depensation appears at the end of this entry). This quantitative metric is one method of evaluating the degree of endangerment without direct reference to human activity. IUCN definition The International Union for Conservation of Nature (IUCN) is the foremost authority on threatened species, and treats threatened species not as a single category, but as a group of three categories, depending on the degree to which they are threatened: Vulnerable species Endangered species Critically endangered species Less-than-threatened categories are near threatened, least concern, and the no longer assigned category of conservation dependent. Species that have not been evaluated (NE), or do not have sufficient data (data deficient) also are not considered "threatened" by the IUCN. Although threatened and vulnerable may be used interchangeably when discussing IUCN categories, the term threatened is generally used to refer to the three categories (critically endangered, endangered, and vulnerable), while vulnerable is used to refer to the least at risk of those three categories. They may be used interchangeably in most contexts however, as all vulnerable species are threatened species (vulnerable is a category of threatened species); and, as the more at-risk categories of threatened species (namely endangered and critically endangered) must, by definition, also qualify as vulnerable species, all threatened species may also be considered vulnerable. Threatened species are also referred to as red-listed species, as they are listed in the IUCN Red List of Threatened Species. Subspecies, populations and stocks may also be classified as threatened. By country Australia Federal The Commonwealth of Australia (federal government) has legislation for categorising and protecting endangered species, namely the Environment Protection and Biodiversity Conservation Act 1999, which is known in short as the EPBC Act. This Act has six categories: extinct, extinct in the wild, critically endangered, endangered, vulnerable, and conservation dependent, as defined in Section 179 of the Act. These could be summarised as: "Extinct" – "no reasonable doubt that the last member of the species has died"; "Extinct in the wild" – "known only to survive in cultivation" and "despite exhaustive surveys" has not been seen in the wild; "Critically endangered" – "extremely high risk of extinction in the wild in the immediate future"; "Endangered" – "very high risk of extinction in the wild in the near future"; "Vulnerable" – "high risk of extinction in the wild in medium-term future"; and "Conservation dependent" – "focus of a specific conservation program" without which the species would enter one of the above categories. The EPBC Act also recognises and protects threatened ecosystems such as plant communities, and Ramsar Convention wetlands used by migratory birds. Lists of threatened species are drawn up under the Act and these lists are the primary reference to threatened species in Australia. The Species Profile and Threats Database (SPRAT) is a searchable online database about species and ecological communities listed under the EPBC Act. 
It provides information on what the species looks like, its population and distribution, habitat, movements, feeding, reproduction and taxonomic comments. A Threatened Mammal Index, publicly launched on 22 April 2020 and combined with the Threatened Bird Index (created 2018) as the Threatened Species Index, is a research collaboration of the National Environmental Science Program's Threatened Species Recovery Hub, the University of Queensland and BirdLife Australia. It does not show detailed data of individual species, but shows overall trends, and the data can be downloaded via a web-app "to allow trends for different taxonomic groups or regions to be explored and compared". The Index uses data visualisation tools to show data clearly in graphic form, including a graph from 1985 to present of the main index, geographical representation, monitoring consistency and time series and species accumulation. In April 2020 the Mammal Index reported that there had been a decline of more than a third in threatened mammal numbers between 1995 and 2016, but the data also show that targeted conservation efforts are working. The Threatened Mammal Index "is compiled from more than 400,000 individual surveys, and contains population trends for 57 of Australia's threatened or near-threatened terrestrial and marine mammal species". States and territories Individual states and territories of Australia are bound under the EPBC Act, but may also have legislation which gives further protection to certain species, for example Western Australia's Wildlife Conservation Act 1950. Some species, such as Lewin's rail (Lewinia pectoralis), are not listed as threatened species under the EPBC Act, but they may be recognised as threatened by individual states or territories. Pests and weeds, climate change and habitat loss are some of the key threatening processes faced by native plants and animals listed by the Department of Planning, Industry and Environment of New South Wales. Germany The German Federal Agency for Nature Conservation (, BfN) publishes a regional Red List for Germany of at least 48,000 animals and 24,000 plants and fungi. The scheme for categorization is similar to that of the IUCN, but it adds a "warning list", includes species endangered to an unknown extent, and also includes rare species that are not endangered but are highly at risk of extinction due to their small populations. Philippines United States Federal Under the Endangered Species Act in the United States, "threatened" is defined as "any species which is likely to become an endangered species within the foreseeable future throughout all or a significant portion of its range". It is the less protected of the two protected categories. The Bay checkerspot butterfly (Euphydryas editha bayensis) is an example of a threatened subspecies protected under the Endangered Species Act. States Within the U.S., state wildlife agencies have the authority under the ESA to manage species which are considered endangered or threatened within their state but not within all states, and which therefore are not included on the national list of endangered and threatened species. For example, the trumpeter swan (Cygnus buccinator) is threatened in the state of Minnesota, while large populations still remain in Canada and Alaska. 
See also Biodiversity Action Plan IUCN Red List Illegal logging Rare species Red and blue-listed Slash-and-burn Threatened fauna of Australia Notes and references Further reading Biota by conservation status International Union for Conservation of Nature Environmental conservation Ecological restoration Population dynamics
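On the critical-depensation measure mentioned in the lead: the following minimal Python sketch assumes the standard strong-Allee-effect growth model, dN/dt = rN(N/A - 1)(1 - N/K), which is one common formalization and not specific to any source cited here. Below the critical biomass A the growth rate is negative, so the population declines toward extinction even without further human pressure.

def depensation_growth(n: float, r: float = 1.0, a: float = 20.0, k: float = 100.0) -> float:
    # dN/dt = r * N * (N/a - 1) * (1 - N/k); negative for 0 < N < a.
    return r * n * (n / a - 1.0) * (1.0 - n / k)

for n in (10, 20, 60, 100):
    print(n, round(depensation_growth(n), 2))
# 10 -> -4.5 (below threshold a: decline), 20 -> 0.0, 60 -> 48.0 (growth), 100 -> 0.0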
Threatened species
Chemistry,Engineering,Biology
1,308
1,091,669
https://en.wikipedia.org/wiki/Forschungszentrum%20J%C3%BClich
Forschungszentrum Jülich (FZJ; "Jülich Research Centre") is a German national research institution that pursues interdisciplinary research in the fields of energy, information, and bioeconomy. It operates a broad range of research infrastructures such as supercomputers, an atmospheric simulation chamber, electron microscopes, a particle accelerator, and cleanrooms for nanotechnology. Current research priorities include the structural change in the Rhineland lignite-mining region, hydrogen, and quantum technologies. As a member of the Helmholtz Association with roughly 6,800 employees in ten institutes and 80 subinstitutes, Jülich is one of the largest research institutions in Europe. Forschungszentrum Jülich's headquarters are located between the cities of Aachen, Cologne, and Düsseldorf on the outskirts of the North Rhine-Westphalian town of Jülich. FZJ has 15 branch offices in Germany and abroad, including eight sites at European and international neutron and synchrotron radiation sources, two joint institutes (one with the University of Münster, the other with Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Helmholtz-Zentrum Berlin (HZB)), and three offices of Project Management Jülich (PtJ) in the cities of Bonn, Rostock, and Berlin. Jülich cooperates closely with RWTH Aachen University within the Jülich Aachen Research Alliance (JARA). The institution was established on 11 December 1956 by the state of North Rhine-Westphalia as a registered association before it was renamed Nuclear Research Centre Jülich in 1967. In 1990, its name was changed to "Forschungszentrum Jülich GmbH". History On 11 December 1956, the State Parliament of North Rhine-Westphalia decided to establish an "atomic research centre". The Society for the Promotion of Nuclear Physics Research (GFKF) was thus established as a registered association (e. V.). Its founder is considered to be State Secretary Leo Brandt (Ministry of Economic Affairs and Transport of the Federal State of North Rhine-Westphalia). Several locations were considered but the decision was made in favour of the Stetternich forest in what was then the district of Jülich. The Society for the Promotion of Nuclear Physics Research (GFKF) was renamed Nuclear Research Centre Jülich (KFA for short, from the German Kernforschungsanlage Jülich). Seven years later, it was converted into a limited liability company (GmbH), and in 1990, it was named Forschungszentrum Jülich GmbH. The partners of Forschungszentrum Jülich are the Federal Republic of Germany (90%) and the federal state of North Rhine-Westphalia (10%). MERLIN and DIDO In 1958, the foundation stone was laid for the research reactors MERLIN (FRJ-1) and DIDO (FRJ-2), and they went into operation in 1962. The FRJ-1 research reactor was decommissioned in 1985 and completely dismantled between 2000 and 2008. The FRJ-2 research reactor was a DIDO-class reactor used for neutron scattering experiments. It was operated by the Central Research Reactors Division (ZFR). FRJ-2 was the strongest neutron source in Germany until the research neutron source Heinz Maier-Leibnitz (FRM II) in Garching was put into operation. FRJ-2 was primarily used to conduct scattering and spectroscopic experiments on condensed matter. It was in operation from 14 November 1962 until 2 May 2006. In 2006, the Jülich Centre for Neutron Science (JCNS) was founded, reflecting Forschungszentrum Jülich's role as a national competence centre for neutron scattering. 
Six of the most important instruments were moved from FRJ-2 to FRM II; new instruments were also assembled there. AVR In 1956, an interest group was formed to prepare the construction of the AVR reactor. In 1959, it became the "Arbeitsgemeinschaft Versuchsreaktor GmbH" (AVR GmbH) – a consortium of 15 local electricity suppliers headed by the Düsseldorf municipal utilities (Stadtwerke Düsseldorf) as the owner and operator (other partners included the municipal utilities in Aachen, Bonn, Bremen, Hagen, Hanover, Munich, and Wuppertal). The aim was to demonstrate the feasibility and operability of a graphite-moderated, gas-cooled high-temperature reactor for producing electricity. BBC and Krupp were responsible for the construction of the AVR reactor, which began in August 1961 and was completed in 1966; the consortium had received the design contract in April 1957 and the construction contract in February 1959. The cost of construction was in the region of DM 100 million. In 1967, the AVR reactor was put into operation and began feeding electricity into the national power grid. On 31 December 1988, the AVR reactor was shut down; during its operation, it had proven the feasibility of the pebble bed reactor. Karl Strauss said in 2016 that "the facility had generally been operated without any problems". The mean availability was 60.4%. AVR received scientific support and operating subsidies from the Nuclear Research Centre Jülich (KFA) but was formally independent. From the mid-1980s, the then KFA reduced its commitment to the further development of the gas-cooled high-temperature reactor. The AVR pebble bed reactor is still being dismantled today. The severe contamination of the reactor core with radioactive graphite dust particles proved particularly difficult. This contamination was caused by the coating of the fuel pellets, made of silicon carbide and porous carbon, which leaked at the high temperatures in the reactor core and released radioactive fission products. The BBC and Krupp construction consortium had underestimated the temperatures in the reactor core by 300 K. FZJ solved the problem by filling the reactor core with foamed lightweight concrete, which binds the dust particles and stabilizes the reactor core. Safety researcher Rainer Moormann, who raised public attention to the graphite dust contamination, was awarded the Whistleblower Prize in 2011. Immediately after the Fukushima nuclear disaster, FZJ and AVR GmbH established an independent expert group to investigate the history of the AVR reactor and, in particular, to issue an official statement on Moormann's public disclosures. Fields of research since the 1960s In addition to research on nuclear physics and nuclear energy, work began soon after FZJ's foundation on new, non-nuclear topics and projects, such as environmental research and soil research for agriculture. One of the first institutes to be founded was the Institute of Biology (Botany department) on 1 May 1961. In autumn 1961, the Central Institute for Applied Mathematics (ZAM) was established, combining a mathematical institute with a computer centre, which was unusual at that time. Research into what is now known as neuroscience began in 1964 when the Institute of Nuclear Medicine was founded and radiotracers were developed and used in imaging techniques. 
Another research priority was understanding solids as a basis for the investigation and modification of material properties, for example for new materials in energy research. In 1970, the Institute of Solid State Research was established. In the decades that followed, Jülich expanded its range of research fields to include life sciences, energy and environmental research, materials science, and information technologies. The Institute of Biotechnology was founded in 1977. In 1981, the large-scale facility TEXTOR was put into operation. It was Jülich's fusion experiment for exploring nuclear fusion reactor technology in the field of plasma-wall interaction. The facility was decommissioned at the end of 2013. In 1993, the COSY particle accelerator went into operation. In 1984, the CRAY X-MP supercomputer, at the time one of the fastest computers in the world, had been inaugurated at ZAM. ZAM played a pivotal role in founding the first national supercomputing centre (HLRZ) in 1987. In 2007, ZAM became the Jülich Supercomputing Centre (JSC), which today operates the powerful supercomputer JUWELS and makes it available to European researchers. The new scientific orientation led to a name change, and "Forschungszentrum Jülich GmbH" (FZJ) came into being in 1990. Forschungszentrum Jülich is a founding member of the then Association of National Research Centres (AGF, 1970), which became the Helmholtz Association of German Research Centres in 1995. In 2004, the Ernst Ruska-Centre for Electron Microscopy was founded. It is equipped with transmission electron microscopes. Soil and environmental research were interlinked with climate research. In 2001, the SAPHIR atmospheric simulation chamber was inaugurated, followed by the PhyTec experimental facility for plants in 2014. Collaboration with RWTH Aachen University was consolidated in 2007 by establishing the Jülich Aachen Research Alliance (JARA). In 2011, Forschungszentrum Jülich, in partnership with the universities in Aachen, Bonn, Cologne, and Düsseldorf, founded the Bioeconomy Science Center (BioSc) as a scientific centre of excellence for sustainable bioeconomy. FZJ also works closely with the universities in Bonn, Cologne, and Aachen within the Geoverbund ABC/J. In 2011, the ESS Competence Centre was established at Forschungszentrum Jülich. It coordinates the German contributions to the European Spallation Source (ESS) in Lund, Sweden. Corporate structure Forschungszentrum Jülich is a limited liability company (GmbH) with the following company bodies: Partners' Meeting, Supervisory Board, and Board of Directors. The Partners' Meeting comprises representatives of the German federal government and the state government of North Rhine-Westphalia. The chair of the Board of Directors is Wolfgang Marquardt, who has been in office since 1 July 2014. The other members of the Board of Directors are – as of October 2021 – Karsten Beneke (vice-chair since 2011), Astrid Lambrecht (since 2021), and Frauke Melchior (since 2021). FZJ's committees are the Scientific Advisory Council and the Scientific and Technical Council (WTR). Finances The annual budget of Forschungszentrum Jülich was approximately €948 million in 2022. Of this, 48% was institutional funding from the German federal government and the state of North Rhine-Westphalia and 52% was external funding. 
External funding comprises international (EU) and national (federal and state government, DFG, and other) project funding, R&D and infrastructure services (contracts), as well as project management on behalf of the Federal Republic of Germany and the federal state of North Rhine-Westphalia. Employees Forschungszentrum Jülich has 6,796 employees (as of Dec. 2020). Almost 2,700 of these employees are scientists, of whom 850 are doctoral researchers. The scientists work in the natural, life, and engineering sciences in the fields of information, energy, and bioeconomy. Some 867 people work in the administration and service areas; 1,380 individuals work for Project Management Jülich; and 500 employees are classed as technical employees. FZJ also has more than 300 vocational trainees and students on placement in 23 different professions. In 2020, 672 visiting scientists from 62 countries were conducting research at Jülich. Prizes and awards for Jülich employees On 10 December 2007, Peter Grünberg from Forschungszentrum Jülich was awarded the Nobel Prize in Physics together with Albert Fert from Paris-Sud University in France. The two scientists were honoured for the discovery of giant magnetoresistance, which they had made independently of each other. This was the first Nobel Prize for an employee of Forschungszentrum Jülich or the Helmholtz Association. In 1998, Peter Grünberg had been awarded the German Future Prize, and in 2007, he and Albert Fert were joint recipients of the Japan Prize as well as the Israeli Wolf Prize in Physics. The Wolf Prize in Physics was also jointly awarded in 2011 to Knut Urban from Forschungszentrum Jülich, Maximilian Haider from CEOS GmbH, Heidelberg, and Harald Rose from the Technical University of Darmstadt for their breakthrough in electron microscopy. They also received the Japanese Honda Prize in 2008 for the same discovery. In 2002, Maria-Regina Kula and Martina Pohl won the German Future Prize for the development of biological catalysts. Training and teaching at Forschungszentrum Jülich In 2020, more than 300 people trained in 23 different professions at Forschungszentrum Jülich. In cooperation with RWTH Aachen University and Aachen University of Applied Sciences, FZJ also offers dual vocational and academic courses. After successful completion of their final exams, trainees are offered a six-month employment contract in their chosen profession. Since Forschungszentrum Jülich was founded, more than 5,000 trainees have completed their training in more than 25 different professions. In a joint procedure with the federal state of North Rhine-Westphalia, the institute directors at Forschungszentrum Jülich are appointed professors at one of the neighbouring universities (e.g. in Aachen, Bonn, Cologne, Düsseldorf, Bochum, Duisburg-Essen, Münster) in line with the "Jülich model". In cooperation with the universities, graduate and research schools are established (e.g. the International Helmholtz Research School of Biophysics and Soft Matter with the universities in Cologne and Düsseldorf). The idea behind this is to support and encourage the interdisciplinary scientific education of doctoral students. Research fields and activities Research areas Forschungszentrum Jülich groups its research activities into three interdisciplinary strategic research areas: energy, information, and bioeconomy. Information Scientists in the research area of information investigate how information is processed in biological and technical systems. 
They are working on simulation and data sciences within high-performance computing (HPC) or supercomputing, brain research, and research into bioelectronics- and nanoelectronics-based information technologies with the aim of transferring findings on biological information processing to technical systems. In the field of supercomputing, Jülich develops and operates its own supercomputers (see the section on research infrastructures), which can be used for simulation calculations. Brain research also draws on these facilities. Brain research at Jülich aims to shed light on the molecular and structural organization of the brain to better understand illnesses such as Alzheimer's disease. Research is conducted in cooperation with the neighbouring university hospitals in Bonn, Cologne, Aachen, and Düsseldorf. Research into quantum technologies is associated with the research field of information. This includes work on quantum computers, with components, concepts, and prototypes being developed at Jülich. Forschungszentrum Jülich cooperated with Google in developing the Sycamore quantum computer, and it will be home to the first universal quantum computer developed in Europe as part of the OpenSuperQ project. Energy Jülich research is aimed at an energy system based on renewable energy sources. This research field is primarily covered by the Institute of Energy and Climate Research (IEK). IEK has 14 subinstitutes that focus on various tasks in collaboration with other institutes. Its research priorities include photovoltaics, fuel cells, and hydrogen as an energy carrier, research into batteries and new methods of energy storage, as well as processes for increasing the efficiency of fossil energy use. In the context of the feasibility of the energy transition, Forschungszentrum Jülich explores and models energy systems. With its materials research, the institute is also involved in developing nuclear fusion reactors (such as ITER and Wendelstein 7-X). In the field of producing energy through nuclear fission (atomic energy), FZJ now only conducts research into the disposal of nuclear waste. Two subinstitutes of IEK are involved in atmospheric and climate research, focusing on the interactions between human activities, air quality, and climate, as well as on improving climate and atmospheric models in cooperation with the Jülich Supercomputing Centre. FZJ, with 265 full-time positions (as of 2019), boasts the largest site for investigating hydrogen technologies within the Helmholtz Association. Research is conducted into the production, conversion, and storage (e.g. in liquid media such as liquid organic hydrogen carriers) of hydrogen, as well as into the infrastructure of a hydrogen economy. Sustainable bioeconomy The bioeconomy is an economic system based on the sustainable use of biological resources including plants, animals, and microorganisms. It is argued that a bioeconomy will become necessary due to the finite nature of oil reserves, on which many industrial and everyday products are based, anthropogenic climate change, and the continued growth of the world population. In the area of sustainable bioeconomy, FZJ concentrates on the transition from an oil-based economy to a bioeconomy. This research is conducted in the field of biotechnology in an effort to use renewable raw materials to biotechnologically produce industrially or pharmaceutically relevant base materials. Plant research focuses on optimizing crop yield and the usability of plants as fuels. 
The third research area at FZJ focuses on chemical and physical processes in soil. Structural change in the Rhineland lignite-mining region The Rhineland lignite-mining region, where FZJ is located, is undergoing an important structural change due to the coal phase-out. The state government of North Rhine-Westphalia aims to transform the region into a European model region for energy supply and resource security. Through its research projects, FZJ will support the successful transformation of the Rhineland region. These projects include the cultivation of novel plants, sustainable agriculture, and the hydrogen economy, as well as collaborations between the field of information and industry, for example in the area of artificial intelligence or data analysis. The aim is to create a locational advantage for innovative enterprises. Research infrastructures Forschungszentrum Jülich operates numerous research infrastructures, which are available to internal and external users. FZJ coordinates and is involved in several research infrastructures in the ESFRI Roadmap, which identifies strategically important facilities and platforms in the EU. Examples include the neuroscientific digital platform EBRAINS, the EMPHASIS project for plant phenotyping, the coordination of the European supercomputer network PRACE, and the IAGOS cooperation for research into the Earth's atmosphere using instruments on commercial aircraft. The Ernst Ruska-Centre 2.0 for ultrahigh-resolution electron microscopy and the German contribution to the European Aerosols, Clouds and Trace gases Research Infrastructure (ACTRIS-D) have been part of Germany's National Roadmap since 2019. In this Roadmap, the German Federal Ministry of Education and Research (BMBF) prioritizes infrastructure projects that are important in terms of strategy and research policy. Helmholtz Nano Facility The Helmholtz Nano Facility (HNF) is a facility with a large (1,100 m2) cleanroom classified as ISO 1–3. The HNF is a central technology platform for the production of nanostructures and circuits within the Helmholtz Association. Work at the HNF focuses on green microchips/computing, quantum computing, neuromorphic computing, bioelectronics, and microfluidics. Ernst Ruska-Centre The Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons (ER-C) was selected by the German Federal Ministry of Education and Research (BMBF) as a national research infrastructure for ultrahigh-resolution electron microscopy. The electron-optical instruments at ER-C can also be used by external scientists and enterprises. They make it possible to investigate structures at the atomic and molecular level. The PICO electron microscope can be used for this work as it can correct the lens errors of spherical and chromatic aberration. SAPHIR atmospheric simulation chamber In the 20-metre-long SAPHIR chamber (Simulation of Atmospheric PHotochemistry In a large Reaction Chamber), the Institute of Energy and Climate Research – Troposphere (IEK-8) explores photochemical reactions in the Earth's atmosphere. Jülich Plant Phenotyping Centre The Jülich Plant Phenotyping Centre (JPPC) is a leading international institution for the development and application of non-invasive techniques for quantifying the structure and function of plants. At JPPC, technology is developed and plant traits are analysed on a mechanistic level under high-throughput and field conditions. 
Supercomputers The Jülich Supercomputing Centre at Forschungszentrum Jülich operates supercomputers of the highest performance class and emerged from the first German high-performance computing centre (HLRZ), which was founded at Jülich in 1987. In 2003, a 1,000 m2 machine hall was built for the supercomputers next to the Jülich Supercomputing Centre. JSC joined forces with the High Performance Computing Centre Stuttgart (HLRS) and the Leibniz Supercomputing Centre (LRZ) in Garching near Munich to form the Gauss Centre for Supercomputing (GCS), which unites the three most powerful computing centres under one roof. In addition, JSC coordinates the development of the European supercomputer network PRACE. JSC is headed by physicist and computer scientist Thomas Lippert. JURECA (2015) The JURECA supercomputer replaced JuRoPA in 2015 and was expanded to include a GPU-based booster module in 2017. This made JURECA the world's first supercomputer with a modular architecture to be put into productive operation. With a computing performance of 3.78 petaflop/s, the system ranked 29th in the TOP500 list of November 2017. Between autumn 2020 and the beginning of 2021, the JURECA cluster module was replaced by the JURECA-DC module, which is designed to process large volumes of data and increased the system's peak performance to 23.5 petaflop/s. JUWELS (2018) The JUWELS supercomputer (Jülich Wizard for European Leadership Science) was put into operation in 2018 and was expanded in 2020 to include a GPU-based booster module. The combined cluster and booster modules have a theoretical peak performance of 85 quadrillion floating point operations per second (85 petaflop/s), which made JUWELS the most powerful supercomputer in Europe and the 7th most powerful in the world when the booster debuted on the November 2020 TOP500 list. Furthermore, the JUWELS booster module was the most energy-efficient system among the ten most powerful computers in the world when it was introduced. JUPITER (2024) As part of the EuroHPC JU, the Jülich Supercomputing Centre will host the JUPITER supercomputer (Joint Undertaking Pioneer for Innovative and Transformative Exascale Research), which is to be the first exascale supercomputer in Europe. The machine is being installed in 2024 and should eclipse the threshold of one quintillion ("1" followed by 18 zeros) calculations per second. Medical imaging The Institute of Neurosciences and Medicine (INM) develops and applies medical imaging techniques using MRI and PET for clinical applications and to investigate neurological, neuropsychological, and psychological issues. Equipment at INM includes a combined 3-tesla and 9.4-tesla MRI-PET tomograph as well as 7-tesla, 4-tesla, and 3-tesla MRI systems. Research with neutrons Forschungszentrum Jülich is a national competence centre for neutron scattering. The Jülich Centre for Neutron Science (JCNS), which operates instruments at various neutron sources all over the world, was established in 2006 – a few months before the original neutron source (the Jülich research reactor FRJ-2) was decommissioned. In addition, JCNS has branch offices at the Institut Laue-Langevin (ILL) in Grenoble and at the Spallation Neutron Source (SNS) in Oak Ridge. 
JCNS also plans to operate instruments at the European Spallation Source (ESS), which is currently being constructed in Lund, Sweden, as well as at future high-brilliance accelerator-driven neutron sources. The instruments will be made available to a wide range of users, for example to conduct research into energy materials and active ingredients for medications or to analyse protein structures and magnetic materials. Cooler synchrotron (COSY) The COSY cooler synchrotron is a particle accelerator (synchrotron) and storage ring (circumference: 184 m) for accelerating protons and deuterons, operated by the Nuclear Physics Institute (IKP) at FZJ. COSY is characterized by beam cooling, which uses electron or stochastic cooling to reduce the deviation of particles from their predetermined path (a deviation that can also be understood as the thermal motion of the particles). At COSY, there are a number of experimental facilities for studies in the field of hadron physics. Research currently focuses on investigating the electric dipole moment of protons, testing components and methods for the planned Facility for Antiproton and Ion Research, and on preparatory experiments for an accelerator-based neutron source. Previous core experiments such as the ANKE magnetic spectrometer, the TOF time-of-flight spectrometer, and the WASA universal detector, which was moved to COSY from the CELSIUS storage ring of The Svedberg Laboratory (TSL) in Uppsala, have been decommissioned and mostly dismantled. The synchrotron is used by scientists from German and international research institutions at internal and external target stations. It is one of the research facilities used for collaborative research funded by the German Federal Ministry of Education and Research. EBRAINS EBRAINS is a digital, European research infrastructure that was created as part of the EU-funded Human Brain Project (HBP). Forschungszentrum Jülich supports the infrastructure by providing computing capacities for simulations and big data analyses. The aim is to further brain research and to apply scientific findings in this field to brain-inspired innovations in computing, medicine, and industry. EMPHASIS The European Infrastructure for Multi-Scale Plant Phenomics and Simulation for Food Security in a Changing Climate (EMPHASIS) is a pan-European, distributed infrastructure for plant phenotyping. The aim of this EU platform, which is coordinated by Forschungszentrum Jülich, is to analyse and quantify the external characteristics of plants (the "phenotype"), such as the root architecture or the number of leaves. EMPHASIS integrates information systems with data acquisition using mathematical models and helps scientists analyse plants in different environments for a sustainable European agriculture, with the aim of enabling more efficient plant production in a changing climate. The EU has provided €4 million in funding for the creation of the platform. Biomolecular NMR Centre The Biomolecular NMR Centre is a cooperation between the Institute of Biological Information Processing – Structural Biochemistry at Forschungszentrum Jülich and the Institute of Physical Biology at HHU Düsseldorf. It operates various high-field NMR spectrometers for liquid- and solid-state NMR spectroscopy for research into biologically and medically relevant proteins, in order to determine, for example, their three-dimensional structures at high resolution. 
This technology is also used to investigate the structural basis for the affinities and specificities of these macromolecules in protein-ligand interactions. The Biomolecular NMR Centre has one 900 MHz NMR spectrometer for liquid-state NMR spectroscopy, one 800 MHz NMR spectrometer for liquid- and solid-state NMR spectroscopy, a 700 MHz device for liquid-state NMR, two 600 MHz devices for liquid-state NMR, and another 600 MHz NMR spectrometer for solid-state NMR spectroscopy. A novel 600 MHz DNP-enhanced solid-state NMR device was installed in 2014. Membrane Centre The Membrane Centre at Forschungszentrum Jülich (approx. 1550 m2) provides a research infrastructure for the development of membrane systems, covering the entire spectrum of services from the production of the materials needed and the characterization using analytical instruments right up to the testing of modules and components. A priority is the development of novel membrane systems for energy technology in order to separate greenhouse gases from exhaust gases and to provide a basis for novel fuel cells and solid-state batteries. Other research projects Forschungszentrum Jülich has a lattice steel mast (124 metres high) for meteorological measurements. It is equipped with platforms at 10 m, 20 m, 30 m, 50 m, 80 m, 100 m, and 120 m, on which measuring instruments are positioned. The measuring mast was erected in 1963/64 and is a triangular steel framework construction. Former research activities Early supercomputers IBM p690 cluster "Jump" (2004) The massively parallel supercomputer IBM p690 cluster Jump was put into operation at the beginning of 2004. With 1,312 Power4+ 2C 1.7 GHz processors (41 nodes, each with 32 processors) and an internal memory of 5 terabytes (128 gigabytes per node), the computer had a maximum performance of 5.6 teraflop/s. It was ranked 30th in the list of the world's most powerful computers at the time of its inauguration. The nodes were connected to each other by a high performance switch (HPS). Through a global parallel file system, applications had access to more than 60 terabytes of storage space and an integrated tape drive with a capacity of one petabyte. The IBM p690 cluster Jump ran on the AIX 5.1 operating system. In 2008, the system was temporarily replaced by IBM Power6 p6 575 until JuRoPA began operating. Jülich BlueGene/L supercomputer (JUBL, 2006) JUBL was unveiled in 2006 and is considered to be JUGENE's predecessor. It was decommissioned following JUGENE's successful installation in mid-2008. Jülich BlueGene/P supercomputer (JUGENE, 2008) On 22 February 2008, the massively parallel supercomputer JUGENE, which was based on IBM's BlueGene/P architecture, went into operation. At times, it was the fastest computer in Europe and the fastest civil computer in the world. In 2012, it was replaced by JUQUEEN. HPC-FF and JuRoPA (2009) On 26 May 2009, the two computers HPC-FF and JuRoPA went into operation. The two computers could be connected for specific tasks, and together they achieved a performance of 274.8 teraflop/s with Linpack, which placed them tenth worldwide. The operating system was SUSE Linux Enterprise Server. This meant that three computers were effectively in operation in 2009. Both computers were decommissioned in June 2015 and replaced by JURECA. HPC-FF – A computer built by Bull for fusion research with 1,080 cluster nodes, each with two Xeon quad-core processors (Xeon X5570, 2.93 GHz). JuRoPA was built by Sun with 4,416 Xeon X5570 processors (2,208 processor nodes). 
JUQUEEN (2012) The supercomputer known as JUQUEEN went into operation in 2012. It had a peak performance of 5.9 petaflop/s and was Europe's fastest supercomputer at the time of its inauguration. Institutes Ernst-Ruska-Centre for Microscopy and Spectroscopy with Electrons (ER-C): Physics of Nanoscale Systems (ER-C-1/PGI-5) Materials Science and Technology (ER-C-2) Structural Biology (ER-C-3) Institute for Advanced Simulation (IAS): Jülich Supercomputing Centre (JSC) Quantum Theory of Materials (PGI-1/IAS-1) Theoretical Physics of Living Matter and Biophysics (IBI-5/IAS-2) Theoretical Nanoelectronics (PGI-2/IAS-3) Theory of the Strong Interactions (IAS-4/IKP-3) Computational Biomedicine (IAS-5/INM-9) Theoretical Neuroscience (IAS-6/INM-6) Civil Safety Research (IAS-7) Data Analytics and Machine Learning (IAS-8) Materials Data Science and Informatics (IAS-9) Institute of Bio- and Geosciences (IBG): Biotechnology (IBG-1) Plant Sciences (IBG-2) Agrosphere (IBG-3) Bioinformatics (IBG-4) Institute of Biological Information Processing (IBI): Molecular and Cellular Physiology (IBI-1) Mechanobiology (IBI-2) Bioelectronics (IBI-3) Biomacromolecular Systems and Processes (IBI-4) Theoretical Physics of Living Matter and Biophysics (IBI-5/IAS-2) Cellular Structural Biology (IBI-6) Structural Biochemistry (IBI-7) Neutron Scattering and Biological Matter (JCNS-1/IBI-8) Technical Services and Administration (IBI-TA) Institute of Energy and Climate Research (IEK): Materials Synthesis and Processing (IEK-1) Microstructure and Properties of Materials (IEK-2) Techno-Economic Systems Analysis (IEK-3) Plasma Physics (IEK-4) Photovoltaics (IEK-5) Nuclear Waste Management and Reactor Safety (IEK-6) Stratosphere (IEK-7) Troposphere (IEK-8) Fundamental Electrochemistry (IEK-9) Energy Systems Engineering (IEK-10) Systems Analysis and Technology Evaluation (IEK-STE) Helmholtz Institute Erlangen-Nürnberg for Renewable Energy (IEK-11/HI ERN) Helmholtz Institute Münster (IEK-12/HI MS) Theory and Computation of Energy Materials (IEK-13) Electrochemical Process Engineering (IEK-14) Institute of Neuroscience and Medicine (INM): Structural and Functional Organization of the Brain (INM-1) Molecular Organization of the Brain (INM-2) Cognitive Neuroscience (INM-3) Medical Imaging Physics (INM-4) Nuclear Chemistry (INM-5) Computational and Systems Neuroscience (INM-6) Brain and Behaviour (INM-7) Ethics of Neuroscience (INM-8) Computational Biomedicine (INM-9/IAS-5) JARA Institute Brain structure-function relationships (INM-10) JARA Institute Molecular neuroscience and neuroimaging (INM-11) Institute for Sustainable Hydrogen Economy (INW) Jülich Centre for Neutron Science (JCNS): Neutron Scattering and Biological Matter (JCNS-1/IBI-8) Quantum Materials and Collective Phenomena (JCNS-2/PGI-4) Neutron Analytics for Energy Research (JCNS-3) Neutron Methods (JCNS-4) Technical Services and Administration (PGI-TA/JCNS-TA) Nuclear Physics Institute (IKP): Experimental Hadron Structure (IKP-1) Experimental Hadron Dynamics (IKP-2) Theory of the Strong Interactions (IAS-4/IKP-3) Large-Scale Nuclear Physics Equipment (IKP-4) Peter Grünberg Institute (PGI): Quantum Theory of Materials (PGI-1/IAS-1) Theoretical Nanoelectronics (PGI-2/IAS-3) Quantum Nanoscience (PGI-3) Quantum Materials and Collective Phenomena (PGI-4/JCNS-2) Microstructure Research (PGI-5) Electronic Properties (PGI-6) Electronic Materials (PGI-7) Quantum Control (PGI-8) Semiconductor Nanoelectronics (PGI-9) JARA Institute Energy-efficient information technology (PGI-10)
JARA Institute for Quantum Information (PGI-11) Quantum Computing Analytics (PGI-12) Quantum Computing (PGI-13) Neuromorphic Compute Nodes (PGI-14) Neuromorphic Software Ecosystems (PGI-15) Technical Services and Administration (PGI-TA/JCNS-TA) Central Institute of Engineering, Electronics and Analytics (ZEA): Engineering and Technology (ZEA-1) Electronic Systems (ZEA-2) Analytics (ZEA-3) Location and accessibility Forschungszentrum Jülich is situated in the middle of the Stetternich Forest in Jülich (Düren district, North Rhine-Westphalia) and covers an area of 2.2 square kilometres. It is located about 4 km to the south-east of Jülich, approx. 30 km to the north-east of Aachen, and 45 km to the west of Cologne. Some of the facilities of Forschungszentrum Jülich are not located on campus but about 1 km west of it, on the premises of the former federal railways repair shop (BAW). Infrastructure In addition to the scientific institutes and the large-scale facilities, Forschungszentrum Jülich has several infrastructure divisions and central institutes, including, for example, a Works Fire Brigade that is staffed 24/7, ready to protect people, property, animals, and nature in and around Forschungszentrum Jülich. The Medical Service works to ensure healthy working conditions at Forschungszentrum Jülich; its services range from occupational health and safety to emergency medical care and psychosocial counselling. On campus, the State Institute for Occupational Safety (LAfA) for North Rhine-Westphalia operates a state collection centre for radioactive waste for North Rhine-Westphalia and Lower Saxony. This collection centre accepts radioactive waste from Forschungszentrum Jülich as well as other (low-level) radioactive waste from the two aforementioned federal states. Since 1979, Forschungszentrum Jülich has also had its own railway track for freight transport, a dead-end track within the campus. References Düren (district) Medical imaging research institutes Multidisciplinary research institutes Nuclear research institutes Plasma physics facilities Radiation protection organizations Research institutes in Germany Supercomputer sites Neuroscience research centers in Germany Jülich Research Centre
Forschungszentrum Jülich
Physics,Engineering
8,030
55,755,393
https://en.wikipedia.org/wiki/Institute%20of%20Ecotechnics
The Institute of Ecotechnics is an educational, training and research charity with a special interest in ecotechnology, the environment, conservation, and heritage. With its U.K. headquarters in London, England and its U.S. affiliate in Santa Fe, NM, the institute was founded to "develop and practice the discipline of ecotechnics: the ecology of technics, and the technics of ecology." Ecotechnology is a proposed applied science that deals with the relationship between humanity and the biosphere. It involves the use of technological means for ecosystem management. It seeks to fulfill human needs based on a deep understanding of natural ecosystems, while minimizing disruption to those ecosystems. The institute was founded and incorporated in New Mexico in 1973 by members of the counterculture community Synergia Ranch, and incorporated in the UK in 1985. It is a recognized charity in England, Wales, and the United States. Activities Sailing ship The Institute of Ecotechnics runs workshops, organizes conferences, and carries out ecological field research. It has developed agricultural, waste water and air purification, and biosphere technologies, and has published books on ecotechnics and on ecological and cultural issues. The ecological research vessel Heraclitus, which the institute owns, was designed and built with personnel from its first ecological project in New Mexico. The ship is a unique blend of ancient and modern: ferro-cement for its hull and deck, and Chinese junk sails supplemented with a diesel engine. The Heraclitus has sailed the oceans for decades since its launch in Oakland, California in 1975. It has made twelve expeditions. Among them were a three-year round-the-world voyage through the tropics exploring the origins of human culture; a voyage up the Amazon River conducting ethnobotanical collections in Peru; and a circumnavigation of South America with an expedition to Antarctica. It collected corals off the Yucatan coast for the Biosphere 2 project, and from 1996 to 2008 it teamed with the Biosphere Foundation to map and study coral reef health at many locations in the Pacific and Indian oceans. Its most recent work was an oral history project among the port and sea people of the Mediterranean. The RV Heraclitus is in drydock in Roses, Spain, undergoing a nearly complete rebuild after sailing some 270,000 miles. Ecological field project consultancies The Institute is involved in a series of ecological demonstration projects, selected in challenging biomes facing ecological degradation and cultural conflict. These were chosen because conventional approaches do not work there, requiring innovative, "ecotechnic" approaches, with the goals of ecological upgrade, economic sustainability, and developing the capabilities of I.E. members and associates. The projects, and the Heraclitus, are places where hands-on educational programs and volunteer opportunities are provided. Synergia Ranch, started in 1969, was the Institute's first ecological field project. There, on 130 acres of desertifying, high-altitude, semi-arid juniper/piñon grasslands, I.E. developed a program to restore the land, with the goal of creating an oasis in the desert. Over a thousand trees were planted, including 450 fruit trees; organic vegetable gardens were established; and an extensive soil-building program used compost to restore lost fertility to an area that had been overgrazed and cleared for inappropriate broad-acre farming in the 1920s.
Adobe buildings and a geodesic dome were built, and artisan enterprises were started, including pottery, woodworking, ironworking, clothing, and leatherwork. A construction firm built some three dozen adobe buildings in Santa Fe, contributing to a renaissance of traditional adobe architecture. The orchards and vegetable gardens are certified organic, and in 2016 Synergia Ranch won the Good Earth award at the New Mexico Organic Agriculture conference in recognition of its land care. The ecotechnic lifestyle approach sought balance by engaging in three lines of work: ecology, enterprise, and theater. The Theater of All Possibilities was based at Synergia Ranch for over a decade. Les Marronniers, near Aix-en-Provence, France, begun in 1976, is the ecotechnic field project in the Mediterranean biome. It has also served as a frequent location for the international conferences which I.E. organizes, bringing together scientists, artists, engineers, managers and thinkers to engage on topics of broad interest. Les Marronniers has also revived, on 7 ha (17 acres), the mixed-agriculture smallholdings traditional in the area, with an orchard, grapes, field crops, a woodlot, domestic animals, and art/sculpture workshops. Birdwood Downs, a 4,300-acre property in the Kimberley region of NW Western Australia, is the Institute's tropical savannah biomic project. Established in 1978, its goals are to regenerate an overgrazed ecology dominated by invasive scrub trees with drought-resistant grasses and legumes, and to demonstrate ecotechnic ways of living within a sometimes harsh climate. Robyn Tredwell, a director of I.E., managed the project from 1985 until her death in 2012 and was honored as the 1995 Australian Rural Woman of the Year. Recognizing that cities, part of the anthropogenic urban biome, are crucial players in the biosphere, I.E. helped establish the October Gallery in London in 1979 as its world city project. The October Gallery's goal is to find and showcase cutting-edge artists of the "transvangarde" from around the world. It also supports a robust educational program, bringing many London schoolchildren into the gallery to meet artists and make art, as well as traveling exhibitions. Designed as a place where science and art can interact, it also hosts scientific, cultural and musical events. In 1983, the Las Casas de la Selva project in Patillas, Puerto Rico became I.E.'s tropical rainforest project. On almost 1,000 acres of mountainous secondary forest, the goal of the project is to demonstrate innovative methods of forest enrichment and to promote sustainable tropical forestry. With the cooperation of Puerto Rican departments of development and forestry, some 40,000 seedlings of valuable timber species were planted on one-third of the land. Line-planting was employed to minimize the impact on the surrounding forest and to conserve biodiversity. Las Casas won the 2016 Energy Globe national award for Puerto Rico and is recovering from severe damage caused by Hurricane Maria in 2017. During the island-wide cleanup after the hurricane, Las Casas management organized the rescue of felled valuable hardwood trees that would otherwise be sent to landfills and wasted. Biosphere 2 Space Biosphere Ventures' Biosphere 2 is another project in which the Institute of Ecotechnics was involved. I.E.
served as an ecological systems consultant and managed the organizing of scientific and design workshops, both at Biosphere 2 and at a series of International Workshops on Closed Ecological Systems and Biospherics held at the Royal Society in 1987, at the Institute of Biophysics (Bios 3) in Krasnoyarsk, Siberia in 1989, at Biosphere 2 in 1992, and at the Linnean Society of London in 1996. An Earth system science research facility and closed ecological system in Oracle, Arizona, Biosphere 2 was invented by Institute of Ecotechnics director John P. Allen and bankrolled by Texas financier Ed Bass, who also served as an I.E. director for many years. At the time, the Biosphere 2 project was the subject of much media attention, and there were allegations that it was not science but a stunt, and that the project had been a failure. Some of the controversy may have stemmed from the radical goals of the project: to develop methods of harmonizing its technosphere and living systems, including non-polluting, non-toxic farming, a challenge to the business as usual that is damaging the Earth's biosphere. It also fell into the contentious divide between analytic (reductionist) and integrative (holistic) science, though the project employed both to create the world's first mini biosphere. These allegations have been strenuously denied and refuted by some of those involved, including ecologist and author Mark Nelson, the chairman of the Institute of Ecotechnics and a Biosphere 2 crew member from 1991–1993, and Bill Dempster, the Biosphere 2 Director of Systems Engineering until 1994. A wealth of scientific results has been published from the early closure experiments (1991–1994), including a compendium of research papers published in the journal Ecological Engineering and reprinted as an Elsevier book, "Biosphere 2 Research Past and Present", edited by H.T. Odum and Bruno Marino. Other consultancies and collaborations The Institute works in collaboration with the Biospherics Academy's Academia Biospherica program, which organizes workshops, lectures and events in the fields of restoration ecology, intercultural studies and the arts. Past projects include consulting on the Hotel Vajra in Kathmandu, Nepal, working with a community of Tibetan refugees to create a travelers' hotel where East and West can meet. In Ft. Worth, I.E. was involved in the development of the Caravan of Dreams, a jazz club and cultural arts complex, to help revive the city center. Wastewater Gardens subsurface-flow constructed wetland systems were tested inside Biosphere 2 to treat all wastewater, since no additional pollution could be emitted within the enclosed environment. The productivity and added value were such that further study and refinement took place from 1994 onwards, so that today systems have been installed in over 14 countries, greatly enhancing the environment and enabling added biodiversity. Constructed wetlands are an environmentally friendly approach to treating and recycling residential sewage, animal slurry and other types of contaminated water, protecting the groundwater and greening the landscape. They have been installed in many coastal and inland communities where human waste is not treated at all or passes only through septic tanks, seeping into groundwater and eventually affecting human health and the health of rivers, lakes and the ocean. I.E.
has been highly active in spreading this technique on all continents and was involved in the first Wastewater Garden (constructed wetlands) project in Algeria and in other applications of the ecotechnic technology, including a proposed project to protect the Marsh Arabs and their environment from sewage pollution. Key figures John P. Allen Kathelin Gray Željko Malnar Mark Nelson See also Caravan of Dreams October Gallery Theater of All Possibilities References Further reading Allen, J. P., 2009. Me and the Biospheres: A Memoir by the Inventor of Biosphere 2, Synergetic Press, Santa Fe, NM. Nelson, Mark, 2018. Pushing the Limits: Insights from Biosphere 2, University of Arizona Press, Tucson, AZ. Nelson, Mark, 2014. The Wastewater Gardener: Preserving the Planet One Flush at a Time, Synergetic Press, Santa Fe, NM. Nelson, Mark; Cattin, Florence (July/August 2014). "Greening the Planet" (PDF). World Water, Volume 37, Issue 4, pp. 22–23, 49. External links Official web site October Gallery web site Research Vessel Heraclitus web site Synergia Ranch web site Academia Biospherica Program Birdwood Downs website Las Casas de la Selva website Wastewater Gardens International website Charities based in London Ecology organizations Environmental charities based in the United Kingdom Environmental organisations based in London Environmental science Scientific organizations established in 1973 Sustainable technologies
Institute of Ecotechnics
Environmental_science
2,317
38,256,214
https://en.wikipedia.org/wiki/Glossary%20of%20commutative%20algebra
This is a glossary of commutative algebra. See also list of algebraic geometry topics, glossary of classical algebraic geometry, glossary of algebraic geometry, glossary of ring theory and glossary of module theory. In this article, all rings are assumed to be commutative with identity 1. See also Glossary of ring theory References General references Commutative algebra Wikipedia glossaries using description lists
Glossary of commutative algebra
Mathematics
113
43,114,303
https://en.wikipedia.org/wiki/James%20Bovell
James Bovell (1817–1880) was a prominent Canadian physician, microscopist, educator, theologian and minister. In his youth, he traveled to London to study medicine at Guy's Hospital. There, he was related to Sir Astley Cooper and had Richard Bright and Thomas Addison among his professors, and Robert Graves and William Stokes among his colleagues. He studied at schools in Edinburgh and Glasgow and was later elected a member of the Royal College of Physicians. When he returned to Canada, he worked in the fields of pathology and clinical microscopy, and he founded and edited the Upper Canada Journal of Medical, Surgical, and Physical Science. He became an important member of the Canadian Institute and later served as one of its vice-presidents. He was an early mentor of the famous physician William Osler, whom he strongly influenced in his early years. Bovell later became a clergyman of the Church of England and wrote on the topic of natural theology. He is known for his rejection of the Darwinian theory of evolution and Lyell's geology, believing instead in the Book of Genesis and siding with the early views of Louis Agassiz and John Hunter. He nevertheless wrote on the relation between religion and science. In a book published in 1860, he wrote to the Diocese of Huron "with the hope that the explanations given may remove erroneous impressions" at the Church in Canada. Works A lecture on the future of Canada (1846) An outline of the history of the British church from the earliest times to the period of Reformation (1852) Outlines of natural theology for the use of the Canadian student (1859) Defence of doctrinal statement (1860) Passing thoughts on man's relation to God and on God's relation to man (1862) A plea for inebriate asylums: commended of the consideration of the legislators of the province of Canada (1862) Letters: addressed to the Rev. Mr. Fletcher and others: framers of a series of resolutions on "ritual" (1867) The world at the advent of the Lord Jesus. Toronto: W.C. Chewett. (1868) References Bovell, James (1859). Outlines of natural theology: for the use of the Canadian student. Toronto: Rowsell & Ellis. External links BOVELL, JAMES, Dictionary of Canadian Biography James Bovell (1817-1880): The Toronto Years 1817 births 1880 deaths Canadian Anglican priests Canadian Anglican theologians 19th-century Canadian physicians Church of England priests Fellows of the Royal College of Physicians Microscopists
James Bovell
Chemistry
514
32,268,698
https://en.wikipedia.org/wiki/Lipetsk%20fighter-pilot%20school
The Lipetsk fighter-pilot school, also known as WIWUPAL from its German codename Wissenschaftliche Versuchs- und Personalausbildungsstation ("Scientific Experimental and Personnel Training Station"), was a secret training school for fighter pilots operated by the German Reichswehr at Lipetsk, Soviet Union. Germany was prohibited by the Treaty of Versailles from operating an air force and sought alternative means to continue training and development for the future Luftwaffe. It is now the site of Lipetsk air base. Background The Treaty of Versailles, signed on 28 June 1919, prohibited Germany from operating any form of air force after the country had lost the First World War. Initially, it also prohibited the production and import of any form of aircraft to the country. In 1922, the clause on civilian aircraft was dropped and Germany was able to produce planes again, followed in 1923 by the country regaining control of its airspace. The operation or production of aircraft for military purposes was, however, still prohibited. The German military, the Reichswehr, was well aware of the value of air warfare and was determined not to fall too far behind in knowledge and training. For this purpose, alternative means outside Germany were explored. Germany had normalised its relations with Soviet Russia in 1922 with the signing of the Treaty of Rapallo. At the time, both countries were outcasts in the world community. Initially, Germany was unwilling to break the Treaty of Versailles. This attitude changed, however, in 1923, when French and Belgian troops occupied the Ruhr area after Germany defaulted on reparations payments. In light of the events of the Ruhrkampf, the German Army ordered 100 new aircraft from Fokker in the Netherlands, among them 50 newly developed Fokker D.XIIIs. Additionally, the German Navy had also ordered a small number of planes. With the end of the Ruhrkampf in September, Germany was at a loss as to how to utilize the planes, which were due for delivery in 1924. The Soviet Union was approached and showed an interest in allowing Germany to develop aircraft in the country; the German manufacturer Junkers had already been operating a production facility for military aircraft near Moscow since 1923. In June 1924, retired Colonel Hermann von der Lieth-Thomsen became a permanent representative of the Reichswehr's Truppenamt, the secret General Staff of the German Army, in Moscow. At the same time, seven German instructors were sent to the Red Air Force. On 15 April 1925, Lieth-Thomsen signed a contract to establish a German fighter-pilot school at Lipetsk. Fighter school Extensive works were required at Lipetsk to prepare for the German fighter-pilot school, Lipetsk Air Base, which operated from 1926 to 1933. In June 1925, the base was ready for flight operations, but training of German pilots was only possible from spring 1926 onwards. The school, up until its closure, trained 120 fighter pilots, over 300 ground personnel and 450 administrative and training staff, who, in turn, were able to serve as instructors when the new German Luftwaffe was formed in 1935. The facilities were also used to train Soviet pilots and to develop new bomb-targeting methods. In an average summer, 140 German personnel were at Lipetsk, a number that was reduced to 40 in winter. Additionally, 340 Soviet personnel were employed, with an annual budget of 4 million Reichsmark at its high point in 1929.
The facility's German cover name, from which the WIWUPAL contraction derives, was Wissenschaftliche Versuchs- und Prüfanstalt für Luftfahrzeuge (Scientific Research and Test Institute for Aircraft). In addition to the school at Lipetsk, Germany operated a tank school, the Panzerschule Kama (1926–33), and a gas warfare facility, Gas-Testgelände Tomka (1928–31), in the Soviet Union. Closure In the early 1930s, the political situation for the flight school began to change. The Soviet Union opened itself to the West, while Germany sought closer relations with France. Additionally, the Soviets were unhappy about the lack of development carried out at the school. In December 1932, Germany achieved recognition as an equal at the Geneva Conference, making the fighter school less necessary. With the rise of the Nazis to power in January 1933, the ideological gap between fascist Germany and the communist Soviet Union became too large, and the fighter school at Lipetsk was closed on 15 September 1933. In popular culture The fighter school at Lipetsk is referenced in the German crime drama series Babylon Berlin, season 2, episodes 3 to 6. See also Kama tank school Tomka gas test site References External links Lipetsk. The secret flying school and test site of the Reichswehr in the Soviet Union German Federal Archives - History and pictures of the fighter-pilot school Reichswehr German military aviation history 20th-century German aviation Lipetsk Oblast 1926 establishments in the Soviet Union Secret military programs Germany–Soviet Union relations (1918–1941) Military education and training in the Soviet Union Training establishments of the Luftwaffe Germany–Soviet Union military relations Aviation schools in Germany
Lipetsk fighter-pilot school
Engineering
1,070
19,885,739
https://en.wikipedia.org/wiki/Myxopyronin
Myxopyronins (Myx) are a group of alpha-pyrone antibiotics that inhibit bacterial RNA polymerase (RNAP). They target switch 1 and switch 2 of the RNAP "switch region". Rifamycins and fidaxomicin also target RNAP, but bind different sites on the enzyme. Myxopyronins do not show cross-resistance with any other drugs, so they may be useful in addressing the growing problem of drug resistance in tuberculosis. They may also be useful in the treatment of methicillin-resistant Staphylococcus aureus (MRSA). They are in pre-clinical development and have not yet entered clinical trials. Myxopyronin was first isolated in 1983 from a soil bacterium by Werner Kohl and Herbert Irschik at the Helmholtz Centre for Infection Research (formerly GBF). A total synthesis of myxopyronin was first reported in 1998 by James S. Panek and co-workers. The target, the mechanism of action, and the structure of the complex of RNAP with myxopyronin were first reported in 2008 by Richard H. Ebright and co-workers. Synthetic analogs of the natural myxopyronins have been synthesized at Anadys Pharmaceuticals and at Rutgers University. Terence I. Moy and co-workers at Cubist Pharmaceuticals have stated that, based on its high resistance rate and high serum protein binding (comparable to rifamycins and lipiarmycin), the unmodified natural product myxopyronin B is not a viable starting point for antibiotic development. References Antibiotics 4-Pyrones
Myxopyronin
Biology
359
49,603,311
https://en.wikipedia.org/wiki/Annual%20vs.%20perennial%20plant%20evolution
Annuality (living and reproducing in a single year) and perenniality (living more than two years) represent major life history strategies within plant lineages. These traits can shift from one to another over both macroevolutionary and microevolutionary timescales. While perenniality and annuality are often described as discrete either-or traits, they in fact occur along a continuous spectrum. The complex history of switches between the annual and perennial habit involves both natural and artificial causes, and studies of this fluctuation have importance for sustainable agriculture. (Note that "perennial" here refers to both woody and herbaceous perennial species.) Globally, only 6% of all plant species and 15% of herbaceous plants (excluding trees and shrubs) are annuals. The annual life cycle has independently emerged in over 120 different plant families throughout the entire angiosperm phylogeny. Life-history theory posits that annual plants are favored when adult mortality is higher than seedling (or seed) mortality, i.e., annuals will dominate environments where disturbance or high temporal variability reduces adult survival. This hypothesis finds support in observations of the increased prevalence of annuals in regions with hot, dry summers, where adult mortality is elevated and seed persistence is high. Furthermore, the evolution of the annual life cycle under hot, dry summers in different families makes it one of the best examples of convergent evolution. Additionally, annual prevalence is also positively affected by year-to-year variability. According to some studies, either annuality or perenniality may be ancestral. This contradicts the commonly held belief that annuality is a trait derived from an ancestral perennial life form, as suggested by a well-regarded plant population biology text. Spatiotemporal scale Above the species level, plant lineages clearly vary in their tendency for annuality or perenniality (e.g., wheat vs. oaks). On a microevolutionary timescale, a single plant species may show different annual or perennial ecotypes (e.g., adapted to dry or tropical ranges), as in the case of the wild progenitor of rice (Oryza rufipogon). Indeed, the ability to perennate (live more than one year) may vary within a single population of a species. Underlying mechanisms: Trade-offs Annuality and perenniality are complex traits involving many underlying, often quantitative, genotypic and phenotypic factors. They are often determined by a trade-off between allocation to sexual (flower) structures and asexual (vegetative) structures. Switches between the annual and perennial habit are known to be common among herbaceous angiosperms. Increased allocation to reproduction early in life generally leads to a decrease in survival later in life (senescence); this occurs in both annual and perennial semelparous plants. Exceptions to this pattern include long-lived clonal (see ramets section below) and long-lived non-clonal perennial species (e.g., bristlecone pine). Associated traits Many traits involving mating patterns (e.g., outcrossing or selfing) and life history strategies (e.g., annual or perennial) are inherently linked. Typical annual-associated traits Self-fertilization Self-fertilization (selfing, or autogamy) is more common in annual than in perennial herbs. Since annuals typically have only one opportunity for reproduction, selfing provides a reliable source of fertilization.
However, switches to selfing in annuals may result in an "evolutionary dead end," in the sense that a return to an outcrossing (allogamous) state is probably unlikely. Selfing and inbreeding can also result in the accumulation of deleterious alleles, resulting in inbreeding depression. Semelparity All annual plants are considered semelparous (a.k.a. monocarpic, or "big-bang" reproduction), i.e., they reproduce once before death. Even semelparity exhibits some plasticity in the timing of seed production over the year (see "Anomalies" section). That is, it is uncommon for all offspring to be generated at exactly the same time, which would be the extreme end of semelparity. Instead, offspring are usually generated in discrete packages (as a sort of micro-iteroparous strategy), and the temporal spacing of these reproductive events varies by organism. This is attributed to phenotypic plasticity. Biennial plants (living two years and reproducing in the second) are also considered semelparous. Seed bank Although annuals have no vegetative regrowth from year to year, many retain a dormant back-up population underground in the form of a seed bank. The seed bank serves as an annual's source of age structure, in the sense that often not all seeds will germinate each year. Thus, each year's population will consist of individuals of different ages in terms of seed dormancy times. The seed bank also helps to ensure the annual's survival and genetic integrity in variable or disturbed habitats (e.g., a desert), where good growing conditions are not guaranteed every year. Not all annuals, however, retain a seed bank. In terms of population density, annuals with seed banks are predicted to be more temporally variable yet more spatially constant over time, while plants with no seed bank would be expected to be patchy (spatially variable). Typical perennial-associated traits Cross-fertilization Certain non-selfing reproductive adaptations, such as dioecy (obligate outcrossing via separate male and female individuals), may have arisen in long-lived herbaceous and woody species due to negative side effects of selfing in these species, notably genetic load and inbreeding depression. Among angiosperms, dioecy is known to be substantially more common than pure self-incompatibility. Dioecy is also more typical of trees and shrubs than of annual species. Iteroparity Most perennials are iteroparous (or polycarpic), which means they reproduce multiple times during their lifespan. Persistence of ramets Ramets are vegetative, clonal extensions of a central genet. Common examples are rhizomes (modified stems), tillers, and stolons. A plant is perennial if the birth rate of its ramets exceeds their death rate. Several of the oldest known plants are clonal. Some genets have been reported to be many thousands of years old, and a steady rate of branching likely aids in avoiding senescence. The oldest reported minimum age of a single genet is 43,600 years, for Lomatia tasmanica W.M.Curtis. It is hypothesized that some perennial plants even display negative senescence, in which fecundity and survival increase with age. Examples of plants with rhizomatous growth include perennial Sorghum and rice, which likely share similar underlying genes controlling rhizome growth. In wheatgrass (Thinopyrum), perenniality is associated with production of a secondary set of tillers (stems arising from the crown's apical meristem) following the reproductive phase. This is called post-sexual cycle regrowth (PSCR).
Such long-lived genets in a population may provide a buffer against random environmental fluctuations. Polyploidy There is a possible connection between polyploidy (having more than two complete sets of chromosomes) and perenniality. One potential explanation is that both polyploidy (associated with larger size) and asexual reproduction (common in perennials) tend to be selected for at the inhospitable extremes of a species' distribution. One example could be the intricate polyploidy of native Australian perennial Glycine species. Niche conservatism Woody species have been found to occupy fewer climatic niches than herbaceous species, which has been suggested to be a result of their slower generation times; such differences in adaptation may result in niche conservatism among perennial species, in the sense that their climatic niche has not changed much over evolutionary time. Anomalies Semelparity and iteroparity Semelparity in perennials is rare but occurs in several types of plants, likely due to adaptive changes for greater seed allocation in response to seed predation (although other drivers, such as biased pollination, have been proposed). List of semelparous perennials:
Carrot (Daucus carota subsp. sativus)
Agave, e.g. Agave deserti (century plant)
Hesperoyucca whipplei
Semelparous bamboo (Phyllostachys bambusoides)
Corypha umbraculifera (talipot palm)
Lobelia telekii
Senecio jacobaea (ragwort)
Cynoglossum officinale
Mating system The Polemoniaceae (phlox) family shows considerable flexibility in both life history and mating system, showing combinations of annual / selfing, annual / outcrossing, perennial / selfing, and perennial / outcrossing lineages. These switches indicate a more ecologically determined, rather than a phylogenetically fixed, change in habit. Environmental drivers High environmental stochasticity, i.e., random fluctuations in climate or disturbance regime, can be buffered by both the annual and perennial habit. However, the annual habit is more closely associated with a stochastic environment, whether naturally or artificially induced. This is due to higher seedling survival relative to adult survival in such stochastic environments; common examples are arid environments such as deserts, as well as frequently disturbed habitats (e.g., cropland). Iteroparous perennial species are more likely to persist in habitats where adult survival is favored over seedling survival (e.g., canopied, moist). This adult/juvenile trade-off can be described succinctly in the following equations (Silvertown & Charlesworth, 2001, p. 296):
λa = c × ma
λp = c × mp + p
ma > (or <) mp + (p/c)
where:
λa = rate of growth of the annual population,
λp = rate of growth of the perennial population,
c = survival to reproductive age (flowering),
ma = average number of seeds produced per annual individual,
mp = average number of seeds produced per perennial individual,
p = adult survival.
If ma > mp + (p/c), the annual habit has greater fitness; if ma < mp + (p/c), the perennial habit has greater fitness. Thus a great deal of the fitness balance depends on reproductive allocation to seeds, which is why annuals are known for greater reproductive effort than perennials. Different climate and disturbance patterns may also cause demographic changes in populations.
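The inequality above can be checked numerically. A minimal Python sketch, with parameter values chosen purely for illustration rather than taken from any study:

def habit_fitness(c, m_a, m_p, p):
    """Compare annual vs. perennial population growth rates.
    c   -- survival to reproductive age (flowering)
    m_a -- average seeds per annual individual
    m_p -- average seeds per perennial individual
    p   -- adult survival of the perennial
    """
    lambda_a = c * m_a       # annual growth rate
    lambda_p = c * m_p + p   # perennial growth rate: seed output plus adult survival
    if lambda_a > lambda_p:  # equivalent to m_a > m_p + p / c
        return "annual habit favored"
    if lambda_a < lambda_p:
        return "perennial habit favored"
    return "equal fitness"

# A disturbed habitat with low adult survival favors the annual habit;
# raising adult survival (a stable, canopied habitat) flips the outcome.
print(habit_fitness(c=0.5, m_a=3.0, m_p=2.5, p=0.1))  # annual habit favored
print(habit_fitness(c=0.5, m_a=3.0, m_p=2.5, p=0.9))  # perennial habit favored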
Evolution rate The annual vs. perennial trait has been empirically associated with differing rates of molecular evolution within multiple plant lineages. The perennial habit is generally associated with a slower rate of evolution than the annual habit in both non-coding and coding DNA. Generation time is often implicated as one of the major factors contributing to this disparity, with perennials having longer generation times and likewise an overall slower mutation and adaptation rate. This may result in higher genetic diversity in annual lineages. Many plant taxon groups have evolved both annual and perennial life forms. Artificial selection Artificial selection seems to have favored the annual habit, at least in the case of herbaceous species, likely due to their fast generation times and therefore quick response to domestication and improvement efforts. However, woody perennials also exemplify a major group of crops, especially fruit trees and nuts. High-yield herbaceous perennial grain or seed crops, by contrast, are virtually nonexistent, despite potential agronomic benefits, although several common herbaceous perennial fruits, herbs, and vegetables exist; see perennial plants for a list. Annual and perennial species are known to respond to selection in different ways. For instance, annual domesticates tend to experience more severe genetic bottlenecks than perennial species, which, at least when clonally propagated, are more prone to perpetuating somatic mutations. Cultivated woody perennials are also known for their longer generation times, outcrossing with wild species (introducing new genetic variation), and variety of geographic origins. Some woody perennials (e.g., grapes or fruit trees) also have a secondary source of genetic variation within their rootstock (the base to which the above-ground portion, the scion, is grafted). Current agricultural applications Compared to annual monocultures (which occupy c. 2/3 of the world's agricultural land), perennial crops provide protection against soil erosion, better conserve water and nutrients, and have a longer growing season. Wild perennial species are often more resistant to pests than annual cultivars, and many perennial crop wild relatives have already been hybridized with annual crops to confer this resistance. Perennial species also typically store more atmospheric carbon than annual crops, which can help to mitigate climate change. Unfavorable characteristics of such herbaceous perennials include energetically unfavorable trade-offs and long periods of juvenile non-productivity. Some institutions, such as The Land Institute, have begun to develop perennial grains, such as Kernza (a perennial grain from intermediate wheatgrass), as potential crops. Some traits underlying perenniality may involve relatively simple networks of traits, which can be conferred through hybrid crosses, as in the case of perennial wheat crossed with annual wheat. See also Semelparity and iteroparity Annual plant Perennial plant Biennial plant Life history theory Perennial grain The Land Institute Plant evolution Plant strategies References Botany
Annual vs. perennial plant evolution
Biology
2,836
54,638,711
https://en.wikipedia.org/wiki/DDIT4L
DNA-damage-inducible transcript 4 like (DDIT4L), also known as regulated in development and DNA damage response 2 (REDD2), is a protein that in humans is encoded by the DDIT4L gene. The gene is located on chromosome 4 in humans and chromosome 3 in mice. Function DDIT4L is a negative regulator of mTOR. DDIT4L is a stress-responsive protein: its expression increases under hypoxic conditions, and it causes or sensitizes cells to cell death through regulation of mTOR activity and reduction of thioredoxin-1. Cardiomyocytes show increased expression of DDIT4L under pathological stress, which promotes autophagy through the inhibition of mTORC1, but not mTORC2. Role in disease In fibrosis, the nuclear long noncoding RNA (lncRNA) H19X represses DDIT4L gene expression by interacting specifically with a region upstream of the DDIT4L gene, increasing collagen expression and fibrosis. Expression of DDIT4L is increased in pathological cardiac hypertrophy but not in physiological cardiac hypertrophy. Mice with increased DDIT4L expression had mild systolic dysfunction, increased baseline autophagy, reduced mTORC1 activity, and increased mTORC2 activity. See also DDIT4/REDD1 mTOR mTORC1 mTORC2 References Proteins
DDIT4L
Chemistry
291
3,251,249
https://en.wikipedia.org/wiki/Thioacetal
In organosulfur chemistry, thioacetals are the sulfur (thio-) analogues of acetals (RCH(OR')2). There are two classes: the less-common monothioacetals, with the formula RCH(OR')(SR''), and the dithioacetals, with the formula RCH(SR')2 (symmetric dithioacetals) or RCH(SR')(SR'') (asymmetric dithioacetals). The symmetric dithioacetals are relatively common. They are prepared by condensation of thiols (RSH) or dithiols (compounds with two SH groups) with aldehydes (R'CHO). These reactions proceed via the intermediacy of hemithioacetals (R'CH(SR)OH): Thiol addition to give a hemithioacetal: RSH + R'CHO → R'CH(SR)OH Thiol addition with loss of water to give a dithioacetal: RSH + R'CH(OH)SR → R'CH(SR)2 + H2O Such reactions typically employ either a Lewis acid or a Brønsted acid as catalyst. Dithioacetals generated from aldehydes and either 1,2-ethanedithiol or 1,3-propanedithiol are especially common among this class of molecules for use in organic synthesis. The carbonyl carbon of an aldehyde is electrophilic and therefore susceptible to attack by nucleophiles, whereas the analogous central carbon of a dithioacetal is not electrophilic. As a result, dithioacetals can serve as protective groups for aldehydes. Far from being unreactive, and in a reaction unlike that of aldehydes, that carbon can be deprotonated to render it nucleophilic: R'CHS2C2H4 + R2NLi → R'CLiS2C2H4 + R2NH The inversion of polarity at this carbon is referred to as umpolung. The reaction is commonly performed using the 1,3-dithiane. The lithiated intermediate can be used for various nucleophilic bond-forming reactions, and the dithioketal is then hydrolyzed back to its carbonyl form. This overall process, the Corey–Seebach reaction, gives the synthetic equivalent of an acyl anion (see the sketch below). See also Mozingo reduction Thioketal References Functional groups Organosulfur compounds
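As an illustration of the Corey–Seebach sequence described above, the net transformation (aldehyde to dithiane to alkylated dithiane to ketone) can be tracked as SMILES structures. This is a bookkeeping sketch only; the substrates (benzaldehyde, then methylation) are illustrative choices, and RDKit is used merely to validate and canonicalize the structures:

from rdkit import Chem

# Each stage of a hypothetical Corey-Seebach sequence as a SMILES string.
steps = [
    ("benzaldehyde (R'CHO)",              "O=Cc1ccccc1"),
    ("1,3-propanedithiol",                "SCCCS"),
    ("2-phenyl-1,3-dithiane (protected)", "C1CSC(c2ccccc2)SC1"),
    ("2-methyl-2-phenyl-1,3-dithiane",    "CC1(c2ccccc2)SCCCS1"),  # after deprotonation, then MeI
    ("acetophenone (after hydrolysis)",   "CC(=O)c1ccccc1"),
]
for name, smiles in steps:
    mol = Chem.MolFromSmiles(smiles)
    assert mol is not None, f"invalid SMILES for {name}"
    print(f"{name:38s} {Chem.MolToSmiles(mol)}")

The net result, an aldehyde converted into a ketone, is exactly the acyl-anion-equivalent behaviour that the dithioacetal carbon provides.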
Thioacetal
Chemistry
510
4,151,504
https://en.wikipedia.org/wiki/Polytropic%20process
A polytropic process is a thermodynamic process that obeys the relation pV^n = C, where p is the pressure, V is the volume, n is the polytropic index, and C is a constant. The polytropic process equation describes expansion and compression processes which include heat transfer. Particular cases Some specific values of n correspond to particular cases: n = 0 for an isobaric process, n → ±∞ for an isochoric process. In addition, when the ideal gas law applies: n = 1 for an isothermal process, n = γ for an isentropic process. Here γ is the ratio of the heat capacity at constant pressure (cP) to the heat capacity at constant volume (cV). Equivalence between the polytropic coefficient and the ratio of energy transfers For an ideal gas in a closed system undergoing a slow process with negligible changes in kinetic and potential energy, the process is polytropic, such that pV^n = C, where C is a constant, K = δq/δw is the ratio of heat transfer to work transfer, γ = cP/cV, and the polytropic coefficient is n = γ − K(γ − 1). (See the numerical sketch below.) Relationship to ideal processes For certain values of the polytropic index, the process will be synonymous with other common processes, as in the particular cases listed above. When the index n lies between any two of the former values (0, 1, γ, or ∞), the polytropic curve will cut through (be bounded by) the curves of the two bounding indices. For an ideal gas, 1 < γ ≤ 5/3, since by Mayer's relation cP − cV = R, so that γ = 1 + R/cV > 1; for a monatomic ideal gas cV = (3/2)R, giving the maximum γ = 5/3. Other A solution to the Lane–Emden equation using a polytropic fluid is known as a polytrope. See also Adiabatic process Compressor Internal combustion engine Isentropic process Isobaric process Isochoric process Isothermal process Polytrope Quasistatic equilibrium Thermodynamics Vapor-compression refrigeration References Thermodynamic processes
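The numerical sketch referenced above: the boundary work of a quasi-static polytropic process follows from integrating p dV under pV^n = C, giving W = (p2V2 − p1V1)/(1 − n) for n ≠ 1 and the isothermal logarithm for n = 1. A minimal Python example; the state values are illustrative assumptions:

import math

def polytropic_work(p1, v1, v2, n):
    """Boundary work W = integral of p dV along p * V**n = C,
    from V1 to V2 (joules for SI inputs). n == 1 is the
    isothermal limit W = p1*V1*ln(V2/V1)."""
    if math.isclose(n, 1.0):
        return p1 * v1 * math.log(v2 / v1)
    p2 = p1 * (v1 / v2) ** n   # from p1*V1**n = p2*V2**n
    return (p2 * v2 - p1 * v1) / (1.0 - n)

# Compressing 0.1 m^3 of gas at 100 kPa to half its volume:
for n in (0.0, 1.0, 1.4):  # isobaric, isothermal, near-adiabatic air
    w = polytropic_work(1e5, 0.1, 0.05, n)
    print(f"n = {n}: W = {w:,.0f} J")
# Work done by the gas is negative during compression and grows in
# magnitude with n, consistent with the bounding-curve argument above.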
Polytropic process
Physics,Chemistry
374
2,209,960
https://en.wikipedia.org/wiki/Saccharomyces%20uvarum
Saccharomyces uvarum is a species of yeast that is commonly found in fermented beverages, particularly those fermented at colder temperatures. It was originally described by Martinus Willem Beijerinck in 1898, but was long considered identical to S. bayanus. In 2000 and 2005, genetic investigations of various Saccharomyces species indicated that S. uvarum is genetically distinct from S. bayanus and should be considered a separate species. It is a bottom-fermenting yeast, so called because it does not form the foam on top of the wort that top-fermenting yeast does. References uvarum Yeasts Yeasts used in brewing Fungus species
Saccharomyces uvarum
Biology
146
11,750,751
https://en.wikipedia.org/wiki/.hack
.hack (pronounced "Dot Hack") is a Japanese multimedia franchise that encompasses two projects: Project .hack and .hack Conglomerate. They were primarily created and developed by CyberConnect2, and published by Bandai Namco Entertainment. The series is set in an alternative history at the turn of the new millennium, in which a new version of the internet rises after a major global computer network disaster in the year 2005, and follows the mysterious events surrounding the wildly popular fictional massively multiplayer online role-playing game The World. Project .hack Project .hack was the first project of the .hack series. It launched in 2002, with the anime series .hack//Sign in April 2002 and the PlayStation 2 game .hack//Infection in June 2002. Project developers included Koichi Mashimo (Bee Train), Kazunori Itō (Catfish) and Yoshiyuki Sadamoto (Gainax). Since then, Project .hack has spanned television, video games, manga and novels. It centers mainly on the events and affairs of the prime installment of The World. The franchise began internationally when Bandai released .hack//Infection in 2003; .hack//Sign received an English dub, broadcast on Cartoon Network the same year. Games .hack, a series of four PlayStation 2 games that follow the story of the .hackers, Kite and BlackRose, and their attempts to find out what caused the sudden coma of Kite's friend, Orca, and BlackRose's brother, Kazu. The volumes included .hack//Infection, .hack//Mutation, .hack//Outbreak and .hack//Quarantine. .hack//frägment, the first .hack massively multiplayer online role-playing game. It was released only in Japan; its online servers ran from November 23, 2005 to January 18, 2007. .hack//Enemy, a collectible card game created by Decipher Inc. based on the .hack series. It won the Origins Award for Best Trading Card Game of 2003 and was discontinued after running five separate expansions between 2003 and 2005. Anime .hack//Sign is an anime television series directed by Kōichi Mashimo and produced by studio Bee Train and Bandai Visual. It consists of twenty-six original episodes and three additional ones, released on DVD as original video animations. The series focuses on a Wavemaster (magic user) named Tsukasa, a player character in the virtual reality game. He wakes up in a dungeon in The World suffering from amnesia, wondering where he is and how he got there. The situation gets worse when he discovers he cannot log out and is trapped in the game. Tsukasa embarks with other players on a quest to find the truth behind the abnormal situation. The series is influenced by psychological and sociological subjects, such as anxiety, escapism and interpersonal relationships. The series premiered in Japan on TV Tokyo between April 4, 2002 and September 25, 2002. It was later broadcast across East Asia, Southeast Asia, South Asia, and Latin America by the anime television network Animax; and across the United States, Nigeria, Canada, and the United Kingdom by Cartoon Network, YTV, and AnimeCentral (English and Japanese) respectively. It is distributed across North America by Bandai Entertainment. .hack//Legend of the Twilight is a miniseries adaptation of the manga series written by Tatsuya Hamazaki and drawn by Rei Izumi. The series was directed by Koichi Mashimo and Koji Sawai, and produced by Bee Train.
Set in a fictional MMORPG, The World, the series focuses on twins Rena and Shugo, who receive chibi avatars in the design of the legendary .hackers known as Kite and BlackRose. After Shugo is given the Twilight Bracelet by a mysterious girl, the two embark on a quest to find Aura and unravel the mystery of the Twilight Bracelet. The anime series features many of the same characters as the manga version, but with an alternative storyline. It was localized as .hack//Dusk, among other names, in fan translations prior to the official English release. .hack//Liminality is a set of four DVD OVAs included with the .hack video game series for the PlayStation 2. Liminality is focused on the real world, as opposed to the games' MMORPG The World. Separated into four volumes, each volume was released with its corresponding game. The initial episode is 45 minutes long and each subsequent episode is 30 minutes long. The video series was directed by Koichi Mashimo and written by Kazunori Itō, with music by Yuki Kajiura. Primary animation production was handled by Mashimo's studio Bee Train, which collaborated on the four games as well as handling major production on .hack//Sign. Liminality follows the story of Mai Minase, Yuki Aihara, Kyoko Tohno, and ex-CyberConnect employee Junichiro Tokuoka as they attempt to find out why players are falling into comas when playing in The World. .hack//Gift, a self-deprecating, tongue-in-cheek OVA that was created as a "gift" for those who had bought and completed all four .hack video games. It was released under Project .hack. In Japan, it was available when the Data Flag on the memory card file in .hack//Quarantine was present, whereas the American version included Gift on the fourth Liminality DVD. It is predominantly a comedy that makes fun of everything that developed throughout the series, even the franchise's own shortcomings. Character designs are deliberately simplistic. Novels .hack//AI buster, a novel released under Project .hack in 2002. It tells the story of Albireo and a prototype of the ultimate AI, Lycoris, and of how Orca and Balmung defeated "The One Sin" and became the Descendants of Fianna. .hack//AI buster 2, a collection of stories released under Project .hack. It involves the characters of AI Buster and Legend of the Twilight Bracelet: ".hack//2nd Character", ".hack//Wotan's Spear", ".hack//Kamui", ".hack//Rumor" and ".hack//Firefly". "Rumor" was previously released with the Rena Special Pack in Japan. .hack//Another Birth, a novel series released under Project .hack. It retells the story of the .hack video games from BlackRose's point of view. .hack//Zero, a novel series released under Project .hack. It tells the story of a Long Arm named Carl, of what happened to Sora after he was trapped in The World by Morganna, and of Tsukasa's real life after being able to log out from The World. .hack//Epitaph of Twilight, a novel series telling the story of Harald Hoerwick's niece, Lara Hoerwick, who finds herself trapped in an early version of The World. Manga .hack//Legend of the Twilight, a manga series released under Project .hack. It tells the story of two player characters, Shugo and Rena, as they win a mysterious contest that earns them chibi character models of the legendary .hackers Kite and BlackRose. .hack Conglomerate .hack Conglomerate is the current .hack project by CyberConnect2 and various other companies, and the successor to Project .hack.
The companies include Victor Entertainment, Nippon Cultural Broadcasting, Bandai, TV Tokyo, Bee Train, and Kadokawa Shoten. It encompasses a series of three PlayStation 2 games called .hack//G.U., an anime series called .hack//Roots, prose, and manga. .hack Conglomerate focuses on times and installments after the original The World MMORPG. Games .hack//G.U. is a series of three video games (Vol. 1 Rebirth, Vol. 2 Reminisce, and Vol. 3 Redemption) released for the .hack Conglomerate project. Taking place in the installment of The World R:2 in the year 2017, the series focuses on the player Haseo's search for a cure after his friend is attacked by a player known as Tri-edge, which leads to his eventual involvement with Project G.U. and the mysterious anomalies called AIDA that plague The World R:2. A remastered collection, including all three previous volumes and a new fourth volume called Reconnection, was released on November 3, 2017 for the PlayStation 4 and PC. .hack//Link, a PSP game released under the .hack Conglomerate project. It was claimed to be the last game in the series; the game centers on a youth named Tokio in the year 2020, who is given a free copy of The World R:X by the popular but mysterious new classmate Saika Amagi. It features characters from the .hack and .hack//G.U. video games as non-playable characters. .hack//Versus, a PS3 game released under the .hack Conglomerate project. The game is the first .hack fighting game and is bundled with the film .hack//The Movie. .hack//Guilty Dragon, a card-based mobile game for Android and iOS, exclusive to Japan. Its services ran from October 15, 2012 to March 23, 2016. .hack//G.U. The Card Battle is a trading card game similar to .hack//Enemy released under the .hack Conglomerate project. Unlike .hack//Enemy, the game was made by the original creators of .hack//G.U. There are two sets of rules, one based on the minigame Crimson VS from the G.U. series, and one specifically designed for the trading card game. New World Vol. 1: Maiden of Silver Tears, a Japan-exclusive Android and iOS game released under the .hack Conglomerate project; it served as a reboot of the franchise. Services ran from January 8, 2016 to December 20, 2016. Anime .hack//Roots is an anime series released under the .hack Conglomerate project. It follows Haseo and his joining (and subsequent exploits with) the Twilight Brigade guild. It also shows his rise to power and how he becomes known as "The Terror of Death". Towards the end of the series, the events of .hack//G.U. begin. This series is the last in the .hack anime series to be licensed by Bandai Entertainment. .hack//G.U. Trilogy, a CGI movie adaptation of the .hack//G.U. video games released under the .hack Conglomerate project. .hack//G.U. Returner, a short follow-up OVA and the conclusion to .hack//Roots, released under the .hack Conglomerate project. It tells the story of the characters of .hack//G.U. in one last adventure. .hack//Quantum, a three-part OVA series from Kinema Citrus and the first in the .hack anime series to be licensed by Funimation. .hack//The Movie, a CGI movie, announced on August 23, 2011. On January 21, 2012, it was released in theaters throughout Japan. The movie takes place in the year 2024, where a reboot of The World under the name FORCE:ERA is released to a new generation of players.
Thanatos Report, an OVA included in .hack//Versus, unlocked after finishing Story Mode. Novels .hack//Cell, a novel series released under the .hack Conglomerate project, written by Ryo Suzukaze. .hack//Cell takes place at the same time as .hack//Roots. The story covers the happenings that Midori and Adamas witness and experience in The World R:2, an extremely popular MMORPG that is a new version of the original game, The World. Midori meets numerous characters from .hack//Roots (most notably Haseo) and .hack//G.U. (such as Silabus and Gaspard). The main plot centers on Midori selling herself out to would-be PKers, and on real-world events surrounding the girl who also bears the name Midori (Midori Shimomura), who is in a coma. It is later revealed that Midori is a sentient PC, a result of the "virtual cell" that was taken from Midori Shimomura's blood. After Midori Shimomura awakens from her coma, she enters The World R:2 with a PC identical to Midori. Tokyopop obtained the rights to .hack//Cell and released it on March 2, 2010. .hack//G.U., a novel series adaptation of the three .hack//G.U. video games released under the .hack Conglomerate project. .hack//bullet, a web novel that follows Flugel after the events of .hack//Link. Manga .hack//4koma, a yonkoma manga series consisting mostly of gags and parodies centring on the main characters of the original .hack video game series and the .hack//G.U. video game series. .hack//Alcor, a manga series released under the .hack Conglomerate project. It focuses on a girl called Nanase, who appears to be quite fond of Silabus, as well as Alkaid during her days as empress of the Demon Palace. .hack//GnU, a humorous manga series released under the .hack Conglomerate project. It revolves around a male Blade Brandier called Raid and the seventh division of the Moon Tree guild. .hack//G.U.+, a manga adaptation series loosely based on the three .hack//G.U. video games, released under the .hack Conglomerate project. .hack//XXXX (read as "X-Fourth"), a manga adaptation series released under the .hack Conglomerate project. The manga is loosely based on the four original .hack video games. .hack//Link, a manga series released under the .hack Conglomerate project. It occurs three years after the end of .hack//G.U. in a new version of The World called The World R:X. It focuses on a player named Tokio and a mysterious exchange student named Saika. Other appearances A few characters from the franchise appear in the Nintendo 3DS games Project X Zone and Project X Zone 2. References External links .hack// - Official (Worldwide) .hack// - Official Project .hack// - Official .hack// Conglomerate - Official .hack//Trilogy - Official Bandai Namco Entertainment franchises Hack Fictional computers Massively multiplayer online role-playing games in fiction Fiction about artificial intelligence Fiction about sentient objects 2001 introductions
.hack
Technology
3,139
58,398,796
https://en.wikipedia.org/wiki/Aspergillus%20eucalypticola
Aspergillus eucalypticola is a species of fungus in the genus Aspergillus. It belongs to the black Aspergilli (section Nigri), a group of important industrial workhorses. The species was first described in 2011. A. eucalypticola has been isolated from eucalyptus leaves in Australia, and has been shown to produce pyranonigrin A, funalenone, aurasperone B and other naphtho-γ-pyrones. The genome of A. eucalypticola was sequenced and published in 2014 as part of the Aspergillus whole-genome sequencing project – a project dedicated to performing whole-genome sequencing of all members of the Aspergillus genus. The genome assembly size was 34.79 Mbp. Growth and morphology Aspergillus eucalypticola has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References eucalypticola Fungi described in 2011 Fungus species
Aspergillus eucalypticola
Biology
249
58,692,246
https://en.wikipedia.org/wiki/Aspergillus%20halophilicus
Aspergillus halophilicus is a species of fungus in the genus Aspergillus. It is from the Restricti section. The species was first described in 1959. It has been isolated from dried corn in the United States and a textile in the Netherlands. It has been reported to produce chaetoviridin A, deoxybrevianamid E, pseurotin A, pseurotin D, rugulusovin, stachybotryamide, and tryprostatin B. Growth and morphology A. halophilicus has been cultivated on both Czapek yeast extract agar (CYA) plates and yeast extract sucrose agar (YES) plates. The growth morphology of the colonies can be seen in the pictures below. References halophilicus Fungi described in 1959 Fungus species
Aspergillus halophilicus
Biology
175
61,814,755
https://en.wikipedia.org/wiki/Elizabeth%20Donnelly%20%28engineer%29
Elizabeth Donnelly is a British engineer and executive. She is currently the chief executive officer of the Women's Engineering Society in the United Kingdom. Donnelly was appointed CEO on 23 August 2018. A systems engineer by education, Donnelly has worked with companies such as Rolls-Royce. She was a founding member of the Royal Aeronautical Society's (RAeS) Women in Aviation and Aerospace Committee. Career Donnelly began her career in systems engineering, studying Databases and Systems Thinking: Managing Complexity at the Open University before specializing in systems thinking and graduating with a master's degree in Systems Thinking in Practice. In 2005, she started work with Rolls-Royce as an adviser on lobbying governments to support trade unions. In 2008, Donnelly became a non-executive director of the East Midlands Development Agency. She worked as Head of Skills at ADS Group Ltd, the trade organization for aerospace, defence and security in the United Kingdom, where she led skills policy. In 2013, Donnelly set up her own company, Pereloquens Ltd. In August 2018, Donnelly was appointed Chief Executive Officer of the Women's Engineering Society (WES), replacing Kirsten Bodley. References 1968 births Living people Systems engineers Women systems engineers British chief executives British women chief executives British company founders British women company founders Presidents of the Women's Engineering Society Rolls-Royce people Alumni of the Open University
Elizabeth Donnelly (engineer)
Engineering
262
1,677,957
https://en.wikipedia.org/wiki/Flexural%20rigidity
Flexural rigidity is defined as the force couple required to bend a fixed non-rigid structure by one unit of curvature, or as the resistance offered by a structure while undergoing bending. Flexural rigidity of a beam Although the moment and displacement generally result from external loads and may vary along the length of the beam or rod, the flexural rigidity (defined as $EI$) is a property of the beam itself and is generally constant for prismatic members. However, in cases of non-prismatic members, such as tapered beams or columns or notched stair stringers, the flexural rigidity will vary along the length of the beam as well. The flexural rigidity, moment, and transverse displacement are related by the following equation along the length of the rod, $x$: $EI\,\frac{\mathrm{d}^2 w(x)}{\mathrm{d}x^2} = M(x)$, where $E$ is the flexural modulus (in Pa), $I$ is the second moment of area (in m⁴), $w(x)$ is the transverse displacement of the beam at $x$, and $M(x)$ is the bending moment at $x$. The flexural rigidity (stiffness) of the beam is therefore related to both $E$, a material property, and $I$, the physical geometry of the beam. If the material exhibits isotropic behavior then the flexural modulus is equal to the modulus of elasticity (Young's modulus). Flexural rigidity has SI units of Pa·m⁴ (which also equals N·m²). Flexural rigidity of a plate (e.g. the lithosphere) In the study of geology, lithospheric flexure affects the thin lithospheric plates covering the surface of the Earth when a load or force is applied to them. On a geological timescale, the lithosphere behaves elastically (in first approach) and can therefore bend under loading by mountain chains, volcanoes and other heavy objects. Isostatic depression caused by the weight of ice sheets during the last glacial period is an example of the effects of such loading. The flexure of the plate depends on: The plate elastic thickness (usually referred to as effective elastic thickness of the lithosphere). The elastic properties of the plate The applied load or force As the flexural rigidity of the plate is determined by the Young's modulus, Poisson's ratio and cube of the plate's elastic thickness, it is a governing factor in both (1) and (2). The flexural rigidity of a plate is $D = \frac{E T_e^3}{12(1-\nu^2)}$, where $E$ is Young's modulus, $T_e$ is the elastic thickness (~5–100 km), and $\nu$ is Poisson's ratio. Flexural rigidity of a plate has units of Pa·m³, i.e. one dimension of length less than the same property for the rod, as it refers to the moment per unit length per unit of curvature, and not the total moment. Here $I$ is termed the moment of inertia (second moment of area) and $J$ denotes the polar moment of inertia. See also Bending stiffness Lithospheric flexure References Solid mechanics
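The two formulas above can be checked numerically. The following is a minimal sketch (not part of the article); the function names and material values are illustrative assumptions — a steel-like beam with a rectangular cross-section and roughly lithospheric plate parameters.

```python
def beam_flexural_rigidity(E, b, h):
    """EI for a rectangular cross-section, where I = b*h^3/12 (m^4)."""
    I = b * h**3 / 12.0
    return E * I  # Pa * m^4 = N*m^2

def plate_flexural_rigidity(E, Te, nu):
    """D = E*Te^3 / (12*(1 - nu^2)) for a thin elastic plate (Pa*m^3 = N*m)."""
    return E * Te**3 / (12.0 * (1.0 - nu**2))

# Illustrative values (assumed, not from the article): a 0.1 m x 0.2 m
# steel beam (E ~ 200 GPa) and a 30 km thick elastic lithosphere.
EI = beam_flexural_rigidity(E=200e9, b=0.1, h=0.2)      # ~1.33e7 N*m^2
D = plate_flexural_rigidity(E=70e9, Te=30e3, nu=0.25)   # ~1.68e23 N*m
print(f"beam EI = {EI:.3e} N*m^2, plate D = {D:.3e} N*m")
```

Note how strongly both quantities scale with the cube of thickness: doubling h (or Te) multiplies the rigidity by eight, which is why the effective elastic thickness dominates lithospheric flexure.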
Flexural rigidity
Physics
620
634,296
https://en.wikipedia.org/wiki/John%20Macadam
The Honorable Dr John Macadam (29 May 1827 – 2 September 1865) was a Scottish-Australian chemist, medical teacher, Australian politician and cabinet minister, and honorary secretary of the Burke and Wills expedition. The genus Macadamia (macadamia nut) was named after him in 1857. He died at sea, on a voyage from Australia to New Zealand, aged 38. Early life John Macadam was born at Northbank, Glasgow, Scotland, on 29 May 1827, the son of William Macadam (1783-1853) and Helen, née Stevenson (1803-1857). His father was a Glasgow businessman, who owned a spinning and textile printing works in Kilmarnock, and was a burgess and a bailie (magistrate) of Glasgow. He and his fellow industrialists in the craft had used chemistry to develop the processes for the large-scale industrial printing of fabrics for which the area's plants became known. John Macadam was privately educated in Glasgow; he studied chemistry at the Andersonian University (now the University of Strathclyde) and went for advanced study at the University of Edinburgh under Professor William Gregory. In 1846–47, he went on to serve as assistant to Professor George Wilson at the University of Edinburgh in his laboratory in Brown Square. He was elected a fellow of the Royal Scottish Society of Arts that year, and in 1848, a member of the Glasgow Philosophical Society. He then studied medicine at the University of Glasgow (LFPS; MD, 1854; FFPSG, 1855). He was a member of what became a small dynasty of Scottish scientists and lecturers in analytical chemistry, which included, other than himself, his eldest half-brother William Macadam and his immediate younger brother Stevenson Macadam (a younger brother, Charles Thomas Macadam, although not a scientist, was also indirectly involved in chemistry, becoming a senior partner in a chemical fertiliser company), as well as nephews William Ivison Macadam and Stevenson J. C. G. Macadam and the former nephew's daughter, his great-niece Elison A. Macadam. On 8 June 1855, aged 28, Macadam sailed for Melbourne in the Colony of Victoria, Australia, on the sailing ship Admiral. He arrived on 8 September 1855. Australian academic career In 1855 he was a lecturer on chemistry and natural science at Scotch College, having been engaged for the position before leaving Scotland. In 1857 he was awarded an MD ad eundem from the University of Melbourne in acknowledgment of his MD from the University of Glasgow. In 1857-1858 he also taught at Geelong Church of England Grammar School (now Geelong Grammar School). In 1858, he was appointed the Victorian government analytical chemist. In 1860 he became health officer to the City of Melbourne. He wrote several reports on public health. On 3 March 1862 he was appointed as the first lecturer in medicine (chemistry and practical chemistry) at the University of Melbourne School of Medicine. For the next few years he held classes for a small number of medical students in the Analytical Laboratory behind the Public Library. He was also a member of the Board of Agriculture. Political life Macadam became a member of the Victorian Legislative Assembly of the self-governing Colony of Victoria as a radical and supporter of the Land Convention, representing Castlemaine. Appointed postmaster-general of Victoria in 1861, Macadam resigned from the legislature in 1864. He had sponsored bills on medical practitioners and adulteration of food which became law in 1862 and 1863. 
Royal Society of Victoria Between 1857 and 1862, Macadam served as honorary secretary of the Philosophical Institute of Victoria, which became the Royal Society of Victoria in 1860, and he was appointed vice-president of the society in 1863. He was editor of the first five volumes of the society's Transactions. He was active in erecting the Society's Meeting Hall (their present building) and was involved in the institute's initiative to obtain a royal charter. He saw both happen while he held office, when in January 1860 the Philosophical Institute became the Royal Society of Victoria and met in their new building. Burke and Wills expedition Between 1857 and 1865, Macadam served as honorary secretary to the Exploration Committee of the Royal Society of Victoria, which organised the Burke and Wills expedition. The expedition was organised by the society with the aim of crossing the continent of Australia from the south coast to the north coast, mapping it, and collecting scientific data and specimens. At that time, most of the interior of Australia had not been explored by the European settlers and was unknown to them. In 1860–61, Robert O'Hara Burke and William John Wills led the expedition of 19 men with that intention, crossing Australia from Melbourne in the south to the Gulf of Carpentaria in the north, a distance of around 2,000 miles. Three men ultimately travelled over 3,000 miles from Melbourne to the shores of the Gulf of Carpentaria and back to the Depot Camp at Cooper Creek. Seven men died in the attempt, including the leaders Burke and Wills. Of the four men who reached the north coast, only one, John King, survived, with the help of the indigenous people, to return to Melbourne. This expedition became the first to cross the Australian continent. It was of great importance to the subsequent development of Australia, comparable to what the Lewis and Clark Expedition overland to the North American Pacific Coast meant for the development of the United States. After the expedition's heavy death toll, initial criticism fell on the Royal Society, but it became clear that no foresight on its part could have prevented the deaths; this was widely recognised once it became known that, as Secretary of the Exploration Committee of the Burke and Wills expedition, Dr Macadam had insisted on adequate provisions for the party's safety. Macadamia The macadamia (genus Macadamia) nut was discovered by the European settlers, and the tree was subsequently named after Macadam by his friend and colleague, Ferdinand von Mueller (1825-1896), Director of the Royal Botanic Gardens, Melbourne; the tree in turn gave its name to macadamia nuts. The genus Macadamia was first described scientifically in 1857 by Dr Mueller, who named the new genus in honour of his friend Dr John Macadam. Mueller had done a great deal of taxonomy of the flora, naming innumerable genera, and described this one as "...a beautiful genus dedicated to John Macadam, M.D. the talented and deserving secretary of our institute." Australian rules football On 7 August 1858, Macadam, along with Tom Wills, officiated at a game of football played between Scotch College and Melbourne Grammar. This game was a predecessor to the modern game of Australian rules football and is commemorated by a statue outside the Melbourne Cricket Ground. The two schools have competed annually ever since, lately for the Cordner–Eggleston Cup. 
Learned societies 1847 fellow of the Royal Scottish Society of Arts 1848 a member of the Glasgow Philosophical Society (now Royal Philosophical Society of Glasgow) 1855 elected Fellow of the Faculty of Physicians and Surgeons, the University of Glasgow 1855 elected member (1857–59 Hon. Sec), the Philosophical Institute of Victoria, later to become the Royal Society of Victoria 1860 vice-president of Royal Society of Victoria Family On 18 September 1856, a year after he arrived from Scotland, he married Elizabeth Clark in Melbourne, Australia. She had arrived three days before the wedding with her maid on the Admiral, the same ship on which he had travelled out a year earlier; it reached Hobson's Bay (Melbourne's port) on 15 September 1856, having set sail from London on 7 June 1856. Elizabeth Clark was probably born on 7 October 1832 in Barony parish, Scotland, near Glasgow (her mother being Mary McGregor). She was the second daughter of John Clark, of Levenfield House in Alexandria, the Vale of Leven, a short distance north of Glasgow in West Dunbartonshire. His Levenfield Works were involved in work similar to that of Dr John Macadam's father, William Macadam, in Kilmarnock, in the then-lucrative business of textile printing for domestic and European markets. The Clarks and Macadams must have become known to each other in Scotland because of their respective fathers' business connections. Elizabeth died in 1915, in Brighton, Victoria. John and Elizabeth had two sons: John Melnotte Macadam was born 29 August 1858 at Fitzroy, Melbourne, Australia, and died on 30 January 1859, aged 5 months (he was reburied with his father, whose monument bears the additional inscription: In memory of his only children John Melnotte Macadam Born August 29, 1858 Died January 30, 1859, followed by an inscription to his second son below it). William Castlemaine Macadam was born on 2 July 1860 and died on 17 December 1865 at Williamstown, Victoria, Australia. He died aged five, having survived his father by a few months. The inscription on his father's burial monument, under "His only children", lists him beneath his elder brother (above), who died in infancy, but for some reason does not give William's date of death. Death In March 1865 Macadam sailed to New Zealand to give evidence at the trial of Captain W. A. Jarvey, accused of fatally poisoning his wife, but the jury did not reach a verdict. During the return voyage, Macadam fractured his ribs in a storm. He was advised, on medical grounds, not to return for the adjourned trial but did so and died on the ship on 2 September 1865. His medical-student assistant John Drummond Kirkland gave evidence at the trial in Macadam's place, and Jarvey was convicted. The Australian News commented, "At the time of his death, Dr Macadam was but 38 years of age; there can be little doubt that the various and onerous duties he discharged for the public must be attributed in great measure the shortening of his days." The Australian Medical Journal stated, "For some time it had been evident to his friends that his general health was giving way: that a frame naturally robust and vigorous was gradually becoming undermined by the incessant and harassing duties of the multifarious offices he filled." The inquest verdict (he died at sea) stated, "His death was caused by excessive debility and general exhaustion." Funeral The funeral was large. 
The newspapers carried tributes, and lengthier obituaries from learned societies were subsequently published, such as that in the Australian Medical Journal. The Melbourne Leader described the funeral: "The coffin was drawn by four horses. Four mourning coaches contained the chief mourners and the more intimate friends of the deceased gentleman. A large procession followed, in which were several members of Parliament, the members of the Royal Society, the Chief Justice; the Mayor and corporation of the city of Melbourne. A number of private carriages and the public wound up the procession....At the University, the chancellor, the vice-chancellor, and a number of the students, all in their academic robes, met the funeral cortege, and proceeded the remainder of the distance". The chief mourner was his youngest brother, George Robert Macadam (1837-1918). John Macadam's grave, surmounted by a marble obelisk, is in Melbourne General Cemetery. Widow remarried After the deaths of John Macadam and her children, his widow, Elizabeth Clark, remarried. She married the Reverend John Dalziel Dickie, who was pastor at Colac for 32 years; they married on 26 February 1868 and had four daughters. Elizabeth Dickie died aged 82 in 1915, in Brighton, Victoria, as the widow of the Rev. Dickie. Dickie had died on 25 December 1909. References External links Macadam, John (1827-1865) – entry in the Trove database of the National Library of Australia Macadam, John (1827–1865) – entry in the Australian Dictionary of Biography Burke & Wills Web – comprehensive website containing many of the historical documents relating to the Burke & Wills expedition The Burke & Wills Historical Society 1827 births 1865 deaths 19th-century Scottish chemists Academic staff of the University of Melbourne Scottish emigrants to Australia Alumni of the University of Edinburgh Alumni of the University of Strathclyde Alumni of the University of Glasgow Scientists from Glasgow Analytical chemists Members of the Victorian Legislative Assembly Burials at Melbourne General Cemetery Postmasters-general of Victoria
John Macadam
Chemistry
2,551
63,498,200
https://en.wikipedia.org/wiki/List%20of%20World%27s%20Fair%20architecture
This is a list of buildings and structures built for World's Fairs. Officially recognized exhibitions Architecture built for world's fairs recognized by the Bureau International des Expositions. London Great Exhibition 1851 The Crystal Palace Paris Exposition Universelle 1855 Palais de l'Industrie Théâtre du Rond-Point London International Exhibition 1862 The Exhibition Building of 1862 Paris Exposition Universelle 1867 Palais du Champ de Mars Vienna World's Fair 1873 Rotunda Philadelphia Centennial Exposition 1876 Main Exhibition Building Paris Exposition Universelle 1878 Palais du Trocadéro Melbourne International Exhibition 1880 Royal Exhibition Building Barcelona Universal Exposition 1888 Parc de la Ciutadella Paris Exposition Universelle 1889 Eiffel Tower Galerie des machines Chicago World's Columbian Exposition 1893 White City Brussels International Exposition 1897 Palace of the Colonies Paris Exposition Universelle 1900 Grand Palais Petit Palais Liège International 1905 Palais des beaux-arts de Liège San Francisco Panama–Pacific International Exposition 1915 Tower of Jewels Palace of Fine Arts Barcelona International Exposition 1929 Palau Nacional Barcelona Pavilion Estadi Olímpic Lluís Companys Poble Espanyol Teatre Grec Magic Fountain of Montjuïc Paris Exposition Internationale des Arts et Techniques dans la Vie Moderne 1937 Palais de Chaillot New York World's Fair 1939 1939 New York World's Fair pavilions and attractions Trylon and Perisphere Brussels Expo 58 Atomium Philips Pavilion Seattle Century 21 Exposition 1962 Seattle Center Seattle Center Monorail Space Needle Montreal Expo 67 Expo 67 pavilions Habitat 67 San Antonio HemisFair '68 Tower of the Americas Others Porto International Exhibition 1867 Palácio de Cristal Sydney International Exhibition 1879 Garden Palace Adelaide Jubilee International Exhibition 1887 Jubilee Exhibition Building Hanoi Exhibition 1902 Grand Palais Colonial Exhibition of Semarang 1914 Aceh Museum Paris Colonial Exposition 1931 Palais de la Porte Dorée Pagode de Vincennes Glasgow Empire Exhibition 1938 Tait Tower New York World's Fair 1964 1964 New York World's Fair pavilions New York City Pavilion New York Hall of Science New York State Pavilion Terrace on the Park Wisconsin Pavilion Unisphere References Architecture World's Fair Architecture
List of World's Fair architecture
Engineering
415
72,152,899
https://en.wikipedia.org/wiki/WR%20119
WR 119 is a Wolf–Rayet star located about 10,500 light years away in the constellation Scutum. WR 119 is classified as a WC9 star, belonging to the late-type carbon sequence of Wolf–Rayet stars. WR 119 is noteworthy for being the least luminous known Wolf–Rayet star, at just over . The most recent estimate is even lower, at just , based on the most recent analysis using Gaia DR2 data. Properties WR 119's properties are on the very edge of what may be possible for Wolf–Rayet stars, as it is so extremely dim. Modelling its spectrum using PoWR gives a temperature of . Factoring in the distance used in that study of , WR 119's luminosity is only , derived from Gaia DR2's parallax data. The corresponding radius is only , the smallest of the WC9 stars, less than half the size of the average WC9 star. WR 119's luminosity is also just 20% of the average WC9 star's luminosity. The corresponding mass is just , the lowest mass for any Wolf–Rayet star derived using a mass–luminosity relation. At visual wavelengths, the star is also the dimmest of the WC9 stars (and of anything later than WC4 in the study), with a visual luminosity of just 3,130 L☉, because most of the output is emitted at ultraviolet wavelengths due to WR 119's very high surface temperature. WR 119 has a strong stellar wind, typical of Wolf–Rayet stars, but weaker than those of most WC stars. WR 119 loses 10^−5.13 M☉ per year to this stellar wind, which has a terminal velocity of 1,300 kilometres per second. WR 119 also emits a lot of dust, hence the "d" at the end of its spectral type, which may be an indication of binary status. References Wolf–Rayet stars Scutum (constellation)
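The article's temperature, luminosity, and radius figures are tied together by the Stefan–Boltzmann law, L = 4πR²σT⁴. The sketch below is illustrative only: the input values are placeholder assumptions for a compact, hot WC9-like star, since the article's own numeric values were lost in extraction.

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # nominal solar luminosity, W
R_SUN = 6.957e8          # nominal solar radius, m

def luminosity_lsun(radius_rsun, teff_k):
    """Stefan-Boltzmann law, L = 4*pi*R^2*sigma*T^4, in solar units."""
    r = radius_rsun * R_SUN
    return 4.0 * math.pi * r**2 * SIGMA * teff_k**4 / L_SUN

# Placeholder inputs (assumed, not the article's figures):
# R = 0.5 R_sun and Teff = 45,000 K give L on the order of 10^3 L_sun.
print(f"L ~ {luminosity_lsun(0.5, 45_000):.0f} L_sun")
```

A star this small can still be luminous because luminosity scales with the fourth power of temperature, which is also why most of the output emerges in the ultraviolet.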
WR 119
Astronomy
411
22,458,313
https://en.wikipedia.org/wiki/Philosophy%20of%20computer%20science
The philosophy of computer science is concerned with the philosophical questions that arise within the study of computer science. There is still no common understanding of the content, aims, focus, or topics of the philosophy of computer science, despite some attempts to develop a philosophy of computer science like the philosophy of physics or the philosophy of mathematics. Due to the abstract nature of computer programs and the technological ambitions of computer science, many of the conceptual questions of the philosophy of computer science are also comparable to the philosophy of science, philosophy of mathematics, and the philosophy of technology. Overview Many of the central philosophical questions of computer science are centered on the logical, ethical, methodological, ontological and epistemological issues that concern it. Some of these questions may include: What is computation? Does the Church–Turing thesis capture the mathematical notion of an effective method in logic and mathematics? What are the philosophical consequences of the P vs NP problem? What is information? Church–Turing thesis The Church–Turing thesis and its variations are central to the theory of computation. Since, as an informal notion, the concept of effective calculability does not have a formal definition, the thesis, although it has near-universal acceptance, cannot be formally proven. The implications of this thesis are also of philosophical concern. Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. P versus NP problem The P versus NP problem is an unsolved problem in computer science and mathematics. It asks whether every problem whose solution can be verified in polynomial time (and so defined to belong to the class NP) can also be solved in polynomial time (and so defined to belong to the class P); a concrete sketch of this verify-versus-solve asymmetry is given after the reference list below. Most computer scientists believe that P ≠ NP. Apart from the fact that, after decades of studying these problems, no one has been able to find a polynomial-time algorithm for any of the more than 3000 important known NP-complete problems, philosophical reasons concerning its implications may have motivated this belief. For instance, according to Scott Aaronson, the American computer scientist then at MIT: If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps", no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss. See also Computer-assisted proof: Philosophical objections Philosophy of artificial intelligence Philosophy of information Philosophy of mathematics Philosophy of science Philosophy of technology References Further reading Matti Tedre (2014). The Science of Computing: Shaping a Discipline. Chapman Hall. Scott Aaronson. "Why Philosophers Should Care About Computational Complexity". In Computability: Gödel, Turing, Church, and beyond. Timothy Colburn. Philosophy and Computer Science. Explorations in Philosophy. M.E. Sharpe, 1999. A.K. Dewdney. New Turing Omnibus: 66 Excursions in Computer Science Luciano Floridi (editor). The Blackwell Guide to the Philosophy of Computing and Information, 2004. Luciano Floridi (editor). Philosophy of Computing and Information: 5 Questions. Automatic Press, 2008. Luciano Floridi. Philosophy and Computing: An Introduction, Routledge, 1999. Christian Jongeneel. 
The informatical worldview, an inquiry into the methodology of computer science. Jan van Leeuwen. "Towards a philosophy of the information and computing sciences", NIAS Newsletter 42, 2009. Moschovakis, Y. (2001). What is an algorithm? In Enquist, B. and Schmid, W., editors, Mathematics unlimited — 2001 and beyond, pages 919–936. Springer. Alexander Ollongren, Jaap van den Herik. Filosofie van de informatica. London and New York: Routledge, 1999. Taylor and Francis. Ray Turner and Nicola Angius. "The Philosophy of Computer Science". Stanford Encyclopedia of Philosophy. Matti Tedre (2011). Computing as a Science: A Survey of Competing Viewpoints. Minds & Machines 21, 3, 361–387. Ray Turner. Computational Artefacts-Towards a Philosophy of Computer Science. Springer. External links The International Association for Computing and Philosophy Philosophy of Computing and Information at PhilPapers Philosophy of Computation at Berkeley
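To make the P versus NP section's verify-versus-solve asymmetry concrete, consider subset sum, a classic NP-complete problem: a proposed certificate can be checked in polynomial time, while the obvious search over subsets is exponential. A minimal sketch (illustrative only, not from the article):

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Check a claimed solution (indices of a subset) in polynomial time."""
    return sum(nums[i] for i in certificate) == target

def solve(nums, target):
    """Brute-force search over all subsets: exponential in len(nums)."""
    for r in range(len(nums) + 1):
        for subset in combinations(range(len(nums)), r):
            if verify(nums, target, subset):
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))           # (2, 4): nums[2] + nums[4] = 4 + 5 = 9
print(verify(nums, 9, (2, 4)))  # True, checked in linear time
```

P = NP would mean that, for problems like this, something as fast as verify always exists in place of the exponential solve.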
Philosophy of computer science
Mathematics,Technology
888
41,940,845
https://en.wikipedia.org/wiki/HD%20220773
HD 220773 is a star in the northern constellation of Pegasus. It has an apparent visual magnitude of 7.10, which is too faint to be visible with the naked eye. The distance to this system, as determined by parallax measurements, is 165 light years, but it is drifting closer with a radial velocity of −37.7 km/s. The star shows a high proper motion, traversing the celestial sphere at an angular rate of . The spectrum of HD 220773 presents as that of a late F-type or early G-type main-sequence star, with a stellar classification of F9 V or G0 V, respectively. It is older than the Sun, with an estimated age of 6.3 billion years, and the magnetic activity in the chromosphere is at a low level. The star has 15% greater mass than the Sun, but its radius is 73% larger. The abundance of iron, a measure of the star's metallicity, is slightly higher than solar. It is radiating over three times the luminosity of the Sun from its photosphere at an effective temperature of 5,852 K. A survey in 2015 ruled out the existence of any additional stellar companions at projected distances from 31 to 337 astronomical units. Search for planets The detection of an exoplanet, HD 220773 b, by the radial velocity method was claimed in 2012 based on observations at the McDonald Observatory. As the inclination of the orbital plane is unknown, only a lower bound on the mass can be determined. This object has at least 1.45 times the mass of Jupiter. It has a very eccentric orbit with a semimajor axis of around , taking 10.2 years to complete an orbit. However, a follow-up study in 2024 found no evidence of this planet in radial velocity data from the HARPS-N spectrograph. In addition, astrometric data from the Gaia space telescope also shows no evidence of a companion, placing an upper limit on the mass of any planet at 5 AU consistent with the claimed minimum mass of planet b. The McDonald team stated that their data collected since 2012 is also no longer consistent with the claimed planet. References F-type main-sequence stars G-type main-sequence stars Pegasus (constellation) BD+07 5030 220773 115697 J23262744+0838376
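The parallax-to-distance conversion behind the quoted 165 light years is simple: the distance in parsecs is the reciprocal of the parallax in arcseconds. A minimal sketch (the parallax value below is back-computed from the stated distance for illustration, not quoted from the article):

```python
LY_PER_PC = 3.26156  # light years per parsec

def distance_ly(parallax_mas):
    """d [pc] = 1000 / parallax [mas], converted to light years."""
    return (1000.0 / parallax_mas) * LY_PER_PC

# ~19.8 mas corresponds to roughly 165 ly (assumed input for illustration).
print(f"{distance_ly(19.8):.0f} ly")
```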
HD 220773
Astronomy
494
228,668
https://en.wikipedia.org/wiki/Suslin%27s%20problem
In mathematics, Suslin's problem is a question about totally ordered sets posed by Mikhail Yakovlevich Suslin and published posthumously. It has been shown to be independent of the standard axiomatic system of set theory known as ZFC: the statement can neither be proven nor disproven from those axioms, assuming ZF is consistent. (Suslin is also sometimes written with the French transliteration as Souslin, from the Cyrillic Суслин.) Formulation Suslin's problem asks: Given a non-empty totally ordered set R with the four properties R does not have a least nor a greatest element; the order on R is dense (between any two distinct elements there is another); the order on R is complete, in the sense that every non-empty bounded subset has a supremum and an infimum; and every collection of mutually disjoint non-empty open intervals in R is countable (this is the countable chain condition for the order topology of R), is R necessarily order-isomorphic to the real line R? If the requirement for the countable chain condition is replaced with the requirement that R contains a countable dense subset (i.e., R is a separable space), then the answer is indeed yes: any such set R is necessarily order-isomorphic to R (proved by Cantor). The condition for a topological space that every collection of non-empty disjoint open sets is at most countable is called the Suslin property. Implications Any totally ordered set that is not isomorphic to R but satisfies properties 1–4 is known as a Suslin line. The Suslin hypothesis says that there are no Suslin lines: that every countable-chain-condition dense complete linear order without endpoints is isomorphic to the real line. An equivalent statement is that every tree of height ω1 either has a branch of length ω1 or an antichain of cardinality ℵ1 (a symbolic restatement is given after the references below). The generalized Suslin hypothesis says that for every infinite regular cardinal κ every tree of height κ either has a branch of length κ or an antichain of cardinality κ. The existence of Suslin lines is equivalent to the existence of Suslin trees and to Suslin algebras. The Suslin hypothesis is independent of ZFC. Tennenbaum and Jech independently used forcing methods to construct models of ZFC in which Suslin lines exist. Jensen later proved that Suslin lines exist if the diamond principle, a consequence of the axiom of constructibility V = L, is assumed. (Jensen's result was a surprise, as it had previously been conjectured that V = L implies that no Suslin lines exist, on the grounds that V = L implies that there are "few" sets.) On the other hand, Solovay and Tennenbaum used forcing to construct a model of ZFC without Suslin lines; more precisely, they showed that Martin's axiom plus the negation of the continuum hypothesis implies the Suslin hypothesis. The Suslin hypothesis is also independent of both the generalized continuum hypothesis (proved by Ronald Jensen) and the negation of the continuum hypothesis. It is not known whether the generalized Suslin hypothesis is consistent with the generalized continuum hypothesis; however, since the combination implies the negation of the square principle at a singular strong limit cardinal—in fact, at all singular cardinals and all regular successor cardinals—it implies that the axiom of determinacy holds in L(R) and is believed to imply the existence of an inner model with a superstrong cardinal. See also List of statements independent of ZFC Continuum hypothesis AD+ Cantor's isomorphism theorem References K. Devlin and H. Johnsbråten, The Souslin Problem, Lecture Notes in Mathematics (405) Springer 1974. 
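The tree formulation of the Suslin hypothesis quoted in the Implications section can be written symbolically. This is only a restatement of that sentence in standard set-theoretic notation, with T ranging over trees and ht denoting tree height:

```latex
\forall T\,\bigl[\operatorname{ht}(T) = \omega_1 \;\Longrightarrow\;
    (\exists B \subseteq T)\,(B \text{ is a branch} \wedge |B| = \aleph_1)
    \;\vee\;
    (\exists A \subseteq T)\,(A \text{ is an antichain} \wedge |A| = \aleph_1)\bigr]
```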
Independence results Order theory
Suslin's problem
Mathematics
764
77,326,503
https://en.wikipedia.org/wiki/Negative%20air%20ions
Negative air ions (NAI) are an important component of air, generally referring to collections of negatively charged single gas molecules or ion clusters in the air. They play an essential role in maintaining the charge balance of the atmosphere. The main components of air are molecular nitrogen and oxygen. Due to the strong electronegativity of oxygen and oxygen-containing molecules, they can easily capture free electrons to form negatively charged air ions, most of which are superoxide radicals ·O2−, so NAI are mainly composed of negative oxygen ions, also called air negative oxygen ions. Research history In 1889, German scientists Elster and Geitel first discovered the existence of negative oxygen ions. At the end of the 19th century, German physicist Philipp Eduard Anton Lenard first explained the effects of negative oxygen ions on the human body in academic research. In 1902, scholars such as Ashkinas and Caspari further confirmed the biological significance of negative oxygen ions. In 1932, the world's first medical negative oxygen ion generator was invented in the United States. In the middle of the 20th century, Professor Albert P. Krueger of the University of California, Berkeley, conducted pioneering research and experiments on the biological effects of ions at the microscopic level. Through a large number of animal and plant experiments, Professor Krueger demonstrated the impact of negative oxygen ions on humans, animals, and plants in terms of endocrine function, internal circulation, and the generation of various enzymes. From the end of the 20th century to the beginning of the 21st century, many experts, scholars, and professional medical institutions applied negative ion (negative oxygen ion) technology to clinical practice. Through various explorations, new ways of treating diseases were opened up. In 2011, the official website of the China Air Negative Ion (Negative Oxygen Ion) and Ozone Research Society was launched. This website is the first negative ion industry website in China, and its purpose is to rapidly promote the orderly development of the air negative ion (negative oxygen ion) industry. In 2020, Tsinghua University developed a medical-grade high-concentration negative oxygen ion generator. It needs only to be sprayed on a room's walls to form a uniform, dense layer of nanoparticles, allowing the indoor walls to release high concentrations of small-particle negative oxygen ions stably over the long term. Generation mechanism Common gases that produce negative air ions include single-component gases such as nitrogen, oxygen, carbon dioxide and water vapor, and multi-component gases obtained by mixing these single-component gases. Various negative air ions are formed by combining active neutral molecules and electrons in the gas through a series of ion-molecule reactions. In the air, due to the presence of many water molecules, the negative air ions formed readily combine with water to form hydrated negative air ions, which are the typical negative air ions, such as O−·(H2O)n, O2−·(H2O)n, O3−·(H2O)n, OH−·(H2O)n, CO3−·(H2O)n, HCO3−·(H2O)n, CO4−·(H2O)n, NO2−·(H2O)n, NO3−·(H2O)n, etc. The ion clusters formed by the combination of small ions and water molecules have a longer survival period due to their large volume and the fact that the charge is protected by water molecules and does not transfer easily. 
This is because the larger a cluster's volume, the less energy it loses in collisions with other molecules, thereby extending the survival time of negative air ions. Generation methods Negative air ions can be produced by two methods: natural or artificial. The methods of producing negative air ions in nature include the waterfall effect, lightning ionization, plant tip discharge, etc. Natural methods can produce a large number of fresh negative air ions. The artificial means of producing negative air ions include corona discharge, water vapour, and other methods. Although artificial methods can produce high levels of negative air ions, there are differences in the types and concentrations of the ions compared with those produced in nature, which means that artificially produced negative air ions may not achieve the excellent environmental health effects of those produced in nature. Improving artificial methods to produce ecological-grade negative ions is therefore necessary. Natural environments Waterfall method: When people are in a water-rich environment such as a waterfall, fountain, or seaside, they usually feel relaxed and relieved of stress, which is related to the abundance of negative air ions in such environments; the waterfall effect is one of the most common natural sources of negative air ions. The mechanism of producing negative air ions by the waterfall method was first discovered by the German scientist Lenard in 1915. The Lenard effect is achieved through two mechanisms: the rupture of the "ring-bag" structure and local protrusion separation. The "ring-bag" structure rupture theory holds that during the collision between water and gas, water droplets form a "U"-shaped intermediate with a "ring-bag" structure when subjected to external impact. The intermediate then breaks apart to form small droplets with negative charges and large droplets with positive charges. The local protrusion separation theory holds that when water droplets collide with each other or are subjected to external forces, the droplets protrude locally and accumulate negative charge. When subjected to shear force, this part forms negative ions with crystal water and is released into the air. Lightning strike method: The atmosphere itself is a huge electric field. Positive and negative charges accumulate above and below the clouds. When the droplets in the clouds continue to accumulate and gradually approach the ground due to gravity, a giant capacitor is formed between the clouds and the ground. When the electric field strength between the two exceeds the dielectric strength of the air, discharge occurs and breaks through the air. During the lightning discharge process, charged particles bombard the surrounding air molecules, ionizing them to generate negative air ions. At the moment of a lightning strike, hundreds of millions of negative air ions are generated. This is why the air feels fresh and clean after rain: not only because the rain increases the humidity of the air but, more importantly, because the concentration of negative ions in the air has increased significantly. Plant tip discharge: The tips of vegetation leaves discharge under the action of the photoelectric effect. As in corona discharge, the needle-like tips continually ionize the surrounding air and release negative air ions. 
In addition, negative air ions can maintain a high concentration for a long time in forests and areas covered by green vegetation because the concentration of oxygen released by vegetation during photosynthesis is much higher than that in cities, and a large amount of water vapor is released through respiration and leaf transpiration. Oxygen and water vapor can produce free electrons under ionization. Due to their strong electronegativity, water molecules and oxygen molecules can easily capture free electrons to form negative air ions. Artificial ionization Corona discharge method: Currently, the most common artificial method is to use corona discharge to produce negative ions. The process connects a high-voltage negative electrode to a thin needle-shaped wire or a conductor with a very small radius of curvature, so that a strong electric field is generated near the electrode, releasing high-speed electrons. These electrons are fast enough to collide with gas molecules and ionize them, producing new free electrons and positive ions. The newly generated free electrons repeat the previous process, colliding and ionizing many times over, so that the tip electrode continuously releases negative air ions. The water vapour method: The water vapour method refers to the use of artificial technology and modern instruments to simulate the waterfall mechanism, using high-speed airflow to collide with water droplets and disperse larger droplets into a large number of microdroplets. As the droplets disperse, the Lenard effect occurs, generating negative ions. Determination method Detection of negative air ions is divided into measurement and identification. NAI measurement can be achieved by measuring the change in atmospheric conductivity when NAI pass through a conductive tube. NAI identification is generally achieved using mass spectrometry, which can effectively identify a variety of negative ions, including O−, O2−, O3−, CO3−, HCO3−, NO3−, etc. 
Application of negative air ions Health Promotion The effects of NAI on human and animal health are mainly concentrated on the cardiovascular and respiratory systems and mental health. The impacts of NAI on the cardiovascular system include improving red blood cell deformability and aerobic metabolism and lowering blood pressure. In terms of mental health, experiments have shown that after exposure to NAI, participants' performance on all test tasks (mirror drawing, rotation tracking, visual reaction time and hearing) was significantly improved, and symptoms of seasonal affective disorder (SAD) were alleviated. The effects of NAI in relieving mood disorder symptoms are similar to those reported in non-drug antidepressant treatment trials, and NAI have also shown effectiveness in treating chronic depression. Environmental Improvement Negative air ions can be effectively used to remove dust and settle harmful pollutants such as particulate matter (PM). In particular, they can significantly degrade indoor pollutants, improve people's indoor living environment, and purify air quality. Some researchers have used a corona negative-ion generator to conduct particle sedimentation experiments involving three steps: charging, migration, and sedimentation. They found that charged PM settles faster under gravity, and thus precipitates sooner, than uncharged PM. 
In addition, experimental studies have shown that negative air ions have a specific degradation effect on chloroform, toluene, and 1,5-hexadiene, producing carbon dioxide and water as final products. See also Air ioniser Ion Negative air ionization therapy References External links Atmosphere Natural environment Biophysics Health care Air pollution Biochemistry
Negative air ions
Physics,Chemistry,Biology
2,119
54,184,905
https://en.wikipedia.org/wiki/Dicirenone
Dicirenone (INN, USAN; developmental code name SC-26304; also known as 7α-carboxyisopropylspirolactone) is a synthetic, steroidal antimineralocorticoid of the spirolactone group which was developed as a diuretic and antihypertensive agent but was never marketed. It was synthesized and assayed in 1974. Similarly to other spirolactones like spironolactone, dicirenone also possesses antiandrogen activity, albeit with relatively reduced affinity. References Abandoned drugs Antimineralocorticoids Carboxylic acids Esters Isopropyl esters Lactones Pregnanes Spiro compounds Spirolactones Steroidal antiandrogens
Dicirenone
Chemistry
167
19,855,393
https://en.wikipedia.org/wiki/AS4
AS4 (Applicability Statement 4) is an open standard for the secure and payload-agnostic exchange of business-to-business documents using Web services. Secure document exchange is governed by aspects of WS-Security, including XML Encryption and XML Digital Signatures. Payload agnosticism refers to the document type (e.g. purchase order, invoice, etc.) not being tied to any defined SOAP action or operation. It is a Conformance Profile of the OASIS ebMS 3.0 specification. AS4 became an OASIS standard in 2013 and an ISO standard in 2020. The majority of the AS4 profiling points constraining the ebMS 3.0 specification are based upon the functional requirements of the AS2 specification. By scaling back ebMS 3.0 using AS2 as a blueprint, AS4 provides an entry-level on-ramp to Web services B2B by simplifying the complexities of Web services. Key technical highlights Support for the SOAP 1.1 and 1.2 enveloping structure Payload agnosticism Support for single or multiple payloads contained either within the SOAP body or as SOAP attachment(s) Support for payload compression Support for message-level security including various combinations of XML Digital Signature and/or XML Encryption Support for X.509 security tokens and username/password tokens Support for non-repudiation of receipt, similar to the Message Disposition Notification (MDN) used by AS2 and specified as an XML schema by the ebXML BPSS group Support for the ebMS 3.0 One-Way/Push message exchange pattern with support for either synchronous or asynchronous responses Support for the ebMS 3.0 One-Way/Pull message exchange pattern, which is beneficial for exchanging documents with non-addressable endpoints See also AS1 AS2 AS3 References External links OASIS ebXML Messaging Services Technical Committee Will AS4 Become the Communications Standard for Cloud Based Integration Services? Computer file formats Technical communication XML-based standards Data interchange standards
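To make the ebMS 3.0 structure underneath AS4 concrete, the sketch below builds a skeletal SOAP 1.2 envelope with an eb:Messaging/eb:UserMessage header. It is a simplified illustration, not a valid AS4 message: the party IDs, service, and action values are invented, and required pieces such as eb:Role elements, WS-Security headers, and MIME packaging of payloads are omitted.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

SOAP = "http://www.w3.org/2003/05/soap-envelope"
EB = "http://docs.oasis-open.org/ebxml-msg/ebms/v3.0/ns/core/200704/"
ET.register_namespace("S12", SOAP)
ET.register_namespace("eb", EB)

def user_message(msg_id, sender, receiver, service, action):
    """Build a bare-bones eb:Messaging header for a One-Way/Push exchange."""
    env = ET.Element(f"{{{SOAP}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP}}}Header")
    um = ET.SubElement(ET.SubElement(header, f"{{{EB}}}Messaging"),
                       f"{{{EB}}}UserMessage")
    info = ET.SubElement(um, f"{{{EB}}}MessageInfo")
    ET.SubElement(info, f"{{{EB}}}Timestamp").text = (
        datetime.now(timezone.utc).isoformat())
    ET.SubElement(info, f"{{{EB}}}MessageId").text = msg_id
    parties = ET.SubElement(um, f"{{{EB}}}PartyInfo")
    ET.SubElement(ET.SubElement(parties, f"{{{EB}}}From"),
                  f"{{{EB}}}PartyId").text = sender
    ET.SubElement(ET.SubElement(parties, f"{{{EB}}}To"),
                  f"{{{EB}}}PartyId").text = receiver
    collab = ET.SubElement(um, f"{{{EB}}}CollaborationInfo")
    ET.SubElement(collab, f"{{{EB}}}Service").text = service
    ET.SubElement(collab, f"{{{EB}}}Action").text = action
    ET.SubElement(env, f"{{{SOAP}}}Body")  # payload-agnostic: body/attachments
    return ET.tostring(env, encoding="unicode")

# Invented example values, purely for illustration:
print(user_message("msg-001@example.com", "buyer-party", "seller-party",
                   "urn:example:procurement", "SubmitPurchaseOrder"))
```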
AS4
Technology
425
52,505,053
https://en.wikipedia.org/wiki/Vulnerability%20assessment%20%28computing%29
Vulnerability assessment is a process of defining, identifying and classifying the security holes in information technology systems. An attacker can exploit a vulnerability to violate the security of a system. Some known vulnerability classes are authentication vulnerabilities, authorization vulnerabilities and input validation vulnerabilities. Purpose Before deploying a system, it must first go through a series of vulnerability assessments that ensure the built system is secure from all known security risks. When a new vulnerability is discovered, the system administrator can again perform an assessment, discover which modules are vulnerable, and start the patch process. After the fixes are in place, another assessment can be run to verify that the vulnerabilities were actually resolved. This cycle of assess, patch, and re-assess has become the standard method for many organizations to manage their security issues. The primary purpose of the assessment is to find the vulnerabilities in the system, while the assessment report conveys to stakeholders that the system is secured from these vulnerabilities. If an intruder gains access to a network consisting of vulnerable web servers, it is safe to assume that they gained access to those systems as well. Using the assessment report, the security administrator can determine how an intrusion occurred, identify compromised assets and take appropriate security measures to prevent critical damage to the system. Assessment types Depending on the system, a vulnerability assessment can have many types and levels. Host assessment A host assessment looks for system-level vulnerabilities such as insecure file permissions, application-level bugs, and backdoor and Trojan horse installations. It requires specialized tools for the operating system and software packages being used, in addition to administrative access to each system that should be tested. Host assessment is often very costly in terms of time, and thus is only used in the assessment of critical systems. Tools like COPS and Tiger are popular in host assessment. Network assessment In a network assessment one assesses the network for known vulnerabilities. It locates all systems on a network, determines what network services are in use, and then analyzes those services for potential vulnerabilities. This process does not require any configuration changes on the systems being assessed. Unlike host assessment, network assessment requires little computational cost and effort. Vulnerability assessment vs penetration testing Vulnerability assessment and penetration testing are two different testing methods. They are differentiated on the basis of certain specific parameters. References External links List of known Vulnerabilities Information technology Computer security
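As a toy illustration of the "locate services" step described under network assessment, the sketch below attempts TCP connections to a few common ports. It is a minimal, assumption-laden example (the target host and port list are placeholders); real assessments rely on dedicated scanners and must only be run against systems you are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Try a TCP connect to each port; open ports accept the connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Placeholder target: common service ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443, 3306, 8080]))
```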
Vulnerability assessment (computing)
Technology
490
8,980,927
https://en.wikipedia.org/wiki/World%20Urbanism%20Day
The international organisation for World Urbanism Day, also known as "World Town Planning Day", was founded in 1949 by the late Professor Carlos Maria della Paolera of the University of Buenos Aires, a graduate of the Institut d'urbanisme in Paris, to advance public and professional interest in planning. It is celebrated in more than 30 countries on four continents each November 8. See also Urbanism Urban planning New Urbanism Institut d'Urbanisme de Paris (French Wikipedia) References External links American Planning Association: World Town Planning Day World Urbanism Day from WN Network Urban planning Planned communities Garden suburbs November observances International observances
World Urbanism Day
Engineering
133
917,475
https://en.wikipedia.org/wiki/Allantoin
Allantoin is a chemical compound with formula C4H6N4O3. It is also called 5-ureidohydantoin or glyoxyldiureide. It is a diureide of glyoxylic acid. Allantoin is a major metabolic intermediate in most organisms including animals, plants and bacteria, though not humans. It is produced from uric acid, which itself is a degradation product of nucleic acids, by action of urate oxidase (uricase). Allantoin also occurs as a natural mineral compound (IMA symbol Aan). History Allantoin was first isolated in 1800 by the Italian physician Michele Francesco Buniva (1761–1834) and the French chemist Louis Nicolas Vauquelin, who mistakenly believed it to be present in the amniotic fluid. In 1821, the French chemist Jean Louis Lassaigne found it in the fluid of the allantois; he called it "l'acide allantoique". In 1837, the German chemists Friedrich Wöhler and Justus Liebig synthesized it from uric acid and renamed it "allantoïn". Animals Named after the allantois (an amniote embryonic excretory organ in which it concentrates during development in most mammals except humans and other hominids), it is a product of oxidation of uric acid by purine catabolism. After birth, it is the predominant means by which nitrogenous waste is excreted in the urine of these animals. In humans and other higher apes, the metabolic pathway for conversion of uric acid to allantoin is not present, so the former is excreted. Recombinant rasburicase is sometimes used as a drug to catalyze this metabolic conversion in patients. In fish, allantoin is broken down further (into ammonia) before excretion. Allantoin has been shown to improve insulin resistance when administered to rats and to increase lifespan when administered to the nematode worm Caenorhabditis elegans. Bacteria In bacteria, purines and their derivatives (such as allantoin) are used as secondary sources of nitrogen under nutrient-limiting conditions. Their degradation yields ammonia, which can then be utilized. For instance, Bacillus subtilis is able to utilize allantoin as its sole nitrogen source. Mutants in the B. subtilis pucI gene were unable to grow on allantoin, indicating that it encodes an allantoin transporter. In Streptomyces coelicolor, allantoinase (EC 3.5.2.5) and allantoicase (EC 3.5.3.4) are essential for allantoin metabolism. In this species the catabolism of allantoin, and the subsequent release of ammonium, inhibits antibiotic production (Streptomyces species synthesize about half of all known antibiotics of microbial origin). Applications Allantoin is present in botanical extracts of the comfrey plant and in the urine of most mammals. Chemically synthesized bulk allantoin, which is chemically equivalent to natural allantoin, is safe, non-toxic, compatible with cosmetic raw materials and meets CTFA and JSCI requirements. Over 10,000 patents reference allantoin. Cosmetics Manufacturers may use allantoin as an ingredient in over-the-counter cosmetics. Pharmaceuticals It is frequently present in toothpaste, mouthwash, and other oral hygiene products, in shampoos, lipsticks, anti-acne products, sun care products, and clarifying lotions, various cosmetic lotions and creams, and other cosmetic and pharmaceutical products. Biomarker of oxidative stress Since uric acid is the end product of the purine metabolism in humans, only non-enzymatic processes with reactive oxygen species will give rise to allantoin, which is thus a suitable biomarker to measure oxidative stress in chronic illnesses and senescence. 
See also Imidazolidinyl urea and diazolidinyl urea, antimicrobial condensation products of allantoin with formaldehyde. References External links E. coli Allantoinase (AllB) in Uniprot (P77671) GMD MS Spectrum Cosmetics chemicals Ureas Excipients Hydantoins
Allantoin
Chemistry
914
298,372
https://en.wikipedia.org/wiki/Obfuscated%20Perl%20Contest
The Obfuscated Perl Contest was a competition for programmers of Perl which was held annually between 1996 and 2000. Entrants to the competition aimed to write "devious, inhuman, disgusting, amusing, amazing, and bizarre Perl code". It was run by The Perl Journal and took its name from the International Obfuscated C Code Contest. Contest The entries were judged on aesthetics, output and incomprehensibility. One entrant per year received the Best of Show award. Entrants were advised to try and demonstrate a range of Perl knowledge, while being humorous, surprising and deceitful. Code which purposely crashed the judges' machines was discouraged. The competition was typically divided into four categories, which, in the last contest, included: Create a Diversion (limit of 2048 bytes if using Perl/Tk, 512 bytes otherwise) World Wide Wasteland (limit of 512 bytes) Inner Beauty (limit of 512 bytes) Best The Perl Journal (code which generated the words The Perl Journal, limit of 256 bytes) See also Obfuscated code Just another Perl hacker References Further reading — reprints of the announcements, made in The Perl Journal by Felix S. Gallo, of the Zeroth, First, Third, Fourth, and Fifth contests Perl Computer humour Ironic and humorous awards Programming contests Recurring events established in 1996 Recurring events disestablished in 2000 Software obfuscation
Obfuscated Perl Contest
Technology,Engineering
293
14,235,691
https://en.wikipedia.org/wiki/Outline%20of%20genetics
The following outline is provided as an overview of and topical guide to genetics: Genetics – science of genes, heredity, and variation in living organisms. Genetics deals with the molecular structure and function of genes, and gene behavior in context of a cell or organism (e.g. dominance and epigenetics), patterns of inheritance from parent to offspring, and gene distribution, variation and change in populations. Introduction to genetics Introduction to genetics Genetics Chromosome DNA Genetic diversity Genetic drift Genetic variation Genome Heredity Mutation Nucleotide RNA Introduction to evolution Evolution Modern evolutionary synthesis Transmutation of species Natural selection Extinction Adaptation Polymorphism (biology) Gene flow Biodiversity Biogeography Phylogenetic tree Taxonomy (biology) Mendelian inheritance Molecular evolution Branches of genetics Classical genetics Developmental genetics Conservation genetics Ecological genetics Epigenetics Evolutionary genetics Genetic engineering Metagenics Genetic epidemiology Archaeogenetics Archaeogenetics of the Near East Genetics of intelligence Genetic testing Genomics Human genetics Human evolutionary genetics Human mitochondrial genetics Medical genetics Immunogenetics Microbial genetics Molecular genetics Neurogenetics Population genetics Plant genetics Psychiatric genetics Quantitative genetics Statistical genetics Multi-disciplinary fields that include genetics Evolutionary anthropology History of genetics History of genetics Natural history of genetics History of molecular evolution Cladistics Transitional fossil Extinction event Timeline of the evolutionary history of life History of the science of genetics History of genetics Ancient Concepts of Heredity Experiments on Plant Hybridization History of evolutionary thought History of genetic engineering History of genomics History of paleontology History of plant systematics Neanderthal genome project Timeline of paleontology General genetics concepts Molecules amino acids Nucleobase Adenine Cytosine Guanine Thymine Uracil Adenovirus Antibody Bacteria Codon Deoxyribonucleic acid (DNA) Messenger RNA mRNA Enzyme Exon Intron nucleotide allele animal model antisense apoptosis autosomal dominant autosome bacterial artificial chromosome (BAC) base pair birth defect bone marrow transplantation cancer candidate gene carcinoma carrier cDNA library cell centimorgan centromere chromosome chromosomal translocation cloning congenital disorder contig craniosynostosis cystic fibrosis cytogenetic map deletion diabetes mellitus diploid DNA replication DNA sequencing dominant double helix duplication electrophoresis fibroblasts fluorescence in situ hybridization (FISH) gene gene amplification gene expression gene library gene mapping gene pool gene therapy gene transfer genetic code ATGC genetic counseling genetic linkage genetic map genetic marker genetic screening genome genotype germ line haploid haploinsufficiency hematopoietic stem cell heterozygous highly conserved sequence holoprosencephaly homologous recombination homozygous human artificial chromosome (HAC) Human Genome Project human immunodeficiency virus (HIV) acquired immunodeficiency syndrome (AIDS) hybridization immunotherapy in situ hybridization inherited insertion intellectual property rights Jurassic Park (genetics of) karyotype knockout leukemia List of human genetic disorders locus LOD score lymphocyte malformation Gene mapping marker melanoma Mendel, Johann (Gregor) Mendelian inheritance Metaphase microarray technology 
microsatellite mitochondrial DNA monosomy mouse model multiple endocrine neoplasia, type 1 (MEN1) mutation non-coding DNA non-directiveness nonsense mutation Northern blot Nucleic acid sequence nucleus oligo oncogene oncovirus p53 Particulate inheritance theory patent pedigree peptide phenotype physical map polydactyly polymerase chain reaction (PCR) polymorphism positional cloning primary immunodeficiency primer probe promoter pronucleus protease protein pseudogene recessive recombinant DNA repressor restriction enzymes restriction fragment length polymorphism (RFLP) retrovirus ribonucleic acid (RNA) ribosome risk communication sequence-tagged site (STS) sex chromosome sex-linked shotgun sequencing single-nucleotide polymorphisms (SNPs) somatic cells Southern blot spectral karyotype (SKY) substitution suicide gene syndrome technology transfer transgenic trisomy tumor suppressor gene vector Western blot yeast artificial chromosome (YAC) Genetic Modification Genetic engineering Genetically modified organism Genetically modified food Genetically modified crops Norman Borlaug Genetic research and Darwinism DNA sequencing Medical genetics Genomics Evolutionary ideas of the Renaissance and Enlightenment On the Origin of Species Charles Darwin The eclipse of Darwinism Concepts of Evolution Common descent Evidence of common descent Speciation Co-operation (evolution) Adaptive radiation Coevolution Divergent evolution Convergent evolution Parallel evolution Evolutionary developmental biology Evolutionary biology Evolutionary history of life Human evolution Evolutionary taxonomy Geneticists Classical geneticists Gregor Mendel Hugo de Vries William Bateson Thomas Hunt Morgan Alfred Sturtevant Ronald Fisher Frederick Griffith Jean Brachet Edward Lawrie Tatum George Wells Beadle DNA era geneticists Oswald Theodore Avery Colin MacLeod Erwin Chargaff Barbara McClintock James Watson Francis Crick Genomics era geneticists Francis Collins Walter Fiers Eric Lander Kary Banks Mullis Lap-Chee Tsui Frederick Sanger Genetics-related organizations List of genetics research organizations See also Outline of biochemistry Outline of biotechnology References External links Genetics Genetics
Outline of genetics
Biology
1,071
25,081,409
https://en.wikipedia.org/wiki/Imagination%20Age
The Imagination Age is a theorized period following the Information Age where creativity and imagination become the primary creators of economic value (in contrast, the main activities of the Information Age were analysis and rational thought). It has been proposed that new technologies like virtual reality and user-created content will change the way humans interact with each other and create economic and social structures. The AI boom of the 2020s has increased the ubiquity of information. A related neologism is the Fourth Industrial Revolution, popularized in 2016 to describe transformative developments shifting the nature of industrial capitalism. One conception is that the rise of an immersive virtual reality (the metaverse or cyberspace) will raise the value of "imagination work" done by designers, artists, et cetera, over rational thinking as a foundation of culture and economics. Origins of the term The terms Imagination Age as well as Age of Imagination were first introduced in an essay by designer and writer Charlie Magee in 1993. His essay, "The Age of Imagination: Coming Soon to a Civilization Near You", proposes the idea that the best way to assess the evolution of human civilization is through the lens of communication. The most successful groups throughout human history have had one thing in common: when compared to their competition they had the best system of communication. The fittest communicators—whether tribe, city-state, kingdom, corporation, or nation—had (1) a larger percentage of people with (2) access to (3) higher quality information, (4) a greater ability to transform that information into knowledge and action, (5) and more freedom to communicate that new knowledge to the other members of their group. Imagination Age, as a philosophical tenet heralding a new wave of cultural and economic innovation, appears to have been first introduced by artist, writer and cultural critic Rita J. King in a November 2007 essay for the British Council entitled "The Emergence of a New Global Culture in the Imagination Age", where she began using the phrase "Toward a New Global Culture and Economy in the Imagination Age": Rather than exist as an unwitting victim of circumstance, all too often unaware of the impact of having been born in a certain place at a certain time, to parents firmly nestled within particular values and socioeconomic brackets, millions of people are creating new virtual identities and meaningful relationships with others who would have remained strangers, each isolated within their respective realities. King further refined the development of her thinking in a 2008 Paris essay entitled "Our Vision for Sustainable Culture in the Imagination Age", in which she states: Active participants in the Imagination Age are becoming cultural ambassadors by introducing virtual strangers to unfamiliar customs, costumes, traditions, rituals and beliefs, which humanizes foreign cultures, contributes to a sense of belonging to one's own culture and fosters an interdependent perspective on sharing the riches of all systems. Cultural transformation is a constant process, and the challenges of modernization can threaten identity, which leads to unrest and eventually, if left unchecked, to violent conflict. Under such conditions it is tempting to impose homogeneity, which undermines the highly specific systems that encompass the myriad luminosity of the human experience.
King has expanded her interpretation of the Imagination Age concept through speeches at the O'Reilly Media, TED, Cusp, and Business Innovation Factory conferences. The term Imagination Age was subsequently popularized in techno-cultural discourse by other writers, futurists and technologists, who attributed the term to King, including Jason Silva. Earlier, one-time references to the Imagination Age can be found attributed to Carl W. Olson in his 2001 book "The Boss is Dead...: Leadership Breakthroughs for the Imagination Age", and to virtual worlds developer Howard Stearns in 2005. Previous ages The ideas of the Imagination Age depend in large part upon an idea of progress through history because of technology, notably outlined by Karl Marx. That cultural progress has been categorized into a number of major stages of development. According to this idea civilization has progressed through the following ages, or epochs: Agricultural Age – economy dominated by physical work with wooden tools and animals in order to produce food Industrial Age – economy dominated by factories to produce commodities Information Age – economy dominated by knowledge workers using computers and other electronic devices for the purposes of research, finance, consulting, information technology, and other services Following this is a new paradigm created by virtual technology, high speed internet, massive data storage, and other technologies. This new paradigm, the argument goes, will create a new kind of global culture and economy called the Imagination Age. The next and current age might have started recently: the term Fourth Industrial Revolution came into popular discourse in 2016. Economic rise of imagination The Imagination Age includes a society and culture dominated by the imagination economy. The idea relies on a key Marxist concept that culture is a superstructure fully conditioned by the economic substructure. According to Marxist thinking, certain kinds of culture and art were made possible by the adoption of farming technology. Then, with the rise of industry, new forms of political organization (democracy, militarism, fascism, communism) were made possible along with new forms of culture (mass media, newspapers, films). These developments, in turn, changed people themselves: in the case of industrialization, people were trained to become more literate, to follow time routines, and to live in urban communities. The concept of the Imagination Age extends this to a new order emerging presently. An imagination economy is defined by some thinkers as an economy where intuitive and creative thinking create economic value, after logical and rational thinking has been outsourced to other economies. Michael Cox, Chief Economist at the Federal Reserve Bank of Dallas, argues that economic trends show a shift away from information sector employment and job growth towards creative jobs. Jobs in publishing, he has pointed out, are declining while jobs for designers, architects, actors & directors, software engineers and photographers are all growing. This shift in job creation is a sign of the beginning of the Imagination Age. The 21st century has seen a growth in games and interactive media jobs. Cox argues that the skills can be viewed as a "hierarchy of human talents", with raw physical effort as the lowest form of value creation, rising through skilled labor and information entry to creative reasoning and emotional intelligence.
Each layer provides more value creation than the skills below it, and the outcome of globalization and automation is that labor is made available for higher level skills that create more value. Presently these skills tend to center on imagination and social and emotional intelligence. Technology Key to the idea that imagination is becoming the key commodity of our time is a confidence that virtual reality technology like Oculus Rift and HoloLens will emerge to take much of the place of the current text-and-graphic dominated internet. This will provide a 3D internet where imagination and creativity (over information and search) will be key to creating user experience and value. The concept is not limited to just virtual reality. Charlie Magee states that the technology that will develop during the Imagination Age would include: The best bet is on a hybrid breakthrough created by the meshing of nanotechnology, computer science (including artificial intelligence), biotechnology (including biochemistry, biopsychology, etc.), and virtual reality. In The Singularity is Near, Raymond Kurzweil states that a future combination of AI, nanotechnology, and biotechnology will create a world where anything that can be imagined will be possible, raising the importance of imagination as the key mode of human thinking. Global implications Rita J. King has been the single major advocate of the Imagination Age concept and its implications for cultural relations, identity and the transformation of the global economy and culture. King has expounded on the concept through speeches at the O'Reilly Media and TED conferences and has argued that virtual world technology and changes in people's ability to imagine other lives could promote world understanding and reduce cultural conflict. Some public policy experts have argued that the emergence of the Imagination Age out of the Information Age will have a major impact on overall public policy. These are concepts discussed in The Purpose Economy by Aaron Hurst, and in the creation of The Purpose Revolution discussed in the Golden Age Companion Textbook. See also Attention economy Cognitive-cultural economy Cognitive Surplus, 2010 book Content creator Golden Age Information society Indigo Era Netocracy, concept whereby power revolves around the ability to form and use networks and technological tools Post-scarcity economy Post-work society Post-truth References 20th century 21st century Historical eras Information Age Contemporary history Science fiction themes Virtual reality Virtual economy Imagination 1993 neologisms
Imagination Age
Technology
1,723
1,335,536
https://en.wikipedia.org/wiki/Butterfly%20theorem
The butterfly theorem is a classical result in Euclidean geometry, which can be stated as follows: Let $M$ be the midpoint of a chord $PQ$ of a circle, through which two other chords $AB$ and $CD$ are drawn; $AD$ and $BC$ intersect chord $PQ$ at $X$ and $Y$ correspondingly. Then $M$ is the midpoint of $XY$. Proof A formal proof of the theorem is as follows: Let the perpendiculars $XX'$ and $XX''$ be dropped from the point $X$ on the straight lines $AM$ and $DM$ respectively. Similarly, let $YY'$ and $YY''$ be dropped from the point $Y$ perpendicular to the straight lines $BM$ and $CM$ respectively. Since $\triangle MXX' \sim \triangle MYY'$, $MX/MY = XX'/YY'$; since $\triangle MXX'' \sim \triangle MYY''$, $MX/MY = XX''/YY''$; since $\triangle AXX' \sim \triangle CYY''$, $XX'/YY'' = AX/CY$; and since $\triangle DXX'' \sim \triangle BYY'$, $XX''/YY' = DX/BY$. From the preceding equations and the intersecting chords theorem, it can be seen that $$\left(\frac{MX}{MY}\right)^2 = \frac{XX'}{YY'}\cdot\frac{XX''}{YY''} = \frac{AX\cdot DX}{CY\cdot BY} = \frac{PX\cdot QX}{PY\cdot QY},$$ since $AX\cdot DX = PX\cdot QX$ and $CY\cdot BY = PY\cdot QY$. So $$\left(\frac{MX}{MY}\right)^2 = \frac{(PM-MX)(MQ+MX)}{(PM+MY)(MQ-MY)}.$$ Cross-multiplying in the latter equation and using $PM = MQ$, $$MX^2\,(PM^2 - MY^2) = MY^2\,(PM^2 - MX^2).$$ Cancelling the common term $-MX^2\,MY^2$ from both sides of the resulting equation yields $MX^2\,PM^2 = MY^2\,PM^2$, hence $MX = MY$, since $MX$, $MY$, and $PM$ are all positive, real numbers. Thus, $M$ is the midpoint of $XY$. Other proofs exist, including one using projective geometry. History Proving the butterfly theorem was posed as a problem by William Wallace in The Gentleman's Mathematical Companion (1803). Three solutions were published in 1804, and in 1805 Sir William Herschel posed the question again in a letter to Wallace. Rev. Thomas Scurr asked the same question again in 1814 in the Gentleman's Diary or Mathematical Repository. References External links The Butterfly Theorem at cut-the-knot A Better Butterfly Theorem at cut-the-knot Proof of Butterfly Theorem at PlanetMath The Butterfly Theorem by Jay Warendorff, the Wolfram Demonstrations Project. Euclidean plane geometry Theorems about circles Articles containing proofs
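The proof above is purely metric, so the theorem is easy to sanity-check numerically. The following minimal sketch (an illustration, not part of the article) builds two chords of the unit circle through the midpoint M of a horizontal chord PQ and confirms that AD and BC cut PQ at equal distances from M; the particular angles are arbitrary choices.

import numpy as np

def second_intersection(p, m):
    # second point where the line through p and the interior point m meets the unit circle
    d = m - p
    t = -2.0 * np.dot(p, d) / np.dot(d, d)   # from |p + t*d|^2 = 1 with |p| = 1
    return p + t * d

def meet_chord(p1, p2, h):
    # intersection of the line p1-p2 with the horizontal chord y = h
    s = (h - p1[1]) / (p2[1] - p1[1])
    return p1 + s * (p2 - p1)

h = 0.3
m = np.array([0.0, h])                        # midpoint M of the chord PQ
a = np.array([np.cos(np.radians(110)), np.sin(np.radians(110))])
c = np.array([np.cos(np.radians(60)), np.sin(np.radians(60))])
b = second_intersection(a, m)                 # chord AB passes through M
d = second_intersection(c, m)                 # chord CD passes through M
x = meet_chord(a, d, h)                       # X = intersection of AD with PQ
y = meet_chord(b, c, h)                       # Y = intersection of BC with PQ
print(np.linalg.norm(x - m), np.linalg.norm(y - m))  # equal: M bisects XY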
Butterfly theorem
Mathematics
308
2,440,784
https://en.wikipedia.org/wiki/Medical%20geology
Medical geology is an interdisciplinary scientific field studying the relationship between natural geological factors and their effects on human and animal health. The Commission on Geological Sciences for Environmental Planning defines medical geology as "the science dealing with the influence of ordinary environmental factors on the geographical distribution of health problems in man and animals." In its broadest sense, medical geology studies exposure to or deficiency of trace elements and minerals; inhalation of ambient and anthropogenic mineral dusts and volcanic emissions; transportation, modification and concentration of organic compounds; and exposure to radionuclides, microbes and pathogens. History Many have deemed medical geology a new field, when in actuality it is re-emerging. Hippocrates and Aristotle first recognized the relationship between human diseases and the earth's elements. This field ultimately depends on a number of different disciplines working together to solve some of the earth's mysteries. The scientific term for this field is hydrobiogeochemoepidemiopathoecology; however, it is more commonly known as medical geology. It was established in 1990 by the International Union of Geological Sciences. Paracelsus, the "father of pharmacology" (1493–1541), stated that "all substances are poisons, there is none which is not a poison. The right dosage differentiates a poison and a remedy." This passage sums up the idea of medical geology. The goal of this field is to find the right balance and intake of elements/minerals in order to improve and maintain health. Examples of research in medical geology include: Studies on the impact of contaminant mobility as a result of extreme weather events such as flooding. Lead and other heavy metal exposure resulting from dust and other particulates Asbestos exposure such as amphibole asbestos dusts in Libby, Montana Fungal infection resulting from airborne dust, such as Valley Fever or coccidioidomycosis Recently, a new concept of "geomedical engineering" has been introduced in medical geology through a paper titled "Geomedical Engineering: A new and captivating prospect". It provides the fundamentals of engineering applications to medical geology issues. Environment and human health It is widely known that the state of our environment affects us in many ways. Minerals and rocks have an impact on human and animal populations because they are what the earth is composed of. Medical geology brings together professionals from both the medical field and the geological field to help us understand this relationship. There are two priorities that have been established within the medical geology field, "(1) the study of trace elements, especially their bioavailability and (2) a need to establish baseline, or background levels of contaminants/xenobiotics/potentially harmful but naturally occurring materials in water, soil, air, food, and animal tissue." The elements and minerals in the land affect people and animals immensely, especially when there is a close relationship between the two. Those who depend heavily on the land are faced with one of two problems. First, those who live in places such as Maputaland, South Africa are exposed to heavily impoverished soils, which result in a number of diseases caused by mineral imbalances. Secondly, those in areas such as India and Bangladesh are often exposed to an excess of elements in the land, resulting in mineral toxicity.
All living organisms need some naturally occurring elements; however, excessive amounts can be detrimental to health. There is a direct link between health and the earth because all humans ingest and breathe in these chemicals, and for the most part this is done unknowingly. Sources of chemical exposure There are many ways in which humans come into contact with the earth's elements, and below are only a few ways in which we become exposed to them. Volcanoes are one of the main sources that bring toxic material from inside the earth to the surface. They release chemicals such as arsenic, beryllium, cadmium, mercury, lead, radon, and uranium. Rocks are also one of the leading sources of exposure to these elements. "They are essentially the source of all the naturally occurring chemical elements found on the earth." Diseases Iodine deficiency One of the biggest geochemical diseases is iodine deficiency. Thirty percent of the world is at risk for it, and insufficient intake is the most common cause of intellectual disability and brain damage. The sea is a major source of iodine, and those who live further from it are at a disadvantage. Another source is soil; however, goitrogens such as humus and clay trap the iodine, making it hard for people to access it. Some cultures actually consume the earth's minerals by eating soil and clay; this is known as geophagy. It is most common in the tropics, especially among pregnant women. The Ottomac people of South America engage in this practice, and none have suffered from any health problems related to mineral/iodine deficiency. Cardiovascular disease Cardiovascular disease has often been linked to water hardness as the main cause. Water hardness means that there is magnesium in the water, with calcium also playing a role. Some research has completely discredited this evidence, and has found that the more magnesium in the water, the less the chance of death from cardiovascular disease. Radiation Natural radiation is found everywhere; it is in the air, water, soil, rocks, minerals and food. The largest amount of radiation comes from radon. Certain places are called 'high background radiation areas' (HBRAs), such as Guarapari, the southwest of France, Ramsar, parts of China, and the Kerala coast. People living in these areas, however, have not shown any health deficiencies and in some cases are even healthier and live longer than those not in HBRAs. Other issues Among the problems presented there are also issues with fluoride in Africa and India, arsenic in Argentina, Chile, and Taiwan, selenium in areas of the United States, Venezuela, and China, and nitrate in agricultural areas. As medical geology grows it may become more important to the medical field in relation to the issue of diseases. In addition to deficiencies of particular minerals, dietary excesses of certain elements occurring in specific geographic regions can also be harmful to human health, as per the examples listed below: Hyperkalemia: excess amount of potassium Hypercalcemia: excess amount of calcium Hyperphosphatemia: excess amount of phosphorus International Medical Geology Association "The International Medical Geology Association (IMGA) aims to provide a network and a forum to bring together the combined expertise of geologists and earth scientists, environmental scientists, toxicologists, epidemiologists, and medical specialists, in order to characterize the properties of geological processes and agents, the dispersal of geological material and their effects on human population."
IMGA was founded in 2006; it manages the association's affairs and funds, plans conferences, elections and publications, and serves as a way of encouraging growth and recognition of the field. Although it was founded in 2006, it had been a work in progress for ten years, since a working group on medical geology was established by the International Union of Geological Sciences (IUGS) in 1996. The goal of the working group was to publicize and raise awareness of the harmful effects the environment can have on our health. References USGS Medical Geology Accessed 22 July 2006 Medical Geology - Geotimes Nov. 2001 accessed 28 January 2006 Bunnell, Joseph E. (2004) Medical Geology: Emerging Discipline on the Ecosystem-Human Health Interface, Ecohealth PDF file accessed 28 January 2007 External links International Medical Geology Association Geobiology Environmental health
Medical geology
Biology
1,538
44,488,972
https://en.wikipedia.org/wiki/Paramodular%20group
In mathematics, a paramodular group is a special sort of arithmetic subgroup of the symplectic group. It is a generalization of the Siegel modular group, and has the same relation to polarized abelian varieties that the Siegel modular group has to principally polarized abelian varieties. It is the group of automorphisms of $\mathbb{Z}^{2n}$ preserving a non-degenerate skew symmetric form. The name "paramodular group" is often used to mean one of several standard matrix representations of this group. The corresponding group over the reals is called the parasymplectic group and is conjugate to a (real) symplectic group. A paramodular form is a Siegel modular form for a paramodular group. Paramodular groups were introduced, and later named, in the works cited in the references. Explicit matrices for the paramodular group There are two conventions for writing the paramodular group as matrices. In the first (older) convention the matrix entries are integers but the group is not a subgroup of the symplectic group, while in the second convention the paramodular group is a subgroup of the usual symplectic group (over the rationals) but its coordinates are not always integers. These two forms of the symplectic group are conjugate in the general linear group. Any nonsingular skew symmetric form on $\mathbb{Z}^{2n}$ is equivalent to one given by a matrix $$J = \begin{pmatrix} 0 & F \\ -F & 0 \end{pmatrix},$$ where $F$ is an $n$ by $n$ diagonal matrix whose diagonal elements $F_{ii}$ are positive integers with each dividing the next. So any paramodular group is conjugate to one preserving the form above; in other words, it consists of the matrices $g$ of $GL_{2n}(\mathbb{Z})$ such that $$g^{\mathsf{T}} J g = J.$$ The conjugate of the paramodular group by the matrix $\begin{pmatrix} I & 0 \\ 0 & F \end{pmatrix}$ (where $I$ is the identity matrix) lies in the symplectic group $Sp_{2n}(\mathbb{Q})$, since $\begin{pmatrix} I & 0 \\ 0 & F \end{pmatrix}^{\mathsf{T}} \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} \begin{pmatrix} I & 0 \\ 0 & F \end{pmatrix} = \begin{pmatrix} 0 & F \\ -F & 0 \end{pmatrix}$, though its entries are not in general integers. This conjugate is also often called the paramodular group. The paramodular group of degree 2 Paramodular groups of degree $n = 2$ are subgroups of $GL_4(\mathbb{Q})$, so can be represented as 4 by 4 matrices. There are at least 3 ways of doing this used in the literature. This section describes how to represent it as a subgroup of $Sp_4(\mathbb{Q})$ with entries that are not necessarily integers. Any non-degenerate skew symmetric form on $\mathbb{Z}^4$ is, up to isomorphism and scalar multiples, equivalent to one given as above by the matrix $F = \operatorname{diag}(1, N)$ for a positive integer $N$. In this case one form of the paramodular group consists of the symplectic matrices of the form $$\begin{pmatrix} * & * & * & */N \\ * & * & * & */N \\ * & * & * & */N \\ N* & N* & N* & * \end{pmatrix},$$ where each * stands for an integer. The fact that such a matrix is symplectic forces some further congruence conditions on the entries, so in fact the paramodular group consists of the symplectic matrices of this shape satisfying those additional congruences. The paramodular group in this case is generated by two families of elementary (unipotent) symplectic matrices, each depending on integers $x$, $y$, and $z$. Some authors use a permuted version of the matrix $F$ instead, which gives similar results except that the rows and columns get permuted; the paramodular group then consists of symplectic matrices of a correspondingly permuted shape. References External links Discrete groups Modular forms
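As a quick illustration of the defining condition above (a sketch under stated assumptions, not taken from the article): with F = diag(1, N), a block-unipotent matrix [[I, B], [0, I]] preserves the skew form J = [[0, F], [-F, 0]] exactly when B^T F = F B, which for 2-by-2 matrices means B = [[x, N*y], [y, z]] for integers x, y, z. The check below verifies this numerically; the particular element is inferred from the symplectic condition rather than quoted from the text.

import numpy as np

N = 3
F = np.diag([1, N])
Z = np.zeros((2, 2), dtype=int)
I2 = np.eye(2, dtype=int)
J = np.block([[Z, F], [-F, Z]])          # the skew symmetric form (0 F; -F 0)

x, y, z = 2, -1, 4                       # arbitrary integers
B = np.array([[x, N * y], [y, z]])       # chosen so that B^T F = F B
g = np.block([[I2, B], [Z, I2]])

print((g.T @ J @ g == J).all())          # True: g preserves the form J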
Paramodular group
Mathematics
665
37,104,812
https://en.wikipedia.org/wiki/Lepiota%20ananya
Lepiota ananya is a gilled mushroom of the genus Lepiota in the order Agaricales. Known only from Kerala State, India, it was described as new to science in 2009. Taxonomy The species was first described in a 2009 issue of the journal Mycotaxon. The type collection was made in July 2005, in Palode, a village in the Thiruvananthapuram district of Kerala State, India. The specific epithet ananya is derived from the Sanskrit word for "unique". Description Fruit bodies have caps that are initially convex, becoming broadly convex and finally flattened with an umbo. The cap surface is whitish, sometimes with a yellowish tinge, and has dark brown, pressed-down fibrillose scales that are more concentrated in the center of the cap. The cap margin, initially curved inward before straightening in age, has fine grooves and a scalloped edge. The gills are free from attachment to the stem, and colored light yellow. They are crowded together closely, and interspersed with 3–4 tiers of lamellulae (short gills). The edges of the gills are finely fringed. The stem is hollow and cylindrical, with a thicker base. The stem surface is whitish and fibrillose; the stem base arises from a white mycelium. The stem bears a membranous, whitish ring on its upper portion. The flesh is up to 2 mm thick, whitish to pale yellow, and has no distinct odor. Spores are amygdaliform (almond-shaped) with walls up to 1 μm thick, smooth, hyaline (translucent), and measure 5.5–8 by 3.5–4.5 μm. The spores contain refractive oil droplets. The basidia (spore-bearing cells) are club-shaped, hyaline, contain oil droplets, and have dimensions of 13–21 by 8–10 μm. They are four-spored with sterigmata up to 5 μm long. Cheilocystidia (cystidia on the gill edge) are abundant, and have a shape ranging from cylindrical to club-shaped to utriform (like a leather bottle). They are thin-walled, hyaline or pale yellowish, and measure 15–37 by 7.5–10 μm; there are no cystidia on the gill faces (pleurocystidia). Clamp connections are rare in the hyphae. Habitat and distribution The fruit bodies of Lepiota ananya grow singly or scattered on the ground among decaying litterfall. The species is known only from its type locality. See also List of Lepiota species References External links ananya Fungi of India Fungi described in 2009 Fungus species
Lepiota ananya
Biology
582
3,036,665
https://en.wikipedia.org/wiki/843%20Nicolaia
843 Nicolaia is a main-belt asteroid discovered by the Danish astronomer H. Thiele on 30 September 1916. It was a lost asteroid for 65 years before being rediscovered by the Astronomisches Rechen-Institut in Heidelberg in 1981. The asteroid orbits the Sun with a period of 3.44 years. References External links 000843 Discoveries by Holger Thiele Named minor planets 19160930 Recovered astronomical objects
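As a quick consistency check (an illustrative calculation, not from the article), Kepler's third law with $a$ in astronomical units and $T$ in years gives $a = T^{2/3} = 3.44^{2/3} \approx 2.28\ \mathrm{AU}$, squarely within the main asteroid belt.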
843 Nicolaia
Astronomy
90
11,438,066
https://en.wikipedia.org/wiki/Mycosphaerella%20pyri
Mycosphaerella pyri is a fungal plant pathogen. See also List of Mycosphaerella species References Fungal plant pathogens and diseases pyri Fungi described in 1869 Fungus species
Mycosphaerella pyri
Biology
42
18,823,843
https://en.wikipedia.org/wiki/Howard%20M.%20Wiseman
Howard Mark Wiseman (born 19 June 1968) is an Australian theoretical quantum physicist, notable for his work on quantum feedback control, quantum measurements, quantum information (especially quantum steering), open quantum systems, the many interacting worlds interpretation of quantum mechanics, and other topics in quantum foundations. Early life Wiseman was born in Brisbane, Australia and received his B.Sc. (Hons) in Physics from the University of Queensland in 1991. He completed his PhD in physics under Gerard J. Milburn at the University of Queensland in 1994, with a thesis entitled Quantum Trajectories and Feedback. Career After his PhD, Wiseman undertook a postdoc under Dan Walls at the University of Auckland. From 1996 to 2009 he held Australian Research Council (ARC) research fellowships. He is currently a Physics Professor at Griffith University, where he is the Director of the Centre for Quantum Dynamics. He is also an Executive Node Manager in the Centre for Quantum Computation and Communication Technology, an ARC Centre of Excellence. Honors His early-career awards include the Bragg Medal of the Australian Institute of Physics (AIP), the Pawsey Medal of the Australian Academy of Science and the Malcolm McIntosh Medal, one of the Prime Minister's Prizes for Science. He is a Fellow of the Australian Academy of Science, a Fellow of the American Physical Society, and a Fellow of The Optical Society of America. In 2022 Wiseman was awarded the AIP's Walter Boas Medal for Excellence in Research, for elucidating fundamental limits arising from quantum theory, in particular in its applications to metrology and laser science, and via its implications for the foundations of reality. See also Quantum Aspects of Life (book) Selected bibliography References External links Wiseman's homepage Wiseman's scientific genealogy Wiseman's MacIntosh medal Wiseman's Qwiki profile 1968 births Living people Scientists from Brisbane University of Queensland alumni Australian physicists Academic staff of Griffith University Quantum physicists Fellows of the Australian Academy of Science Fellows of the American Physical Society
Howard M. Wiseman
Physics
412
67,686,153
https://en.wikipedia.org/wiki/Chiral%20drugs
Chemical compounds that come as mirror-image pairs are referred to by chemists as chiral or handed molecules. Each twin is called an enantiomer. Drugs that exhibit handedness are referred to as chiral drugs. Chiral drugs that are equimolar (1:1) mixtures of enantiomers are called racemic drugs, and these are necessarily devoid of optical rotation. The most commonly encountered stereogenic unit that confers chirality on drug molecules is the stereogenic center. A stereogenic center can be due to the presence of tetrahedral tetracoordinate atoms (C, N, P) or pyramidal tricoordinate atoms (N, S). The word chiral describes the three-dimensional architecture of the molecule and does not reveal the stereochemical composition. Hence "chiral drug" does not say whether the drug is racemic (racemic drug), a single enantiomer (chiral specific drug) or some other combination of stereoisomers. To resolve this issue Joseph Gal introduced a new term called unichiral. Unichiral indicates that the stereochemical composition of a chiral drug is homogeneous, consisting of a single enantiomer. Many medicinal agents important to life are combinations of mirror-image twins. Despite the close resemblance of such twins, the differences in their biological properties can be profound. In other words, the component enantiomers of a racemic chiral drug may differ widely in their pharmacokinetic and pharmacodynamic profiles. The tragedy of thalidomide illustrates the potential for extreme consequences resulting from the administration of a racemate drug that exhibits multiple effects attributable to individual enantiomers. With the advancements in chiral technology and the increased awareness of the three-dimensional consequences of drug action and disposition emerged the specialized field of "chiral pharmacology". Simultaneously, the chirality nomenclature system also evolved. A brief overview of chirality history and terminology/descriptors is given below. A detailed chirality timeline is not the focus of this article. Chirality: history overview Chirality can be traced back to 1812, when physicist Jean-Baptiste Biot discovered a phenomenon called "optical activity". Louis Pasteur, a famous student of Biot's, made a series of observations that led him to suggest that the optical activity of some substances is caused by their molecular asymmetry, which makes nonsuperimposable mirror-images. In 1848, Pasteur grew two different kinds of crystals from the racemic sodium ammonium salt of tartaric acid. He was the first person to separate enantiomeric crystals by hand. In fact, Pasteur laid the foundations of stereochemistry and chirality. In 1874, Jacobus Henricus van 't Hoff came up with the idea of an asymmetric carbon atom. He said that all optically active carbon compounds have an asymmetric carbon atom. In the same year, Joseph Achille Le Bel used only asymmetry arguments and talked about the asymmetry of the molecules as a whole instead of the asymmetry of each carbon atom. So, Le Bel's idea could be seen as the general theory of stereoisomerism, while van 't Hoff's could be seen as a special case (restricted to tetrahedral carbon). Soon, scientists started to look into what chiral compounds meant for living things. In 1903, Cushny was the first person to show that enantiomers of a chiral molecule have different biological effects. Lord Kelvin used the word "chiral" for the first time in 1904.
Chirality: terminology/descriptors This is to give an overview of the evolving chirality nomenclature system commonly employed to distinguish enantiomers of a chiral drug. In the beginning, enantiomers were distinguished based on their ability to rotate the plane of plane-polarized light. The enantiomer that rotates plane-polarized light to the right is named "dextro-rotatory", abbreviated as "dextro" or "d", and the counterpart as "levo" or "l". A racemic mixture is denoted as "(±)", "rac", or "dl". Now the d/l system of naming based on optical rotation is falling into disuse. Later, the Fischer convention was introduced to specify the configuration of a stereogenic center; it uses the symbols D and L. The use of capital letters is to differentiate it from the "d"/"l" notation (optical descriptor) described earlier. In this system, the enantiomers are named with reference to D- and L-glyceraldehyde, which is taken as the standard for comparison. The structure of the chiral molecule should be represented in the Fischer projection formula. If the hydroxyl group attached to the highest-numbered chiral carbon is on the right-hand side it is referred to as the D-series, and if on the left-hand side it is called the L-series. This nomenclature system has also become obsolete, but the D-/L-system of naming is still employed to designate the configuration of amino acids and sugars. In general the D/L system of nomenclature has been superseded by the Cahn-Ingold-Prelog (CIP) rule to describe the configuration of a stereogenic/chiral center. In the CIP or R/S convention, or sequence rule, the configuration, i.e. the spatial arrangement of ligands/substituents around a chiral center, is labeled as either "R" or "S". This convention is now in almost worldwide use and has become a part of the IUPAC (International Union of Pure and Applied Chemistry) rules of nomenclature. In this approach: identify the chiral center; label the four atoms directly attached to the stereogenic center in question; assign priorities according to the sequence rule (from 1 to 4); rotate the molecule until the lowest priority (number 4) substituent is away from the observer/viewer; and draw a curve from the number 1 to the number 2 to the number 3 substituent. If the curve is clockwise, the chiral center is of R-absolute configuration, "R" (Latin, rectus = right). If the curve is counterclockwise, the chiral center is of S-absolute configuration, "S" (Latin, sinister = left). (Refer to the figure illustrating the Cahn-Ingold-Prelog rule.) An overview of the nomenclature system is presented in the table below. Racemic drugs For many years scientists in drug development were blind to the three-dimensional consequences of stereochemistry, chiefly due to the lack of technology for making enantioselective investigations. Besides the thalidomide tragedy, another event that raised the importance of issues of stereochemistry in drug research and development was the publication of a manuscript in 1984 entitled "Stereochemistry, a basis of sophisticated nonsense in pharmacokinetics and clinical pharmacology" by Ariëns. This article, and the series of articles that followed, criticized the practice of conducting pharmacokinetic and pharmacodynamic studies on racemic drugs while ignoring the separate contributions of the individual enantiomers. These papers have served to crystallize some of the important issues surrounding racemic drugs and stimulated much discussion in industry, government and academia.
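In practice, the CIP descriptors described above are assigned algorithmically by cheminformatics software rather than by hand. The following minimal sketch (an illustration assuming the RDKit toolkit, not part of the original text) asks RDKit to locate and label the stereocenter of L-alanine, whose CIP descriptor is (S):

from rdkit import Chem

mol = Chem.MolFromSmiles("N[C@@H](C)C(=O)O")   # L-alanine; @@ encodes the tetrahedral arrangement
print(Chem.FindMolChiralCenters(mol))          # [(1, 'S')]: atom index and CIP label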
Chiral pharmacology As a result of these criticisms and the renewed awareness of the three-dimensional effects of drug action, fueled by the exponential explosion of chiral technology, emerged the new area of "stereo-pharmacology". A more specific term is "chiral pharmacology", a phrase popularized by John Caldwell. This field has grown into a specialized discipline concerned with the three-dimensional aspects of drug action and disposition. This approach essentially views each version of the chiral twins as a separate chemical species. To express the pharmacological activities of each of the chiral twins, two technical terms have been coined: eutomer and distomer. The member of the chiral twin that has the greater physiological activity is referred to as the eutomer, and the one with lesser activity is referred to as the distomer. It is generally understood that this reference is necessarily to a single activity being studied. The eutomer for one effect may well be the distomer when another is studied. The eutomer/distomer ratio is called the eudysmic ratio. Bio-environment and chiral twins The behavior of the chiral twins depends mainly on the nature of the environment (achiral/chiral) in which they are present. An achiral environment does not differentiate the molecular twins, whereas a chiral environment does distinguish the left-handed version from the right-handed version. The human body, a classic bio-environment, is inherently handed, as it is filled with chiral discriminators like amino acids, enzymes, carbohydrates, lipids, nucleic acids, etc. Hence when a racemic therapeutic is exposed to a biological system the component enantiomers will be acted upon stereoselectively. For drugs, chiral discrimination can take place either in the pharmacokinetic or the pharmacodynamic phase. Chiral discrimination Easson and Stedman (1933) advanced a drug-receptor interaction model to account for the differential pharmacodynamic activity between enantiomeric pairs. In this model the more active enantiomer (the eutomer) takes part in a minimum of three simultaneous intermolecular interactions with the receptor surface (a good fit), whereas the less active enantiomer (the distomer) interacts at two sites only (a bad fit); see the figure of the Easson-Stedman model. Thus the "fit" of the individual enantiomers to the receptor site differs, as does the energy of interaction. This is a simplistic model, but it is used to explain the biological discrimination between enantiomeric pairs. In reality the drug-receptor interaction is not that simple, but this view of such a complex phenomenon has provided major insights into the mechanism of action of drugs. Pharmacodynamic considerations Racemic drugs are not drug combinations in the accepted sense of two or more co-formulated therapeutic agents, but combinations of isomeric substances whose pharmacological activity may reside predominantly in one specific enantiomeric form. In the case of stereoselectivity in action, only one of the components in the racemic mixture is truly active. The other isomer, the distomer, should be regarded as an impurity or isomeric ballast, a term coined by Ariëns, not contributing to the effects aimed at. In contrast to the pharmacokinetic properties of an enantiomeric pair, differences in pharmacodynamic activity tend to be more obvious. There is a wide spectrum of possibilities of distomer actions, many of which are confirmed experimentally. Selected examples of distomer actions (viz. equipotent, less active, inactive, antagonistic, chiral inversion) are presented in the table below. Drug toxicity Since there are frequently large pharmacokinetic and pharmacodynamic differences between enantiomers of a chiral drug, it is not surprising that enantiomers may result in stereoselective toxicity. Toxic effects can reside in the pharmacologically active enantiomer (eutomer) or in the inactive one (distomer). Toxicologic differences between enantiomers have also been demonstrated. The following are examples of some chiral drugs whose toxic or undesirable side-effects dwell almost entirely in the distomer; these would seem to be clear-cut cases for a chiral switch. Penicillamine Penicillamine is a chiral drug with one chiral center and exists as a pair of enantiomers. (S)-penicillamine is the eutomer with the desired antiarthritic activity, while (R)-penicillamine is extremely toxic. Ketamine Ketamine is a widely used anaesthetic agent. It is a chiral molecule that is administered as a racemate. Studies show that (S)-(+)-ketamine is the active anaesthetic and the undesired side-effects (hallucination and agitation) reside in the distomer, (R)-(-)-ketamine. Dopa The initial use of racemic dopa for the treatment of Parkinson's disease resulted in a number of adverse effects, viz. nausea, vomiting, anorexia, involuntary movements and granulocytopenia. The use of L-dopa [the (S)-enantiomer] resulted in a reduction of the required dose and of the adverse effects. The granulocytopenia was not observed with the single enantiomer. Ethambutol The antitubercular agent ethambutol contains two constitutionally symmetrical stereogenic centers in its structure and exists in three stereoisomeric forms: an enantiomeric pair, (S,S)- and (R,R)-ethambutol, along with an achiral stereoisomer called the meso-form, which holds a diastereomeric relationship with the optically active stereoisomers. The activity of the drug resides in the (S,S)-enantiomer, which is 500- and 12-fold more potent than (R,R)-ethambutol and the meso-form, respectively. The drug had initially been introduced for clinical use as the racemate and was changed to the (S,S)-enantiomer as a result of optic neuritis leading to blindness. Toxicity is related to both dose and duration of treatment. All three stereoisomers were almost equipotent with respect to side effects. Hence the use of the (S,S)-enantiomer greatly improved the risk/benefit profile. Thalidomide Thalidomide is a classical example highlighting the alleged role of chirality in drug toxicity. Thalidomide was a racemic therapeutic prescribed to pregnant women to control nausea and vomiting. The drug was withdrawn from the world market when it became evident that its use in pregnancy causes phocomelia (a clinical condition where babies are born with deformed hands and limbs). Later, in the late 1970s, studies indicated that the (R)-enantiomer is an effective sedative while the (S)-enantiomer harbors the teratogenic effect and causes fetal abnormalities. Later studies established that under biological conditions (R)-thalidomide, the good partner, undergoes an in vivo metabolic inversion to (S)-thalidomide, the evil partner, and vice versa. It is a bidirectional chiral inversion. Hence the argument that the thalidomide tragedy could have been avoided by using a single enantiomer is ambiguous and pointless. The salient features are presented in the table below. Unichiral drugs Unichiral indicates a configurationally homogeneous substance (i.e. made up of chiral molecules of one and the same configuration).
Other commonly used synonyms are enantiopure drugs and enantiomerically pure drugs. Monochiral drug has also been suggested as another synonym. Professors Eliel, Wilen, and Gal expressed their deep concern over the misuse of the term "homochiral" in articles to denote enantiomerically pure drugs, which is incorrect. Homochiral means objects or molecules of the same handedness. Hence it should be used only for comparison of two or more objects of like "chirality"; for instance, the left hands of different individuals, or, say, R-naproxen and R-ibuprofen. Globally, drug companies and regulatory agencies have an inclination towards the development of unichiral drugs as a consequence of the increased understanding of the differing biological properties of the individual enantiomers of racemic therapeutics. Most of these unichiral drugs are the consequence of the chiral switch approach. The table below lists selected unichiral drugs used in drug therapy. A company may opt to develop a racemic drug instead of a single enantiomer by providing adequate reasoning. The rationale for pursuing a racemic drug could include: expensive separation of the enantiomers; a eutomer that racemizes in solution (e.g. oxazepam); activities of the enantiomeric pair that are different but complementary; a distomer that is inactive but whose separation is exorbitant; insignificant or low toxicity of the distomer; a high therapeutic index; mutually beneficial pharmacological activities of both enantiomers; and cases where the development of a single enantiomer would take a huge amount of time for a drug of emergency need, e.g. for cancer, AIDS, etc. Chiral purity Chiral purity is a measure of the purity of a chiral drug. Other synonyms employed include enantiomeric excess, enantiomer purity, enantiomeric purity, and optical purity. Optical purity is an obsolete term, since today most chiral purity measurements are done using chromatographic techniques (not based on optical principles). Enantiomeric excess tells the extent (in %) to which the chiral substance contains one enantiomer over the other. For a racemic drug the enantiomeric excess will be 0%. There are a number of chiral analysis tools, such as polarimetry, NMR spectroscopy with the use of chiral shift reagents, chiral GC (gas chromatography), chiral HPLC (high performance liquid chromatography), chiral TLC (thin-layer chromatography) and other chiral chromatographic techniques, that are employed to evaluate chiral purity. Assessing the purity of a unichiral drug or enantiopure drug is of great importance from a drug safety and efficacy perspective. See also Chirality (chemistry) Chiral switch Chiral analysis Enantiopure drug Chiral inversion Racemate Stereochemistry Chirality timeline References External links Chirality Enantiopure drugs Stereochemistry
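For concreteness (an illustrative sketch, not from the article), the enantiomeric excess follows directly from the definition quoted above:

def enantiomeric_excess(major, minor):
    # percent ee from the amounts (moles or concentrations) of the two enantiomers
    return 100.0 * abs(major - minor) / (major + minor)

print(enantiomeric_excess(75, 25))   # 50.0: a 75:25 mixture is 50% ee
print(enantiomeric_excess(50, 50))   # 0.0: a racemate has 0% ee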
Chiral drugs
Physics,Chemistry,Biology
3,819
75,328,204
https://en.wikipedia.org/wiki/Nilsequence
In mathematics, a nilsequence is a type of numerical sequence playing a role in ergodic theory and additive combinatorics. The concept is related to nilpotent Lie groups and almost periodicity. The name arises from the part played in the theory by compact nilmanifolds of the type $G/\Gamma$, where $G$ is a nilpotent Lie group and $\Gamma$ a lattice in it. The idea of a basic nilsequence defined by an element $g$ of $G$ and a continuous function $F$ on $G/\Gamma$ is to take, for $n$ an integer, the value $F(g^n\Gamma)$. General nilsequences are then uniform limits of basic nilsequences. For the statement of conjectures and theorems, technical side conditions and quantifications of complexity are introduced. Much of the combinatorial importance of nilsequences reflects their close connection with the Gowers norm. As explained by Host and Kra, nilsequences originate in evaluating functions on orbits in a "nilsystem"; and nilsystems are "characteristic for multiple correlations". Case of the circle group The circle group arises as the special case of the real line $\mathbb{R}$ and its subgroup $\mathbb{Z}$ of the integers. It has nilpotency class equal to 1, being abelian, and the requirements of the general theory are to generalise to nilpotency class $s$. The semi-open unit interval $[0,1)$ is a fundamental domain, and for that reason the fractional part function is involved in the theory. Functions involving the fractional part of the variable in the circle group occur, under the name "bracket polynomials". Since the theory is in the setting of Lipschitz functions, which are a fortiori continuous, the discontinuity of the fractional part at 0 has to be managed. That said, the sequences $(n\alpha \bmod 1)$, where $\alpha$ is a given irrational real number and $n$ an integer, studied in diophantine approximation, are simple examples for the theory. Their construction can be thought of in terms of the skew product construction in ergodic theory, adding one dimension. Polynomial sequences The imaginary exponential function $e(x) = e^{2\pi i x}$ maps the real numbers to the circle group (see Euler's formula#Topological interpretation). A numerical sequence $e(P(n))$, where $P$ is a polynomial function with real coefficients and $n$ is an integer variable, is a type of trigonometric polynomial, called a "polynomial sequence" for the purposes of the nilsequence theory. The generalisation to nilpotent groups that are not abelian relies on the Hall–Petresco identity from group theory for a workable theory of polynomials. In particular the polynomial sequence comes with a definite degree. Möbius function and nilsequences A family of conjectures was made by Ben Green and Terence Tao, concerning the Möbius function $\mu$ of prime number theory and $s$-step nilsequences. Here the underlying Lie group $G$ is assumed simply connected and nilpotent with length at most $s$. The nilsequences considered are of type $n \mapsto F(g^n x)$ with some fixed $x$ in $G/\Gamma$, and the function $F$ continuous and taking values in $[-1,1]$. The form of the conjecture, which requires a stated metric on the nilmanifold and a Lipschitz bound in the implied constant, is that the average of $\mu(n)F(g^n x)$ over $n$ up to $N$ is smaller asymptotically than any fixed inverse power of $\log N$. As a subsequent paper published in 2012 proving the conjectures put it, The Möbius function is strongly orthogonal to nilsequences. Subsequently Green, Tao and Tamar Ziegler also proved a family of inverse theorems for the Gowers norm, stated in terms of nilsequences. This completed a program of proving asymptotics for simultaneous prime values of linear forms. Tao has commented in his book Higher Order Fourier Analysis on the role of nilsequences in the inverse theorem proof.
The issue was to extend the inverse Gowers-norm results from the finite field case to general finite cyclic groups, where the "classical phases"—essentially the exponentials of polynomials natural for the circle group—had proved inadequate. There were options other than nilsequences, in particular direct use of bracket polynomials. But Tao writes that he prefers nilsequences for the underlying Lie theory structure. Equivalent form for averaged Chowla and Sarnak conjectures Tao has proved that a conjecture on nilsequences is equivalent to an averaged form of a noted conjecture of Sarvadaman Chowla involving only the Möbius function and the way it self-correlates. Peter Sarnak made a conjecture on the non-correlation of the Möbius function with more general sequences from ergodic theory, which is a consequence of Chowla's conjecture. Tao's result on averaged forms showed all three conjectures are equivalent. The 2018 paper The logarithmic Sarnak conjecture for ergodic weights by Frantzikinakis and Host used this approach to prove unconditional results on the Liouville function. Notes Sequences and series Nilpotent groups Ergodic theory Additive combinatorics
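The orthogonality statement lends itself to an informal numerical experiment (an illustration, not from the article): correlating the Möbius function against the polynomial sequence $e(\alpha n^2)$ of the kind described above. The average below shrinks as $N$ grows, though only slowly, consistent with the savings by powers of $\log N$ in the theorem.

import numpy as np

def mobius_sieve(n):
    # mu(1..n) by a sieve: flip sign at each prime, zero out multiples of prime squares
    mu = np.ones(n + 1, dtype=np.int64)
    is_prime = np.ones(n + 1, dtype=bool)
    for p in range(2, n + 1):
        if is_prime[p]:
            is_prime[p * p::p] = False
            mu[p::p] *= -1
            mu[p * p::p * p] = 0
    return mu

N = 10**5
mu = mobius_sieve(N)
n = np.arange(1, N + 1)
alpha = np.sqrt(2.0)                         # an irrational coefficient
phase = np.exp(2j * np.pi * alpha * n**2)    # the polynomial sequence e(alpha n^2)
print(abs(np.sum(mu[1:] * phase)) / N)       # small, and decaying as N grows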
Nilsequence
Mathematics
1,017
168,315
https://en.wikipedia.org/wiki/Laundry%20symbol
A laundry symbol, also called a care symbol, is a pictogram indicating the manufacturer's suggestions as to methods of washing, drying, dry-cleaning and ironing clothing. Such symbols are written on labels, known as care labels or care tags, attached to clothing to indicate how a particular item should best be cleaned. While there are internationally recognized standards for the care labels and pictograms, their exact use and form differ by region. In some standards, pictograms coexist with or are complemented by written instructions. Standards GINETEX, the France-based European association for textile care labelling, was formed in 1963 in part to define international standards for the care and labelling of textiles. By the early 1970s, GINETEX was working with ISO to develop international standards for textile labelling, eventually leading to the ISO 3758 standard, Textiles – Care labelling code using symbols. ISO 3758 was supplemented in 1993, revised in 2005 and again in 2012 and 2023, with reviews of the standard held on a five-year cycle. In March 1970, the Canadian Government Specifications Board published 86-GP-1, Standard for Care Labelling of Textiles, which promoted a symbol-based textile care labelling system in which symbols were colored: green indicated "no precautions are necessary", yellow indicated "some caution is necessary", and red indicated "prohibited". Publication 86-GP-1 was revised several times over the following three decades; the most noteworthy change was in 1979, when temperatures changed from Fahrenheit to Celsius, and any additional instructions were to be added in text, in both English and French. In 2003, the system was withdrawn in favor of a black-and-white symbol-based system harmonized with North American and international standards. The inclusion of care symbols on garments made or sold in Canada has always been voluntary; only fabric content labels are mandatory (since 1972). In 1996, in the United States, ASTM International published a system of pictorial care instructions as D5489 Standard Guide for Care Symbols for Care Instructions on Textile Products, with revisions in 1998, 2001, 2007, 2014, and 2018. The American Cleaning Institute has developed and published its own guide to fabric care symbols. Additional textile care labelling systems have been developed for Australia, China, and Japan. Worldwide, all of these systems tend to use similar pictograms or labelling to convey laundry care instructions. To date, the pictograms are not encoded in Unicode standards, because these symbols are not in the public domain across various countries, and are copyrighted. Pictograms General The care label describes the allowable treatment of the garment without damaging the textile. Whether this treatment is necessary or sufficient is not stated. A milder than specified treatment is always acceptable. The symbols are protected and their use is required to comply with the license conditions; incorrect labelling is prohibited. A bar below each symbol calls for a gentler treatment than usual and a double bar for a very gentle treatment. Washing A stylized washtub is shown, and the number in the tub means the maximum wash temperature (degrees Celsius). A bar under the tub signifies a gentler treatment in the washing machine. A double bar signifies very gentle handling. A hand in the tub signifies that only (gentle) hand washing (not above 40 °C) is allowed. A cross through the washtub means that the textile may not be washed under normal household conditions.
In the North American standard, dots are used to indicate the proper temperature range. In the European standard, the level of wash agitation recommended is indicated by bars below the wash tub symbol. The absence of a bar indicates maximum agitation (cotton wash), a single bar indicates medium agitation (synthetics cycle), and a double bar indicates very minimal agitation (silk/wool cycle). The bar symbols also indicate the level of spin recommended, with more bars indicating a lower preferred spin speed. Bleaching An empty triangle (formerly lettered Cl) allows bleaching with chlorine or non-chlorine bleach. Two oblique lines in the triangle prohibit chlorine bleaching. A crossed triangle prohibits any bleaching. Drying A circle in the square symbolizes a clothes dryer. One dot requires drying at reduced temperature, two dots at normal temperature. The crossed symbol means that the clothing does not tolerate machine drying. In the US and Japan, there are other icons for natural/line drying. Tumble drying Natural drying Ironing The iron with up to three dots allows for ironing. The dots are assigned maximum temperatures: one dot for 110 °C, two dots for 150 °C, and three dots for 200 °C. An iron with a cross prohibits ironing. Professional cleaning A circle identifies the possibilities of professional cleaning. A bar under the symbol means clean gently, and two bars mean very gentle cleaning. Dry cleaning The letters P and F in a circle stand for the different solvents used in professional dry cleaning. Wet cleaning The letter W in a circle is for professional wet cleaning. References External links GINETEX: The International Association for Textile Care Labelling-Care Symbols ISO 3758:2012 — Textiles — Care labelling code using symbols The revised Canadian standard Swedish care symbols United States care symbols US, Japanese, and UK woven washing label symbols Consumer symbols Symbol Pictograms
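The dot and bar conventions described above amount to a small lookup table. The following Python sketch is a hypothetical encoding for illustration (it is not part of any standard's machine-readable format) showing how the European wash-symbol semantics could be represented:

```python
# Hypothetical encoding of European-style wash symbols (ISO 3758 flavor).
# The temperature is the number printed in the washtub; the bar count
# below the tub selects the agitation level described in the text.
AGITATION = {
    0: "maximum agitation (cotton wash)",
    1: "medium agitation (synthetics cycle)",
    2: "very minimal agitation (silk/wool cycle)",
}

def describe_wash_symbol(max_temp_c: int, bars: int, hand_wash: bool = False,
                         crossed_out: bool = False) -> str:
    """Render a human-readable instruction from washtub symbol features."""
    if crossed_out:
        return "Do not wash under normal household conditions."
    if hand_wash:
        return "Hand wash only, not above 40 °C."
    return f"Machine wash at most {max_temp_c} °C with {AGITATION[bars]}."

print(describe_wash_symbol(40, 1))                    # 40 °C synthetics cycle
print(describe_wash_symbol(0, 0, crossed_out=True))   # crossed-out washtub
```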
Laundry symbol
Mathematics
1,080
30,329,560
https://en.wikipedia.org/wiki/Berry%20connection%20and%20curvature
In physics, Berry connection and Berry curvature are related concepts which can be viewed, respectively, as a local gauge potential and gauge field associated with the Berry phase or geometric phase. The concept was first introduced by S. Pancharatnam as the geometric phase and later elaborated on and popularized by Michael Berry in a paper published in 1984, which emphasized how geometric phases provide a powerful unifying concept in several branches of classical and quantum physics. Berry phase and cyclic adiabatic evolution In quantum mechanics, the Berry phase arises in a cyclic adiabatic evolution. The quantum adiabatic theorem applies to a system whose Hamiltonian $H(\mathbf R)$ depends on a (vector) parameter $\mathbf R$ that varies with time $t$. If the $n$th eigenvalue $E_n(\mathbf R)$ remains non-degenerate everywhere along the path and the variation with time $t$ is sufficiently slow, then a system initially in the normalized eigenstate $|n(\mathbf R(0))\rangle$ will remain in an instantaneous eigenstate $|n(\mathbf R(t))\rangle$ of the Hamiltonian $H(\mathbf R(t))$, up to a phase, throughout the process. Regarding the phase, the state at time $t$ can be written as $$|\psi_n(t)\rangle = e^{i\gamma_n(t)}\, e^{-\frac{i}{\hbar}\int_0^t dt'\,E_n(\mathbf R(t'))}\, |n(\mathbf R(t))\rangle,$$ where the second exponential term is the "dynamic phase factor." The first exponential term is the geometric term, with $\gamma_n$ being the Berry phase. From the requirement that the state satisfies the time-dependent Schrödinger equation, it can be shown that $$\gamma_n(t) = i\int_0^t dt'\,\langle n(\mathbf R(t'))|\tfrac{d}{dt'}|n(\mathbf R(t'))\rangle = i\int_{\mathbf R(0)}^{\mathbf R(t)} d\mathbf R\,\langle n(\mathbf R)|\nabla_{\mathbf R}|n(\mathbf R)\rangle,$$ indicating that the Berry phase only depends on the path in the parameter space, not on the rate at which the path is traversed. In the case of a cyclic evolution around a closed path $\mathcal C$ such that $\mathbf R(T)=\mathbf R(0)$, the closed-path Berry phase is $$\gamma_n = i\oint_{\mathcal C} d\mathbf R\,\langle n(\mathbf R)|\nabla_{\mathbf R}|n(\mathbf R)\rangle.$$ An example of a physical system where an electron moves along a closed path is cyclotron motion (details are given in the page on the Berry phase). The Berry phase must be considered to obtain the correct quantization condition. Gauge transformation A gauge transformation can be performed to a new set of states $|\tilde n(\mathbf R)\rangle = e^{i\beta(\mathbf R)}|n(\mathbf R)\rangle$ that differ from the original ones only by an $\mathbf R$-dependent phase factor. This modifies the open-path Berry phase to be $\gamma_n(t) + \beta(\mathbf R(0)) - \beta(\mathbf R(t))$. For a closed path, continuity requires that $\beta(\mathbf R(T)) - \beta(\mathbf R(0)) = 2\pi m$ ($m$ an integer), and it follows that $\gamma_n$ is invariant, modulo $2\pi$, under an arbitrary gauge transformation. Berry connection The closed-path Berry phase defined above can be expressed as $$\gamma_n = \oint_{\mathcal C} d\mathbf R\cdot\mathcal A_n(\mathbf R),$$ where $$\mathcal A_n(\mathbf R) = i\langle n(\mathbf R)|\nabla_{\mathbf R}|n(\mathbf R)\rangle$$ is a vector-valued function known as the Berry connection (or Berry potential). The Berry connection is gauge-dependent, transforming as $\mathcal A_n(\mathbf R) \to \mathcal A_n(\mathbf R) - \nabla_{\mathbf R}\beta(\mathbf R)$. Hence the local Berry connection can never be physically observable. However, its integral along a closed path, the Berry phase $\gamma_n$, is gauge-invariant up to an integer multiple of $2\pi$. Thus, $e^{i\gamma_n}$ is absolutely gauge-invariant, and may be related to physical observables. Berry curvature The Berry curvature $\Omega^n_{\mu\nu}$ is an anti-symmetric second-rank tensor derived from the Berry connection via $$\Omega^n_{\mu\nu}(\mathbf R) = \partial_\mu \mathcal A^n_\nu(\mathbf R) - \partial_\nu \mathcal A^n_\mu(\mathbf R).$$ In a three-dimensional parameter space the Berry curvature can be written in the pseudovector form $$\boldsymbol\Omega_n(\mathbf R) = \nabla_{\mathbf R}\times\mathcal A_n(\mathbf R).$$ The tensor and pseudovector forms of the Berry curvature are related to each other through the Levi-Civita antisymmetric tensor as $\Omega^n_{\mu\nu} = \epsilon_{\mu\nu\xi}\,(\Omega_n)_\xi$. In contrast to the Berry connection, which is physical only after integrating around a closed path, the Berry curvature is a gauge-invariant local manifestation of the geometric properties of the wavefunctions in the parameter space, and has proven to be an essential physical ingredient for understanding a variety of electronic properties.
For a closed path $\mathcal C$ that forms the boundary of a surface $\mathcal S$, the closed-path Berry phase can be rewritten using Stokes' theorem as $$\gamma_n = \int_{\mathcal S} d\mathbf S\cdot\boldsymbol\Omega_n(\mathbf R).$$ If the surface is a closed manifold, the boundary term vanishes, but the indeterminacy of the boundary term modulo $2\pi$ manifests itself in the Chern theorem, which states that the integral of the Berry curvature over a closed manifold is quantized in units of $2\pi$. This number is the so-called Chern number, and is essential for understanding various quantization effects. Finally, by using $\langle n|\partial_\mu H|n'\rangle = (E_{n'}-E_n)\langle n|\partial_\mu n'\rangle$ for $n'\neq n$, the Berry curvature can also be written as a summation over all the other eigenstates in the form $$\Omega^n_{\mu\nu} = i\sum_{n'\neq n}\frac{\langle n|\partial_\mu H|n'\rangle\langle n'|\partial_\nu H|n\rangle - (\mu\leftrightarrow\nu)}{(E_{n'}-E_n)^2}.$$ Note that the curvature of the $n$th energy level is contributed by all the other energy levels. That is, the Berry curvature can be viewed as the result of the residual interaction of those projected-out eigenstates. This form gives the local conservation law for the Berry curvature, $$\sum_n \Omega^n_{\mu\nu} = 0,$$ if we sum over all possible energy levels for each value of $\mathbf R$. This equation also offers the advantage that no differentiation on the eigenstates is involved, and thus it can be computed under any gauge choice. Example: Spinor in a magnetic field The Hamiltonian of a spin-1/2 particle in a magnetic field can be written as $$H = \mu\,\boldsymbol\sigma\cdot\mathbf B,$$ where $\boldsymbol\sigma$ denotes the Pauli matrices, $\mu$ is the magnetic moment, and $\mathbf B$ is the magnetic field. In three dimensions, the eigenstates have energies $\pm\mu B$ and, parametrizing the field direction by spherical angles $(\theta,\phi)$, their eigenvectors can be written (in one common convention) as $$|u_-\rangle = \begin{pmatrix}\sin\frac{\theta}{2}\,e^{-i\phi}\\ -\cos\frac{\theta}{2}\end{pmatrix}, \qquad |u_+\rangle = \begin{pmatrix}\cos\frac{\theta}{2}\,e^{-i\phi}\\ \sin\frac{\theta}{2}\end{pmatrix}.$$ Now consider the $|u_-\rangle$ state. Its Berry connection can be computed as $\mathcal A_\theta = 0$, $\mathcal A_\phi = \sin^2\frac{\theta}{2}$, and the Berry curvature is $$\Omega_{\theta\phi} = \tfrac{1}{2}\sin\theta.$$ If we choose a new gauge by multiplying $|u_-\rangle$ by $e^{i\phi}$ (or any other phase $e^{i\beta(\theta,\phi)}$), the Berry connections become $\mathcal A_\theta = 0$ and $\mathcal A_\phi = -\cos^2\frac{\theta}{2}$, while the Berry curvature remains the same. This is consistent with the conclusion that the Berry connection is gauge-dependent while the Berry curvature is not. The Berry curvature per solid angle is given by $\bar\Omega_{\theta\phi} = \Omega_{\theta\phi}/\sin\theta = 1/2$. In this case, the Berry phase corresponding to any given path on the unit sphere in magnetic-field space is just half the solid angle subtended by the path. The integral of the Berry curvature over the whole sphere is therefore exactly $2\pi$, so that the Chern number is unity, consistent with the Chern theorem. Applications in crystals The Berry phase plays an important role in modern investigations of electronic properties in crystalline solids and in the theory of the quantum Hall effect. The periodicity of the crystalline potential allows the application of the Bloch theorem, which states that the Hamiltonian eigenstates take the form $$\psi_{n\mathbf k}(\mathbf r) = e^{i\mathbf k\cdot\mathbf r}\,u_{n\mathbf k}(\mathbf r),$$ where $n$ is a band index, $\mathbf k$ is a wavevector in the reciprocal-space (Brillouin zone), and $u_{n\mathbf k}(\mathbf r)$ is a periodic function of $\mathbf r$. Due to translational symmetry, the momentum operator $\hat{\mathbf p}$ can be replaced by $\hat{\mathbf p}+\hbar\mathbf k$ by the Peierls substitution, and the wavevector $\mathbf k$ plays the role of the parameter $\mathbf R$. Thus, one can define Berry phases, connections, and curvatures in the reciprocal space. For example, in an $N$-band system, the Berry connection of the $n$th band in reciprocal space is $$\mathcal A_n(\mathbf k) = i\langle u_{n\mathbf k}|\nabla_{\mathbf k}|u_{n\mathbf k}\rangle.$$ In the system, the Berry curvature of the $n$th band is given by all the other $N-1$ bands for each value of $\mathbf k$. In a 2D crystal, the Berry curvature only has the component out of the plane and behaves as a pseudoscalar. This is because a 2D crystal possesses translational symmetry only in the plane, translational symmetry being broken along the $z$ direction. Because the Bloch theorem also implies that the reciprocal space itself is closed, with the Brillouin zone having the topology of a 3-torus in three dimensions, the requirements of integrating over a closed loop or manifold can easily be satisfied.
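The spinor example can be checked numerically. The following short script is an illustration (the discretized Wilson-loop formula is standard, while the specific latitude and grid size are arbitrary choices): it computes the Berry phase of the lower eigenstate around a circle of constant latitude and compares it with half the enclosed solid angle.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_state(theta, phi):
    """Lower eigenvector of sigma . n(theta, phi)."""
    h = (np.sin(theta) * np.cos(phi) * sx +
         np.sin(theta) * np.sin(phi) * sy +
         np.cos(theta) * sz)
    vals, vecs = np.linalg.eigh(h)
    return vecs[:, 0]            # eigenvalue -1 comes first from eigh

theta = 0.7                      # fixed latitude on the unit sphere
phis = np.linspace(0.0, 2 * np.pi, 400)
states = [lower_state(theta, p) for p in phis]
states.append(states[0])         # close the loop explicitly

# Discrete Berry phase: minus the phase of the Wilson-loop overlap product.
overlaps = [np.vdot(states[k], states[k + 1]) for k in range(len(phis))]
berry = -np.angle(np.prod(overlaps))

solid_angle = 2 * np.pi * (1 - np.cos(theta))
print(berry % (2 * np.pi), (0.5 * solid_angle) % (2 * np.pi))  # should agree
```

The product of nearest-neighbour overlaps is gauge-invariant once the loop is closed, which is why the arbitrary phases returned by the eigensolver at each grid point do not affect the result.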
In this way, such properties as the electric polarization, orbital magnetization, anomalous Hall conductivity, and orbital magnetoelectric coupling can be expressed in terms of Berry phases, connections, and curvatures. References External links The quantum phase, five years after, by M. Berry. Berry Phases and Curvatures in Electronic Structure Theory - a talk by D. Vanderbilt. Berry-ology, Orbital Magnetoelectric Effects, and Topological Insulators - a talk by D. Vanderbilt. Classical mechanics Quantum phases
Berry connection and curvature
Physics,Chemistry,Materials_science
1,482
27,700,016
https://en.wikipedia.org/wiki/Social%20Age
Social Age encompasses both societal and technological changes succeeding the Information Age. It is divergent from the Information Age as it gives more prominence to social factors when adopting and/or extending technology and information. It further broadens the definition of Attention Age because the Social Age focuses on many forms of societal interactions including online relationships, collaboration and sharing. See also Digital citizen Information Age Information society Netizen Network society Social media Social networking service Social innovation Social technology Sociotechnology Technological innovation Technology and society References Hyperreality Information Age Social information processing Sociology of technology Technology in society
Social Age
Technology
114
21,972,808
https://en.wikipedia.org/wiki/25%20Vulpeculae
25 Vulpeculae is a single star in the northern constellation of Vulpecula, located roughly 1,170 light years away from the Sun. It is visible to the naked eye as a faint, blue-white hued star with an apparent visual magnitude of 5.50. This object is moving closer to the Earth with a heliocentric radial velocity of −11 km/s. This is a Be star with a stellar classification of B6 IVe, matching the spectrum of an aging subgiant with a circumstellar disk of ionized gas. Cowley (1972) had it rated as a more evolved giant star with a class of B8 IIIn, where the 'n' notation indicates "nebulous" lines due to rapid rotation. It has a high rate of spin, showing a projected rotational velocity of 160 km/s. The star has 7 times the mass of the Sun and 11 times the Sun's radius. It is radiating 1,345 times the luminosity of the Sun from its photosphere at an effective temperature of 13,170 K. References External links B-type subgiants B-type giants Vulpecula Durchmusterung objects Vulpeculae, 25 193911 100435 7789 Be stars
25 Vulpeculae
Astronomy
267
72,221,383
https://en.wikipedia.org/wiki/Lead%28II%29%20perchlorate
Lead(II) perchlorate is a chemical compound with the formula Pb(ClO4)2·xH2O, where x is 0, 1, or 3. It is an extremely hygroscopic white solid that is very soluble in water. Preparation Lead perchlorate trihydrate is produced by the reaction of lead(II) oxide, lead carbonate, or lead nitrate with perchloric acid: Pb(NO3)2 + 2 HClO4 → Pb(ClO4)2 + 2 HNO3 The excess perchloric acid is removed by first heating the solution to 125 °C, then heating it under moist air at 160 °C, which converts the acid to its dihydrate. The anhydrous salt, Pb(ClO4)2, is produced by heating the trihydrate to 120 °C under water-free conditions over phosphorus pentoxide. The trihydrate melts at 83 °C. The anhydrous salt decomposes into lead(II) chloride and a mixture of lead oxides at 250 °C. The monohydrate is produced by partially dehydrating the trihydrate, and this salt undergoes hydrolysis at 103 °C. A solution of anhydrous lead(II) perchlorate in methanol is explosive. Applications Lead perchlorate has a high nucleon density, making it a viable detector medium for hypothetical proton decay. References Lead(II) compounds Perchlorates
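As a quick arithmetic check on the balanced preparation equation, the sketch below (illustrative only; molar masses rounded from standard atomic weights) computes the yield of lead(II) perchlorate per gram of lead nitrate:

```python
# Molar masses in g/mol, rounded from standard atomic weights.
M = {"Pb": 207.2, "N": 14.007, "O": 15.999, "Cl": 35.45}

m_pb_no3_2 = M["Pb"] + 2 * (M["N"] + 3 * M["O"])     # Pb(NO3)2 ≈ 331.2 g/mol
m_pb_clo4_2 = M["Pb"] + 2 * (M["Cl"] + 4 * M["O"])   # Pb(ClO4)2 ≈ 406.1 g/mol

# Pb(NO3)2 + 2 HClO4 -> Pb(ClO4)2 + 2 HNO3  (1:1 in lead)
ratio = m_pb_clo4_2 / m_pb_no3_2
print(f"{m_pb_clo4_2:.1f} g of product per {m_pb_no3_2:.1f} g of nitrate "
      f"({ratio:.3f} g per g)")
```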
Lead(II) perchlorate
Chemistry
319
61,834,594
https://en.wikipedia.org/wiki/Taiji%20program
The Taiji program is a proposed Chinese satellite-based gravitational-wave observatory. It is scheduled for launch in 2033 to study ripples in spacetime caused by gravitational waves. The program consists of a triangle of three spacecraft orbiting the Sun linked by laser interferometers. There are two alternative plans for Taiji. One is to take a 20 percent share of the European Space Agency's LISA project; the other is for China to launch its own satellites by 2033, independently of the ESA project. Like LISA, the Taiji spacecraft would be 3 million kilometers apart, making them sensitive to a similar range of frequencies, although Taiji is proposed to perform better in part of that range. Program Goal The Taiji Program is similar to the eLISA Program proposed by ESA; the predecessor of the eLISA Program was the LISA Program, a cooperation between ESA and NASA. Similar to the configuration of the three networked satellites in the LISA Program, the three satellites in the Taiji Program also rotate around their common centroid, which in turn revolves in orbit around the Sun. The difference is that the LISA system and the Taiji system sit at different orbital phases: with the Earth as the reference, the LISA constellation trails the Earth by 20 degrees, while the Taiji constellation leads the Earth by 20 degrees. In addition, the Taiji Program is one of several proposed space-based gravitational-wave observatories, the others being the Tianqin Program, the European Space Agency's Laser Interferometer Space Antenna (LISA), and the DECi-hertz Interferometer Gravitational wave Observatory (DECIGO) led by the Japan Aerospace Exploration Agency (JAXA). In December 2021, a study pointed out that a gravitational-wave detection network combining Taiji and LISA could measure the Hubble constant with an accuracy greater than 95.5% within ten years. Moreover, the LISA-Taiji network has the potential to detect more than twenty stellar-mass binary black holes (sBBHs), with a relative error in luminosity-distance measurement in the range 0.05–0.2 and a sky-localization error in the range 1–100 deg². The main scientific goal of the Taiji Program is to measure the mass, spin and distribution of black holes through the precise measurement of gravitational waves; to explore how intermediate-mass seed black holes develop, whether dark matter can produce seed black holes, and how massive and supermassive black holes grow from seed black holes; to look for traces of the genesis, development, and death of the earliest generation of stars; to place direct restrictions on the intensity of primordial gravitational waves; and to detect the polarization of gravitational waves, providing direct observational data for revealing the nature of gravity. Gravitational waves can provide a clear picture of the universe because they couple only weakly to matter, and the information they carry can be used in conjunction with information from telescopes and particle detectors.
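Since LISA trails the Earth by 20 degrees and Taiji leads it by 20 degrees, the two constellations are separated by 40 degrees of heliocentric phase. A short sketch (illustrative only, treating both constellation centroids as points on a circular 1 au orbit) gives the resulting inter-constellation baseline:

```python
import math

AU_KM = 1.495978707e8        # astronomical unit in km
phase_deg = 20 + 20          # LISA 20 deg behind Earth, Taiji 20 deg ahead

# Chord length between two points on a circle of radius 1 au
# separated by 40 degrees of orbital phase.
baseline_km = 2 * AU_KM * math.sin(math.radians(phase_deg) / 2)
print(f"LISA-Taiji centroid separation ≈ {baseline_km:.3e} km")
# ≈ 1.0e8 km, i.e. roughly two thirds of an astronomical unit
```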
The precise measurement of gravitational waves allows for an in-depth and thorough investigation of the universe's large-scale structure, the birth and development of galaxies, and other topics. It can help develop and establish a quantum theory of gravity beyond Einstein's general theory of relativity, reveal the nature of gravity, and further the understanding of dark matter, dark energy, the formation of black holes, and cosmic inflation, since gravitational waves can transmit information that electromagnetic waves cannot. At the same time, the forward-looking technology developed along the way is of great significance for improving the technical level of space science and deep-space exploration, and will also play a positive role in applications such as inertial navigation, Earth science, global environmental change, and high-precision satellite platform construction. Program history In 2008, the Chinese Academy of Sciences began demonstrating the feasibility of space gravitational-wave detection, proposing the "Taiji Program" for China's space gravitational-wave detection and establishing a "single satellite, dual satellite, three satellites" three-step development strategy and road map. In August 2018, the "Taiji Program" single-satellite plan was incorporated into the Space Science (Phase II) Strategic Pilot Science and Technology Program, launching the first of the three steps, the Taiji-1 satellite. On August 31, 2019, the Taiji-1 satellite was launched from the Jiuquan Satellite Launch Center. By July 2021, Taiji-1 had completed all of its preset experimental tasks and achieved the highest-precision space laser interferometry in China. It achieved the first full performance verification of two types of micro-thruster technology, micronewton-level radio-frequency ion thrusters and Hall-effect thrusters, and took the lead in achieving breakthroughs in drag-free control technology in China. The optical metrology system and the drag-free control system, both of which are also part of the Taiji-2 satellites, were validated by the Taiji-1 mission, and the mission's success gave sufficient backing for the development of Taiji-2. However, because Taiji-1 is a single satellite, there was no way to test the inter-satellite laser link. The relevant unit expects to launch two satellites (Taiji-2) in 2023–2025 to clear obstacles for Taiji-3, and to launch an equilateral-triangle gravitational-wave detection constellation composed of three satellites around 2030. Program responsibility unit The scientific application unit and user of Taiji-1 in this program is the University of Chinese Academy of Sciences (UCAS). The Taiji Program and the ground support system are managed by China's National Space Science Center, while the satellite system is developed by the Chinese Academy of Sciences' Institute of Microsatellite Innovation. The cooperative units involved in payload development include the Institute of Precision Measurement Science and Technology Innovation, Chinese Academy of Sciences; the Institute of Mechanics, Chinese Academy of Sciences; the Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences; the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences; the Singapore University of Science and Technology; and Nanyang Technological University, Singapore.
In addition, the Chinese Academy of Sciences established a gravitational-wave research laboratory in Hangzhou in April 2021. References Astronomical observatories Gravitational instruments Interferometric gravitational-wave instruments Space telescopes Proposed space probes
Taiji program
Astronomy,Technology,Engineering
1,322
5,042,360
https://en.wikipedia.org/wiki/Constructive%20set%20theory
Axiomatic constructive set theory is an approach to mathematical constructivism following the program of axiomatic set theory. The same first-order language with "" and "" of classical set theory is usually used, so this is not to be confused with a constructive types approach. On the other hand, some constructive theories are indeed motivated by their interpretability in type theories. In addition to rejecting the principle of excluded middle (), constructive set theories often require some logical quantifiers in their axioms to be set bounded. The latter is motivated by results tied to impredicativity. Introduction Constructive outlook Preliminary on the use of intuitionistic logic The logic of the set theories discussed here is constructive in that it rejects the principle of excluded middle , i.e. that the disjunction automatically holds for all propositions . This is also often called the law of excluded middle () in contexts where it is assumed. Constructively, as a rule, to prove the excluded middle for a proposition , i.e. to prove the particular disjunction , either or needs to be explicitly proven. When either such proof is established, one says the proposition is decidable, and this then logically implies the disjunction holds. Similarly and more commonly, a predicate for in a domain is said to be decidable when the more intricate statement is provable. Non-constructive axioms may enable proofs that formally claim decidability of such (and/or ) in the sense that they prove excluded middle for (resp. the statement using the quantifier above) without demonstrating the truth of either side of the disjunction(s). This is often the case in classical logic. In contrast, axiomatic theories deemed constructive tend to not permit many classical proofs of statements involving properties that are provenly computationally undecidable. The law of noncontradiction is a special case of the propositional form of modus ponens. Using the former with any negated statement , one valid De Morgan's law thus implies already in the more conservative minimal logic. In words, intuitionistic logic still posits: It is impossible to rule out a proposition and rule out its negation both at once, and thus the rejection of any instantiated excluded middle statement for an individual proposition is inconsistent. Here the double-negation captures that the disjunction statement now provenly can never be ruled out or rejected, even in cases where the disjunction may not be provable (for example, by demonstrating one of the disjuncts, thus deciding ) from the assumed axioms. More generally, constructive mathematical theories tend to prove classically equivalent reformulations of classical theorems. For example, in constructive analysis, one cannot prove the intermediate value theorem in its textbook formulation, but one can prove theorems with algorithmic content that, as soon as double negation elimination and its consequences are assumed legal, are at once classically equivalent to the classical statement. The difference is that the constructive proofs are harder to find. The intuitionistic logic underlying the set theories discussed here, unlike minimal logic, still permits double negation elimination for individual propositions for which excluded middle holds. In turn the theorem formulations regarding finite objects tends to not differ from their classical counterparts. 
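In symbols, the principles just described are commonly displayed as follows; this is a standard rendering in textbook notation, not a quotation of any particular axiomatization:

```latex
% Law of noncontradiction, provable already in minimal logic:
\neg(\phi \wedge \neg\phi)
% Hence, via the valid De Morgan direction, no instance of
% excluded middle can be rejected:
\neg\neg(\phi \vee \neg\phi)
% while the scheme itself is not assumed (not derivable):
\phi \vee \neg\phi
```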
Given a model of all natural numbers, the equivalent for predicates, namely Markov's principle, does not automatically hold, but may be considered as an additional principle. In an inhabited domain and using explosion, the disjunction implies the existence claim , which in turn implies . Classically, these implications are always reversible. If one of the former is classically valid, it can be worth trying to establish it in the latter form. For the special case where is rejected, one deals with a counter-example existence claim , which is generally constructively stronger than a rejection claim : Exemplifying a such that is contradictory of course means it is not the case that holds for all possible . But one may also demonstrate that holding for all would logically lead to a contradiction without the aid of a specific counter-example, and even while not being able to construct one. In the latter case, constructively, one does not stipulate an existence claim. Imposed restrictions on a set theory Compared to the classical counterpart, one is generally less likely to prove the existence of relations that cannot be realized. A restriction to the constructive reading of existence a priori leads to stricter requirements regarding which characterizations of a set involving unbounded collections constitute a (mathematical, and so always meaning total) function. This is often because the predicate in a case-wise would-be definition may not be decidable. Adopting the standard definition of set equality via extensionality, the full Axiom of Choice is such a non-constructive principle that implies for the formulas permitted in one's adopted Separation schema, by Diaconescu's theorem. Similar results hold for the Axiom of Regularity existence claim, as shown below. The latter has a classically equivalent inductive substitute. So a genuinely intuitionistic development of set theory requires the rewording of some standard axioms to classically equivalent ones. Apart from demands for computability and reservations regarding impredicativity, the technical question of which non-logical axioms effectively extend the underlying logic of a theory is also a research subject in its own right. Metalogic With computably undecidable propositions already arising in Robinson arithmetic, even just Predicative separation lets one define elusive subsets easily. In stark contrast to the classical framework, constructive set theories may be closed under the rule that any property that is decidable for all sets is already equivalent to one of the two trivial ones, or . Also the real line may be taken to be indecomposable in this sense. Undecidability of disjunctions also affects the claims about total orders such as that of all ordinal numbers, expressed by the provability and rejection of the clauses in the order defining disjunction . This determines whether the relation is trichotomous. A weakened theory of ordinals in turn affects the proof theoretic strength defined in ordinal analysis. In exchange, constructive set theories can exhibit attractive disjunction and existence properties, as is familiar from the study of constructive arithmetic theories. These are features of a fixed theory which metalogically relate judgements of propositions provable in the theory. Particularly well studied are such features that can be expressed in Heyting arithmetic, with quantifiers over numbers, and which can often be realized by numbers, as formalized in proof theory.
In particular, those are the numerical existence property and the closely related disjunctive property, as well as being closed under Church's rule, witnessing any given function to be computable. A set theory does not only express theorems about numbers, and so one may consider a more general so-called strong existence property that is harder to come by, as will be discussed. A theory has this property if the following can be established: For any property , if the theory proves that a set exist that has that property, i.e. if the theory claims the existence statement, then there is also a property that uniquely describes such a set instance. More formally, for any predicate there is a predicate so that The role analogous to that of realized numbers in arithmetic is played here by defined sets proven to exist by (or according to) the theory. Questions concerning the axiomatic set theory's strength and its relation to term construction are subtle. While many theories discussed tend have all the various numerical properties, the existence property can easily be spoiled, as will be discussed. Weaker forms of existence properties have been formulated. Some theories with a classical reading of existence can in fact also be constrained so as to exhibit the strong existence property. In Zermelo–Fraenkel set theory with sets all taken to be ordinal-definable, a theory denoted , no sets without such definability exist. The property is also enforced via the constructible universe postulate in . For contrast, consider the theory given by plus the full axiom of choice existence postulate: Recall that this collection of axioms proves the well-ordering theorem, implying well-orderings exists for any set. In particular, this means that relations formally exist that establish the well-ordering of (i.e. the theory claims the existence of a least element for all subsets of with respect to those relations). This is despite the fact that definability of such an ordering is known to be independent of . The latter implies that for no particular formula in the language of the theory does the theory prove that the corresponding set is a well-ordering relation of the reals. So formally proves the existence of a subset with the property of being a well-ordering relation, but at the same time no particular set for which the property could be validated can possibly be defined. Anti-classical principles As mentioned above, a constructive theory may exhibit the numerical existence property, , for some number and where denotes the corresponding numeral in the formal theory. Here one must carefully distinguish between provable implications between two propositions, , and a theory's properties of the form . When adopting a metalogically established schema of the latter type as an inference rule of one's proof calculus and nothing new can be proven, one says the theory is closed under that rule. One may instead consider adjoining the rule corresponding to the meta-theoretical property as an implication (in the sense of "") to , as an axiom schema or in quantified form. A situation commonly studied is that of a fixed exhibiting the meta-theoretical property of the following type: For an instance from some collection of formulas of a particular form, here captured via and , one established the existence of a number so that . Here one may then postulate , where the bound is a number variable in language of the theory. 
For example, Church's rule is an admissible rule in first-order Heyting arithmetic and, furthermore, the corresponding Church's thesis principle may consistently be adopted as an axiom. The new theory with the principle added is anti-classical, in that it may not be consistent anymore to also adopt . Similarly, adjoining the excluded middle principle to some theory , the theory thus obtained may prove new, strictly classical statements, and this may spoil some of the meta-theoretical properties that were previously established for . In such a fashion, may not be adopted in , also known as Peano arithmetic . The focus in this subsection shall be on set theories with quantification over a fully formal notion of an infinite sequences space, i.e. function space, as it will be introduced further below. A translation of Church's rule into the language of the theory itself may here read Kleene's T predicate together with the result extraction expresses that any input number being mapped to the number is, through , witnessed to be a computable mapping. Here now denotes a set theory model of the standard natural numbers and is an index with respect to a fixed program enumeration. Stronger variants have been used, which extend this principle to functions defined on domains of low complexity. The principle rejects decidability for the predicate defined as , expressing that is the index of a computable function halting on its own index. Weaker, double negated forms of the principle may be considered too, which do not require the existence of a recursive implementation for every , but which still make principles inconsistent that claim the existence of functions which provenly have no recursive realization. Some forms of a Church's thesis as principle are even consistent with the classical, weak so called second-order arithmetic theory , a subsystem of the two-sorted first-order theory . The collection of computable functions is classically subcountable, which classically is the same as being countable. But classical set theories will generally claim that holds also other functions than the computable ones. For example there is a proof in that total functions (in the set theory sense) do exist that cannot be captured by a Turing machine. Taking the computable world seriously as ontology, a prime example of an anti-classical conception related the Markovian school is the permitted subcountability of various uncountable collections. When adopting the subcountability of the collection of all unending sequences of natural numbers () as an axiom in a constructive theory, the "smallness" (in classical terms) of this collection, in some set theoretical realizations, is then already captured by the theory itself. A constructive theory may also adopt neither classical nor anti-classical axioms and so stay agnostic towards either possibility. Constructive principles already prove for any . And so for any given element of , the corresponding excluded middle statement for the proposition cannot be negated. Indeed, for any given , by noncontradiction it is impossible to rule out and rule out its negation both at once, and the relevant De Morgan's rule applies as above. But a theory may in some instances also permit the rejection claim . Adopting this does not necessitate providing a particular witnessing the failure of excluded middle for the particular proposition , i.e. witnessing the inconsistent . Predicates on an infinite domain correspond to decision problems. 
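As a reference point for the passage above, the arithmetic form of the Church's thesis schema is standardly written with Kleene's T predicate and the result-extraction function U as follows; this is the usual textbook rendering, assuming the customary notation:

```latex
% Church's thesis CT_0, as a schema over formulas \phi:
\forall n\, \exists m\; \phi(n,m)
  \;\rightarrow\;
\exists e\, \forall n\, \exists c\, \big( T(e,n,c) \wedge \phi(n, U(c)) \big)
```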
Motivated by provenly computably undecidable problems, one may reject the possibility of decidability of a predicate without also making any existence claim in . As another example, such a situation is enforced in Brouwerian intuitionistic analysis, in a case where the quantifier ranges over infinitely many unending binary sequences and states that a sequence is everywhere zero. Concerning this property, of being conclusively identified as the sequence which is forever constant, adopting Brouwer's continuity principle strictly rules out that this could be proven decidable for all the sequences. So in a constructive context with a so-called non-classical logic as used here, one may consistently adopt axioms which are both in contradiction to quantified forms of excluded middle, but also non-constructive in the computable sense or as gauged by meta-logical existence properties discussed previously. In that way, a constructive set theory can also provide the framework to study non-classical theories, say rings modeling smooth infinitesimal analysis. History and overview Historically, the subject of constructive set theory (often also "") begun with John Myhill's work on the theories also called and . In 1973, he had proposed the former as a first-order set theory based on intuitionistic logic, taking the most common foundation and throwing out the Axiom of choice as well as the principle of the excluded middle, initially leaving everything else as is. However, different forms of some of the axioms which are equivalent in the classical setting are inequivalent in the constructive setting, and some forms imply , as will be demonstrated. In those cases, the intuitionistically weaker formulations were consequently adopted. The far more conservative system is also a first-order theory, but of several sorts and bounded quantification, aiming to provide a formal foundation for Errett Bishop's program of constructive mathematics. The main discussion presents a sequence of theories in the same language as , leading up to Peter Aczel's well studied , and beyond. Many modern results trace back to Rathjen and his students. is also characterized by the two features present also in Myhill's theory: On the one hand, it is using the Predicative Separation instead of the full, unbounded Separation schema. Boundedness can be handled as a syntactic property or, alternatively, the theories can be conservatively extended with a higher boundedness predicate and its axioms. Secondly, the impredicative Powerset axiom is discarded, generally in favor of related but weaker axioms. The strong form is very casually used in classical general topology. Adding to a theory even weaker than recovers , as detailed below. The system, which has come to be known as Intuitionistic Zermelo–Fraenkel set theory (), is a strong set theory without . It is similar to , but less conservative or predicative. The theory denoted is the constructive version of , the classical Kripke–Platek set theory without a form of Powerset and where even the Axiom of Collection is bounded. Models Many theories studied in constructive set theory are mere restrictions of Zermelo–Fraenkel set theory () with respect to their axiom as well as their underlying logic. Such theories can then also be interpreted in any model of . Peano arithmetic is bi-interpretable with the theory given by minus Infinity and without infinite sets, plus the existence of all transitive closures. 
(The latter is also implied after promoting Regularity to the Set Induction schema, which is discussed below.) Likewise, constructive arithmetic can also be taken as an apology for most axioms adopted in : Heyting arithmetic is bi-interpretable with a weak constructive set theory, as also described in the article on . One may arithmetically characterize a membership relation "" and with it prove - instead of the existence of a set of natural numbers - that all sets in its theory are in bijection with a (finite) von Neumann natural, a principle denoted . This context further validates Extensionality, Pairing, Union, Binary Intersection (which is related to the Axiom schema of predicative separation) and the Set Induction schema. Taken as axioms, the aforementioned principles constitute a set theory that is already identical with the theory given by minus the existence of but plus as axiom. All those axioms are discussed in detail below. Relatedly, also proves that the hereditarily finite sets fulfill all the previous axioms. This is a result which persists when passing on to and minus Infinity. As far as constructive realizations go there is a relevant realizability theory. Relatedly, Aczel's theory constructive Zermelo-Fraenkel has been interpreted in a Martin-Löf type theories, as sketched in the section on . In this way, theorems provable in this and weaker set theories are candidates for a computer realization. Presheaf models for constructive set theories have also been introduced. These are analogous to presheaf models for intuitionistic set theory developed by Dana Scott in the 1980s. Realizability models of within the effective topos have been identified, which, say, at once validate full Separation, relativized dependent choice , independence of premise for sets, but also the subcountability of all sets, Markov's principle and Church's thesis in the formulation for all predicates. Notation In an axiomatic set theory, sets are the entities exhibiting properties. But there is then a more intricate relation between the set concept and logic. For example, the property of being a natural number smaller than 100 may be reformulated as being a member of the set of numbers with that property. The set theory axioms govern set existence and thus govern which predicates can be materialized as entity in itself, in this sense. Specification is also directly governed by the axioms, as discussed below. For a practical consideration, consider for example the property of being a sequence of coin flip outcomes that overall show more heads than tails. This property may be used to separate out a corresponding subset of any set of finite sequences of coin flips. Relatedly, the measure theoretic formalization of a probabilistic event is explicitly based around sets and provides many more examples. This section introduces the object language and auxiliary notions used to formalize this materialization. Language The propositional connective symbols used to form syntactic formulas are standard. The axioms of set theory give a means to prove equality "" of sets and that symbol may, by abuse of notation, be used for classes. A set in which the equality predicate is decidable is also called discrete. Negation "" of equality is sometimes called the denial of equality, and is commonly written "". However, in a context with apartness relations, for example when dealing with sequences, the latter symbol is also sometimes used for something different. 
The common treatment, as also adopted here, formally only extends the underlying logic by one primitive binary predicate of set theory, "". As with equality, negation of elementhood "" is often written "". Variables Below, the Greek denotes a proposition or predicate variable in axiom schemas and or is used for particular such predicates. The word "predicate" is sometimes used interchangeably with "formulas" as well, even in the unary case. Quantifiers only ever range over sets and those are denoted by lower case letters. As is common, one may use argument brackets to express predicates, for the sake of highlighting particular free variables in their syntactic expression, as in "". Unique existence here means . Classes As is also common, one makes use of set-builder notation for classes, which, in most contexts, are not part of the object language but used for concise discussion. In particular, one may introduce notation declarations of the corresponding class via "", for the purpose of expressing any as . Logically equivalent predicates can be used to introduce the same class. One also writes as shorthand for . For example, one may consider and this is also denoted . One abbreviates by and by . The syntactic notion of bounded quantification in this sense can play a role in the formulation of axiom schemas, as seen in the discussion of axioms below. Express the subclass claim , i.e. , by . For a predicate , trivially . And so it follows that . The notion of subset-bounded quantifiers, as in , has been used in set theoretical investigation as well, but will not be further highlighted here. If there provenly exists a set inside a class, meaning , then one calls it inhabited. One may also use quantification in to express this as . The class is then provenly not the empty set, introduced below. While classically equivalent, constructively non-empty is a weaker notion with two negations and ought to be called not uninhabited. Unfortunately, the word for the more useful notion of 'inhabited' is rarely used in classical mathematics. The two ways to express that classes are disjoint capture many of the intuitionistically valid negation rules: . Using the above notation, this is a purely logical equivalence and in this article the proposition will furthermore be expressible as . A subclass is called detachable from if the relativized membership predicate is decidable, i.e. if holds. It is also called decidable if the superclass is clear from the context - often this is the set of natural numbers. Extensional equivalence Denote by the statement expressing that two classes have exactly the same elements, i.e. , or equivalently . This is not to be conflated with the concept of equinumerosity also used below. With standing for , the convenient notational relation between and , axioms of the form postulate that the class of all sets for which holds actually forms a set. Less formally, this may be expressed as . Likewise, the proposition conveys " when is among the theory's sets." For the case where is the trivially false predicate, the proposition is equivalent to the negation of the former existence claim, expressing the non-existence of as a set. Further extensions of class comprehension notation as above are in common use in set theory, giving meaning to statements such as "", and so on. Syntactically more general, a set may also be characterized using another 2-ary predicate through , where the right hand side may depend on the actual variable , and possibly even on membership in itself.
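For orientation, the conventions this section describes are usually typeset along the following lines; this is a reconstruction in common textbook notation, not a verbatim restoration of the original symbols:

```latex
% Class-builder notation and membership:
x \in \{y \mid \phi(y)\} \;:\Leftrightarrow\; \phi(x)
% Bounded quantifiers as abbreviations:
(\forall x \in z)\,\phi(x) \;:\Leftrightarrow\; \forall x\,(x \in z \rightarrow \phi(x))
(\exists x \in z)\,\phi(x) \;:\Leftrightarrow\; \exists x\,(x \in z \wedge \phi(x))
% Subclass claim and inhabitedness:
A \subseteq B \;:\Leftrightarrow\; \forall x\,(x \in A \rightarrow x \in B)
A \text{ inhabited} \;:\Leftrightarrow\; \exists x\,(x \in A)
```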
Subtheories of ZF Here a series of familiar axioms is presented, or the relevant slight reformulations thereof. It is emphasized how the absence of in the logic affects what is provable and it is highlighted which non-classical axioms are, in turn, consistent. Equality Using the notation introduced above, the following axiom gives a means to prove equality "" of two sets, so that through substitution, any predicate about translates to one of . By the logical properties of equality, the converse direction of the postulated implication holds automatically. In a constructive interpretation, the elements of a subclass of may come equipped with more information than those of , in the sense that being able to judge is being able to judge . And (unless the whole disjunction follows from axioms) in the Brouwer–Heyting–Kolmogorov interpretation, this means to have proven or having rejected it. As may not be detachable from , i.e. as may be not decidable for all elements in , the two classes and must a priori be distinguished. Consider a predicate that provenly holds for all elements of a set , so that , and assume that the class on the right hand side is established to be a set. Note that, even if this set on the right informally also ties to proof-relevant information about the validity of for all the elements, the Extensionality axiom postulates that, in our set theory, the set on the right hand side is judged equal to the one on the left hand side. This above analysis also shows that a statement of the form , which in informal class notation may be expressed as , is then equivalently expressed as . This means that establishing such -theorems (e.g. the ones provable from full mathematical induction) enables substituting the subclass of on the left hand side of the equality for just , in any formula. Note that adopting "" as a symbol in a predicate logic theory makes equality of two terms a quantifier-free expression. Alternative approaches While often adopted, this axiom has been criticized in constructive thought, as it effectively collapses differently defined properties, or at least the sets viewed as the extension of these properties, a Fregian notion. Modern type theories may instead aim at defining the demanded equivalence "" in terms of functions, see e.g. type equivalence. The related concept of function extensionality is often not adopted in type theory. Other frameworks for constructive mathematics might instead demand a particular rule for equality or apartness come for the elements of each and every set discussed. But also in an approach to sets emphasizing apartness may the above definition in terms of subsets be used to characterize a notion of equality "" of those subsets. Relatedly, a loose notion of complementation of two subsets and is given when any two members and are provably apart from each other. The collection of complementing pairs is algebraically well behaved. Merging sets Define class notation for the pairing of a few given elements via disjunctions. E.g. is the quantifier-free statement , and likewise says , and so on. Two other basic existence postulates given some other sets are as follows. Firstly, Given the definitions above, expands to , so this is making use of equality and a disjunction. The axiom says that for any two sets and , there is at least one set , which hold at least those two sets. With bounded Separation below, also the class exists as a set. Denote by the standard ordered pair model , so that e.g. 
denotes another bounded formula in the formal language of the theory. And then, using existential quantification and a conjunction, saying that for any set , there is at least one set , which holds all the members , of 's members . The minimal such set is the union. The two axioms are commonly formulated stronger, in terms of "" instead of just "", although this is technically redundant in the context of : As the Separation axiom below is formulated with "", for statements the equivalence can be derived, given the theory allows for separation using . In cases where is an existential statement, like here in the union axiom, there is also another formulation using a universal quantifier. Also using bounded Separation, the two axioms just stated together imply the existence of a binary union of two classes and , when they have been established to be sets, denoted by or . For a fixed set , to validate membership in the union of two given sets and , one needs to validate the part of the axiom, which can be done by validating the disjunction of the predicates defining the sets and , for . In terms of the associated sets, it is done by validating the disjunction . The union and other set forming notations are also used for classes. For instance, the proposition is written . Let now . Given , the decidability of membership in , i.e. the potentially independent statement , can also be expressed as . But, as for any excluded middle statement, the double-negation of the latter holds: That union isn't not inhabited by . This goes to show that partitioning is also a more involved notion, constructively. Set existence The property that is false for any set corresponds to the empty class, which is denoted by or zero, . That the empty class is a set readily follows from other existence axioms, such as the Axiom of Infinity below. But if, e.g., one is explicitly interested in excluding infinite sets in one's study, one may at this point adopt the Introduction of the symbol (as abbreviating notation for expressions in involving characterizing properties) is justified as uniqueness for this set can be proven. As is false for any , the axiom then reads . Write for , which equals , i.e. . Likewise, write for , which equals , i.e. . A simple and provenly false proposition then is, for example, , corresponding to in the standard arithmetic model. Again, here symbols such as are treated as convenient notation and any proposition really translates to an expression using only "" and logical symbols, including quantifiers. Accompanied by a metamathematical analysis that the capabilities of the new theories are equivalent in an effective manner, formal extensions by symbols such as may also be considered. More generally, for a set , define the successor set as . The interplay of the successor operation with the membership relation has a recursive clause, in the sense that . By reflexivity of equality, , and in particular is always inhabited. BCST The following makes use of axiom schemas, i.e. axioms for some collection of predicates. Some of the stated axiom schemas shall allow for any collection of set parameters as well (meaning any particular named variables ). That is, instantiations of the schema are permitted in which the predicate (some particular ) also depends on a number of further set variables and the statement of the axiom is understood with corresponding extra outer universal closures (as in ). 
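The axioms discussed so far are standardly displayed as follows, here in their common biconditional form (the text notes that weaker one-directional variants suffice in the presence of Separation); this is a reconstruction in common notation:

```latex
% Extensionality:
\forall x\, \forall y\, \big( \forall z\,(z \in x \leftrightarrow z \in y)
  \rightarrow x = y \big)
% Pairing:
\forall x\, \forall y\, \exists p\, \forall z\,
  \big( z \in p \leftrightarrow (z = x \vee z = y) \big)
% Union:
\forall x\, \exists u\, \forall z\,
  \big( z \in u \leftrightarrow \exists w\,(w \in x \wedge z \in w) \big)
```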
Separation Basic constructive set theory consists of several axioms also part of standard set theory, except the so-called "full" Separation axiom is weakened. Beyond the four axioms above, it postulates Predicative Separation as well as the Replacement schema. This axiom amounts to postulating the existence of a set obtained by the intersection of any set and any predicatively described class . For any proven to be a set, when the predicate is taken as , one obtains the binary intersection of sets and writes . Intersection corresponds to conjunction in an analogous way to how union corresponds to disjunction. When the predicate is taken as the negation , one obtains the difference principle, granting existence of any set . Note that sets like or are always empty. So, as noted, from Separation and the existence of at least one set (e.g. Infinity below) will follow the existence of the empty set (also denoted ). Within this conservative context of , the Predicative Separation schema is actually equivalent to Empty Set plus the existence of the binary intersection for any two sets. The latter variant of axiomatization does not make use of a formula schema. Predicative Separation is a schema that takes into account syntactic aspects of set-defining predicates, up to provable equivalence. The permitted formulas are denoted by , the lowest level in the set theoretical Lévy hierarchy. General predicates in set theory are never syntactically restricted in such a way and so, in practice, generic subclasses of sets are still part of the mathematical language. As the scope of subclasses that are provably sets is sensitive to what sets already exist, this scope is expanded when further set existence postulates are added. For a proposition , a recurring trope in the constructive analysis of set theory is to view the predicate as the subclass of the second ordinal . If it is provable that holds, or , or , then is inhabited, or empty (uninhabited), or non-empty (not uninhabited), respectively. Clearly, is equivalent to both the proposition , and also . Likewise, is equivalent to and, equivalently, also . So, here, being detachable from exactly means . In the model of the naturals, if is a number, also expresses that is smaller than . The union that is part of the successor operation definition above may be used to express the excluded middle statement as . In words, is decidable if and only if the successor of is larger than the smallest ordinal . The proposition is decided either way through establishing how is smaller: By already being smaller than , or by being 's direct predecessor. Yet another way to express excluded middle for is as the existence of a least number member of the inhabited class . If one's separation axiom allows for separation with , then is a subset, which may be called the truth value associated with . Two truth values can be proven equal, as sets, by proving an equivalence. In terms of this terminology, the collection of proof values can a priori be understood to be rich. Unsurprisingly, decidable propositions have one of a binary set of truth values. The excluded middle disjunction for that is then also implied by the global statement . No universal set When using the informal class terminology, any set is also considered a class. At the same time, there do arise so-called proper classes that can have no extension as a set. When in a theory there is a proof of , then must be proper.
(When taking up the perspective of on sets, a theory which has full Separation, proper classes are generally thought of as those that are "too big" to be a set. More technically, they are subclasses of the cumulative hierarchy that extend beyond any ordinal bound.) By a remark in the section on merging sets, a set cannot consistently be ruled out to be a member of a class of the form . A constructive proof that it is in that class contains information. Now if is a set, then the class is provably proper. The following demonstrates this in the special case when is empty, i.e. when the right side is the universal class. Being negative results, they read as in the classical theory. The following holds for any relation . It gives a purely logical condition such that two terms and cannot be -related to one another. Most important here is the rejection of the final disjunct, . The expression does not involve unbounded quantification and is thus allowed in Separation. Russell's construction in turn shows that . So for any set , Predicative Separation alone implies that there exists a set which is not a member of . In particular, no universal set can exist in this theory. In a theory further adopting the axiom of regularity, like , provenly is false for any set . There, this then means that the subset is equal to itself, and that the class is the empty set. For any and , the special case in the formula above gives . This already implies that no set equals that subclass of the universal class, i.e. that subclass is a proper one as well. But even in without Regularity it is consistent for there to be a proper class of singletons which each contain exactly themselves. As an aside, in a theory with stratification like Intuitionistic New Foundations, the syntactic expression may be disallowed in Separation. In turn, the above proof of negation of the existence of a universal set cannot be performed in that theory. Predicativity The axiom schema of Predicative Separation is also called -Separation or Bounded Separation, as in Separation for set-bounded quantifiers only. (Warning note: The Lévy hierarchy nomenclature is in analogy to in the arithmetical hierarchy, albeit comparison can be subtle: The arithmetic classification is sometimes expressed not syntactically but in terms of subclasses of the naturals. Also, the bottom level of the arithmetical hierarchy has several common definitions, some not allowing the use of some total functions. A similar distinction is not relevant on the level or higher. Finally note that a classification of a formula may be expressed up to equivalence in the theory.) The schema is also the way in which Mac Lane weakens a system close to Zermelo set theory , for mathematical foundations related to topos theory. It is also used in the study of absoluteness, and is there part of the formulation of Kripke-Platek set theory. The restriction in the axiom also gatekeeps impredicative definitions: Existence should at best not be claimed for objects that are not explicitly describable, or whose definition involves themselves or reference to a proper class, such as when a property to be checked involves a universal quantifier. So in a constructive theory without Axiom of power set, when denotes some 2-ary predicate, one should not generally expect a subclass of to be a set, in case it is defined, for example, as in , or via similar definitions involving any quantification over the sets .
Note that if this subclass of $z$ is provenly a set, then the subset itself is also in the unbounded scope of the set variable $t$. In other words, as the subclass property is fulfilled, this exact set, defined using the expression $\forall t.\ \psi(x, t)$, would play a role in its own characterization.

While predicative Separation leads to fewer given class definitions being sets, it must be emphasized that many class definitions that are classically equivalent are not so when restricting oneself to the weaker logic. Due to the potential undecidability of general predicates, the notions of subset and subclass are automatically more elaborate in constructive set theories than in classical ones. So in this way one has obtained a broader theory. This remains true if full Separation is adopted, such as in the theory $\mathsf{IZF}$, which however spoils the existence property as well as the standard type theoretical interpretations, and in this way spoils a bottom-up view of constructive sets. As an aside, as subtyping is not a necessary feature of constructive type theory, constructive set theory can be said to differ quite a bit from that framework.

Replacement

Next consider the axiom schema of Replacement. It grants existence, as sets, of the ranges of function-like predicates, obtained via their domains. In this formulation, the predicate is not restricted akin to the Separation schema, but the axiom already involves an existential quantifier in the antecedent. Of course, weaker schemas could be considered as well.

Via Replacement, the existence of any pair $\{x, y\}$ also follows from that of any other particular pair, such as $\{0, 1\}$. But as the binary union used to build such a particular pair itself made use of the Pairing axiom, this approach then necessitates postulating the existence of that pair in place of Pairing. In a theory with the impredicative Powerset axiom, its existence can also be demonstrated using Separation.

With the Replacement schema, the theory outlined thus far proves that the equivalence classes or indexed sums are sets. In particular, the Cartesian product, holding all pairs of elements of two sets, is a set. In turn, for any fixed number $n$ (a number in the metatheory), the corresponding $n$-fold product expression, say $x \times (x \times x)$, can be constructed as a set. The axiomatic requirements for sets recursively defined in the language are discussed further below. A set is discrete, i.e. equality of elements inside a set is decidable, if the corresponding equality relation as a subset of the product is decidable.

Replacement is relevant for function comprehension and can be seen as a form of comprehension more generally. Only when assuming $\mathsf{PEM}$ does Replacement already imply full Separation. In $\mathsf{ZF}$, Replacement is mostly important to prove the existence of sets of high rank, namely via instances of the axiom schema where the predicate relates relatively small sets to bigger ones.

Constructive set theories commonly have the Axiom schema of Replacement, sometimes restricted to bounded formulas. However, when other axioms are dropped, this schema is actually often strengthened - not beyond $\mathsf{ZF}$, but instead merely to gain back some provability strength. Such stronger axioms exist that do not spoil the strong existence properties of a theory, as discussed further below.

If $f$ is provenly a function on a set $a$ and it is equipped with a codomain (all discussed in detail below), then the image of $f$ is a subset of that codomain. In other approaches to the set concept, the notion of subsets is defined in terms of "operations", in this fashion.

Hereditarily finite sets

Pendants of the elements of the class of hereditarily finite sets can be implemented in any common programming language.
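To illustrate that last sentence concretely, the following is a minimal Python sketch (all names ad hoc) that models hereditarily finite sets as nested frozensets; it implements pairing, union and the adjunction blend discussed next.

```python
# A minimal sketch: hereditarily finite sets as nested frozensets.
# Frozensets are hashable, so they can be nested without urelements.
EMPTY = frozenset()

def pair(x, y):
    """Pairing: the set {x, y}."""
    return frozenset({x, y})

def union(z):
    """Union: flatten one level, i.e. {w | w in some member of z}."""
    return frozenset(w for member in z for w in member)

def adjoin(z, x):
    """Adjunction: z with x added, obtained as the union of {z, {x}}."""
    return union(pair(z, frozenset({x})))

def successor(n):
    """Von Neumann successor: Sn is n with n itself adjoined."""
    return adjoin(n, n)

ZERO, ONE = EMPTY, successor(EMPTY)
TWO = successor(ONE)                 # the set {0, 1}
assert ZERO in TWO and ONE in TWO    # membership doubles as order here
```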
The axioms discussed above abstract from common operations on the set data type: Pairing and Union are related to nesting and flattening, or, taken together, concatenation. Replacement is related to comprehension, and Separation is then related to the often simpler filtering. Replacement together with Set Induction (introduced below) suffices to axiomatize the hereditarily finite universe constructively, and that theory is also studied without Infinity. A sort of blend between pairing and union, an axiom more readily related to the successor, is the Axiom of adjunction. Such principles are relevant for the standard modeling of individual von Neumann ordinals. Axiom formulations also exist that pair Union and Replacement in one. While postulating Replacement is not a necessity in the design of a weak constructive set theory that is bi-interpretable with Heyting arithmetic $\mathsf{HA}$, some form of induction is. For comparison, consider the very weak classical theory called General set theory, which interprets the class of natural numbers and their arithmetic via just Extensionality, Adjunction and full Separation.

The discussion now proceeds with axioms granting existence of objects which, in different but related form, are also found in dependent type theories, namely products and the collection of natural numbers as a completed set. Infinite sets are particularly handy for reasoning about operations applied to sequences defined on unbounded index domains, say the formal differentiation of a generating function or the addition of two Cauchy sequences.

ECST

For some fixed predicate $Q$ and a set $a$, the statement

$Q(a) \,\land\, \forall x.\ \big(Q(x) \to a \subseteq x\big)$

expresses that $a$ is the smallest (in the sense of "$\subseteq$") among all sets $x$ for which $Q(x)$ holds true, in that it is always a subset of any such $x$. The aim of the axiom of infinity is to eventually obtain the unique smallest inductive set.

In the context of common set theory axioms, one statement of infinitude is to state that a class is inhabited and also includes a chain of membership (or alternatively a chain of supersets). That is,

$\exists x.\ (x \in A) \,\land\, \forall x.\ \big(x \in A \to \exists y.\ (y \in A \land x \in y)\big)$

More concretely, denote by $\mathrm{Ind}_A$ the inductive property,

$\{\} \in A \,\land\, \forall x.\ \big(x \in A \to Sx \in A\big)$

In terms of a predicate $Q$ underlying the class, so that $A = \{x \mid Q(x)\}$, the latter translates to $Q(\{\}) \land \forall x.\ (Q(x) \to Q(Sx))$.

Write $\bigcap \mathcal{B}$ for the general intersection $\{x \mid \forall y.\ (y \in \mathcal{B} \to x \in y)\}$. (A variant of this definition may be considered which requires $\mathcal{B}$ to be inhabited, but we only use this notion for the following auxiliary definition.) One commonly defines a class $\omega := \bigcap \{y \mid \mathrm{Ind}_y\}$, the intersection of all inductive sets. (Variants of this treatment may work in terms of a formula that depends on a set parameter $w$, so that $\omega \subseteq w$.) The class $\omega$ exactly holds all $n$ fulfilling the unbounded property $\forall y.\ (\mathrm{Ind}_y \to n \in y)$. The intention is that if inductive sets exist at all, then the class $\omega$ shares each common natural number with them, and then, for any inductive predicate $Q$ whose class is a set, the proposition $n \in \omega$, by definition of "$\bigcap$", implies that $Q$ holds for each of these naturals. While bounded separation does not suffice to prove $\omega$ to be the desired set, the language here forms the basis for the following axiom, granting natural number induction for predicates that constitute a set.

The elementary constructive set theory $\mathsf{ECST}$ has the axioms of $\mathsf{BCST}$ as well as the postulate of Strong Infinity:

$\exists w.\ \Big(\mathrm{Ind}_w \,\land\, \forall y.\ \big(\mathrm{Ind}_y \to w \subseteq y\big)\Big)$

Going on, one takes the symbol $\omega$ to denote the now unique smallest inductive set, an unbounded von Neumann ordinal. It contains the empty set and, for each set $n$ in $\omega$, another set $Sn$ in $\omega$ that contains one element more.

Symbols called zero and successor are in the signature of the theory of Peano. In $\mathsf{ECST}$, the above-defined successor of any number also being in the class $\omega$ follows directly from the characterization of the naturals by our von Neumann model.
Since the successor of such a set contains itself, one also finds that no successor equals zero. So two of the Peano axioms regarding the symbol zero and the one regarding closedness of $S$ come easily. Fourthly, in $\mathsf{ECST}$, where $\omega$ is a set, $S$ can be proven to be an injective operation.

For some predicate of sets $P$, the statement $\forall c.\ \big(c \subseteq \omega \to P(c)\big)$ claims that $P$ holds for all subsets of the set of naturals, and the axiom now proves that such sets do exist. Such quantification is also possible in second-order arithmetic.

The pairwise order "$<$" on the naturals is captured by their membership relation "$\in$". The theory proves the order as well as the equality relation on this set to be decidable. Not only is no number smaller than $0$, but induction implies that, among subsets of $\omega$, it is exactly the empty set which has no least member. The contrapositive of this proves the double-negated least number existence for all non-empty subsets of $\omega$. Another valid principle, also classically equivalent to it, is least number existence for all inhabited detachable subsets. That said, the bare least number existence claim for the inhabited subset $b \cup \{1\}$ discussed in the Separation section is equivalent to excluded middle for $P$, and a constructive theory will therefore not prove $\omega$ to be well-ordered.

Weaker formulations of infinity

Should it need motivation, the handiness of postulating an unbounded set of numbers in relation to other inductive properties becomes clear in the discussion of arithmetic in set theory further below. But as is familiar from classical set theory, weak forms of Infinity can also be formulated. For example, one may just postulate the existence of some inductive set - such an existence postulate suffices when full Separation may then be used to carve out the inductive subset of natural numbers, the shared subset of all inductive classes. Alternatively, more specific mere existence postulates may be adopted. Either way, the inductive set $w$ then fulfills the following predecessor existence property in the sense of the von Neumann model:

$\forall n.\ \Big(n \in w \,\to\, \big(n = \{\} \,\lor\, \exists p.\ (p \in w \land n = Sp)\big)\Big)$

Without making use of the previously defined successor notation, the extensional equality of $n$ to a successor is captured by $\exists p.\ \big(p \in n \land \forall x.\ (x \in n \leftrightarrow (x \in p \lor x = p))\big)$. This expresses that all elements are either equal to $\{\}$ or themselves hold a predecessor set $p$ which shares all other members with $n$. Observe that through the expression "$x \in n$" on the right hand side, the property characterizing $n$ by its members here syntactically again contains the symbol $n$ itself. Due to the bottom-up nature of the natural numbers, this is tame here. Assuming $\Delta_0$-set induction on top of $\mathsf{ECST}$, no two different sets have this property. Also note that there are longer formulations of this property, avoiding "$n$" in favor of unbounded quantifiers.

Number bounds

Adopting an Axiom of Infinity, the set-bounded quantification legal in predicates used in $\Delta_0$-Separation then explicitly permits numerically unbounded quantifiers - the two meanings of "bounded" should not be confused. With $\omega$ at hand, call a class of numbers $B \subseteq \omega$ bounded if the following existence statement holds:

$\exists m.\ \Big(m \in \omega \,\land\, \forall n.\ \big(n \in B \to n < m\big)\Big)$

This is a statement of finiteness, also equivalently formulated via $B \subseteq m$. Similarly, to reflect more closely the discussion of functions below, consider the above condition in the form $\exists m.\ \forall n.\ (n \in B \to n \le m)$, with the quantifiers ranging over $\omega$. For decidable properties, these are $\Sigma_2^0$-statements in arithmetic, but with the Axiom of Infinity, the two quantifiers here are set-bound.

For a class $C \subseteq \omega$, the logically positive unboundedness statement

$\forall m.\ \Big(m \in \omega \,\to\, \exists n.\ \big(n \in C \land m < n\big)\Big)$

is now also one of infinitude. It is $\Pi_2^0$ in the decidable arithmetic case.
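Read constructively, the positive unboundedness statement calls for a witnessing map that beats any proposed bound. As a toy illustration under that reading (the function name is hypothetical; the decidable class of even numbers serves as $C$):

```python
def beyond_evens(m):
    """Witness of unboundedness for the evens: given any bound m,
    return an even number strictly larger than m."""
    return m + 2 if m % 2 == 0 else m + 1

# The constructive content of the forall-exists statement is exactly
# that such a witnessing map exists:
assert all(beyond_evens(m) > m and beyond_evens(m) % 2 == 0
           for m in range(100))
```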
To validate infinitude of a set, this property even works if the set holds other elements besides infinitely many members of $C$.

Moderate induction in ECST

In the following, an initial segment of the natural numbers, i.e. $\{m \in \omega \mid m < n\}$ for any $n \in \omega$ and including the empty set, is denoted by $\{0, 1, \dots, n-1\}$. This set equals $n$, and so at this point "$n-1$" is mere notation for the predecessor of $n$ (i.e. not involving a subtraction function).

It is instructive to recall the way in which a theory with set comprehension and extensionality ends up encoding predicate logic. Like any class in set theory, a set can be read as corresponding to predicates on sets. For example, an integer is even if it is a member of the set of even integers, and a natural number has a successor if it is a member of the set of natural numbers that have a successor. For a less primitive example, fix some set $y$ and let $\phi(n)$ denote the existential statement that the function space on the finite ordinal $n$ into $y$ exists. This predicate is used below, and here the existential quantifier is not merely one over natural numbers, nor is it bounded by any other set. Now a proposition like the finite exponentiation principle $\forall n.\ \big(n \in \omega \to \phi(n)\big)$ and, less formally, the equality $\omega = \{n \in \omega \mid \phi(n)\}$ are just two ways of formulating the same desired statement, namely an $\omega$-indexed conjunction of existential propositions where $n$ ranges over the set of all naturals. Via extensional identification, the second form expresses the claim using notation for subclass comprehension, and the bracketed object on the right hand side may not even constitute a set. If that subclass is not provably a set, it may not actually be used in many set theory principles in proofs, and establishing the universal closure as a theorem may not be possible. The set theory can thus be strengthened by more set existence axioms, to be used with predicative bounded Separation, but also by just postulating such stronger universal-closure statements directly.

The second universally quantified conjunct in the strong axiom of Infinity expresses mathematical induction for all $x$ in the universe of discourse, i.e. for sets. This is because the consequent of this clause, $\omega \subseteq x$, states that all $n \in \omega$ fulfill the associated predicate. Being able to use predicative Separation to define subsets of $\omega$, the theory proves induction for all predicates involving only set-bounded quantifiers. This role of set-bounded quantifiers also means that more set existence axioms impact the strength of this induction principle, further motivating the function space and collection axioms that will be a focus of the rest of the article. Notably, $\mathsf{ECST}$ already validates induction with quantifiers over the naturals, and hence induction as in the first-order arithmetic theory.

The so-called axiom of full mathematical induction for any predicate (i.e. class) expressed through set theory language is far stronger than the bounded induction principle valid in $\mathsf{ECST}$. The former induction principle could be directly adopted, closer mirroring second-order arithmetic. In set theory it also follows from full (i.e. unbounded) Separation, which says that all predicates on $\omega$ are sets. Mathematical induction is also superseded by the (full) Set induction axiom.

Warning note: In naming induction statements, one must take care not to conflate terminology with arithmetic theories. The first-order induction schema of natural number arithmetic theory claims induction for all predicates definable in the language of first-order arithmetic, namely predicates of just numbers. So to interpret the axiom schema of $\mathsf{HA}$, one interprets these arithmetical formulas.
In that context, the bounded quantification specifically means quantification over a finite range of numbers. One may also speak about the induction in the first-order but two-sorted theory of so-called second-order arithmetic $\mathsf{Z}_2$, in a form explicitly expressed for subsets of the naturals. That class of subsets can be taken to correspond to a richer collection of formulas than the first-order arithmetically definable ones. In the program of reverse mathematics, all mathematical objects discussed are encoded as naturals or subsets of naturals. Subsystems of $\mathsf{Z}_2$ with very low complexity comprehension studied in that framework have a language that does not merely express arithmetical sets, while all sets of naturals that particular such theories prove to exist are just computable sets. Theorems therein can be a relevant reference point for weak set theories with a set of naturals, predicative Separation and only some further restricted form of induction. Constructive reverse mathematics exists as a field but is less developed than its classical counterpart. $\mathsf{Z}_2$ shall moreover not be confused with the second-order formulation of Peano arithmetic. Typical set theories like the one discussed here are also first-order, but those theories are not arithmetics, and so their formulas may also quantify over the subsets of the naturals. When discussing the strength of axioms concerning numbers, it is also important to keep in mind that the arithmetical and the set theoretical framework do not share a common signature. Likewise, care must always be taken with insights about totality of functions. In computability theory, the μ operator enables all partial general recursive functions (or programs, in the sense that they are Turing computable), including ones that are, e.g., not primitive recursive yet provably total, such as the Ackermann function. The definition of the operator involves predicates over the naturals, and so the theoretical analysis of functions and their totality depends on the formal framework and proof calculus at hand.

Functions

General note on programs and functions

Naturally, the meaning of existence claims is a topic of interest in constructivism, be it for a theory of sets or any other framework. Let $\phi$ express a property such that a mathematical framework validates what amounts to the statement

$\forall x.\ \big(x \in a \to \exists! y.\ \phi(x, y)\big)$

A constructive proof calculus may validate such a judgement in terms of programs on represented domains and some object representing a concrete assignment, providing a particular choice of value (a unique one) for each input from $a$. Expressed through the rewriting $f(x) := y$, this function object may be understood as witnessing the proposition. Consider for example the notion of proof given through realizability theory, or function terms in a type theory with a notion of quantifiers. The latter captures proofs of logical propositions through programs via the Curry–Howard correspondence.

Depending on the context, the word "function" may be used in association with a particular model of computation, and this is a priori narrower than what is discussed in the present set theory context. One notion of program is formalized by partial recursive "functions" in computability theory. But beware that here the word "function" is used in a way that also comprises partial functions, and not just "total functions".
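A minimal Python sketch of the μ operator (unbounded search) shows why computability theory speaks of partial "functions": nothing in the definition guarantees termination, so totality is a separate claim to be proven.

```python
def mu(predicate):
    """Minimization: the least n with predicate(n); diverges if none exists.

    The unbounded while-loop is exactly why 'function' here a priori
    means *partial* function.
    """
    n = 0
    while not predicate(n):
        n += 1
    return n

# Total on this input: the least n with n * n >= 10 is 4.
assert mu(lambda n: n * n >= 10) == 4
# mu(lambda n: False) would run forever: a program, but no total function.
```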
The scare quotes are used for clarity here, as in a set theory context there is technically no need to speak of total functions: this requirement is part of the definition of a set theoretical function, and partial function spaces can be modeled via unions. At the same time, when combined with a formal arithmetic, partial function programs provide one particularly sharp notion of totality for functions. By Kleene's normal form theorem, each partial recursive function on the naturals computes, for the values where it terminates, the same as $U\big(\mu w.\ T(e, n, w)\big)$, for some partial function program index $e$, and any index will constitute some partial function. A program with index $e$ may be said to be total in a given theory whenever that theory proves $\forall n.\ \exists w.\ T(e, n, w)$, where $T$ amounts to a primitive recursive program and $w$ is related to the execution of $e$ on input $n$. Kreisel proved that the class of partial recursive functions proven total by $\mathsf{HA}$ is not enriched when $\mathsf{PEM}$ is added. As a predicate in $e$, this totality constitutes an undecidable subset of indices, highlighting that the totality of programs is itself an elusive property. As a third warning, note that this notion is really about programs: several indices will in fact constitute the same function, in the extensional sense.

A theory in first-order logic, such as the axiomatic set theories discussed here, comes with a joint notion of total and functional for a binary predicate $\phi$, namely $\forall x.\ \exists! y.\ \phi(x, y)$. Such theories relate to programs only indirectly. If $S$ denotes the successor operation in a formal language of a theory being studied, then any number, e.g. the number three, may metalogically be related to the standard numeral, e.g. $SSS0$. Similarly, programs in the partial recursive sense may be unrolled to predicates, and weak assumptions suffice so that such a translation respects equality of their return values. Among finitely axiomatizable sub-theories of classical Peano arithmetic, Robinson arithmetic $\mathsf{Q}$ exactly fulfills this. Its existence claims are intended to only concern natural numbers, and instead of using the full mathematical induction schema for arithmetic formulas, the theory's axioms postulate that every number is either zero or has a predecessor number.

Focusing on provably total recursive functions here, it is a meta-theorem that the language of arithmetic expresses them by $\Sigma_1$-predicates encoding their graph, such that $\mathsf{Q}$ represents them, in the sense that it correctly proves or rejects the graph membership claim for any input-output pair of numbers in the meta-theory. Now given a correctly representing $\phi$, the predicate defined by $\psi(x, y) := \phi(x, y) \land \forall z.\ (z < y \to \neg\phi(x, z))$ represents the recursive function just as well, and as this explicitly only validates the smallest return value, the theory also proves functionality for all inputs, in the sense of unique existence of return values. Given a representing predicate, then at the cost of making use of $\mathsf{PEM}$, one can always also systematically (i.e. with such a modified graph predicate) prove the graph to be total functional.

Which predicates are provably functional for various inputs, or even total functional on their domain, generally depends on the adopted axioms of a theory and the proof calculus. For example, the diagonal halting problem cannot have a total recursive index, and it is independent of $\mathsf{HA}$ whether the corresponding graph predicate (a decision problem) is total functional - but $\mathsf{PEM}$ implies that it is. Proof theoretical function hierarchies provide examples of predicates proven total functional in systems going beyond $\mathsf{PA}$.
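A small illustration of that third warning, with hypothetical names: distinct programs (intensional objects) may denote one and the same extensional function.

```python
# Two syntactically different programs for one extensional function,
# the doubling map on the naturals:
double_a = lambda n: 2 * n
double_b = lambda n: n + n

# Extensionally equal on every tested input, yet distinct as programs:
assert all(double_a(n) == double_b(n) for n in range(1000))
assert double_a is not double_b
```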
Which sets proven to exist do constitute a total function, in the sense introduced next, also always depends on the axioms and the proof calculus. Finally, note that the soundness of halting claims is a metalogical property beyond consistency, i.e. a theory may be consistent and from it one may prove that some program will eventually halt, despite this never actually occurring when said program is run. More formally, consistency of a theory does not imply that it is also arithmetically $\Sigma_1$-sound.

Total functional relations

In the set theory language here, speak of a function class $f$ when $f \subseteq a \times b$ and provenly

$\forall x.\ \Big(x \in a \to \exists! y.\ \big(y \in b \land \langle x, y\rangle \in f\big)\Big)$

Notably, this definition involves a quantifier explicitly asking for existence - an aspect which is particularly important in the constructive context. In words: for every $x \in a$, it demands the unique existence of a $y$ so that $\langle x, y\rangle \in f$. In the case that this holds, one may use function application bracket notation and write $f(x) = y$. This notation may be extended to equality of function values. Some notational conveniences involving function application will only work when a set has indeed been established to be a function.

Let ${}^{a}b$ (also written $b^a$) denote the class of sets that fulfill the function property. This is the class of functions from $a$ to $b$ in a pure set theory. Below, the former notation is preferred, for the sake of distinguishing function spaces from ordinal exponentiation. When functions are understood as just function graphs as here, the membership proposition $f \in {}^{a}b$ is also written $f \colon a \to b$. The Boolean-valued functions ${}^{a}2$ are among the classes discussed in the next section.

By construction, any such function respects equality in the sense that $x = x' \to f(x) = f(x')$, for any inputs from $a$. This is worth mentioning since broader concepts of "assignment routines" or "operations" also exist in the mathematical literature, which may not in general respect this. Variants of the functional predicate definition using apartness relations on setoids have been defined as well. A subset of a function is still a function, and the function predicate may also be proven for enlarged chosen codomain sets. As noted, care must be taken with the nomenclature "function", a word which sees use in most mathematical frameworks. When a function set itself is not tied to a particular codomain, then this set of pairs is also a member of a function space with a larger codomain. This does not happen when by the word one denotes the set of pairs paired with a codomain set, i.e. a formalization in terms of such a pairing. This is mostly a matter of bookkeeping, but it affects how other predicates are defined, and questions of size. This choice is also just enforced by some mathematical frameworks. Similar considerations apply to any treatment of partial functions and their domains.

If both the domain and the considered codomain are sets, then the above function predicate only involves bounded quantifiers. Common notions such as injectivity and surjectivity can be expressed in a bounded fashion as well, and thus so can bijectivity. Both of these tie in to notions of size. Importantly, injection existence between any two sets provides a preorder. A power class does not inject into its underlying set, and the latter does not map onto the former. Surjectivity is formally a more complex definition. Note that injectivity shall be defined positively, not by its contrapositive, which is common practice in classical mathematics. The version without negations is sometimes called weakly injective. The existence of value collisions is a strong notion of non-injectivity.
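For concrete finite graphs, the bounded function predicate is directly checkable; the following Python sketch (helper name hypothetical) also separates a failure of functionality from a mere value collision, i.e. non-injectivity.

```python
def is_total_functional(pairs, domain):
    """Check the set theoretical function predicate for a finite graph:
    every x in the domain is paired with exactly one value."""
    return all(len({y for (x, y) in pairs if x == x0}) == 1
               for x0 in domain)

graph = {(0, 'a'), (1, 'b'), (2, 'b')}
# A function, although inputs 1 and 2 collide in value (not injective):
assert is_total_functional(graph, {0, 1, 2})
# Two values for input 0: functionality fails, this is no function.
assert not is_total_functional(graph | {(0, 'c')}, {0, 1, 2})
```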
And regarding surjectivity, similar considerations exist for outlier-production in the codomain.

Whether a subclass (or predicate, for that matter) can be judged to be a function set, or even total functional to begin with, will depend on the strength of the theory, which is to say the axioms one adopts. And notably, a general class could also fulfill the above defining predicate without being a subclass of the product $a \times b$, i.e. the property expresses neither more nor less than functionality with respect to the inputs from $a$. Now if the domain is a set, the function comprehension principle, also called axiom of unique choice or non-choice, says that a function as a set, with some codomain, exists as well. (This principle is valid in a theory like $\mathsf{BCST}$; also compare with the Replacement axiom.) That is, the mapping information exists as a set and it has a pair for each element in the domain. Of course, for any set from some class, one may always associate the unique element of its singleton, which shows that merely a chosen range being a set does not suffice to be granted a function set. It is a metatheorem for the theories discussed here that adding a function symbol for a provenly total class function is a conservative extension, despite this formally changing the scope of bounded Separation.

In summary, in the set theory context the focus is on capturing particular total relations that are functional. To delineate the notion of function in the theories of the previous subsection (a 2-ary logical predicate defined to express a function's graph, together with a proposition that it is total and functional) from the "material" set theoretical notion here, one may explicitly call the latter the graph of a function, an anafunction, or a set function. The axiom schema of Replacement can also be formulated in terms of the ranges of such set functions.

Finitude

One defines three distinct notions involving surjections. For a general set to be (Bishop-)finite shall mean there is a bijective function to a natural. If the existence of such a bijection is proven impossible, the set is called non-finite. Secondly, for a notion weaker than finite, to be finitely indexed (or Kuratowski-finite) shall mean that there is a surjection from a von Neumann natural number onto it. In programming terms, the elements of such a set are accessible in a (terminating) for-loop, and only those, while it may not be decidable whether repetition occurred. Thirdly, call a set subfinite if it is the subset of a finite set, which thus injects into that finite set. Here, a for-loop will access all of the set's members, but also possibly others. For another combined notion, one weaker than finitely indexed, to be subfinitely indexed means to be in the surjective image of a subfinite set, and this just means to be the subset of a finitely indexed set, meaning the subset can also be taken on the image side instead of the domain side. A set exhibiting any of those notions can be understood to be majorized by a finite set, but in the second case the relation between the set's members is not necessarily fully understood. In the third case, validating membership in the set is generally more difficult, and not even membership of its members with respect to some superset of the set is necessarily fully understood. The claim that being finite is equivalent to being subfinite, for all sets, is equivalent to $\mathsf{PEM}$. More finiteness properties for a set can be defined, e.g.
expressing the existence of some large enough natural such that a certain class of functions on the naturals always fails to map to distinct elements of the set. One definition considers some notion of non-injectivity into the set. Other definitions consider functions to a fixed superset of the set with more elements.

Terminology for conditions of finiteness and infinitude may vary. Notably, subfinitely indexed sets (a notion necessarily involving surjections) are sometimes called subfinite (which can be defined without functions). The property of being finitely indexed could also be denoted "finitely countable", to fit the naming logic, but is by some authors also called finitely enumerable (which might be confusing, as this suggests an injection in the other direction). Relatedly, when the existence of a bijection with a finite set has not been established, one may say a set is not finite, but this use of language is then weaker than the claim that the set is non-finite. The same issue applies to countable sets (not proven countable vs. proven non-countable), et cetera. A surjective map may also be called an enumeration.

Infinitude

The set $\omega$ itself is clearly unbounded. In fact, for any surjection from a finite range onto $\omega$, one may construct an element that is different from any element in the function's range. Where needed, this notion of infinitude can also be expressed in terms of an apartness relation on the set in question. Being not Kuratowski-finite implies being non-finite, and indeed the naturals shall not be finite in any sense. Commonly, the word infinite is used for the negative notion of being non-finite. Further, observe that $\omega$, unlike any of its members, can be put in bijection with some of its proper unbounded subsets, e.g. those of the form $\{n \in \omega \mid m < n\}$ for any $m$. This validates the formulations of Dedekind-infinite. So, more generally than the property of infinitude in the previous section on number bounds, one may call a set infinite in the logically positive sense if one can inject $\omega$ into it. A set that is even in bijection with $\omega$ may be called countably infinite. A set is Tarski-infinite if there is a chain of $\subseteq$-increasing subsets of it. Here each set has new elements compared to its predecessor, and the definition does not speak of sets growing in rank. There are indeed plenty of properties characterizing infinitude even in classical $\mathsf{ZF}$, and that theory does not prove all non-finite sets to be infinite in the injection existence sense, albeit this holds there when further assuming countable choice. $\mathsf{ZF}$ without any choice even permits cardinals aside from the aleph numbers, and there can then be sets that negate both of the above properties, i.e. they are both non-Dedekind-infinite and non-finite (also called Dedekind-finite infinite sets).

Call an inhabited set countable if there exists a surjection from $\omega$ onto it, and subcountable if this can be done from some subset of $\omega$. Call a set enumerable if there exists an injection into $\omega$, which renders the set discrete. Notably, all of these are function existence claims. The empty set is not inhabited but is generally deemed countable too, and note that the successor set of any countable set is countable. The set $\omega$ is trivially infinite, countable and enumerable, as witnessed by the identity function. Also here, in strong classical theories many of these notions coincide in general and, as a result, the naming conventions in the literature are inconsistent. An infinite, countable set is equinumerous to $\omega$. There are also various ways to characterize the logically negative notions.
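In symbols, one plausible rendering of these three counting notions for an inhabited set $x$ (a reconstruction, not a verbatim quotation) is:

$$\text{countable:}\ \ \exists f.\ f \colon \omega \twoheadrightarrow x \qquad \text{subcountable:}\ \ \exists I.\ \big(I \subseteq \omega \ \land\ \exists f.\ f \colon I \twoheadrightarrow x\big) \qquad \text{enumerable:}\ \ \exists f.\ f \colon x \hookrightarrow \omega$$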
The notion of uncountability, in the sense of being not countable, is also discussed in conjunction with the exponentiation axiom further below. Another notion of uncountability of a set $x$ is given when one can produce a member in the complement of any of $x$'s countable subsets. More properties of finiteness may be defined as negations of such properties, et cetera.

Characteristic functions

Separation lets us cut out subsets of products $a \times b$, at least when they are described in a bounded fashion. Given any predicate $Q$ on $a$, one is now led to reason about classes such as

$\{\langle x, y\rangle \in a \times 2 \mid (y = 0 \land Q(x)) \lor (y = 1 \land \neg Q(x))\}$

Since $\neg\big(Q(x) \land \neg Q(x)\big)$, each $x$ is paired with at most one $y$. But be aware that in the absence of any non-constructive axioms, $Q(x) \lor \neg Q(x)$ may in general not be decidable, since one requires an explicit proof of either disjunct. Constructively, when existence of an associated value cannot be witnessed for all the $x$, or uniqueness of the terms associated with each $x$ cannot be proven, then one cannot judge the comprehended collection to be total functional. Case in point: the classical derivation of Schröder–Bernstein relies on case analysis - but to constitute a function, particular cases shall actually be specifiable, given any input from the domain. It has been established that Schröder–Bernstein cannot have a proof on the base of $\mathsf{IZF}$ plus constructive principles. So, to the extent that intuitionistic inference does not go beyond what is formalized here, there is no generic construction of a bijection from two injections in opposing directions.

But being compatible with classical set theory, the development in this section still always permits "function on $\omega$" to be interpreted as a completed object that is also not necessarily given as a lawlike sequence. Applications may be found in the common models for claims about probability, e.g. statements involving the notion of "being given" an unending random sequence of coin flips, even if many predictions can also be expressed in terms of spreads.

If indeed one is given a function $f \colon a \to 2$, it is the characteristic function actually deciding membership in some detachable subset $s \subseteq a$, with

$\forall x.\ \Big(x \in a \to \big(x \in s \leftrightarrow f(x) = 1\big)\Big)$

Per convention, the detachable subset $s$, as well as any equivalent of the formulas $f(x) = 1$ and $x \in s$ (with $x$ free), may be referred to as a decidable property or set on $a$. One may call a collection searchable for $f$ if existence is actually decidable, i.e. if

$\exists x.\ \big(x \in a \land f(x) = 1\big) \,\lor\, \neg\exists x.\ \big(x \in a \land f(x) = 1\big)$

Now consider the case $a = \omega$. If $f(0) = 1$, say, then the range of $f$ is an inhabited, counted set, by Replacement. However, the range need not again be a decidable set itself, since the claim $0 \in \mathrm{ran}(f)$ is equivalent to the rather strong $\exists n.\ f(n) = 0$. Moreover, $\mathrm{ran}(f) = \{1\}$ is also equivalent to $\forall n.\ f(n) = 1$, and so one can state undecidable propositions about the range also when membership in $s$ is decidable. This also plays out like this classically, in the sense that statements about the range may be independent, but any classical theory then nonetheless claims the joint proposition $\exists n.\ f(n) = 0 \,\lor\, \forall n.\ f(n) = 1$. Consider $s$ as the set of all indices of proofs of an inconsistency of the theory at hand, in which case the universally closed statement is a consistency claim. In terms of arithmetic principles, assuming decidability of this would be $\mathsf{WLPO}$, or arithmetic $\Pi_1^0$-$\mathsf{PEM}$. This, and the stronger related $\mathsf{LPO}$, or arithmetic $\Sigma_1^0$-$\mathsf{PEM}$, is discussed below.

Witness of apartness

The identity of indiscernibles, which in the first-order context is a higher order principle, holds that the equality of two terms necessitates that all predicates agree on them. And so if there exists a predicate that distinguishes two terms, in the sense that it holds of one but not the other, then the principle implies that the two terms do not coincide. A form of this may be expressed set theoretically: two sets may be deemed apart if there exists a subset such that one is a member and the other is not.
Restricted to detachable subsets, this may also be formulated concisely using characteristic functions $f \colon x \to 2$. Indeed, the latter does not actually depend on the codomain being a binary set: equality of $s$ and $t$ is rejected, i.e. $s \neq t$ is proven, as soon as it is established that not all functions on $x$ validate $f(s) = f(t)$, a logically negative condition. One may on any set $x$ define the logically positive apartness relation

$\exists f.\ \big(f \colon x \to 2 \,\land\, f(s) \neq f(t)\big)$

As the naturals are discrete, for these functions the negative condition is equivalent to the (weaker) double-negation of this relation. Again in words: equality of $s$ and $t$ implies that no coloring can distinguish them - and so to rule out the former, i.e. to prove $s \neq t$, one must merely rule out the latter, i.e. merely prove that not every coloring agrees on them.

Computable sets

Going back to more generality, given a general predicate $Q$ on the numbers (say one defined from Kleene's T predicate), let again

$A := \{n \in \omega \mid Q(n)\}$

Given any natural $n$, then $n \in A \leftrightarrow Q(n)$. In classical set theory, $Q(n) \lor \neg Q(n)$ holds by $\mathsf{PEM}$, and so excluded middle also holds for subclass membership. If the class $A$ has no numerical bound, then successively going through the natural numbers $n$, and thus "listing" all numbers in $A$ by simply skipping those with $n \notin A$, classically always constitutes an increasing surjective sequence $\omega \to A$. There, one can obtain a bijective function. In this way, the class of functions in typical classical set theories is provenly rich, as it also contains objects that are beyond what we know to be effectively computable, or programmatically listable in praxis.

In computability theory, the computable sets are the ranges of non-decreasing total functions in the recursive sense, at the $\Delta_1^0$ level of the arithmetical hierarchy, and not higher. Deciding a predicate at that level amounts to solving the task of eventually finding a certificate that either validates or rejects membership. As not every predicate is computably decidable, the theory alone will also not claim (prove) that all unbounded subclasses of $\omega$ are the range of some bijective function with domain $\omega$. See also Kripke's schema. Note that bounded Separation nonetheless proves the more complicated arithmetical predicates to still constitute sets, the next level up being the computably enumerable ones at $\Sigma_1^0$.

There is a large corpus of computability theory notions regarding how general subsets of naturals relate to one another. For example, one way to establish a bijection of two such sets is by relating them through a computable isomorphism, which is a computable permutation of all the naturals. The latter may in turn be established by a pair of particular injections in opposing directions.

Boundedness criteria

Any subset $B \subseteq \omega$ injects into $\omega$. If $B$ is decidable and inhabited with $b_0 \in B$, the sequence

$f(n) := n$ if $n \in B$, and $f(n) := b_0$ otherwise,

is surjective onto $B$, making it a counted set. That function also has the property $f(n) \le n$ for all $n \ge b_0$. Now consider a countable set that is bounded in the sense defined previously. Any sequence taking values in it is then numerically capped as well, and in particular eventually does not exceed the identity function on its input indices. Formally,

$\forall f.\ \Big(f \colon \omega \to B \,\to\, \exists N.\ \forall n.\ \big(N < n \to f(n) \le n\big)\Big)$

A set such that this loose bounding statement holds for all sequences taking values in it (or an equivalent formulation of this property) is called pseudo-bounded. The intention of this property would be to still capture that the set is eventually exhausted, albeit this is now expressed in terms of the function space ${}^{\omega}B$ (which is bigger than $B$ in the sense that $B$ always injects into it). The related notion familiar from topological vector space theory is formulated in terms of ratios going to zero for all sequences, i.e. $f(n)/n \to 0$ in the above notation.
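A Python rendering of the counting sequence just described may clarify the classical "listing" idea; `chi` is assumed to be a total, decidable characteristic function, and all names are hypothetical.

```python
def counting_sequence(chi):
    """Enumerate {n | chi(n) == 1} in increasing order, for a decidable,
    unbounded class given by a total characteristic function chi.

    The listing works by simply skipping numbers outside the class; it is
    effective only because chi is assumed decidable at every input.
    """
    n = 0
    while True:
        if chi(n) == 1:
            yield n
        n += 1

# Example: listing the even numbers by skipping the odd ones.
evens = counting_sequence(lambda n: 1 if n % 2 == 0 else 0)
assert [next(evens) for _ in range(4)] == [0, 2, 4, 6]
```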
For a decidable, inhabited set, validity of pseudo-boundedness, together with the counting sequence defined above, grants a bound for all the elements of the set. The principle that any inhabited, pseudo-bounded subset of $\omega$ that is just countable (but not necessarily decidable) is always also bounded is called $\mathsf{BD}$-$\mathbb{N}$. The principle also holds generally in many constructive frameworks, such as the Markovian school's base theory, which postulates exclusively lawlike sequences with nice number search termination properties. However, $\mathsf{BD}$-$\mathbb{N}$ is independent of $\mathsf{IZF}$.

Choice functions

Not even classical $\mathsf{ZF}$ proves each union of a countable set of two-element sets to be countable again. Indeed, models of $\mathsf{ZF}$ have been defined that negate the countability of such a countable union of pairs. Assuming countable choice rules out those models as interpretations of the resulting theory. The principle is still independent of $\mathsf{ZF}$ - a naive proof strategy for that statement fails at the accounting of infinitely many existential instantiations.

A choice principle postulates that certain selections can always be made in a joint fashion, in the sense that they are also manifested as a single set function in the theory. As with any independent axiom, this raises the proving capabilities while restricting the scope of possible (model-theoretic) interpretations of the (syntactic) theory. A function existence claim can often be translated to the existence of inverses, orderings, and so on. Choice principles moreover imply statements about cardinalities of different sets, e.g. they imply or rule out countability of sets. Adding full choice to $\mathsf{ZF}$ does not prove any new arithmetical theorems, but it is strictly non-constructive, as shown below. The development here proceeds in a fashion agnostic to any of the variants described next.

Axiom of countable choice: Given a total relation on the naturals, i.e. when for every $n$ there exists some $x$ with $\langle n, x\rangle$ in the relation, one can form the corresponding one-to-many relation-set. The axiom of countable choice would grant that whenever this totality holds, one can form a function mapping each number to a unique value. The existence of such sequences is not generally provable on the base of the theory developed so far, and countable choice is not conservative over it. Countable choice into general sets can also be weakened further. One common consideration is to restrict the possible cardinalities of the ranges, giving weak countable choice into countable, finite or even just binary sets. One may consider the version of countable choice for functions into $\omega$, as is implied by the constructive Church's thesis principle, i.e. by postulating that all total arithmetical relations are recursive. Church's thesis in arithmetic may thus be understood as a form of choice axiom. Another means of weakening countable choice is by restricting the involved definitions with respect to their place in the syntactic hierarchies. The weak Kőnig's lemma $\mathsf{WKL}$, which breaks with strictly recursive mathematics as further discussed below, is stronger than $\mathsf{LLPO}$ and is itself sometimes viewed as capturing a form of countable choice. In the presence of a weak form of countable choice, the lemma becomes equivalent to that non-constructive principle of more logical flavor, $\mathsf{LLPO}$. Constructively, a weak form of choice is required for well-behaved Cauchy reals. Countable choice is not valid in the internal logic of a general topos, which can be seen as a model of constructive set theories.

Axiom of dependent choice: Countable choice is implied by the more general axiom of dependent choice, extracting a sequence in an inhabited set $a$, given any entire relation on it.
In set theory, this sequence is again an infinite set of pairs, a subset of $\omega \times a$. So one is granted to pass from several existence statements to function existence, itself granting unique-existence statements, for every natural. An appropriate formulation of dependent choice is adopted in several constructive frameworks, e.g. by some schools that understand unending sequences as ongoing constructions instead of completed objects. At least those cases seem benign where, for any $x$, next-value existence can be validated in a computable fashion. The corresponding recursive function, if it exists, is then conceptualized as being able to return a value at infinitely many potential inputs, but these do not have to be evaluated all together at once. The axiom also holds in many realizability models. In the condition of the formally similar recursion theorem, one is already given a unique choice at each step, and that theorem lets one combine them into a function on $\omega$. So also with dependent choice one may consider forms of the axiom with restrictions on the relation. Via the bounded Separation axiom in $\mathsf{ECST}$, the principle is also equivalent to a schema in two bounded predicate variables: keeping all quantifiers ranging over $\omega$, one may further narrow this set domain using a unary $\Delta_0$-predicate variable, while also using any 2-ary $\Delta_0$-predicate instead of the relation set.

Relativized dependent choice: This is the schema just using two general classes, instead of requiring the domain and relation to be sets. The domain of the choice function granted to exist is still just $\omega$. Over $\mathsf{ECST}$, it implies full mathematical induction, which in turn allows for function definition on $\omega$ through the recursion schema. When restricted to $\Delta_0$-definitions, it still implies mathematical induction for $\Sigma_1$-predicates (with an existential quantifier over sets) as well as dependent choice. In $\mathsf{ZF}$, the schema is equivalent to dependent choice.

$\Pi\Sigma$-$\mathsf{AC}$: A family of sets is better controllable if it comes indexed by a function. A set is a base if all indexed families of sets over it have a choice function, i.e. a function selecting a member of each family member. A collection of sets holding $\omega$ and its elements, and which is closed under taking indexed sums and products (see dependent type), is called $\Pi\Sigma$-closed. While the axiom that all sets in the smallest $\Pi\Sigma$-closed class are a base does need some work to formulate, it is the strongest choice principle that holds in the type theoretical interpretation.

Axiom of choice: This is the "full" choice function postulate concerning domains that are general sets containing inhabited sets, with the codomain given as their general union. Given a collection of sets such that the logic allows one to make a choice in each, the axiom grants that there exists a set function that jointly captures a choice in all of them. It is typically formulated for all sets, but has also been studied in classical formulations for sets only up to a particular cardinality. A standard example is choice in all inhabited subsets of the reals, which classically equals the domain $\mathcal{P}\mathbb{R} \setminus \{\{\}\}$. For this collection there can be no uniform element selection prescription that provably constitutes a choice function on the base of $\mathsf{ZF}$. Also, when restricted to the Borel algebra of the reals, $\mathsf{ZF}$ alone does not prove the existence of a function selecting a member from each non-empty such Lebesgue-measurable subset. (The Borel algebra is the σ-algebra generated by the intervals. It strictly includes those intervals, in the sense of "$\subsetneq$", but in $\mathsf{ZF}$ also only has the cardinality of the reals itself.) Striking existence claims implied by the axiom abound. $\mathsf{ZF}$ proves the relevant function spaces exist, and then the axiom of choice also implies dependent choice.
Critically in the present context, it moreover also implies instances of $\mathsf{PEM}$ via Diaconescu's theorem. For $\mathsf{BCST}$ or theories extending it, this means full choice at the very least proves $\phi \lor \neg\phi$ for all $\Delta_0$-formulas, a non-constructive consequence not acceptable, for example, from a computability standpoint. Note that constructively, Zorn's lemma does not imply choice: when membership in function domains fails to be decidable, the extremal function granted by that principle is not provably always a choice function on the whole domain.

Diaconescu's theorem

To highlight the strength of full Choice and its relation to matters of intensionality, one should consider the classes

$a := \{x \in 2 \mid P \lor x = 0\}$ and $b := \{x \in 2 \mid P \lor x = 1\}$

from the proof of Diaconescu's theorem. They are as contingent as the proposition $P$ involved in their definition, and they are not proven finite. Nonetheless, the setup entails several consequences. Referring back to the introductory elaboration on the meaning of such convenient class notation, as well as to the principle of distributivity, $0 \in a$ holds unconditionally, as well as $1 \in b$, and in particular both classes are inhabited. As in any model of Heyting arithmetic, using the disjunctive syllogism both $1 \in a$ and $0 \in b$ each imply $P$. The two statements are indeed equivalent to the proposition, as clearly $P \to (1 \in a \land 0 \in b)$. The latter also says that validity of $P$ means $a$ and $b$ share all members, and there are two of these. As $a$ and $b$ are then sets, also $P \to a = b$ by extensionality. Conversely, assuming they are equal gives $1 \in a$, validating all membership statements. So both the membership statements as well as the equality are found to be equivalent to $P$. Using the contrapositive results in the weaker equivalence of disjuncts $\neg P \leftrightarrow \neg(a = b)$. Of course, explicitly $\neg P \to (a = \{0\} \land b = \{1\})$, and so one actually finds in which way the sets can end up being different. As functions preserve equality by definition, $a = b \to g(a) = g(b)$ indeed holds for any $g$ with both classes in its domain.

In the following, assume a context in which $a$ and $b$ are indeed established to be sets, and thus subfinite sets. The general axiom of choice claims existence of a function $g$ with $\forall z.\ (z \in \{a, b\} \to g(z) \in z)$. It is important that the elements of the function's domain are different from the natural numbers $0$ and $1$ in the sense that a priori less is known about the former. When forming the union of the two classes, $u = 0 \lor u = 1$ is a necessary but then also sufficient condition for membership. Thus $a \cup b = 2$, and one is dealing with functions into a set of two distinguishable values. With choice comes the conjunction $g(a) \in a \land g(b) \in b$ in the codomain of the function, but the possible function return values are known to be just $0$ or $1$. Using the distributivity, there arises a list of conditions, another disjunction. Expanding what is then established, one finds that either both $P$ as well as the sets' equality hold, or that the return values are different, in which case $P$ can be rejected. The conclusion is that the choice postulate actually implies $P \lor \neg P$ whenever a Separation axiom allows for set comprehension using an undecidable proposition $P$.

Analysis of Diaconescu's theorem

So full choice is non-constructive in set theory as defined here. The issue is that when propositions are part of set comprehension (like when $P$ is used to separate, and thereby define, the classes $a$ and $b$ from $2$), the notion of their truth values is ramified into set terms of the theory. Equality defined by the set theoretical axiom of extensionality, which itself is not related to functions, in turn couples knowledge about the proposition to information about function values. To recapitulate the final step in terms of function values: On the one hand, witnessing $g(a) = 1$ implies $P$ and $a = b$, and this conclusion independently also applies to witnessing $g(b) = 0$.
On the other hand, witnessing $g(a) = 0 \land g(b) = 1$ implies that the two function arguments are not equal, and this rules out $P$. There are really only three combinations, as the axiom of extensionality in the given setup makes $g(a) = 1 \land g(b) = 0$ inconsistent. So if the constructive reading of existence is to be preserved, full choice may not be adopted in the set theory, because the mere claim of function existence does not realize a particular function.

To better understand why one cannot expect to be granted a definitive (total) choice function with domain $\{a, b\}$, consider naive function candidates. Firstly, an analysis of the domain is in order. The surjection $\{\langle 0, a\rangle, \langle 1, b\rangle\}$ witnesses that $\{a, b\}$ is finitely indexed. It was noted that its members are subfinite and also inhabited, since regardless of $P$ it is the case that $0 \in a$ and $1 \in b$. So naively, this would seem to make $\{\langle a, 0\rangle, \langle b, 1\rangle\}$ a contender for a choice function. When $P$ can be rejected, then this is indeed the only option. But in the case of provability of $P$, when $a = b = 2$, there is extensionally only one possible function input to a choice function. So in that situation, a choice function would explicitly have type $\{2\} \to 2$, for example $\{\langle 2, 0\rangle\}$, and this would rule out the initial contender. For general $P$, the domain of a would-be choice function is not concrete but contingent on $P$, and not proven finite. When considering the above functional assignment, then neither unconditionally declaring the value at $b$ to be $0$ nor declaring it to be $1$ is necessarily consistent. The two candidates described above can be represented simultaneously via a set that is not proven finite either, with the subfinite "truth value of $P$" given as $\{x \in 1 \mid P\}$. Postulating $P$, or $\neg P$, or the classical principle $P \lor \neg P$ here would indeed imply that this truth value is a natural, so that the latter set constitutes a choice function into $2$. And as in the constructive case, given a particular choice function - a set holding either exactly one or exactly two pairs - one could actually infer whether $P$ or whether $\neg P$ does hold. Vice versa, the third and last candidate can be captured as part of a comprehension in which the (potentially undecidable) proposition $P$ functions as an "if-clause". Such a class had already been considered in the early section on the axiom of separation. Again, the latter is classically a choice function either way also. Constructively, the domain and values of such $P$-dependent would-be functions are not understood well enough to prove them to be total functional relations into $2$.

For computable semantics, set theory axioms postulating (total) function existence lead to the requirement of halting recursive functions. From their function graphs in individual interpretations, one can infer the branches taken by the "if-clauses" that were undecided in the interpreted theory. But on the level of the synthetic frameworks, when they broadly become classical from adopting full choice, these extensional set theories contradict the constructive Church's rule.

Regularity implies PEM

The axiom of choice grants existence of a function associated with every set-sized collection of inhabited elements, with which one can then at once pick unique elements. The axiom of regularity states that for every inhabited set $x$ in the universal collection, there exists an element $y$ in $x$ which shares no elements with $x$. This formulation does not involve functions or unique existence claims, but instead directly guarantees sets with a specific property. As the axiom correlates membership claims at different rank, it also ends up implying $\mathsf{PEM}$: The proof from Choice above had used $2$ and a particular subset of it.
The proof in this paragraph also assumes Separation applies to $P$ and uses $B := \{x \in 2 \mid x = 1 \lor (x = 0 \land P)\}$, for which $1 \in B$ by definition. It was already explained that $0 \in B \leftrightarrow P$, and so one may prove excluded middle for $P$ in the form $0 \in B \lor 0 \notin B$. Now let $y$ be the postulated member of $B$ with the empty intersection property. The set $B$ was defined as a subset of $2$, and so any given $y \in B$ fulfills the disjunction $y = 0 \lor y = 1$. The left clause implies $P$, while for the right clause one may use that the special non-intersecting element $y = 1 = \{0\}$ fulfills $0 \notin B$, i.e. $\neg P$.

Demanding that the set of naturals be well-ordered with respect to its standard order relation imposes the same condition on the inhabited set $B$. So the least number principle has the same non-constructive implication. As with the proof from Choice, the scope of propositions for which these results hold is governed by one's Separation axiom.

Arithmetic

The four Peano axioms for $0$ and $S$, characterizing the set $\omega$ as a model of the natural numbers in the constructive set theory $\mathsf{ECST}$, have been discussed. The order "$<$" of natural numbers is captured by membership "$\in$" in this von Neumann model, and this set is discrete, i.e. its equality is decidable. As discussed, induction for arithmetic formulas is a theorem. However, when not assuming full mathematical induction (or stronger axioms like full Separation) in a set theory, there is a pitfall regarding the existence of arithmetic operations. The first-order theory of Heyting arithmetic $\mathsf{HA}$ has the same signature and non-logical axioms as Peano arithmetic $\mathsf{PA}$. In contrast, the signature of set theory does not contain addition "$+$" or multiplication "$\times$". $\mathsf{ECST}$ does actually not enable primitive recursion for function definitions of what would be elements of ${}^{\omega \times \omega}\omega$ (where "$\times$" here denotes the Cartesian product of sets, not to be confused with multiplication above). Indeed, despite having the Replacement axiom, the theory does not prove there to be a set capturing the addition function. In the next section, it is clarified which set theoretical axiom may be asserted to prove existence of the latter as a function set, together with its desired relation to zero and successor. Far beyond just the equality predicate, the obtained model of arithmetic then validates $\phi \lor \neg\phi$ for any quantifier-free formula $\phi$. Indeed, $\mathsf{PA}$ is $\Pi_2^0$-conservative over $\mathsf{HA}$, and double-negation elimination is possible for any Harrop formula.

Arithmetic functions from recursion

Going a step beyond $\mathsf{ECST}$, the axiom granting definition of set functions via iteration-step set functions must be added: For any set $y$, any element $z \in y$ and any $g \colon y \to y$, there must also exist a function $f \colon \omega \to y$ attained by making use of the former, namely such that $f(0) = z$ and $f(Sn) = g(f(n))$. This iteration or recursion principle is akin to the transfinite recursion theorem, except it is restricted to set functions and finite ordinal arguments, i.e. there is no clause about limit ordinals. It functions as the set theoretical equivalent of a natural numbers object in category theory. This then enables a full interpretation of Heyting arithmetic in our set theory, including addition and multiplication functions. With this, the arithmetic relations are well-founded, in the sense of the inductive subsets formulation. Further, arithmetic of rational numbers can then also be defined, and its properties, like uniqueness and countability, be proven.

Recursion from set theory axioms

Recall that "$f \colon a \to b$" is short for the total function predicate discussed above, a proposition expressed using bounded quantifiers. If both domain and codomain are sets, then by extensionality the predicate is also equivalent to the membership $f \in {}^{a}b$. (Although, by slight abuse of formal notation, as with the symbol "$\times$", the symbol "$\to$" is also commonly used with classes anyhow.)
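A short Python sketch of this iteration principle (all names hypothetical) shows how addition and multiplication arise from the successor alone, mirroring how the recursion principle interprets Heyting arithmetic:

```python
def iterate(g, z):
    """Return the f with f(0) = z and f(n+1) = g(f(n)): the iteration
    principle, sketched here for naturals as the target set."""
    def f(n):
        value = z
        for _ in range(n):
            value = g(value)
        return value
    return f

successor = lambda n: n + 1

def add(m):
    return iterate(successor, m)   # add(m)(n) == m + n

def mul(m):
    return iterate(add(m), 0)      # mul(m)(n) == m * n

assert add(3)(4) == 7 and mul(3)(4) == 12
```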
A set theory with the model-enabling recursion principle spelled out above will also prove that, for all naturals $n$ and $m$, the function spaces ${}^{n}m$ are sets. Indeed, bounded recursion suffices, i.e. the principle for $\Delta_0$-defined classes.

Conversely, the recursion principle can be proven from a definition involving the union of recursive functions on finite domains. Relevant for this is the class of partial functions on $\omega$ such that all of its members have return values only up to some natural number bound, a class which may be expressed as the union of the spaces ${}^{n}m$. Existence of this as a set becomes provable assuming that the individual function spaces all form sets themselves. To this end, the finite exponentiation axiom is adopted: for any two naturals $n$ and $m$, the function space ${}^{n}m$ exists as a set.

With this axiom, any such space is now a set of subsets of the product $n \times m$, and this is strictly weaker than full Separation. Notably, adoption of this principle has genuine set theoretical flavor, in contrast to a direct embedding of arithmetic principles into our theory. And it is a modest principle insofar as these function spaces are tame: when instead assuming full induction or full exponentiation, taking countable sets to function spaces over finite domains, or to $n$-fold Cartesian products, provably preserves countability. In $\mathsf{ECST}$ plus finite exponentiation, the recursion principle is a theorem. Moreover, enumerable forms of the pigeon hole principle can now also be proven, e.g. that on a finitely indexed set, every auto-injection is also a surjection. As a consequence, the cardinality of finite sets, i.e. the finite von Neumann ordinal, is provably unique. The finitely indexed discrete sets are just the finite sets. In particular, finitely indexed subsets of $\omega$ are finite. Taking quotients, or taking the binary union or Cartesian product of two sets, preserves finiteness, subfiniteness and being finitely indexed.

The set theory axioms listed so far incorporate first-order arithmetic and suffice as a formalized framework for a good portion of common mathematics. The restriction to finite domains is lifted in the strictly stronger Exponentiation axiom below. However, also that axiom does not entail the full induction schema for formulas with unbounded quantifiers over the domain of sets, nor a dependent choice principle. Likewise, there are Collection principles that are constructively not implied by Replacement, as discussed further below. A consequence of this is that, for some statements of higher complexity or indirection, even if concrete instances of interest may well be provable, the theory may not prove the universal closure.

Stronger than this theory with finite exponentiation is $\mathsf{ECST}$ plus full induction. It implies the recursion principle even for class functions, and such that the recursively defined function is unique. Already that recursion principle, when restricted to $\Delta_0$-classes, does prove finite exponentiation, and also the existence of a transitive closure for every set with respect to $\in$ (since union formation is available as a set operation). With it, more common constructions preserve countability. General unions over a finitely indexed set of finitely indexed sets are again finitely indexed, at least when assuming induction for $\Sigma_1$-predicates (with respect to the set theory language; and this then holds regardless of the decidability of their equality relations).

Induction without infinite sets

Before discussing even classically uncountable sets, this last section takes a step back to a context more akin to $\mathsf{HA}$. The addition of numbers, considered as a relation on triples, is an infinite collection, just like the collection of natural numbers themselves.
But note that induction schemas may be adopted (for sets, ordinals or in conjunction with a natural number sort), without ever postulating that the collection of naturals exists as a set. As noted, Heyting arithmetic $\mathsf{HA}$ is bi-interpretable with such a constructive set theory, in which all sets are postulated to be in bijection with an ordinal. The BIT predicate is a common means to encode sets in arithmetic.

This paragraph lists a few weak natural number induction principles studied in the proof theory of arithmetic theories with addition and multiplication in their signature. This is the framework where these principles are most well understood. The theories may be defined via bounded formulations or variations on induction schemas that may furthermore only allow for predicates of restricted complexity. On the classical first-order side, this leads to theories between the Robinson arithmetic $\mathsf{Q}$ and Peano arithmetic $\mathsf{PA}$: The theory $\mathsf{Q}$ does not have any induction. $\mathsf{PA}$ has full mathematical induction for arithmetical formulas and has proof theoretic ordinal $\varepsilon_0$, meaning the theory lets one encode ordinals of weaker theories as recursive relations on just the naturals. Theories may also include additional symbols for particular functions. Many of the well studied arithmetic theories are weak regarding proof of totality for some faster growing functions. Some of the most basic examples of arithmetics include elementary function arithmetic $\mathsf{EFA}$, which includes induction for just bounded arithmetical formulas, here meaning formulas with quantifiers over finite number ranges only. The theory has a proof theoretic ordinal (the least not provenly recursive well-ordering) of $\omega^3$. The $\Sigma_1$-induction schema for arithmetical existential formulas allows for induction for those properties of naturals a validation of which is computable via a finite search with unbound (any, but finite) runtime. The schema is also classically equivalent to the $\Pi_1$-induction schema. The relatively weak classical first-order arithmetic which adopts that schema is denoted $\mathsf{I\Sigma}_1$ and proves the primitive recursive functions total. The theory $\mathsf{I\Sigma}_1$ is $\Pi_2^0$-conservative over primitive recursive arithmetic $\mathsf{PRA}$. Note that the $\Sigma_1$-induction is also part of the second-order reverse mathematics base system $\mathsf{RCA}_0$, its other axioms being those of basic arithmetic plus $\Delta_1^0$-comprehension of subsets of naturals. The theory $\mathsf{RCA}_0$ is conservative over $\mathsf{I\Sigma}_1$ for arithmetical sentences. Those last mentioned arithmetic theories all have proof theoretic ordinal $\omega^\omega$.

Let us mention one more step beyond the $\Sigma_1$-induction schema. Lack of stronger induction schemas means, for example, that some unbounded versions of the pigeon hole principle are unprovable. One relatively weak one being the Ramsey theorem type claim here expressed as follows: For any $k$ and coding of a coloring map $f$, associating each natural $n$ with a color $f(n)<k$, it is not the case that for every color $i$ there exists a threshold input number beyond which $i$ is not ever the mapping's return value anymore. (In the classical context and in terms of sets, this claim about colorings may be phrased positively, as saying that there always exists at least one return value $i$ such that, in effect, $f(n)=i$ holds on some unbounded domain. In words, when $f$ provides infinite enumerated assignments, each being of one of $k$ different possible colors, it is claimed that a particular color hitting infinitely many numbers always exists and that the set of its preimages can thus be specified, without even having to inspect properties of $f$. When read constructively, one would want such a color to be concretely specifiable and so that formulation is a stronger claim.)
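Schematically, for a coloring $f\colon\omega\to k$ as above, the negatively phrased claim may be sketched as

$\neg\,\forall i\in k\ \exists N\in\omega\ \forall n\in\omega\ \big(n>N\to f(n)\neq i\big)$

while the positive classical reading asserts the existence of some $i\in k$ attained at infinitely many inputs.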
Higher indirection, than in induction for mere existential statements, is needed to formally reformulate such a negation (the Ramsey theorem type claim in the original formulation above) and prove it. Namely to restate the problem in terms of the negation of the existence of one joint threshold number, depending on all the hypothetical thresholds, beyond which the function would still have to attain some color value. More specifically, the strength of the required bounding principle is strictly between the induction schemas of $\mathsf{I\Sigma}_1$ and $\mathsf{I\Sigma}_2$. For properties in terms of return values of functions on finite domains, brute force verification through checking all possible inputs has computational overhead which is larger for larger domains, but always finite. Acceptance of an induction schema as in $\mathsf{I\Sigma}_2$ validates the former so called infinite pigeon hole principle, which concerns unbounded domains, and so is about mappings with infinitely many inputs.

It is worth noting that in the program of predicative arithmetic, even the mathematical induction schema has been criticized as possibly being impredicative, when natural numbers are defined as the objects which fulfill this schema, which itself is defined in terms of all naturals.

Exponentiation

Classical $\mathsf{ZF}$ without the Powerset axiom has natural models in classes of sets of hereditary size less than certain uncountable cardinals. In particular, it is still consistent with all existing sets (including sets holding reals) being subcountable, and there even countable. Such a theory essentially amounts to second-order arithmetic. All sets being subcountable can constructively be consistent even in the presence of uncountable sets, as introduced now.

Possible choice principles were discussed, a weakened form of the Separation schema was already adopted, and more of the standard axioms shall be weakened for a more predicative and constructive theory. The first one of those is the Powerset axiom, which is effectively weakened to an existence claim for spaces of functions, in particular characteristic functions. The following Exponentiation axiom is strictly stronger than its pendant for finite domains discussed above:

$\forall x\,\forall y\,\exists e\,\forall f\,\big(f\in e\leftrightarrow f\colon x\to y\big)$

The formulation here again uses the convenient notation for function spaces, as discussed above. In words, the axiom says that given two sets $x,y$, the class $y^x$ of all functions from $x$ to $y$ is, in fact, also a set. This is certainly required, for example, to formalize the object map of an internal hom-functor like ${\rm hom}(x,-)$. With such an existence statement adopted, quantification over the elements of certain classes of (total) functions now only ranges over sets. Consider the collection of pairs of sequences validating the apartness relation: via bounded Separation, this now constitutes a subset of $\omega^\omega\times\omega^\omega$. This example shows that the Exponentiation axiom not only enriches the domain of sets directly, but via Separation also enables the derivation of yet more sets, and this then furthermore also strengthens other axioms. Notably, these bounded quantifiers now range over function spaces that are provably uncountable, and hence even classically uncountable. E.g. the collection of all functions $f\colon\omega\to\{0,1\}$, i.e. the set of points underlying the Cantor space, is uncountable, by Cantor's diagonal argument, and can at best be taken to be a subcountable set. In this theory one may now also quantify over subspaces of spaces like $\omega^\omega$ or $\{0,1\}^{\mathbb N}$, which is a third order notion on the naturals. (In this section and beyond, the symbol $\mathbb N$ for the semiring of natural numbers is used in expressions like $\{0,1\}^{\mathbb N}$, or the latter is written $2^{\mathbb N}$, just to avoid conflation of cardinal- with ordinal exponentiation.)
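For illustration, a sketch of the diagonal argument in the present notation: given any would-be enumeration $F\colon\omega\to\{0,1\}^\omega$, the sequence defined by

$d(n)=1-F(n)(n)$

is itself a member of $\{0,1\}^\omega$ but differs from every $F(n)$ at input $n$, so no such $F$ is a surjection.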
Roughly, classically uncountable sets, like for example these function spaces, tend to not have computably decidable equality.

By taking the general union over an $\omega$-indexed family of sets $x_n$, also the dependent or indexed product, written $\prod_{n\in\omega}x_n$, is now a set. For constant $x_n=y$, this again reduces to the function space $y^\omega$. And taking the general union over function spaces themselves, whenever the powerclass of $x$ is a set, then also the superset $\bigcup_{u\in{\mathcal P}(x)}y^u$ of $y^x$ is now a set - giving a means to talk about the space of partial functions on $x$.

Unions and countability

With Exponentiation, the theory proves the existence of any primitive recursive function in $\omega^\omega$, and in particular in the uncountable function spaces out of $\omega$. Indeed, with function spaces and the finite von Neumann ordinals as domains, we can model $\mathsf{HA}$ as discussed, and thus encode ordinals in the arithmetic. One then furthermore obtains the ordinal-exponentiated number $\omega^\omega$ as a set, which may be characterized as the counted set of finite sequences in $\omega$, i.e. the set of words over an infinite alphabet. The union of all finite sequences over a countable set is now a countable set. Further, for any countable family of counting functions together with their ranges, the theory proves the union of those ranges to be countable. In contrast, not assuming countable choice, even $\mathsf{ZF}$ is consistent with the uncountable set of the reals being the union of a countable set of countable sets. The list here is by no means complete. Many theorems about the various function existence predicates hold, especially when assuming countable choice - which as noted is never implicitly assumed in this discussion. At last, with Exponentiation, any finitely indexed union of a family of subfinitely indexed resp. subcountable sets is itself subfinitely indexed resp. subcountable as well. The theory also proves the collection of all the countable subsets of any set to be a set itself. Concerning this subset of the powerclass, some natural cardinality questions can also classically only be settled with Choice, at least for uncountable sets.

The class of all subsets of a set

Given a sequence of sets, one may define new such sequences, e.g. via pointwise operations. But notably, in a mathematical set theory framework, the collection of all subsets of a set is defined not in a bottom-up construction from its constituents but via a comprehension over all sets in the domain of discourse. The standard, standalone characterization of the powerclass ${\mathcal P}(x)$ of a set $x$ involves unbounded universal quantification, namely $\forall u\,\big(u\in{\mathcal P}(x)\leftrightarrow u\subseteq x\big)$, where the subset relation $u\subseteq x$ was previously defined also in terms of the membership predicate "$\in$". Here, a statement expressed as $u\in{\mathcal P}(x)$ must a priori be taken for $u\subseteq x$ and is not equivalent to a set-bounded proposition. Indeed, the displayed characterization itself involves an unbounded quantifier. If ${\mathcal P}(x)$ is a set, then the defining quantification even ranges across ${\mathcal P}(x)$, which makes the axiom of powerset impredicative.

Recall that a member of the set of characteristic functions $\{0,1\}^x$ corresponds to a predicate that is decidable on the set $x$, and it thus determines a detachable subset of $x$. In turn, the class of all detachable subsets of $x$ is now also a set, via Replacement. However, it may fail to provably have desirable properties, e.g. being closed under unending operations such as the unions over countably infinite index sets: For a countable sequence of detachable subsets, the subset of $x$ validating membership in all of them does exist as a set. But it may fail to be detachable and is therefore then not necessarily provably itself a member of the set of detachable subsets. Meanwhile, over classical logic, all subsets of a set are trivially detachable, meaning the detachable subsets are all subsets, and then of course this set holds any subset.
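In symbols, a sketch of this correspondence: each $\chi\in\{0,1\}^x$ determines the detachable subset

$d_\chi=\{z\in x\mid\chi(z)=1\}$

and, assuming $\mathsf{PEM}$, every $u\subseteq x$ conversely arises this way, from the characteristic function given by $\chi_u(z)=1$ exactly if $z\in u$.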
Over classical logic, this furthermore means that Exponentiation turns the power class into a set. Translating results of set theory based mathematical theories like point-set topology or measure theory to a constructive framework is a subtle back and forth. For example, while the detachable subsets form a field of sets, for them to form a σ-algebra per definition also requires the above mentioned closedness under unions. But while a domain of subsets may fail to exhibit such a closure property constructively, classically a measure is continuous from below and so its value on an infinite union can in any case also be expressed without reference to that union as function input, namely as the limit of the growing sequence of the function's values at finite unions.

Apart from the class of detachable sets, also various other subclasses of any powerclass are now provenly sets. For example, the theory also proves this for the collection of all the countable subsets of any set.

The richness of the full powerclass in a theory without excluded middle can best be understood by considering small classically finite sets. For any proposition $P$, consider the subclass $B_P=\{x\in 1\mid P\}$ of the singleton $1=\{0\}$. It equals $0$ (i.e. $\emptyset$) when $P$ can be rejected and it equals $1$ (i.e. $\{0\}$) when $P$ can be proven. But $P$ may also not be decidable at all. Consider three different undecidable propositions, none of which provenly imply another. They can be used to define three subclasses of the singleton $1$, none of which are provenly the same. In this view, the powerclass of the singleton, usually denoted by $\Omega$, is called the truth value algebra and does not necessarily provenly have only two elements.

With Exponentiation, ${\mathcal P}(1)$ being a set already implies Powerset for sets in general. The proof is via Replacement for the association of each $f\in{\mathcal P}(1)^x$ to the subset $\{z\in x\mid 0\in f(z)\}$ of $x$, and an argument why all subsets of $x$ are covered this way. The set $\Omega$ injects into suitable function spaces also. If the theory proves $\Omega$ above to be a set (as for example $\mathsf{IZF}$ unconditionally does), then the evident subclass of $\Omega\times\{0,1\}$, relating a truth value to $1$ exactly if it is inhabited and to $0$ exactly if it is empty, is a function precisely on the decidable truth values. To claim that it is a total function on all of $\Omega$ is to claim that excluded middle holds for every proposition. It has been pointed out that the empty set $0$ and the set $1$ itself are of course two subsets of $1$, meaning $\{0,1\}\subseteq{\mathcal P}(1)$. Whether also the converse inclusion is true in a theory is contingent on a simple disjunction: $\forall u\,\big(u\subseteq 1\to(0\in u\lor 0\notin u)\big)$. So assuming $\mathsf{PEM}$ for just bounded formulas, predicative Separation then lets one demonstrate that the powerclass ${\mathcal P}(1)$ is a set. And so in this context, also full Choice proves Powerset. (In the context of $\mathsf{CZF}$, bounded excluded middle in fact already turns the set theory classical, as discussed further below.)

Full Separation is equivalent to just assuming that each individual subclass of $1$ is a set. Assuming full Separation, both full Choice and Regularity prove $\mathsf{PEM}$. Assuming $\mathsf{PEM}$ in this theory, Set Induction becomes equivalent to Regularity and Replacement becomes capable of proving full Separation.

Note that cardinal relations involving uncountable sets are also elusive in $\mathsf{ZFC}$, where the characterization of uncountability simplifies to $\aleph_0<|x|$. For example, regarding the uncountable power $2^{\aleph_0}$, it is independent of that classical theory whether all uncountable sets of reals have cardinality $2^{\aleph_0}$, nor does it settle the value of $2^{\aleph_0}$ among the alephs. See continuum hypothesis and the related Easton's theorem.

Category and type theoretic notions

So in this context with Exponentiation, first-order arithmetic has a model and all function spaces between sets exist. The latter are more accessible than the classes containing all subsets of a set, as is the case with exponential objects resp. subobjects in category theory.
In category theoretical terms, the theory essentially corresponds to constructively well-pointed Cartesian closed Heyting pretoposes with (whenever Infinity is adopted) a natural numbers object. Existence of powerset is what would turn a Heyting pretopos into an elementary topos. Every such topos that interprets $\mathsf{ZF}$ is of course a model of these weaker theories, but locally Cartesian closed pretoposes have been defined that e.g. interpret theories with Exponentiation but reject full Separation and Powerset. A form of $\mathsf{PEM}$ corresponds to any subobject having a complement, in which case we call the topos Boolean. Diaconescu's theorem in its original topos form says that this holds iff any coequalizer of two nonintersecting monomorphisms has a section. The latter is a formulation of choice. Barr's theorem states that any topos admits a surjection from a Boolean topos onto it, relating to classical statements being provable intuitionistically.

In type theory, the expression "$x\to y$" exists on its own and denotes function spaces, a primitive notion. These types (or, in set theory, classes or sets) naturally appear, for example, as the type of the currying bijection between $z^{x\times y}$ and $(z^y)^x$, an adjunction. A typical type theory with general programming capability - and certainly those that can model $\mathsf{CZF}$, which is considered a constructive set theory - will have a type of integers and function spaces representing $\mathbb Z\to\mathbb Z$, and as such also include types that are not countable. This is just to say, or implies, that among the function terms of type $\mathbb Z\to(\mathbb Z\to\mathbb Z)$, none have the property of being a surjection.

Constructive set theories are also studied in the context of applicative axioms.

Metalogic

While the theory here does not exceed the consistency strength of Heyting arithmetic, adding Excluded Middle gives a theory proving the same theorems as classical $\mathsf{Z}$ minus Regularity! Thus, adding Regularity as well as either $\mathsf{PEM}$ or full Separation to it gives full classical $\mathsf{Z}$. Adding full Choice and full Separation gives $\mathsf{ZFC}$ minus Regularity. This would thus lead to a theory beyond the strength of typical type theory. The presented theory does not prove a function space like $\omega^\omega$ to be not enumerable, in the sense of injections out of it. Without further axioms, intuitionistic mathematics has models in recursive functions but also forms of hypercomputation.

Analysis

In this section the strength of the theory with Exponentiation is elaborated on. For context, possible further principles are mentioned, which are not necessarily classical and also not generally considered constructive. Here a general warning is in order: When reading proposition equivalence claims in the computable context, one shall always be aware which choice, induction and comprehension principles are silently assumed. See also the related constructive analysis, feasible analysis and computable analysis.

The theory so far proves uniqueness of Archimedean, Dedekind complete (pseudo-)ordered fields, with equivalence by a unique isomorphism. The prefix "pseudo" here highlights that the order will, in any case, constructively not always be decidable. This result is relevant assuming complete such models exist as sets.

Topology

Regardless of the choice of model, the characteristic flavor of a constructive theory of numbers can be explicated using an independent proposition $P$. Consider a counter-example to the constructive provability of the well-orderedness of the naturals, but now embedded in the reals. Say $X=\{x\in{\mathbb R}\mid (x=0\land P)\lor x=1\}$.
The infimum metric distance between some point and such a subset, which may be expressed as $\inf_{x\in X}|p-x|$ for example, may constructively fail to provably exist. More generally, this locatedness property of subsets governs the well-developed constructive metric space theory. Whether Cauchy or Dedekind reals, among others, also fewer statements about the arithmetic of the reals are decidable, compared to the classical theory.

Cauchy sequences

Exponentiation implies recursion principles and so in the present theory, one can comfortably reason about sequences $s\colon\omega\to{\mathbb Q}$, their regularity properties such as $|s_n-s_m|\le n^{-1}+m^{-1}$, or about shrinking sequences of intervals with rational endpoints. So this enables speaking of Cauchy sequences and their arithmetic. This is also the approach to analysis taken in second-order arithmetic.

Cauchy reals

Any Cauchy real number is a collection of such sequences, i.e. a subset of a set of functions on $\omega$, constructed with respect to an equivalence relation. Exponentiation together with bounded Separation prove the collection of Cauchy reals to be a set, thus somewhat simplifying the logical treatment of the reals.

Even in a strong theory with a strengthened form of Collection, the Cauchy reals are poorly behaved when not assuming a form of countable choice, and a weak form of countable choice over the numbers suffices for most results. This concerns completeness of equivalence classes of such sequences, equivalence of the whole set to the Dedekind reals, existence of a modulus of convergence for all Cauchy sequences and the preservation of such a modulus when taking limits. An alternative approach that is slightly better behaved is to work with the collection of Cauchy reals together with a choice of modulus, i.e. not with just the real numbers but with a set of pairs, or even with a fixed modulus shared by all real numbers.

Towards the Dedekind reals

As in the classical theory, Dedekind cuts are characterized using subsets of algebraic structures such as ${\mathbb Q}$: The properties of being inhabited, numerically bounded above, "closed downwards" and "open upwards" are all bounded formulas with respect to the given set underlying the algebraic structure. A standard example of a cut, the first component indeed exhibiting these properties, is the representation of $\sqrt{2}$ given by

$\big\langle\{x\in{\mathbb Q}\mid x<0\lor x\cdot x<2\},\ \{x\in{\mathbb Q}\mid 0<x\land 2<x\cdot x\}\big\rangle$

(Depending on the convention for cuts, either of the two parts or neither, like here, may make use of the sign "$\le$".)

The theory given by the axioms so far validates that a pseudo-ordered field that is also Archimedean and Dedekind complete, if it exists at all, is in this way characterized uniquely, up to isomorphism. However, the existence of just function spaces such as $\{0,1\}^{\mathbb Q}$ does not grant the class of all subsets of ${\mathbb Q}$ to be a set, and so neither is the class of all those subsets of ${\mathbb Q}$ that do fulfill the named properties. What is required for the class of Dedekind reals to be a set is an axiom regarding existence of a set of subsets and this is discussed further below in the section on Binary refinement. In a context without $\mathsf{PEM}$ or Powerset, countable choice into finite sets is assumed to prove the uncountability of the set of all Dedekind reals.

Constructive schools

Most schools for constructive analysis validate some choice and also "$\mathsf{BD}$-${\mathbb N}$", as defined in the second section on number bounds. Here are some other propositions employed in theories of constructive analysis that are not provable using just base intuitionistic logic:

On the recursive mathematics side (the "Russian" or "Markovian" constructive framework with many abbreviations, e.g. $\mathsf{RUSS}$), first one has Markov's principle $\mathsf{MP}$, which is a form of proof by contradiction motivated by (unbound memory capacity) computable search.
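For binary sequences, a common sketch of this principle reads: for any $\alpha\in\{0,1\}^\omega$,

$\neg\neg\big(\exists n\ \alpha(n)=1\big)\ \to\ \exists n\ \alpha(n)=1$

so a search that cannot fail to terminate is asserted to terminate.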
This has notable impact on statements about real numbers, as touched upon below. In this school one further even has the anti-classical constructive Church's thesis principle $\mathsf{CT}_0$, generally adopted for number-theoretic functions.

Church's thesis principle expressed in the language of set theory and formulated for set functions postulates that these all correspond to computable programs that eventually halt on any argument. In computability theory, the natural numbers corresponding to indices of codes of the computable functions which are total are $\Pi_2^0$ in the arithmetical hierarchy, meaning membership of any index is affirmed by validating a $\Pi_2^0$-proposition. This is to say that such a collection of functions is still a mere subclass of the naturals and so is, when put in relation to some classical function spaces, conceptually small. In this sense, adopting the postulate makes $\omega^\omega$ into a "sparse" set, as viewed from classical set theory. Subcountability of sets can also be postulated independently.

So on another end, on the Brouwerian intuitionist side ($\mathsf{INT}$), there are bar induction, the decidable fan theorem saying decidable bars are uniform, which are amongst the weakest often discussed principles, Kripke's schema (with countable choice turning all subclasses of $\omega$ countable), or even Brouwer's anti-classical continuity principle, determining return values of what is established a function on unending sequences already through just finite initial segments.

Certain laws in both of those schools contradict $\mathsf{PEM}$, so that choosing to adopt all principles from either school disproves theorems from classical analysis. $\mathsf{CT}_0$ is still consistent with some choice, but contradicts the classical $\mathsf{WKL}$ and $\mathsf{LLPO}$, explained below. The independence of premise rule with set existence premises is not fully understood, but as a number theoretic principle it is in conflict with the Russian school axioms in some frameworks. Notably, the decidable fan theorem also contradicts $\mathsf{CT}_0$, meaning the constructive schools also cannot be combined in full. Some of the principles cannot be combined constructively to the extent that together they imply forms of $\mathsf{PEM}$ - for example Kripke's schema plus the countability of all subsets of the naturals. These combinations are then naturally also not consistent with further anti-classical principles.

Indecomposability

Denote the class of all sets by ${\mathcal V}$. Decidability of membership in a class $C$ can be expressed as membership in the union of $C$ and its complement. We also note that, by definition, the two extremal classes $\emptyset$ and ${\mathcal V}$ are trivially decidable. Membership in those two is equivalent to the trivial propositions $\bot$ resp. $\top$.

Call a class $C$ indecomposable or cohesive if, for any predicate $\phi$,

$\forall x\in C\,\big(\phi(x)\lor\neg\phi(x)\big)\ \to\ \big(\forall x\in C\,\phi(x)\big)\lor\big(\forall x\in C\,\neg\phi(x)\big)$

This expresses that the only properties that are decidable on $C$ are the trivial properties. This is well studied in intuitionistic analysis.

The so called indecomposability schema (Unzerlegbarkeit) $\mathsf{UZ}$ for set theory is a possible principle which states that the whole class ${\mathcal V}$ is indecomposable. Extensionally speaking, $\mathsf{UZ}$ postulates that the two trivial classes are the only classes that are decidable with respect to the class of all sets. For a simple motivating predicate, consider membership in the first non-trivial class $\{x\mid x=\emptyset\}$, which is to say the property of being empty. This property is non-trivial to the extent that it separates some sets: The empty set is a member of that class, by definition, while a plethora of sets are provenly not members of it. But, using Separation, one may of course also define various sets for which emptiness is not decidable in a constructive theory at all, i.e. membership in $\{x\mid x=\emptyset\}$ is not provable for all sets.
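For an independent proposition $P$ as before, a sketch with the subclass notation from above: the set

$S_P=\{x\in 1\mid P\}$

is provably empty exactly if $\neg P$ and inhabited exactly if $P$, so for an undecided $P$ neither $S_P\in\{x\mid x=\emptyset\}$ nor its negation is provable.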
So here the property of emptiness does not partition the set theoretical domain of discourse into two decidable parts. For any such non-trivial property, the contrapositive of $\mathsf{UZ}$ says that it cannot be decidable over all sets. $\mathsf{UZ}$ is implied by the uniformity principle $\mathsf{UP}$, which is consistent with $\mathsf{CZF}$ and discussed below.

Non-constructive principles

Of course $\mathsf{PEM}$ and many principles defining intermediate logics are non-constructive. $\mathsf{DNE}$ and $\mathsf{WPEM}$, which is $\mathsf{PEM}$ for just negated propositions, can be presented as De Morgan's rules. More specifically, this section shall be concerned with statements in terms of predicates - especially weaker ones, expressed in terms of a few quantifiers over sets, on top of decidable predicates on numbers. Referring back to the section on characteristic functions, one may call a collection searchable if it is searchable for all its detachable subsets, which itself corresponds to deciding inhabitation: $\exists x\,\chi(x)=1\ \lor\ \neg\exists x\,\chi(x)=1$ for all characteristic functions $\chi$. This is a form of $\mathsf{PEM}$ for existentially quantified decidable predicates. Note that in the context of Exponentiation, such propositions on sets are now set-bounded.

Particularly valuable in the study of constructive analysis are non-constructive claims commonly formulated in terms of the collection of all binary sequences, and the characteristic functions $f\colon\omega\to\{0,1\}$ on the arithmetic domain are well studied. Here $f(n)=0$ is a decidable proposition at each numeral $n$, but, as demonstrated previously, quantified statements in terms of $f$ may not be. As is known from the incompleteness theorem and its variations, already in first-order arithmetic, example functions on $\omega$ can be characterized such that if the theory is consistent, the competing existential disjuncts, of low complexity, are each unprovable (even if the theory proves the disjunction of the two axiomatically).

More generally, the arithmetic $\Sigma_1$-$\mathsf{PEM}$, a most prominent non-constructive, essentially logical statement, goes by the name limited principle of omniscience $\mathsf{LPO}$. In the constructive set theory introduced below, it implies $\mathsf{WLPO}$, $\mathsf{LLPO}$, the decidable version of the fan theorem, but also further principles discussed below. Recall examples of famous sentences that can be written down in a $\Pi_1$-fashion, i.e. of Goldbach-type: the Goldbach conjecture, Fermat's last theorem but also the Riemann hypothesis are among them. Assuming relativized dependent choice and the classical $\mathsf{LPO}$ over the constructive theory does not enable proofs of more $\Pi_2$-statements. $\mathsf{LPO}$ postulates a disjunctive property, as does the weaker decidability statement for functions being constant ($\Pi_1$-sentences), the arithmetic $\Pi_1$-$\mathsf{PEM}$ known as $\mathsf{WLPO}$. The two are related in a similar way as $\mathsf{PEM}$ is versus $\mathsf{WPEM}$ and they essentially differ by Markov's principle $\mathsf{MP}$. $\mathsf{WLPO}$ in turn implies the so-called "lesser" version $\mathsf{LLPO}$. This is the (arithmetic) $\Sigma_1$-version of the non-constructive De Morgan's rule for a negated conjunction. There are, for example, models of the strong set theory $\mathsf{IZF}$ which separate such statements, in the sense that they may validate $\mathsf{LLPO}$ but reject $\mathsf{WLPO}$.

Disjunctive principles about $\Sigma_1$-sentences generally hint at equivalent formulations deciding apartness in analysis, in a context with mild choice or $\mathsf{MP}$. The claim expressed by $\mathsf{LPO}$ translated to real numbers is equivalent to the claim that either equality or apartness of any two reals is decidable (it in fact decides the trichotomy). It is then also equivalent to the statement that every real is either rational or irrational - without the requirement for or construction of a witness for either disjunct. Likewise, the claim expressed by $\mathsf{LLPO}$ for real numbers is equivalent to the statement that the ordering of any two reals is decidable (dichotomy). It is then also equivalent to the statement that if the product of two reals is zero, then either of the reals is zero - again without a witness.
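For binary sequences $\alpha\in\{0,1\}^\omega$, common sketches of these principles read as follows:

$\mathsf{LPO}$: $\big(\exists n\ \alpha(n)=1\big)\lor\big(\forall n\ \alpha(n)=0\big)$

$\mathsf{WLPO}$: $\big(\forall n\ \alpha(n)=0\big)\lor\neg\big(\forall n\ \alpha(n)=0\big)$

$\mathsf{LLPO}$: if $\alpha$ has at most one entry equal to $1$, then $\forall n\ \alpha(2n)=0$ or $\forall n\ \alpha(2n+1)=0$.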
Indeed, formulations of the three omniscience principles are then each equivalent to theorems of the apartness, equality or order of two reals in this way. Yet more can be said about the Cauchy sequences that are augmented with a modulus of convergence.

A famous source of computable undecidability - and in turn also of a broad range of undecidable propositions - is the predicate expressing a computer program to be total.

Infinite trees

Through the relation between computability and the arithmetical hierarchy, insights in this classical study are also revealing for constructive considerations. A basic insight of reverse mathematics concerns computable infinite finitely branching binary trees. Such a tree may e.g. be encoded as an infinite set of finite binary sequences, with decidable membership, and those trees then provenly contain elements of arbitrary big finite length. The so called Weak Kőnig's lemma $\mathsf{WKL}$ states: For such trees $T$, there always exists an infinite path in $T$, i.e. an infinite sequence such that all its initial segments are part of the tree. In reverse mathematics, the second-order arithmetic subsystem $\mathsf{RCA}_0$ does not prove $\mathsf{WKL}$. To understand this, note that there are computable trees for which no computable such path through them exists. To prove this, one enumerates the partial computable sequences and then diagonalizes all total computable sequences in one partial computable sequence $d$. One can then roll out a certain tree, one exactly compatible with the still possible values of $d$ everywhere, which by construction is incompatible with any total computable path.

Constructively, the principle $\mathsf{WKL}$ implies $\mathsf{LLPO}$ and the very modest form of countable choice introduced above. The former two are equivalent assuming that choice principle already in the more conservative arithmetic context. $\mathsf{WKL}$ is also equivalent to the Brouwer fixed point theorem and other theorems regarding values of continuous functions on the reals. The fixed point theorem in turn implies the intermediate value theorem, but always be aware that these claims may depend on the formulation, as the classical theorems about encoded reals can translate to different variants when expressed in a constructive context.

Kőnig's lemma, and some variants thereof, concern infinite graphs, and so their contrapositives give a condition for finiteness. Again to connect to analysis, over the classical arithmetic theory $\mathsf{RCA}_0$, the claim of $\mathsf{WKL}$ is for example equivalent to the Heine–Borel compactness regarding finite subcovers of the real unit interval. The fan theorem is a closely related existence claim involving finite sequences in an infinite context. Over a classical base, they are actually equivalent. Constructively those are distinct, but, after again assuming some choice, $\mathsf{WKL}$ here then implies the fan theorem.

Induction

Mathematical induction

It was observed that in set language, induction principles can read $\mathrm{Ind}\to\forall n\in\omega\,\phi(n)$, with the antecedent $\mathrm{Ind}$ defined as further above, namely as $\phi(0)\land\forall k\in\omega\,\big(\phi(k)\to\phi(Sk)\big)$, and where the set $\omega$ always denotes the standard model of natural numbers. Via the strong axiom of Infinity and predicative Separation, the validity of induction for set-bounded definitions was already established and thoroughly discussed. For those predicates involving only quantifiers over $\omega$, this validates induction in the sense of the first-order arithmetic theory. In a set theory context where $\omega$ is a set, this induction principle can be used to prove predicatively defined subclasses of $\omega$ to be the set $\omega$ itself. The so called full mathematical induction schema now postulates set equality of $\omega$ to all its inductive subclasses.
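Schematically, for any predicate $\phi$ in the language of set theory, this schema may be rendered as

$\Big(\phi(0)\land\forall k\in\omega\,\big(\phi(k)\to\phi(Sk)\big)\Big)\ \to\ \forall n\in\omega\ \phi(n)$

now without any restriction on the complexity of $\phi$.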
As in the classical theory, it is also implied when passing to the impredicative full Separation schema. As stated in the section on Choice, induction principles such as this are also implied by various forms of choice principles.

The recursion principle for set functions mentioned in the section dedicated to arithmetic is also implied by the full mathematical induction schema over one's structure modeling the naturals (e.g. $\omega$). So for that theorem, granting a model of Heyting arithmetic, it represents an alternative to exponentiation principles.

Predicate formulas used with the schema are to be understood as formulas in first-order set theory. The zero denotes the set $\emptyset$ as above, and the set $Sn$ denotes the successor set of $n$, with $Sn=n\cup\{n\}$. By the Axiom of Infinity above, it is again a member of $\omega$. Beware that unlike in an arithmetic theory, the naturals here are not the abstract elements in the domain of discourse, but elements of a model. As has been observed in previous discussions, when $\mathsf{PEM}$ is not assumed, not even for all predicatively defined sets is the equality to such a finite von Neumann ordinal necessarily decidable.

Set Induction

Going beyond the previous induction principles, one has full Set Induction, which is to be compared to well-founded induction. Like mathematical induction above, the following axiom is formulated as a schema in terms of predicates, and thus has a different character than induction principles proven from predicative set theory axioms. For any predicate $\phi$, it reads

$\forall x\,\big(\forall y\in x\,\phi(y)\to\phi(x)\big)\ \to\ \forall x\,\phi(x)$

A variant of the axiom just for bounded formulas is also studied independently and may be derived from other axioms.

Here $\forall y\in\emptyset\,\phi(y)$ holds trivially and so this covers the "bottom case" $\phi(\emptyset)$ in the standard framework. This (as well as natural number induction) may again be restricted to just the bounded set formulas, in which case arithmetic is not impacted.

Given the other axioms, Set Induction proves induction in transitive sets and so in particular also for transitive sets of transitive sets. The latter then is an adequate definition of the ordinals, and even a bounded formulation. Set Induction in turn enables ordinal arithmetic in this sense. It further allows definitions of class functions by transfinite recursion. The study of the various principles that grant set definitions by induction, i.e. inductive definitions, is a main topic in the context of constructive set theory and their comparatively weak strengths. This also holds for their counterparts in type theory. Replacement is not required to prove induction over the set of naturals from Set Induction, but that axiom is necessary for their arithmetic modeled within the set theory.

The axiom of regularity is a single statement with universal quantifier over sets and not a schema. As shown, it implies $\mathsf{PEM}$, and so is non-constructive. Now for $\phi$ taken to be the negation $\neg\psi$ of some predicate $\psi$ and writing $B$ for the class $\{x\mid\psi(x)\}$, induction reads

$\forall x\,\big(x\cap B=\emptyset\to x\notin B\big)\ \to\ B=\emptyset$

Via the contrapositive, Set Induction implies all instances of regularity but only with double-negated existence in the conclusion. In the other direction, given enough transitive sets, regularity implies each instance of Set Induction.

Metalogic

The theory formulated above can be expressed as $\mathsf{CZF}$ with its collection axioms discarded in favour of the weaker Replacement and Exponentiation axioms. It proves the Cauchy reals to be a set, but not the class of Dedekind reals.

Call an ordinal itself trichotomous if the irreflexive membership relation "$\in$" among its members is trichotomous.
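Spelled out as a sketch, an ordinal $\alpha$ is trichotomous when

$\forall x\in\alpha\ \forall y\in\alpha\ \big(x\in y\ \lor\ x=y\ \lor\ y\in x\big)$

a property that is automatic with $\mathsf{PEM}$ but not constructively.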
Like the axiom of regularity, Set Induction restricts the possible models of the membership relation "$\in$" and thus that of a set theory, as was the motivation for the principle in the 1920s. But the constructive theory here does not prove a trichotomy for all ordinals, while the trichotomous ordinals are not well behaved with respect to the notion of successor and rank.

The added proof-theoretical strength attained with Induction in the constructive context is significant, even if dropping Regularity in the context of $\mathsf{ZF}$ does not reduce the proof-theoretical strength. Even without Exponentiation, the present theory with Set Induction has the same proof theoretic strength as $\mathsf{CZF}$ and proves the same functions recursive. Specifically, its proof-theoretic large countable ordinal is the Bachmann–Howard ordinal. This is also the ordinal of classical or intuitionistic Kripke–Platek set theory. It is consistent even with $\mathsf{CZF}$ to assume that the class of trichotomous ordinals forms a set. The current theory augmented with this ordinal set existence postulate proves the consistency of $\mathsf{CZF}$.

Aczel was also one of the main developers of non-well-founded set theory, which rejects Set Induction.

Relation to ZF

The theory also constitutes a presentation of Zermelo–Fraenkel set theory in the sense that variants of all its eight axioms are present. Extensionality, Pairing, Union and Replacement are indeed identical. Separation is adopted in a weak predicative form while Infinity is stated in a strong formulation. Akin to the classical formulation, this Separation axiom and the existence of any set already prove the Empty Set axiom. Exponentiation for finite domains and full mathematical induction are also implied by their stronger adopted variants. Without the principle of excluded middle, the theory here is lacking, in its classical form, full Separation, Powerset as well as Regularity. Accepting $\mathsf{PEM}$ now exactly leads into the classical theory.

The following highlights the different readings of a formal theory. Let $\mathsf{CH}$ denote the continuum hypothesis and $b=\{x\in\{0,1\}\mid(x=0\land\mathsf{CH})\lor x=1\}$, so that $b\subseteq\omega$. Then $b$ is inhabited by $1$ and any set that is established to be a member of $b$ either equals $0$ or $1$. Induction on $\omega$ implies that it cannot consistently be negated that $b$ has some least natural number member. The value of such a member can be shown to be independent of theories such as $\mathsf{ZFC}$. Nonetheless, any classical set theory like $\mathsf{ZFC}$ also proves there exists such a number.

Strong Collection

Having discussed all the weakened forms of the classical set theory axioms, Replacement and Exponentiation can be further strengthened without losing a type theoretical interpretation, and in a way that is not going beyond $\mathsf{ZF}$.

So firstly, one may reflect upon the strength of the axiom of Replacement, also in the context of the classical set theory. For any set $y$ and any natural $n$, there exists the product recursively given by $y^{n+1}=y^n\times y$, and these sets have ever deeper rank. Induction for unbound predicates proves that these sets exist for all of the infinitely many naturals. Replacement "for $n\mapsto y^n$" now moreover states that this infinite class of products can be turned into the infinite set $\{y^n\mid n\in\omega\}$. This is also not a subset of any previously established set.

Going beyond those axioms also seen in Myhill's typed approach, consider the discussed constructive theory with Exponentiation and Induction, but now strengthened by the collection schema

$\forall y\in a\,\exists z\,\phi(y,z)\ \to\ \exists b\,\Big(\forall y\in a\,\exists z\in b\,\phi(y,z)\ \land\ \forall z\in b\,\exists y\in a\,\phi(y,z)\Big)$

In $\mathsf{ZF}$ it is equivalent to Replacement, unless the powerset axiom is dropped.
In the current context the strong axiom presented supersedes Replacement, due to not requiring the binary relation definition to be functional, but possibly multi-valued.

In words, for every total relation, there exists an image set such that the relation is total in both directions. Expressing this via a raw first-order formulation, as above, leads to a somewhat repetitive format. The antecedent states that one considers relations $\phi$ that are total over a certain domain set $a$, that is, $\phi$ has at least one "image value" $z$ for every element $y$ in the domain. This is more general than an inhabitance condition in a set theoretical choice axiom, but also more general than the condition of Replacement, which demands unique existence $\exists!z$. In the consequent, firstly, the axiom states that there then exists a set $b$ which contains at least one "image" value under $\phi$, for every element of the domain. Secondly, in this axiom's formulation it then moreover states that only such images $z$ are elements of that new codomain set $b$. It guarantees that $b$ does not overshoot the codomain of the relation and thus the axiom is also expressing some power akin to a Separation procedure. The principle may be used in the constructive study of larger sets beyond the everyday need of analysis.

Weak collection and predicative separation together imply strong collection: separation cuts out the subset of $b$ consisting of those $z$ such that $\phi(y,z)$ holds for some $y\in a$.

Metalogic

This theory without $\mathsf{PEM}$, without unbounded separation and without "naive" Power set enjoys various nice properties. For example, as opposed to $\mathsf{CZF}$ with its subset collection schema below, it has the existence property.

Constructive Zermelo–Fraenkel

Binary refinement

The so called binary refinement axiom says that for any set $a$ there exists a set $b$ such that, for any covering $a=x\cup y$, the set $b$ holds two subsets $c\subseteq x$ and $d\subseteq y$ that also do this covering job, $a=c\cup d$. It is a weakest form of the powerset axiom and at the core of some important mathematical proofs. Fullness below, for relations between the set $a$ and the finite $\{0,1\}$, implies that this is indeed possible.

Taking another step back, the base theory plus Recursion and plus Binary refinement already proves that there exists an Archimedean, Dedekind complete pseudo-ordered field. That set theory also proves that the class of left Dedekind cuts is a set, not requiring Induction or Collection. And it moreover proves that function spaces into discrete sets are sets (there e.g. $\omega^\omega$), without assuming Exponentiation. Already over the weak fragment without Infinity does binary refinement prove that function spaces into discrete sets are sets, and therefore e.g. the existence of all characteristic function spaces $\{0,1\}^a$.

Subset Collection

The theory known as $\mathsf{CZF}$ adopts the axioms of the previous sections plus a stronger form of Exponentiation. It is by adopting the following alternative to Exponentiation, which can again be seen as a constructive version of the Power set axiom:

$\forall a\,\forall b\,\exists c\,\forall u\,\Big(\forall x\in a\,\exists y\in b\ \phi(x,y,u)\ \to\ \exists d\in c\,\big(\forall x\in a\,\exists y\in d\ \phi(x,y,u)\ \land\ \forall y\in d\,\exists x\in a\ \phi(x,y,u)\big)\Big)$

An alternative that is not a schema is elaborated on below.

Fullness

For given sets $a$ and $b$, let $\operatorname{mv}(a,b)$ be the class of all total relations between $a$ and $b$. This class is given as

$r\in\operatorname{mv}(a,b)\ \leftrightarrow\ \big(\forall x\in a\,\exists y\in b\ \langle x,y\rangle\in r\big)\ \land\ \big(\forall p\in r\,\exists x\in a\,\exists y\in b\ p=\langle x,y\rangle\big)$

As opposed to the function definition, there is no unique existence quantifier in the first clause. The class $\operatorname{mv}(a,b)$ represents the space of "non-unique-valued functions" or "multivalued functions" from $a$ to $b$, but as a set of individual pairs with right projection in $b$. The second clause says that one is concerned with only these relations, not those which are total on $a$ but also extend their domain beyond $a$.
One does not postulate $\operatorname{mv}(a,b)$ to be a set, since with Replacement one can use this collection of relations between a set $a$ and the finite $\{0,1\}$, i.e. the "bi-valued functions on $a$", to extract the set of all its subsets. In other words $\operatorname{mv}(a,\{0,1\})$ being a set would imply the Powerset axiom.

Over the remaining axioms of $\mathsf{CZF}$, there is a single, somewhat clearer alternative axiom to the Subset Collection schema. It postulates the existence of a sufficiently large set of total relations between $a$ and $b$:

$\forall a\,\forall b\,\exists c\,\big(c\subseteq\operatorname{mv}(a,b)\ \land\ \forall r\in\operatorname{mv}(a,b)\,\exists s\in c\ \ s\subseteq r\big)$

This says that for any two sets $a$ and $b$, there exists a set $c$ which among its members inhabits a still total relation $s\subseteq r$ for any given total relation $r$.

On a given domain $a$, the functions are exactly the sparsest total relations, namely the unique valued ones. Therefore, the axiom implies that there is a set such that all functions $a\to b$ are in it. In this way, Fullness implies Exponentiation. It further implies binary refinement, already over the weak base theory.

The Fullness axiom, as well as dependent choice, is in turn also implied by the so-called Presentation Axiom about sections, which can also be formulated category theoretically.

Metalogic of CZF

$\mathsf{CZF}$ has the numerical existence property and the disjunctive property, but there are concessions: $\mathsf{CZF}$ lacks the existence property due to the Subset Collection schema or Fullness axiom. The schema can also be an obstacle for realizability models. The existence property is not lacking when the weaker Exponentiation or the stronger but impredicative Powerset axiom is adopted instead. The latter is in general lacking a constructive interpretation.

Unprovable claims

The theory is consistent with some anti-classical assertions, but on its own proves nothing not provable in $\mathsf{IZF}$. Some prominent statements not proven by the theory (nor by $\mathsf{IZF}$, for that matter) are part of the principles listed above, in the sections on constructive schools in analysis, on the Cauchy construction and on non-constructive principles. What follows concerns set theoretical concepts:

The bounded notion of a transitive set of transitive sets is a good way to define ordinals and enables induction on ordinals. But notably, this definition counts some mere subsets of $1=\{0\}$ among the ordinals. So assuming that the membership of $0$ is decidable in each successor ordinal proves $\mathsf{PEM}$ for bounded formulas. Also, neither linearity of ordinals, nor existence of power sets of finite sets are derivable in this theory, as assuming either implies Powerset. The circumstance that ordinals are better behaved in the classical than in the constructive context manifests in a different theory of large set existence postulates.

Consider the functions the domain of which is $\omega$ or some $n\in\omega$. These are sequences and their ranges are counted sets. Denote by $C$ the class characterized as the smallest codomain such that the ranges of the aforementioned functions into $C$ are also themselves members of $C$. In $\mathsf{ZF}$, this is the set of hereditarily countable sets and has ordinal rank at most $\omega_1$. In $\mathsf{ZFC}$, it is uncountable (as it also contains all countable ordinals, the cardinality of which is denoted $\aleph_1$) but its cardinality is not necessarily that of the reals. Meanwhile, $\mathsf{CZF}$ does not prove this class even constitutes a set, even when countable choice is assumed.

Finally, the theory does not prove that all function spaces formed from sets in the constructible universe $L$ are sets inside $L$, and this holds even when assuming Powerset instead of the weaker Exponentiation axiom. So this is a particular statement preventing the theory from proving the class $L$ to be a model of it.
Ordinal analysis

Taking $\mathsf{CZF}$ and dropping Set Induction gives a theory that is conservative over $\mathsf{HA}$ for arithmetic statements, in the sense that it proves the same arithmetical statements for its model $\omega$. Adding back just mathematical induction gives a theory with a proof theoretic ordinal which is the first common fixed point of the Veblen functions $\varphi_\beta$ for $\beta<\omega$, below the Feferman–Schütte ordinal $\Gamma_0$. Exhibiting a type theoretical model, the full theory $\mathsf{CZF}$ goes beyond $\Gamma_0$, its ordinal still being the modest Bachmann–Howard ordinal. Assuming the class of trichotomous ordinals is a set raises the proof theoretical strength of $\mathsf{CZF}$ (but not of $\mathsf{IZF}$). Being related to inductive definitions or bar induction, the regular extension axiom raises the proof theoretical strength of $\mathsf{CZF}$. This large set axiom, granting the existence of certain nice supersets for every set, is proven by $\mathsf{ZF}$.

Models

The category of sets and functions of $\mathsf{CZF}$ is a $\Pi W$-pretopos. Without diverging into topos theory, certain extended such $\Pi W$-pretoposes contain models of $\mathsf{CZF}$. The effective topos contains a model of this $\mathsf{CZF}$ based on maps characterized by certain good subcountability properties.

Separation, stated redundantly in a classical context, is constructively not implied by Replacement. The discussion so far only committed to the predicatively justified bounded Separation. Note that full Separation (together with $\mathsf{MP}$, and also $\mathsf{CT}_0$ for sets) is validated in some effective topos models, meaning the axiom does not spoil cornerstones of the restrictive recursive school.

Related are type theoretical interpretations. In 1977 Aczel showed that $\mathsf{CZF}$ can still be interpreted in Martin-Löf type theory, using the propositions-as-types approach. More specifically, this uses one universe and W-types, providing what is now seen as a standard model of $\mathsf{CZF}$ in $\mathsf{ML_1V}$. This is done in terms of the images of its functions and has a fairly direct constructive and predicative justification, while retaining the language of set theory. Roughly, there are two "big" types, a universe $U$ and a type $V$ of iterative sets; the sets are all given through functions $f\colon a\to V$ on some $a$ in $U$, and membership of $x$ in the set given by $f$ is defined to hold when $x$ equals $f(i)$ for some $i$ in $a$. Conversely, $\mathsf{CZF}$ interprets $\mathsf{ML_1V}$. All statements validated in the subcountable model of the set theory can be proven exactly via $\mathsf{CZF}$ plus the choice principle $\Pi\Sigma$-$\mathsf{AC}$, stated further above.

As noted, theories like $\mathsf{CZF}$ with Exponentiation in place of Subset Collection, and also together with choice, have the existence property for a broad class of sets in common mathematics. Martin-Löf type theories with additional induction principles validate corresponding set theoretical axioms. Soundness and Completeness theorems of $\mathsf{CZF}$, with respect to realizability, have been established.

Breaking with ZF

One may of course add a constructive Church's thesis principle. One may postulate the subcountability of all sets. This already holds true in the type theoretical interpretation and the model in the effective topos. By Infinity and Exponentiation, $\omega^\omega$ is an uncountable set, while the class ${\mathcal P}(\omega)$ or even ${\mathcal P}(1)$ is then provenly not a set, by Cantor's diagonal argument. So this theory then logically rejects Powerset and $\mathsf{PEM}$. Subcountability is also in contradiction with various large set axioms. (On the other hand, also using $\mathsf{PEM}$, some such axioms imply the consistency of theories such as $\mathsf{ZFC}$ and stronger.)

As a rule of inference, $\mathsf{CZF}$ is closed under Troelstra's general uniformity for both $\omega$ and $\{0,1\}$. One may adopt it as an anti-classical axiom schema, the uniformity principle, which may be denoted $\mathsf{UP}$:

$\forall x\,\exists n\in\omega\ \phi(x,n)\ \to\ \exists n\in\omega\,\forall x\ \phi(x,n)$

This also is incompatible with the powerset axiom. The principle is also often formulated for $\{0,1\}$ in place of $\omega$.
Now for a binary set of labels such as $\{0,1\}$, $\mathsf{UP}$ implies the indecomposability schema $\mathsf{UZ}$, as noted.

In 1989 Ingrid Lindström showed that non-well-founded sets can also be interpreted in Martin-Löf type theory, which are obtained by replacing Set Induction in $\mathsf{CZF}$ with Aczel's anti-foundation axiom. The resulting theory may be studied by also adding back the induction schema for $\omega$ or relativized dependent choice, as well as the assertion that every set is member of a transitive set.

Intuitionistic Zermelo–Fraenkel

The theory $\mathsf{IZF}$ is adopting both the standard Separation as well as Power set and, as in $\mathsf{ZF}$, one conventionally formulates the theory with Collection below. As such, $\mathsf{IZF}$ can be seen as the most straight forward variant of $\mathsf{ZF}$ without $\mathsf{PEM}$. So as noted, in $\mathsf{IZF}$, in place of Replacement, one may use the Collection schema

$\forall y\in a\,\exists z\ \phi(y,z)\ \to\ \exists b\,\forall y\in a\,\exists z\in b\ \phi(y,z)$

While the axiom of Replacement requires the relation $\phi$ to be functional over the set $a$ (as in, for every $y$ in $a$ there is associated exactly one $z$), the Axiom of Collection does not. It merely requires there be associated at least one $z$, and it asserts the existence of a set which collects at least one such $z$ for each such $y$. In classical $\mathsf{ZF}$, the Collection schema implies the Axiom schema of Replacement. When making use of Powerset (and only then), they can be shown to be classically equivalent.

While $\mathsf{IZF}$ is based on intuitionistic rather than classical logic, it is considered impredicative. It allows formation of sets via a power set operation and using the general Axiom of Separation with any proposition, including ones which contain quantifiers which are not bounded. Thus new sets can be formed in terms of the universe of all sets, distancing the theory from the bottom-up constructive perspective. So it is even easier to define sets with undecidable membership, namely by making use of undecidable predicates defined on a set. The power set axiom further implies the existence of a set of truth values. In the presence of excluded middle, this set has two elements. In the absence of it, the set of truth values is also considered impredicative.

The axioms of $\mathsf{IZF}$ are strong enough so that full $\mathsf{PEM}$ is already implied by $\mathsf{PEM}$ for bounded formulas. See also the previous discussion in the section on the Exponentiation axiom. And by the discussion about Separation, it is thus already implied by the particular formula $\forall x\,\forall y\,(x\in y\lor x\notin y)$, the principle that knowledge of membership shall always be decidable, no matter the set.

Metalogic

As implied above, the subcountability property cannot be adopted for all sets, given the theory proves ${\mathcal P}(\omega)$ to be a set. The theory has many of the nice numerical existence properties and is e.g. consistent with Church's thesis principle as well as with $\omega^\omega$ being subcountable. It also has the disjunctive property.

$\mathsf{IZF}$ with Replacement instead of Collection has the general existence property, even when adopting relativized dependent choice on top of it all. But $\mathsf{IZF}$ just as formulated does not. The combination of schemas including full Separation spoils it.

Even without $\mathsf{PEM}$, the proof theoretic strength of $\mathsf{IZF}$ equals that of $\mathsf{ZF}$. The double-negation translation proves them equiconsistent and they prove the same $\Pi_2^0$-sentences.

Intuitionistic Z

Again on the weaker end, as with its historical counterpart Zermelo set theory, one may denote by $\mathsf{IZ}$ the intuitionistic theory set up like $\mathsf{IZF}$ but without Replacement, Collection or Induction.

Intuitionistic KP

Let us mention another very weak theory that has been investigated, namely intuitionistic (or constructive) Kripke–Platek set theory $\mathsf{IKP}$. It has not only Separation but also Collection restricted to bounded formulas, i.e. it is similar to the basic theory discussed early on, but with Set Induction instead of full Replacement.
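Schematically, and as a sketch of the restriction just described, the collection schema of this theory is adopted only for bounded $\phi$:

$\forall y\in a\,\exists z\ \phi(y,z)\ \to\ \exists b\,\forall y\in a\,\exists z\in b\ \phi(y,z)$ for $\Delta_0$-formulas $\phi$

which, together with Set Induction, drives the theory's treatment of ordinals.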
The theory $\mathsf{IKP}$ does not fit into the hierarchy as presented above, simply because it has the Axiom schema of Set Induction from the start. This enables theorems involving the class of ordinals. The theory has the disjunction property. Of course, weaker versions of $\mathsf{IKP}$ are obtained by restricting the induction schema to narrower classes of formulas, say $\Sigma_1$. The theory is especially weak when studied without Infinity.

Sorted theories

Constructive set theory

As he presented it, Myhill's system is a theory using constructive first-order logic with identity and two more sorts beyond sets, namely natural numbers and functions. Its axioms are:

The usual Axiom of Extensionality for sets, as well as one for functions, and the usual Axiom of Union.

The Axiom of restricted, or predicative, Separation, which is a weakened form of the Separation axiom from classical set theory, requiring that any quantifications be bounded to another set, as discussed.

A form of the Axiom of Infinity asserting that the collection of natural numbers (for which he introduces a constant $\mathbb N$) is in fact a set.

The Axiom of Exponentiation, asserting that for any two sets, there is a third set which contains all (and only) the functions whose domain is the first set, and whose range is the second set. This is a greatly weakened form of the Axiom of power set in classical set theory, to which Myhill, among others, objected on the grounds of its impredicativity.

And furthermore:

The usual Peano axioms for natural numbers.

Axioms asserting that the domain and range of a function are both sets. Additionally, an Axiom of non-choice asserts the existence of a choice function in cases where the choice is already made. Together these act like the usual Replacement axiom in classical set theory.

One can roughly identify the strength of this theory with that of the constructive subtheories of $\mathsf{ZF}$ discussed in the previous sections. And finally the theory adopts

An Axiom of dependent choice, which is much weaker than the usual Axiom of choice.

Bishop style set theory

Set theory in the flavor of Errett Bishop's constructivist school mirrors that of Myhill, but is set up in a way that sets come equipped with relations that govern their discreteness. Commonly, Dependent Choice is adopted. A lot of analysis and module theory has been developed in this context.

Category theories

Not all formal logic theories of sets need to axiomatize the binary membership predicate "$\in$" directly. A theory like the Elementary Theory of the Category of Sets ($\mathsf{ETCS}$), e.g. capturing pairs of composable mappings between objects, can also be expressed with a constructive background logic. Category theory can be set up as a theory of arrows and objects, although first-order axiomatizations only in terms of arrows are possible. Beyond that, topoi also have internal languages that can be intuitionistic themselves and capture a notion of sets. Good models of constructive set theories in category theory are the pretoposes mentioned in the Exponentiation section. For some good set theory, this may require enough projectives, an axiom about surjective "presentations" of sets, implying Countable and Dependent Choice.
See also Axiom schema of predicative separation Constructive mathematics Constructive analysis Constructive Church's thesis rule and principle Computable set Diaconescu's theorem Disjunction and existence properties Epsilon-induction Hereditarily finite set Heyting arithmetic Impredicativity Intuitionistic type theory Law of excluded middle Ordinal analysis Set theory Subcountability References Further reading Aczel, P. and Rathjen, M. (2001). Notes on constructive set theory. Technical Report 40, 2000/2001. Mittag-Leffler Institute, Sweden. External links Constructivism (mathematics) Intuitionism Systems of set theory
Constructive set theory
Mathematics
31,653
20,759,880
https://en.wikipedia.org/wiki/Enpiperate
Enpiperate is a calcium channel blocker. References Calcium channel blockers Piperidines
Enpiperate
Chemistry,Biology
20
16,270,380
https://en.wikipedia.org/wiki/Omicron%20Draconis
Omicron Draconis (Latinised as ο Draconis, abbreviated to ο Dra) is a giant star in the constellation Draco located 322.93 light years from the Earth. Its path in the night sky is circumpolar for latitudes greater than 31° north, meaning that from there the star never rises or sets. This is a single-lined spectroscopic binary system, but the secondary has been detected using interferometry. It is an RS Canum Venaticorum variable system with eclipses. The total amplitude of variation is only a few hundredths of a magnitude. The secondary star is similar to the Sun, presumably a main sequence star, while the primary is a giant star 25 times larger than the Sun and two hundred times more luminous. Identities as pole star Omicron Draconis can be considered the north pole star of Mercury, as it is the closest star to Mercury's north celestial pole. In addition to that, this star is currently the Moon's north pole star, a status that recurs once every 18.6 years because of the precession of the Moon's rotational axis. References External links 2004. Starry Night Pro, Version 5.8.4. Imaginova. www.starrynight.com Draconis, Omicron Draco (constellation) G-type giants RS Canum Venaticorum variables Eclipsing binaries Draconis, 47 092512 7125 175306 Durchmusterung objects
Omicron Draconis
Astronomy
324
3,415,287
https://en.wikipedia.org/wiki/Conformal%20gravity
Conformal gravity refers to gravity theories that are invariant under conformal transformations in the Riemannian geometry sense; more accurately, they are invariant under Weyl transformations $g_{\mu\nu} \rightarrow \Omega^2(x)\, g_{\mu\nu}$, where $g_{\mu\nu}$ is the metric tensor and $\Omega(x)$ is a function on spacetime. Weyl-squared theories The simplest theory in this category has the square of the Weyl tensor as the Lagrangian, $\mathcal{L} = \sqrt{-g}\, C_{\mu\nu\rho\sigma} C^{\mu\nu\rho\sigma}$, where $C_{\mu\nu\rho\sigma}$ is the Weyl tensor. This is to be contrasted with the usual Einstein–Hilbert action, where the Lagrangian is just the Ricci scalar. The equation of motion upon varying the metric sets the Bach tensor to zero, $B_{\mu\nu} = \nabla^\rho \nabla^\sigma C_{\mu\rho\nu\sigma} + \tfrac{1}{2} R^{\rho\sigma} C_{\mu\rho\nu\sigma} = 0$, where $R^{\rho\sigma}$ is the Ricci tensor. Conformally flat metrics are solutions of this equation. Since these theories lead to fourth-order equations for the fluctuations around a fixed background, they are not manifestly unitary. It has therefore been generally believed that they could not be consistently quantized. This is now disputed. Four-derivative theories Conformal gravity is an example of a 4-derivative theory. This means that each term in the wave equation can contain up to four derivatives. There are pros and cons of 4-derivative theories. The pros are that the quantized version of the theory is more convergent and renormalisable. The cons are that there may be issues with causality. A simpler example of a 4-derivative wave equation is the scalar 4-derivative wave equation $\nabla^4 \phi = (\nabla^2)^2 \phi = 0$. The solution for this in a central field of force is $\phi(r) = A + \frac{m}{r} + Br + Cr^2$. The first two terms are the same as for a normal wave equation. Because this equation is a simpler approximation to conformal gravity, $m$ corresponds to the mass of the central source. The last two terms are unique to 4-derivative wave equations. It has been suggested that small values be assigned to their coefficients to account for the galactic acceleration constant (also known as dark matter) and the dark energy constant. The solution equivalent to the Schwarzschild solution in general relativity for a spherical source in conformal gravity has a metric with $g_{00} = 1 - 6bc - \frac{2b}{r} + cr - kr^2$, which shows the difference from general relativity; the $6bc$ term is very small, and so can be ignored. The problem is that now $c$ is the total mass-energy of the source, and $b$ is the integral of density times the distance to the source, squared. So this is a completely different potential from general relativity and not just a small modification. The main issue with conformal gravity theories, as well as any theory with higher derivatives, is the typical presence of ghosts, which point to instabilities of the quantum version of the theory, although there might be a solution to the ghost problem. An alternative approach is to consider the gravitational constant as a symmetry-broken scalar field, in which case one considers a small correction to Newtonian gravity; the general solution is then the same as in the Newtonian case, except that there can be an additional component varying sinusoidally over space. The wavelength of this variation could be quite large, such as an atomic width. Thus there appear to be several stable potentials around a gravitational force in this model. Conformal unification to the Standard Model By adding a suitable gravitational term to the Standard Model action in curved spacetime, the theory develops a local conformal (Weyl) invariance. The conformal gauge is fixed by choosing a reference mass scale based on the gravitational constant.
This approach generates the masses for the vector bosons and matter fields in a manner similar to the Higgs mechanism, without traditional spontaneous symmetry breaking. See also Conformal supergravity Hoyle–Narlikar theory of gravity References Further reading Falsification of Mannheim's conformal gravity at CERN Mannheim's rebuttal of the above at arXiv. Conformal geometry Lagrangian mechanics Spacetime Theories of gravity
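Returning to the 4-derivative wave equation above: a minimal sympy sketch (the symbol names and the check itself are ours, not from the article) confirming that the quoted radial profile, with its constant, 1/r, linear, and quadratic pieces, solves the squared Laplacian equation away from the origin:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
A, m, B, C = sp.symbols('A m B C')

def radial_laplacian(f):
    # Radial part of the 3-D Laplacian: f'' + (2/r) f'
    return sp.diff(f, r, 2) + (2 / r) * sp.diff(f, r)

phi = A + m / r + B * r + C * r**2   # candidate central-field solution
residual = sp.simplify(radial_laplacian(radial_laplacian(phi)))
print(residual)   # prints 0: phi solves the 4-derivative equation for r > 0
```

The linear and quadratic terms survive precisely because the Laplacian maps them to harmonic functions (2B/r and 6C respectively), which the second application of the Laplacian annihilates.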
Conformal gravity
Physics,Mathematics
770
8,884,790
https://en.wikipedia.org/wiki/Cutler%27s%20bar%20notation
In mathematics, Cutler's bar notation is a notation system for large numbers, introduced by Mark Cutler in 2004. The idea is based on iterated exponentiation in much the same way that exponentiation is iterated multiplication. Introduction A regular exponential can be expressed as such: However, these expressions become arbitrarily large when dealing with systems such as Knuth's up-arrow notation. Take the following: Cutler's bar notation shifts these exponentials counterclockwise. A bar is placed above the variable to denote this change. As such: This system becomes effective with multiple exponents, when regular notation becomes too cumbersome. At any time, this can be further shortened by rotating the exponential counterclockwise once more. The same pattern could be iterated a fourth time. For this reason, it is sometimes referred to as Cutler's circular notation. Advantages and drawbacks The Cutler bar notation can be used to easily express other notation systems in exponent form. It also allows for a flexible summarization of multiple copies of the same exponent, where any number of stacked exponents can be shifted counterclockwise and shortened to a single variable. The bar notation also allows for fairly rapid construction of very large numbers. For instance, the number would contain more than a googolplex digits, while remaining fairly simple to write down and remember. However, the system runs into a problem when dealing with different exponents in a single expression. For instance, the expression could not be summarized in bar notation. Additionally, the exponent can only be shifted three times before it returns to its original position, making a five-step shift indistinguishable from a one-step shift. Some have suggested using a double and triple bar in subsequent rotations, though this presents problems when dealing with ten- and twenty-step shifts. Other equivalent notations for the same operations already exist without being limited to a fixed number of recursions, notably Knuth's up-arrow notation and hyperoperation notation. See also Mathematical notation References Mark Cutler, Physical Infinity, 2004 Daniel Geisler, tetration.org R. Knobel. "Exponentials Reiterated." American Mathematical Monthly 88, (1981) Mathematical notation Large numbers
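Bar notation compresses towers of identical exponents, which is the same quantity that tetration and Knuth's up-arrows describe. A minimal sketch of evaluating such a tower (the function name and framing are ours, not Cutler's):

```python
def power_tower(base: int, height: int) -> int:
    """Evaluate a tower base**base**...**base of the given height,
    associating from the top down (right-associatively), which is the
    quantity a bar-notation expression with identical exponents encodes."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(power_tower(2, 4))   # 2**(2**(2**2)) = 65536
print(power_tower(3, 3))   # 3**(3**3) = 3**27 = 7625597484987
```

Even modest inputs overflow any fixed-width numeric type, which is why compact notations of this kind exist at all; Python's arbitrary-precision integers make the small cases runnable.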
Cutler's bar notation
Mathematics
470
1,412
https://en.wikipedia.org/wiki/Amine
In chemistry, amines are compounds and functional groups that contain a basic nitrogen atom with a lone pair. Formally, amines are derivatives of ammonia (in which the H–N–H bond angle is about 107°), wherein one or more hydrogen atoms have been replaced by a substituent such as an alkyl or aryl group (these may respectively be called alkylamines and arylamines; amines in which both types of substituent are attached to one nitrogen atom may be called alkylarylamines). Important amines include amino acids, biogenic amines, trimethylamine, and aniline. Inorganic derivatives of ammonia are also called amines, such as monochloramine (NH2Cl). The substituent is called an amino group. The chemical notation for amines contains the letter "R", where "R" is not an element, but an "R-group", which in amines could be a single hydrogen or carbon atom, or could be a hydrocarbon chain. Compounds with a nitrogen atom attached to a carbonyl group, thus having the structure R-C(=O)-NR'R'', are called amides and have different chemical properties from amines. Classification of amines Amines can be classified according to the nature and number of substituents on nitrogen. Aliphatic amines contain only H and alkyl substituents. Aromatic amines have the nitrogen atom connected to an aromatic ring. Amines, alkyl and aryl alike, are organized into three subcategories (see table) based on the number of carbon atoms adjacent to the nitrogen (how many hydrogen atoms of the ammonia molecule are replaced by hydrocarbon groups): Primary (1°) amines—Primary amines arise when one of three hydrogen atoms in ammonia is replaced by an alkyl or aromatic group. Important primary alkyl amines include methylamine, most amino acids, and the buffering agent tris, while primary aromatic amines include aniline. Secondary (2°) amines—Secondary amines have two organic substituents (alkyl, aryl or both) bound to the nitrogen together with one hydrogen. Important representatives include dimethylamine, while an example of an aromatic amine would be diphenylamine. Tertiary (3°) amines—In tertiary amines, nitrogen has three organic substituents. Examples include trimethylamine, which has a distinctively fishy smell, and EDTA. A fourth subcategory is determined by the connectivity of the substituents attached to the nitrogen: Cyclic amines—Cyclic amines are either secondary or tertiary amines. Examples of cyclic amines include the 3-membered ring aziridine and the six-membered ring piperidine. N-methylpiperidine and N-phenylpiperidine are examples of cyclic tertiary amines. It is also possible to have four organic substituents on the nitrogen. These species are not amines but are quaternary ammonium cations and have a charged nitrogen center. Quaternary ammonium salts exist with many kinds of anions. Naming conventions Amines are named in several ways. Typically, the compound is given the prefix "amino-" or the suffix "-amine". The prefix "N-" shows substitution on the nitrogen atom. An organic compound with multiple amino groups is called a diamine, triamine, tetraamine and so forth. Lower amines are named with the suffix -amine. Higher amines have the prefix amino as a functional group. IUPAC however does not recommend this convention, but prefers the alkanamine form, e.g. butan-2-amine. Physical properties Hydrogen bonding significantly influences the properties of primary and secondary amines.
For example, methyl and ethyl amines are gases under standard conditions, whereas the corresponding methyl and ethyl alcohols are liquids. Amines possess a characteristic ammonia smell; liquid amines have a distinctive "fishy" and foul smell. The nitrogen atom features a lone electron pair that can bind H+ to form an ammonium ion R3NH+. The lone electron pair is represented in this article by two dots above or next to the N. The water solubility of simple amines is enhanced by hydrogen bonding involving these lone electron pairs. Typically, salts of ammonium compounds exhibit the following order of solubility in water: primary ammonium (RNH3+) > secondary ammonium (R2NH2+) > tertiary ammonium (R3NH+). Small aliphatic amines display significant solubility in many solvents, whereas those with large substituents are lipophilic. Aromatic amines, such as aniline, have their lone pair electrons conjugated into the benzene ring, thus their tendency to engage in hydrogen bonding is diminished. Their boiling points are high and their solubility in water is low. Spectroscopic identification Typically the presence of an amine functional group is deduced by a combination of techniques, including mass spectrometry as well as NMR and IR spectroscopies. 1H NMR signals for amines disappear upon treatment of the sample with D2O. In their infrared spectrum, primary amines exhibit two N-H bands, whereas secondary amines exhibit only one. In their IR spectra, primary and secondary amines exhibit distinctive N-H stretching bands near 3300 cm⁻¹. Somewhat less distinctive are the bands appearing below 1600 cm⁻¹, which are weaker and overlap with C-C and C-H modes. For the case of propyl amine, the H-N-H scissor mode appears near 1600 cm⁻¹, the C-N stretch near 1000 cm⁻¹, and the R2N-H bend near 810 cm⁻¹. Structure Alkyl amines Alkyl amines characteristically feature tetrahedral nitrogen centers. C-N-C and C-N-H angles approach the idealized angle of 109°. C-N distances are slightly shorter than C-C distances. The energy barrier for the nitrogen inversion of the stereocenter is about 7 kcal/mol for a trialkylamine. The interconversion has been compared to the inversion of an open umbrella in a strong wind. Amines of the type NHRR′ and NRR′R″ are chiral: the nitrogen center bears four substituents counting the lone pair. Because of the low barrier to inversion, amines of the type NHRR′ cannot be obtained in optical purity. For chiral tertiary amines, NRR′R″ can only be resolved when the R, R′, and R″ groups are constrained in cyclic structures such as N-substituted aziridines (quaternary ammonium salts are resolvable). Aromatic amines In aromatic amines ("anilines"), nitrogen is often nearly planar owing to conjugation of the lone pair with the aryl substituent. The C-N distance is correspondingly shorter. In aniline, the C-N distance is the same as the C-C distances. Basicity Like ammonia, amines are bases. Compared to alkali metal hydroxides, amines are weaker. The basicity of amines depends on: The electronic properties of the substituents (alkyl groups enhance the basicity, aryl groups diminish it). The degree of solvation of the protonated amine, which includes steric hindrance by the groups on nitrogen. Electronic effects Owing to inductive effects, the basicity of an amine might be expected to increase with the number of alkyl groups on the amine. Correlations are complicated owing to the effects of solvation, which are opposite the trends for inductive effects.
Solvation effects also dominate the basicity of aromatic amines (anilines). For anilines, the lone pair of electrons on nitrogen delocalizes into the ring, resulting in decreased basicity. Substituents on the aromatic ring, and their positions relative to the amino group, also affect basicity as seen in the table. Solvation effects Solvation significantly affects the basicity of amines. N-H groups strongly interact with water, especially in ammonium ions. Consequently, the basicity of ammonia is enhanced by a factor of 10¹¹ by solvation. The intrinsic basicity of amines, i.e. the situation where solvation is unimportant, has been evaluated in the gas phase. In the gas phase, amines exhibit the basicities predicted from the electron-releasing effects of the organic substituents. Thus tertiary amines are more basic than secondary amines, which are more basic than primary amines, and finally ammonia is least basic. The order of pKb values (basicities in water) does not follow the gas-phase order. Similarly, aniline is more basic than ammonia in the gas phase, but ten thousand times less so in aqueous solution. In aprotic polar solvents such as DMSO, DMF, and acetonitrile, the energy of solvation is not as high as in protic polar solvents like water and methanol. For this reason, the basicity of amines in these aprotic solvents is almost solely governed by the electronic effects. Synthesis From alcohols Industrially significant alkyl amines are prepared from ammonia by alkylation with alcohols: ROH + NH3 -> RNH2 + H2O From alkyl and aryl halides Unlike the reaction of amines with alcohols, the reaction of amines and ammonia with alkyl halides is used for synthesis in the laboratory: RX + 2 R'NH2 -> RR'NH + [RR'NH2]X In such reactions, which are more useful for alkyl iodides and bromides, the degree of alkylation is difficult to control, such that one obtains mixtures of primary, secondary, and tertiary amines, as well as quaternary ammonium salts. Selectivity can be improved via the Delépine reaction, although this is rarely employed on an industrial scale. Selectivity is also assured in the Gabriel synthesis, which involves an organohalide reacting with potassium phthalimide. Aryl halides are much less reactive toward amines and for that reason are more controllable. A popular way to prepare aryl amines is the Buchwald-Hartwig reaction. From alkenes Disubstituted alkenes react with HCN in the presence of strong acids to give formamides, which can be decarbonylated. This method, the Ritter reaction, is used industrially to produce tertiary amines such as tert-octylamine. Hydroamination of alkenes is also widely practiced. The reaction is catalyzed by zeolite-based solid acids. Reductive routes Via the process of hydrogenation, unsaturated N-containing functional groups are reduced to amines using hydrogen in the presence of a nickel catalyst. Suitable groups include nitriles, azides, imines including oximes, amides, and nitro. In the case of nitriles, reactions are sensitive to acidic or alkaline conditions, which can cause hydrolysis of the group. Lithium aluminium hydride (LiAlH4) is more commonly employed for the reduction of these same groups on the laboratory scale. Many amines are produced from aldehydes and ketones via reductive amination, which can either proceed catalytically or stoichiometrically. Aniline (C6H5NH2) and its derivatives are prepared by reduction of the nitroaromatics. In industry, hydrogen is the preferred reductant, whereas, in the laboratory, tin and iron are often employed.
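As a practical complement to the basicity discussion above, the Henderson–Hasselbalch equation gives the protonation state of an amine at a given pH. A minimal sketch; the methylammonium pKa used is a typical literature value, included here only for illustration:

```python
def fraction_protonated(pka: float, ph: float) -> float:
    """Fraction of an amine present as its ammonium ion (R3NH+) at a
    given pH, from the Henderson-Hasselbalch equation."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Methylamine's conjugate acid has a pKa of roughly 10.6, so at a
# physiological pH of 7.4 the amine is almost entirely protonated:
print(f"{fraction_protonated(10.6, 7.4):.4f}")   # -> 0.9994
```

This is why, for instance, the lysine side chains mentioned later in the article carry a positive charge under biological conditions.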
Specialized methods Many methods exist for the preparation of amines, many of these methods being rather specialized. Reactions Alkylation, acylation, and sulfonation, etc. Aside from their basicity, the dominant reactivity of amines is their nucleophilicity. Most primary amines are good ligands for metal ions to give coordination complexes. Amines are alkylated by alkyl halides. Acyl chlorides and acid anhydrides react with primary and secondary amines to form amides (the "Schotten–Baumann reaction"). Similarly, with sulfonyl chlorides, one obtains sulfonamides. This transformation, known as the Hinsberg reaction, is a chemical test for the presence of amines. Because amines are basic, they neutralize acids to form the corresponding ammonium salts. When formed from carboxylic acids and primary and secondary amines, these salts thermally dehydrate to form the corresponding amides. Amines undergo sulfamation upon treatment with sulfur trioxide or sources thereof: R2NH + SO3 -> R2NSO3H Diazotization Amines react with nitrous acid to give diazonium salts. The alkyl diazonium salts are of little importance because they are too unstable. The most important members are derivatives of aromatic amines such as aniline ("phenylamine") (A = aryl or naphthyl): ANH2 + HNO2 + HX -> AN2+ + X- + 2 H2O Anilines and naphthylamines form more stable diazonium salts, which can be isolated in the crystalline form. Diazonium salts undergo a variety of useful transformations involving replacement of the N2+ group with anions. For example, cuprous cyanide gives the corresponding nitriles: AN2+ + Y- -> AY + N2 Aryldiazoniums couple with electron-rich aromatic compounds such as a phenol to form azo compounds. Such reactions are widely applied to the production of dyes. Conversion to imines Imine formation is an important reaction. Primary amines react with ketones and aldehydes to form imines. In the case of formaldehyde (R' = H), these products typically exist as cyclic trimers: RNH2 + R'_2C=O -> R'_2C=NR + H2O Reduction of these imines gives secondary amines: R'_2C=NR + H2 -> R'_2CH-NHR Similarly, secondary amines react with ketones and aldehydes to form enamines: R2NH + R'(R''CH2)C=O -> R''CH=C(NR2)R' + H2O Mercuric ions reversibly oxidize tertiary amines with an α hydrogen to iminium ions: Hg^2+ + R2NCH2R' <=> Hg + [R2N=CHR']+ + H+ Overview An overview of the reactions of amines is given below: Biological activity Amines are ubiquitous in biology. The breakdown of amino acids releases amines, famously in the case of decaying fish, which smell of trimethylamine. Many neurotransmitters are amines, including epinephrine, norepinephrine, dopamine, serotonin, and histamine. Protonated amino groups (-NH3+) are the most common positively charged moieties in proteins, specifically in the amino acid lysine. The anionic polymer DNA is typically bound to various amine-rich proteins. Additionally, the terminal charged primary ammonium on lysine forms salt bridges with carboxylate groups of other amino acids in polypeptides, which is one of the primary influences on the three-dimensional structures of proteins. Amine hormones Hormones derived from the modification of amino acids are referred to as amine hormones. Typically, the original structure of the amino acid is modified such that a –COOH, or carboxyl, group is removed, whereas the –NH2, or amine, group remains. Amine hormones are synthesized from the amino acids tryptophan or tyrosine.
Application of amines Dyes Primary aromatic amines are used as a starting material for the manufacture of azo dyes. They react with nitrous acid to form diazonium salts, which can undergo coupling reactions to form azo compounds. As azo compounds are highly coloured, they are widely used in the dyeing industry, for example: Methyl orange Direct brown 138 Sunset yellow FCF Ponceau Drugs Most drugs and drug candidates contain amine functional groups: Chlorpheniramine is an antihistamine that helps to relieve allergic disorders due to cold, hay fever, itchy skin, insect bites, and stings. Chlorpromazine is a tranquilizer that sedates without inducing sleep. It is used to relieve anxiety, excitement, restlessness or even mental disorder. Ephedrine and phenylephrine, as amine hydrochlorides, are used as decongestants. Amphetamine, methamphetamine, and methcathinone are psychostimulant amines that are listed as controlled substances by the US DEA. Thioridazine, an antipsychotic drug, is an amine which is believed to exhibit its antipsychotic effects, in part, due to its effects on other amines. Amitriptyline, imipramine, lofepramine and clomipramine are tricyclic antidepressants and tertiary amines. Nortriptyline, desipramine, and amoxapine are tricyclic antidepressants and secondary amines. (The tricyclics are grouped by the nature of the final amino group on the side chain.) Substituted tryptamines and phenethylamines are key basic structures for a large variety of psychedelic drugs. Opiate analgesics such as morphine, codeine, and heroin are tertiary amines. Gas treatment Aqueous monoethanolamine (MEA), diglycolamine (DGA), diethanolamine (DEA), diisopropanolamine (DIPA) and methyldiethanolamine (MDEA) are widely used industrially for removing carbon dioxide (CO2) and hydrogen sulfide (H2S) from natural gas and refinery process streams. They may also be used to remove CO2 from combustion gases and flue gases and may have potential for abatement of greenhouse gases. Related processes are known as sweetening. Epoxy resin curing agents Amines are often used as epoxy resin curing agents. These include dimethylethylamine, cyclohexylamine, and a variety of diamines such as 4,4′-diaminodicyclohexylmethane. Multifunctional amines such as tetraethylenepentamine and triethylenetetramine are also widely used in this capacity. The reaction proceeds by the lone pair of electrons on the amine nitrogen attacking the outermost carbon on the oxirane ring of the epoxy resin. This relieves ring strain on the epoxide and is the driving force of the reaction. Molecules with tertiary amine functionality are often used to accelerate the epoxy-amine curing reaction and include substances such as 2,4,6-tris(dimethylaminomethyl)phenol. It has been stated that this is the most widely used room-temperature accelerator for two-component epoxy resin systems. Safety Low molecular weight simple amines, such as ethylamine, are only weakly toxic, with LD50 values between 100 and 1000 mg/kg. They are skin irritants, especially as some are easily absorbed through the skin. Amines are a broad class of compounds, and more complex members of the class can be extremely bioactive, for example strychnine. See also Acid-base extraction Amine value Amine gas treating Ammine Biogenic amine Ligand isomerism Official naming rules for amines as determined by the International Union of Pure and Applied Chemistry (IUPAC) References Further reading External links Synthesis of amines Factsheet, amines in food Functional groups
Amine
Chemistry
4,216
65,639,408
https://en.wikipedia.org/wiki/Manifold%20injection
Manifold injection is a mixture formation system for internal combustion engines with external mixture formation. It is commonly used in engines with spark ignition that use petrol as fuel, such as the Otto engine, and the Wankel engine. In a manifold-injected engine, the fuel is injected into the intake manifold, where it begins forming a combustible air-fuel mixture with the air. As soon as the intake valve opens, the piston starts sucking in the still-forming mixture. Usually, this mixture is relatively homogeneous, and, at least in production engines for passenger cars, approximately stoichiometric; this means that there is an even distribution of fuel and air across the combustion chamber, and enough, but not more, air present than is required for the fuel's complete combustion. The injection timing and the metering of the fuel amount can be controlled either mechanically (by a fuel distributor), or electronically (by an engine control unit). Since the 1970s and 1980s, manifold injection has been replacing carburettors in passenger cars. However, since the late 1990s, car manufacturers have started using petrol direct injection, which caused a decline in manifold injection installation in newly produced cars. There are two different types of manifold injection: the multi-point injection (MPI) system, also known as port injection or dry manifold system, and the single-point injection (SPI) system, also known as throttle-body injection (TBI), central fuel injection (CFI), electronic gasoline injection (EGI), or wet manifold system. In this article, the terms multi-point injection (MPI) and single-point injection (SPI) are used. In an MPI system, there is one fuel injector per cylinder, installed very close to the intake valve(s). In an SPI system, there is only a single fuel injector, usually installed right behind the throttle valve. Modern manifold injection systems are usually MPI systems; SPI systems are now considered obsolete. Description In a manifold injected engine, the fuel is injected with relatively low pressure (70...1470 kPa) into the intake manifold to form a fine fuel vapour. This vapour can then form a combustible mixture with the air, and the mixture is sucked into the cylinder by the piston during the intake stroke. Otto engines use a technique called quantity control for setting the desired engine torque, which means that the amount of mixture sucked into the engine determines the amount of torque produced. For controlling the amount of mixture, a throttle valve is used, which is why quantity control is also called intake air throttling. Intake air throttling changes the amount of air sucked into the engine, which means that if a stoichiometric (λ = 1) air-fuel mixture is desired, the amount of injected fuel has to be changed along with the intake air throttling. To do so, manifold injection systems have at least one way to measure the amount of air that is currently being sucked into the engine. In mechanically controlled systems with a fuel distributor, a vacuum-driven piston directly connected to the control rack is used, whereas electronically controlled manifold injection systems typically use an airflow sensor and a lambda sensor. Only electronically controlled systems can form the stoichiometric air-fuel mixture precisely enough for a three-way catalyst to work sufficiently, which is why mechanically controlled manifold injection systems such as the Bosch K-Jetronic are now considered obsolete.
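The quantity-control logic described above reduces to simple arithmetic once the air charge is known: the controller divides the measured air mass by the stoichiometric air-fuel ratio. A minimal sketch; the constant is the usual textbook value for petrol, and the function name and example numbers are ours:

```python
STOICH_AFR_PETROL = 14.7   # kg of air per kg of fuel at lambda = 1

def fuel_mass_per_cycle(air_mass_mg: float, lam: float = 1.0) -> float:
    """Fuel mass in mg that must be injected for the measured air charge
    so the cylinder receives the target lambda (1.0 = stoichiometric)."""
    return air_mass_mg / (STOICH_AFR_PETROL * lam)

# A mid-load cylinder charge of about 400 mg of air calls for ~27 mg of fuel:
print(f"{fuel_mass_per_cycle(400.0):.1f} mg")   # -> 27.2 mg
```

In a real engine control unit this quotient is then corrected by the lambda sensor feedback and the engine map mentioned later in the article.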
Main types Single-point injection As the name implies, a single-point injected (SPI) engine has only a single fuel injector. It is usually installed right behind the throttle valve in the throttle body. Single-point injection was a relatively low-cost way for automakers to reduce exhaust emissions to comply with tightening regulations while providing better "driveability" (easy starting, smooth running, freedom from hesitation) than could be obtained with a carburettor. Many of the carburettor's supporting components - such as the air cleaner, intake manifold, and fuel line routing - could be used with few or no changes. This postponed the redesign and tooling costs of these components. However, single-point injection does not allow forming the very precise mixtures required for modern emission regulations, and is thus deemed an obsolete technology in passenger cars. Single-point injection was used extensively on American-made passenger cars and light trucks during 1980–1995, and in some European cars in the early and mid-1990s. Single-point injection has been a known technology since the 1960s, but has long been considered inferior to carburettors, because it requires an injection pump and is thus more complicated. Only with the availability of inexpensive digital engine control units (ECUs) in the 1980s did single-point injection become a reasonable option for passenger cars. Usually, intermittently injecting, low injection pressure (70...100 kPa) systems were used that allowed the use of low-cost electric fuel injection pumps. A very common single-point injection system used in many passenger cars is the Bosch Mono-Jetronic, which German motor journalist Olaf von Fersen considers a "combination of fuel injection and carburettor". The system was called Throttle-body Injection or Digital Fuel Injection by General Motors, Central Fuel Injection by Ford, PGM-CARB by Honda, and EGI by Mazda. Multi-point injection In a multi-point injected engine, every cylinder has its own fuel injector, and the fuel injectors are usually installed in close proximity to the intake valve(s). Thus, the injectors inject the fuel through the open intake valve into the cylinder, which should not be confused with direct injection. Certain multi-point injection systems also use tubes with poppet valves fed by a central injector instead of individual injectors. Typically though, a multi-point injected engine has one fuel injector per cylinder, an electric fuel pump, a fuel distributor, an airflow sensor, and, in modern engines, an engine control unit. The temperatures near the intake valve(s) are rather high, the intake stroke causes intake air swirl, and there is much time for the air-fuel mixture to form. Therefore, the fuel does not require much atomisation. The atomisation quality depends on the injection pressure, which means that a relatively low injection pressure (compared with direct injection) is sufficient for multi-point injected engines. A low injection pressure results in a low relative air-fuel velocity, which causes large, slowly vapourising fuel droplets. Therefore, the injection timing has to be precise to minimise unburnt fuel (and thus HC emissions). Because of this, continuously injecting systems such as the Bosch K-Jetronic are obsolete. Modern multi-point injection systems use electronically controlled intermittent injection instead. From 1992 to 1996, General Motors implemented a system called Central Port Injection or Central Port Fuel Injection.
The system uses tubes with poppet valves from a central injector to spray fuel at each intake port rather than the central throttle body. Fuel pressure is similar to that of a single-point injection system. CPFI (used from 1992 to 1995) is a batch-fire system, while CSFI (from 1996) is a sequential system. Injection controlling mechanism In manifold injected engines, there are three main methods of metering the fuel and controlling the injection timing. Mechanical controlling In early manifold injected engines with fully mechanical injection systems, a gear-, chain- or belt-driven injection pump with a mechanical "analogue" engine map was used. This allowed injecting fuel intermittently, and relatively precisely. Typically, such injection pumps have a three-dimensional cam that depicts the engine map. Depending on the throttle position, the three-dimensional cam is moved axially on its shaft. A roller-type pick-up mechanism that is directly connected to the injection pump control rack rides on the three-dimensional cam. Depending upon the three-dimensional cam's position, it pushes in or out the camshaft-actuated injection pump plungers, which controls both the amount of injected fuel and the injection timing. The injection plungers both create the injection pressure and act as the fuel distributors. Usually, there is an additional adjustment rod that is connected to a barometric cell and a cooling water thermometer, so that the fuel mass can be corrected according to air pressure and water temperature. Kugelfischer injection systems also have a mechanical centrifugal crankshaft speed sensor. Multi-point injected systems with mechanical controlling were used until the 1970s. No injection-timing controlling In systems without injection-timing controlling, the fuel is injected continuously; thus, no injection timing is required. The biggest disadvantage of such systems is that the fuel is also injected when the intake valves are closed, but such systems are much simpler and less expensive than mechanical injection systems with engine maps on three-dimensional cams. Only the amount of injected fuel has to be determined, which can be done very easily with a rather simple fuel distributor that is controlled by an intake manifold vacuum-driven airflow sensor. The fuel distributor does not have to create any injection pressure, because the fuel pump already provides pressure sufficient for injection (up to 500 kPa). Therefore, such systems are called "unpowered", and do not need to be driven by a chain or belt, unlike systems with mechanical injection pumps. Also, an engine control unit is not required. "Unpowered" multi-point injection systems without injection-timing controlling such as the Bosch K-Jetronic were commonly used from the mid-1970s until the early 1990s in passenger cars, although examples had existed earlier, such as the Rochester Ramjet offered on high-performance versions of the Chevrolet small-block engine from 1957 to 1965. Electronic control unit Engines with manifold injection and an electronic engine control unit are often referred to as engines with electronic fuel injection (EFI). Typically, EFI engines have an engine map built into discrete electronic components, such as read-only memory. This is both more reliable and more precise than a three-dimensional cam. The engine control circuitry uses the engine map, as well as airflow, throttle valve, crankshaft speed, and intake air temperature sensor data, to determine both the amount of injected fuel and the injection timing.
Usually, such systems have a single, pressurised fuel rail and injection valves that open according to an electric signal sent from the engine control circuitry. The circuitry can be either fully analogue or digital. Analogue systems such as the Bendix Electrojector were niche systems and were used from the late 1950s until the early 1970s; digital circuitry became available in the late 1970s and has been used in electronic engine control systems since. One of the first widespread digital engine control units was the Bosch Motronic. Air mass determination In order to mix air and fuel correctly so that a proper air-fuel mixture is formed, the injection control system needs to know how much air is sucked into the engine, so it can determine how much fuel has to be injected accordingly. In modern systems, an air-mass meter that is built into the throttle body meters the air mass and sends a signal to the engine control unit, so it can calculate the correct fuel mass. Alternatively, a manifold vacuum sensor can be used. The manifold vacuum sensor signal, the throttle position, and the crankshaft speed can then be used by the engine control unit to calculate the correct amount of fuel. In modern engines, a combination of all these systems is used. Mechanical injection controlling systems as well as unpowered systems typically only have an intake manifold vacuum sensor (a membrane or a sensor plate) that is mechanically connected to the injection pump rack or fuel distributor. Injection operation modes Manifold injected engines can use either continuous or intermittent injection. In a continuously injecting system, the fuel is injected continuously; thus, there are no operating modes. In intermittently injecting systems, however, there are usually four different operating modes. Simultaneous injection In a simultaneous, intermittently injecting system, there is one single, fixed injection timing for all cylinders. Therefore, the injection timing is ideal only for some cylinders; there is always at least one cylinder that has its fuel injected against the closed intake valve(s). This causes fuel evaporation times that are different for each cylinder. Group injection Systems with intermittent group injection work similarly to the simultaneous injection systems mentioned earlier, except that they have two or more groups of simultaneously injecting fuel injectors. Typically, a group consists of two fuel injectors. In an engine with two groups of fuel injectors, there is an injection every half crankshaft rotation, so that at least in some areas of the engine map no fuel is injected against a closed intake valve. This is an improvement over a simultaneously injecting system. However, the fuel evaporation times are still different for each cylinder. Sequential injection In a sequentially injecting system, each fuel injector has a fixed, correctly set injection timing that is in sync with the spark plug firing order and the intake valve opening. This way, no more fuel is injected against closed intake valves. Cylinder-specific injection Cylinder-specific injection means that there are no limitations to the injection timing. The injection control system can set the injection timing for each cylinder individually, and there is no fixed synchronisation between each cylinder's injector. This allows the injection control unit to inject the fuel not only according to firing order and intake valve opening intervals, but also to correct cylinder charge irregularities.
This system's disadvantage is that it requires cylinder-specific air-mass determination, which makes it more complicated than a sequentially injecting system. History The first manifold injection system was designed by Johannes Spiel at Hallesche Maschinenfabrik. Deutz started series production of stationary four-stroke engines with manifold injection in 1898. Grade built the first two-stroke engine with manifold injection in 1906; the first manifold injected series-production four-stroke aircraft engines were built by Wright and Antoinette the same year (Antoinette 8V). In 1912, Bosch equipped a watercraft engine with a makeshift injection pump built from an oil pump, but this system did not prove to be reliable. In the 1920s, they attempted to use a diesel engine injection pump in a petrol-fuelled Otto engine. However, they were not successful. In 1930, Moto Guzzi built the first manifold injected Otto engine for motorcycles, which eventually was the first land vehicle engine with manifold injection. From the 1930s until the 1950s, manifold injection systems were not used in passenger cars, despite the fact that such systems existed. This was because the carburettor proved to be a simpler and less expensive, yet sufficient, mixture formation system that did not need replacing yet. Around 1950, Daimler-Benz started development of a petrol direct injection system for their Mercedes-Benz sports cars. For passenger cars, however, a manifold injection system was deemed more feasible. Eventually, the Mercedes-Benz W 128, W 113, W 189, and W 112 passenger cars were equipped with manifold injected Otto engines. From 1951 until 1956, FAG Kugelfischer Georg Schäfer & Co. developed the mechanical Kugelfischer injection system. It was used in many passenger cars, such as the Peugeot 404 (1962), Lancia Flavia iniezione (1965), BMW E10 (1969), Ford Capri RS 2600 (1970), BMW E12 (1973), BMW E20 (1973), and the BMW E26 (1978). In 1957, Bendix Corporation presented the Bendix Electrojector, one of the first electronically controlled manifold injection systems. Bosch built this system under licence, and marketed it from 1967 as the D-Jetronic. In 1973, Bosch introduced their first self-developed multi-point injection systems, the electronic L-Jetronic and the mechanical, unpowered K-Jetronic. Their fully digital Motronic system was introduced in 1979. It found widespread use in German luxury saloons. At the same time, most American car manufacturers stuck to electronic single-point injection systems. In the mid-1980s, Bosch upgraded their non-Motronic multi-point injection systems with digital engine control units, creating the KE-Jetronic and the LH-Jetronic. Volkswagen developed the digital "Digijet" injection system for their "Wasserboxer" water-cooled engines, which evolved into the Volkswagen Digifant system in 1985. Cheap single-point injection systems that worked with either two-way or three-way catalytic converters, such as the Mono-Jetronic introduced in 1987, enabled car manufacturers to economically offer an alternative to carburettors even in their economy cars, which helped the extensive spread of manifold injection systems across all passenger car market segments during the 1990s. In 1995, Mitsubishi introduced the first petrol direct injection Otto engine for passenger cars, and petrol direct injection has been replacing manifold injection since then, though not across all market segments; several newly produced passenger car engines still use multi-point injection.
References Engine components Fuel injection systems
Manifold injection
Technology
3,525
2,527,996
https://en.wikipedia.org/wiki/Automatic%20frequency%20control
In radio equipment, Automatic Frequency Control (AFC), also called Automatic Fine Tuning (AFT), is a method or circuit to automatically keep a resonant circuit tuned to the frequency of an incoming radio signal. It is primarily used in radio receivers to keep the receiver tuned to the frequency of the desired station. In radio communication, AFC is needed because, after the bandpass frequency of a receiver is tuned to the frequency of a transmitter, the two frequencies may drift apart, interrupting the reception. This can be caused by a poorly controlled transmitter frequency, but the most common cause is drift of the center bandpass frequency of the receiver, due to thermal or mechanical drift in the values of the electronic components. Assuming that a receiver is nearly tuned to the desired frequency, the AFC circuit in the receiver develops an error voltage proportional to the degree to which the receiver is mistuned. This error voltage is then fed back to the tuning circuit in such a way that the tuning error is reduced. In most frequency modulation (FM) detectors, an error voltage of this type is easily available. See Negative feedback. AFC was mainly used in radios and television sets around the mid-20th century. In the 1970s, receivers began to be designed using frequency synthesizer circuits, which synthesized the receiver's input frequency from a crystal oscillator using the vibrations of an ultra-stable quartz crystal. These maintained sufficiently stable frequencies that AFCs were no longer needed. See also Automatic gain control (AGC) Frequency drift Phase-locked loop (PLL) References External links Radar tutorial Communication circuits Wireless tuning and filtering
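The error-voltage feedback described above behaves like a first-order control loop: the correction is proportional to the mistuning, so the tuning error shrinks geometrically. A toy discrete-time model; the gain, frequencies, and function name are ours, chosen only to show convergence:

```python
def afc_loop(f_signal_hz: float, f_tuned_hz: float,
             gain: float, steps: int) -> float:
    """Crude first-order AFC model: each step the discriminator produces
    an error proportional to the mistuning, and the feedback corrects the
    tuning by gain * error, shrinking the error by (1 - gain) per step."""
    for _ in range(steps):
        error = f_signal_hz - f_tuned_hz   # discriminator output
        f_tuned_hz += gain * error         # feedback correction
    return f_tuned_hz

# Starting 50 kHz off a 10.7 MHz signal, the loop pulls the tuning in:
print(f"{afc_loop(10_700_000.0, 10_650_000.0, gain=0.5, steps=20):.2f}")
```

A real AFC is an analogue loop rather than a stepped one, but the proportional-error structure, and hence the exponential pull-in, is the same.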
Automatic frequency control
Engineering
324
46,532,824
https://en.wikipedia.org/wiki/Intermetallic%20particle
Intermetallic particles form during solidification of metallic alloys. Aluminium alloys Al-Si-Cu-Mg alloys Al-Si-Cu-Mg alloys form plate-like intermetallic phases such as β-Al5FeSi, as well as α-Al8Fe2Si, Al2Cu, etc. The size and morphology of the intermetallic phases in these alloys control their mechanical properties, especially strength and ductility. The size of these phases depends on the secondary dendrite arm spacing of the primary phase in the microstructure, as well as on the Si content of the alloy. Phases and crystal structures Magnesium alloys WE 43 An in-situ synchrotron diffraction experiment on Elektron WE43 alloy (Mg4Y3Nd) shows that this alloy forms the following intermetallic phases: Mg12Nd, Mg14Y4Nd, and Mg24Y5. Phases and crystal structures AZ 91 References
Intermetallic particle
Physics,Chemistry,Materials_science
191
2,632,079
https://en.wikipedia.org/wiki/Fanning%20friction%20factor
The Fanning friction factor (named after American engineer John T. Fanning) is a dimensionless number used as a local parameter in continuum mechanics calculations. It is defined as the ratio between the local shear stress and the local flow kinetic energy density: $f = \frac{\tau}{\frac{1}{2}\rho u^2}$, where $f$ is the local Fanning friction factor (dimensionless); $\tau$ is the local shear stress (units of pascals (Pa) or pounds-force per square foot (psf)); and $\frac{1}{2}\rho u^2$ is the bulk dynamic pressure (Pa or psf), in which $\rho$ is the density of the fluid (kg/m³ or lbm/ft³) and $u$ is the bulk flow velocity (m/s or ft/s). In particular, the shear stress at the wall can, in turn, be related to the pressure loss by multiplying the wall shear stress by the wall area ($\pi D L$ for a pipe with circular cross section) and dividing by the cross-sectional flow area ($\frac{\pi}{4} D^2$ for a pipe with circular cross section). Thus $\Delta P = 4 f \frac{L}{D} \frac{\rho u^2}{2}$. Fanning friction factor formula This friction factor is one-fourth of the Darcy friction factor, so attention must be paid to note which one of these is meant in the "friction factor" chart or equation consulted. Of the two, the Fanning friction factor is the more commonly used by chemical engineers and those following the British convention. The formulas below may be used to obtain the Fanning friction factor for common applications. The Darcy friction factor can also be expressed as $f_D = \frac{8\tau}{\rho u^2}$, where $\tau$ is the shear stress at the wall, $\rho$ is the density of the fluid, and $u$ is the flow velocity averaged over the flow cross section. For laminar flow in a round tube From the chart, it is evident that the friction factor is never zero, even for smooth pipes, because of some roughness at the microscopic level. The friction factor for laminar flow of Newtonian fluids in round tubes is often taken to be $f = \frac{16}{\mathrm{Re}}$, where Re is the Reynolds number of the flow. For a square channel the value used is $f = \frac{14.227}{\mathrm{Re}}$. For turbulent flow in a round tube Hydraulically smooth piping Blasius developed an expression of the friction factor in 1913 for flow in the regime $2100 < \mathrm{Re} < 10^5$: $f = \frac{0.0791}{\mathrm{Re}^{1/4}}$. Koo introduced another explicit formula in 1933 for turbulent flow: $f = 0.0014 + \frac{0.125}{\mathrm{Re}^{0.32}}$. Pipes/tubes of general roughness When the pipes have a certain roughness $\varepsilon$, this factor must be taken into account when the Fanning friction factor is calculated. The relationship between pipe roughness and the Fanning friction factor was developed by Haaland (1983) for turbulent flow conditions: $\frac{1}{\sqrt{f}} = -3.6 \log_{10}\!\left[\frac{6.9}{\mathrm{Re}} + \left(\frac{\varepsilon/D}{3.7}\right)^{10/9}\right]$, where $\varepsilon$ is the roughness of the inner surface of the pipe (dimension of length) and $D$ is the inner pipe diameter. The Swamee–Jain equation is used to solve directly for the Darcy–Weisbach friction factor $f_D$ for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation. Fully rough conduits As the roughness extends into the turbulent core, the Fanning friction factor becomes independent of the fluid viscosity at large Reynolds numbers, as illustrated by Nikuradse and Reichert (1943) for flow in the fully rough region. The equation below has been modified from the original format, which was developed for the Darcy friction factor, using $f_D = 4f$: $\frac{1}{\sqrt{f}} = -4.0 \log_{10}\!\left(\frac{\varepsilon/D}{3.7}\right)$. General expression For the turbulent flow regime, the relationship between the Fanning friction factor and the Reynolds number is more complex and is governed by the Colebrook equation, which is implicit in $f$: $\frac{1}{\sqrt{f}} = -4.0 \log_{10}\!\left(\frac{\varepsilon/D}{3.7} + \frac{1.255}{\mathrm{Re}\sqrt{f}}\right)$. Various explicit approximations of the related Darcy friction factor have been developed for turbulent flow. Stuart W. Churchill developed a formula that covers the friction factor for both laminar and turbulent flow. This was originally produced to describe the Moody chart, which plots the Darcy–Weisbach friction factor against the Reynolds number.
The Darcy–Weisbach friction factor $f_D$, also called the Moody friction factor, is 4 times the Fanning friction factor, so a factor of $\frac{1}{4}$ has been applied to produce the Churchill formula given below: $f = 2\left[\left(\frac{8}{\mathrm{Re}}\right)^{12} + \left(A + B\right)^{-3/2}\right]^{1/12}$, with $A = \left[2.457 \ln \frac{1}{(7/\mathrm{Re})^{0.9} + 0.27\,\varepsilon/D}\right]^{16}$ and $B = \left(\frac{37530}{\mathrm{Re}}\right)^{16}$, where Re is the Reynolds number (unitless); $\varepsilon$ is the roughness of the inner surface of the pipe (dimension of length); $D$ is the inner pipe diameter; and ln is the natural logarithm. Here, $f$ is not the Darcy–Weisbach friction factor $f_D$; it is 4 times lower than $f_D$. Flows in non-circular conduits Due to the geometry of non-circular conduits, the Fanning friction factor can be estimated from the algebraic expressions above by using the hydraulic radius when calculating the Reynolds number. Application The friction head can be related to the pressure loss due to friction by dividing the pressure loss by the product of the acceleration due to gravity and the density of the fluid. Accordingly, the relationship between the friction head and the Fanning friction factor is $h_f = 4 f \frac{L}{D} \frac{u^2}{2g}$, where $h_f$ is the friction loss (in head) of the pipe, $f$ is the Fanning friction factor of the pipe, $u$ is the flow velocity in the pipe, $L$ is the length of pipe, $g$ is the local acceleration of gravity, and $D$ is the pipe diameter. References Further reading Dimensionless numbers of fluid mechanics Equations of fluid dynamics Fluid dynamics Piping
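A small numeric sketch tying the smooth-pipe correlations above to a pressure-drop calculation; the laminar-turbulent cutoff at Re = 2100 and the worked numbers are illustrative choices, not values from this article:

```python
def fanning_friction_factor(re: float) -> float:
    """Fanning friction factor for a smooth round tube: 16/Re in the
    laminar regime, the Blasius-type correlation when turbulent."""
    if re < 2100.0:
        return 16.0 / re
    return 0.0791 * re ** -0.25

def pressure_drop_pa(re, length_m, diameter_m, density, velocity):
    """Frictional pressure loss: dP = 4 f (L/D) (rho u^2 / 2)."""
    f = fanning_friction_factor(re)
    return 4.0 * f * (length_m / diameter_m) * 0.5 * density * velocity ** 2

# Water-like flow at Re = 5e4 in a 25 mm tube, 10 m long, at 2 m/s:
print(f"{pressure_drop_pa(5e4, 10.0, 0.025, 1000.0, 2.0):.0f} Pa")  # ~16900 Pa
```

Note the factor of 4 in the pressure-drop expression: dropping it is the classic mistake of mixing up the Fanning and Darcy conventions that the article warns about.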
Fanning friction factor
Physics,Chemistry,Engineering
989
7,868,885
https://en.wikipedia.org/wiki/A.%20David%20Buckingham
Amyand David Buckingham (28 January 1930 – 4 February 2021), born in Pymble, Sydney, New South Wales, Australia, was a chemist with primary expertise in chemical physics. Life and career David Buckingham obtained a Bachelor of Science and Master of Science, under Professor Raymond Le Fevre, from the University of Sydney and a PhD from the University of Cambridge supervised by John Pople. He was an 1851 Exhibition Senior Student in the Physical Chemistry Laboratory at the University of Oxford from 1955 to 1957, Lecturer and then Student (Fellow) at Christ Church, Oxford from 1955 to 1965 and University Lecturer in the Inorganic Chemistry Laboratory from 1958 to 1965. He was Professor of Theoretical Chemistry at the University of Bristol from 1965 to 1969. He was appointed Professor of Chemistry at the University of Cambridge in 1969. He was elected a Fellow of the Royal Society in 1975, a Fellow of the American Physical Society in 1986 and a Foreign Associate of the United States National Academy of Sciences in 1992. He was a member of the International Academy of Quantum Molecular Science. Buckingham was elected to the Australian Academy of Science in 2008 as a Corresponding Fellow. He was awarded the first Ahmed Zewail Prize in Molecular Sciences for pioneering contributions to the molecular sciences in 2006. He won the Harrie Massey Medal and Prize in 1995. He also played 10 first-class cricket matches for Cambridge University and Free Foresters between 1955 and 1960, scoring 349 runs including two half-centuries at an average of 18.36. He was President of Cambridge University Cricket Club between 1990 and 2009. Professor Buckingham finished his career as Emeritus Professor of Chemistry at the University of Cambridge, United Kingdom and Emeritus Fellow at Pembroke College, Cambridge. Scientific contributions Professor Buckingham's research focussed on the measurement and understanding of the electric, magnetic and optical properties of molecules, as well as on the theory of intermolecular forces. Initially he worked on dielectric properties of liquids, such as dipole moments of molecules in both solution and gas phases. He developed the theory of the interaction of molecules in liquids and gases with external electric and magnetic fields. In 1959, he proposed a direct method of measurement of molecular quadrupole moments (measured in buckinghams), which he demonstrated experimentally in 1963 on the carbon dioxide molecule. In 1960, he developed theories of solvent effects on nuclear magnetic resonance (NMR) spectra and vibrational spectra of molecules. In 1962 he considered the effect on NMR spectra of molecular orientation in a strong electric field, and developed a method to determine the absolute sign of the spin-spin coupling constant. In 1968, he determined the first accurate values of hyperpolarizability using the Kerr effect. In 1971 Buckingham and Laurence Barron pioneered the study of Raman optical activity, due to differences in the Raman scattering of left- and right-polarized light by chiral molecules. In the 1980s, he showed the importance of long-range intermolecular forces in determining the structure and properties of small molecule clusters, with particular applications in biological macromolecules. In 1990 he predicted the linear effect of an electric field on the reflection of light at interfaces. In 1995, he proved that the sum of the rotational strengths of all vibrational transitions from the ground state of a chiral molecule is zero.
Personal life In July 1964, David Buckingham sailed from Southampton to Montréal, to take up a research post in Ottawa. On the voyage he met Jillian Bowles, a physiotherapist who was heading to a post in British Columbia. They were engaged in January 1965 and married at Christ Church Cathedral, Oxford six months later. They were married for over 55 years, and had three children: Lucy Elliot and Mark Vincent, born in Bristol, and Alice Susan born in Cambridge. Between them they had eight grandchildren: Carola, Peter, Oliver, William, Patrick, Anna, Samuel and Maeve. David Buckingham died in Cambridge on 4 February 2021, seven days after his 91st birthday; he was survived by Jill, their children and grandchildren. See also Buckingham (unit) References External links 2021 deaths 1930 births Commanders of the Order of the British Empire Fellows of the Royal Society Fellows of Pembroke College, Cambridge Fellows of Christ Church, Oxford Academics of the University of Bristol Australian chemists Members of the International Academy of Quantum Molecular Science Theoretical chemists Foreign associates of the National Academy of Sciences Australian cricketers Cambridge University cricketers Free Foresters cricketers Members of the University of Cambridge Department of Chemistry University of Sydney alumni Fellows of the Australian Academy of Science Fellows of the American Physical Society Alumni of Corpus Christi College, Cambridge Scientists from Sydney Presidents of the Cambridge Philosophical Society 20th-century Australian sportsmen
A. David Buckingham
Chemistry
942
76,369,743
https://en.wikipedia.org/wiki/Erysiphe%20platani
Erysiphe platani, also known as sycamore powdery mildew, is a fungus native to North America that now infects sycamore tree species worldwide. Infections may spread rapidly in urban settings with large groups of young trees or in plant nurseries. The mildew thrives under high-humidity conditions during the growing season. Symptomatic trees show leaf discoloration and puckering as the mildew spreads across buds and leaf surfaces. The most visible effects, which include "leaf curling, stunting, and distortion," appear on vulnerable newly emerged leaves. The infection appears only on leaves; it has no obvious effect on stems and branches. Fertilization and pollarding increase the number of young shoots, which are the parts of the trees most vulnerable to infection. References Erysiphales Fungi of North America Fungal plant pathogens and diseases Fungus species Fungi described in 1874 Taxa named by Elliot Calvin Howe
Erysiphe platani
Biology
193
25,100,521
https://en.wikipedia.org/wiki/Yessotoxin
Yessotoxins are a group of lipophilic, sulfur-bearing polyether toxins that are related to ciguatoxins. They are produced by a variety of dinoflagellates, most notably Lingulodinium polyedrum and Gonyaulax spinifera. When the environmental conditions encourage the growth of YTX-producing dinoflagellates, the toxin(s) bioaccumulate in edible tissues of bivalve molluscs, including mussels, scallops, and clams, thus allowing entry of YTX into the food chain. History The first YTX analog discovered, yessotoxin, was initially found in the scallop species Patinopecten yessoensis in the 1960s. Since then, numerous yessotoxin analogs have been isolated from shellfish and marine algae (including 45-hydroxyyessotoxin and carboxyyessotoxin). Initially, scientists wrongly classified YTXs in the group of diarrhetic shellfish poisoning (DSP) toxins, together with okadaic acid and the azaspiracids. These types of toxins can cause extreme gastrointestinal upset and accelerate cancer growth. Once scientists realized YTXs did not have the same toxicological mechanism of action as the other toxins (protein phosphatase inhibitors), they were given their own classification. Toxicity A large number of studies have been conducted to assess the potential toxicity of YTXs. To date, none of these studies has highlighted any toxic effects of YTXs in humans. They have, however, found YTXs to have toxic effects in mice when the toxin was administered by intraperitoneal injection. The toxicological effects encountered are similar to those seen for paralytic shellfish toxins, and include hepatotoxicity, cardiotoxicity, and neurotoxicity, with a YTX level of 100 μg/kg causing toxic effects. Limited toxic effects have been seen after oral administration of the toxin to animals. The mechanism by which YTX exerts a toxic effect is unknown and is currently being studied by a number of research groups. However, some recent studies suggest the mode of action may have something to do with altering calcium homeostasis. Genotoxicity has been newly reported and confirmed. Although no data illustrate a direct association of YTXs with toxicity in humans, concerns about the potential health risks of YTXs remain due to the significant animal toxicity observed, and, like other algal toxins present within shellfish, YTXs are not destroyed by heating or freezing. As a result, several countries, including New Zealand, Japan, and those in Europe, regulate the levels of YTXs in shellfish. In 2002, the European Commission placed the regulatory level at 1 μg of YTXs per g (1 mg/kg) of shellfish meat intended for human consumption (Directive 2002/225/EC). Recently, it was shown that yessotoxins can trigger ribotoxic stress. Analysis The analysis of YTXs is necessary because of the possible health risks and the limits put in place by the European Commission directive. It is complex due to the large number of YTX analogues that can be present in the sample. Analysis is also problematic because YTXs have similar properties to other lipophilic toxins present in the samples, so methods can be subject to false negative or false positive results due to sample interferences. Several experimental techniques have been developed to detect YTXs, each offering varying levels of selectivity and sensitivity, whilst having numerous advantages and disadvantages.
Extraction methods Prior to analysis, YTXs must be isolated from the sample medium, whether this is the digestive gland of a shellfish, a water sample, or a growth-culture medium. This can be achieved by several methods: Liquid–liquid or solvent extraction Liquid–liquid extraction or solvent extraction can be used to isolate YTXs from the sample medium. Methanol is normally the solvent of choice, but other solvents can also be used, including acetone and chloroform. The drawback of using the solvent extraction method is that the levels of analyte recovery can be poor, so any results obtained from the quantification processes may not be representative of the sample. Solid phase extraction Solid phase extraction can also be used to isolate YTXs from the sample medium. This technique separates the components of a mixture by using their different chemical and physical properties. This method is robust and extremely useful when small sample volumes are being analysed. It is advantageous over solvent extraction, as it concentrates the sample (it can give sample enrichment by up to a factor of 10) and can purify it by removing salts and nonpolar substances which can interfere with the final analysis. This technique is also beneficial because it gives good levels of YTX recovery, ranging from 40 to 50%. Analytical techniques A range of analytical methods can be used to identify and quantify YTXs. Mouse bioassay The mouse bioassay (MBA) procedure developed by Yasumoto et al. is the official reference method used to analyse for YTX and lipophilic toxins including okadaic acid, dinophysistoxins (DSPs), azaspiracids, and pectenotoxins. The MBA involves injecting the extracted toxin into a mouse and monitoring the mouse survival rate; the toxicity of the sample can be subsequently deduced and the analyte concentration determined. This calculation is made on the basis that one mouse unit (MU) is the minimum quantity of toxin needed to kill a mouse in 24 hours. The MU is set by regulating bodies at 0.05 MU/g of animal. The original Yasumoto MBA is subject to interferences from paralytic shellfish toxins and free fatty acids in solution, which cause false positive results. Several modifications to the MBA can be made to allow the test to be performed without these errors. The MBA, however, still has many drawbacks: it is a nonspecific assay, unable to differentiate between YTX and other sample components, including DSP toxins; it raises economic and social issues with regard to testing on animals; its results are not very reproducible; and it has insufficient detection capabilities. The method, though, is quick and inexpensive. Due to these factors, the other, more recently developed, techniques are being preferred for analysis of YTX. Enzyme-linked immunosorbent assay The enzyme-linked immunosorbent assay (ELISA) technique used for the analysis of YTXs is a recently developed method by Briggs et al. This competitive, indirect immunoassay uses polyclonal antibodies against YTX to determine its concentration in the sample. The assay is commercially available, and is a rapid technique for the analysis of YTXs in shellfish, algal cells, and culture samples. ELISA has several advantages: it is very sensitive, has a limit of quantification of 75 μg/kg, is relatively cheap, and is easy to carry out. The major disadvantage of this method is that it cannot differentiate between the different YTX analogues and takes a long time to generate results.
Chromatographic methods A variety of chromatographic methods can be used to analyse YTXs. This includes chromatographic techniques coupled to mass spectrometry and fluorescence detectors. All of the chromatographic techniques require a calibration step prior to sample analysis. Chromatographic methods with fluorescence detection Liquid chromatography with fluorescence detection (LC-FLD) provides a selective, relatively cheap, reproducible method for the qualitative and quantitative analysis of YTX for shellfish and algae samples. This method requires an additional sample preparation step after the analyte extraction procedure has been completed (in this case SPE is preferentially used so common interferences can be removed from the sample). This additional step involves the derivatization of the YTXs with a fluorescent dienophile reagent, 4-[2-(6,7-dimethoxy-4-methyl-3-oxo-3,4-dihydroquinoxalinyl)ethyl]-1,2,4-triazoline-3,5-dione, which facilitates analyte detection. This additional sample preparation step can make LC-FLD analysis extremely time-consuming and is a major disadvantage of the technique. Chromatographic methods coupled to mass spectrometry This technique is extremely useful for the analysis of multiple toxins. It has numerous advantages over the other techniques used. It is a sensitive and selective analytical method, making it ideal for the analysis of complex samples and those with low analyte concentrations. The method is also beneficial in that it provides important structural information on the analyte, which is helpful for aiding analyte identification and when unknown analytes are present in the sample. The technique has benefits over LC-FLD, as the derivatization and purification extraction steps are not necessary. YTX analysis limits of detection of 30 mg/g of shellfish tissue for chromatographic methods coupled to mass spectrometry have been recorded. The major drawback to LC-MS is that the equipment is very expensive. Capillary electrophoresis Capillary electrophoresis (CE) is emerging as the preferred analytical method for YTX analysis, as it has significant advantages over the other analytical techniques used, including high efficiency, a fast and simple separation procedure, small required sample volumes, and minimal reagent consumption. The techniques used for YTX analysis include CE with ultraviolet (UV) detection and CE coupled to mass spectrometry (MS). CE-UV is a good method for YTX analysis, as its selectivity can easily differentiate between YTXs and DSP toxins. The sensitivity of these techniques can, however, be poor due to the low molar absorptivity of the analytes. The technique gives a limit of detection (LOD) of 0.3 μg/ml and a limit of quantification (LOQ) of 0.9 μg/ml. The sensitivity of conventional CE-UV can be improved by using micellar electrokinetic chromatography (MEKC). CE-MS has the added advantage over CE-UV of being able to give molecular weight and/or structural information about the analyte. This enables the user to carry out unequivocal confirmations of the analytes present in the sample. The LOD and the LOQ have been calculated as 0.02 μg/ml and 0.08 μg/ml, respectively, again meeting the European Commission directive. See also Canadian Reference Materials References Sources Phycotoxins Polyether toxins Alkene derivatives Organic sodium salts Marine neurotoxins Sulfate esters
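To make the regulatory comparison above concrete, a minimal screening sketch (the function name is hypothetical; the 1 μg/g level is the one cited from the 2002 directive, and the input is assumed to be a quantified LC-MS or ELISA result in μg of YTX equivalents per g of shellfish meat):

EU_LIMIT_UG_PER_G = 1.0  # regulatory level from Directive 2002/225/EC (1 mg/kg)

def exceeds_eu_limit(ytx_equiv_ug_per_g: float) -> bool:
    # True if the quantified YTX-equivalent concentration exceeds the EU level
    return ytx_equiv_ug_per_g > EU_LIMIT_UG_PER_G

assert not exceeds_eu_limit(0.4)  # a sample below the limit passes
assert exceeds_eu_limit(1.3)      # a sample above the limit is flagged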
Yessotoxin
Chemistry
2,322
5,907,331
https://en.wikipedia.org/wiki/SoundRenderer
SoundRenderer is a spatialized audio rendering plugin for Maya that simulates 3D-positional audio. It can be used to create a multichannel audio track from many mono WAV files positioned in the scene, for later synchronization with the rendered video. The plugin uses the audio files set up in the 3D scene and renders them to a variable number of channels (mono, stereo, 5.1, etc.). To simulate realistic surroundings, it models several real-world effects (see the illustrative sketch below): distance delay, Doppler effect, air absorption, and panning. Nodes There are two different kinds of nodes available. The speaker nodes can be linked to WAV files which are triggered either by a keyframe or an expression, but can also be looped (for constant environmental sounds). The listener node can contain a variable number of speakers and offers customization of the different effects and their precision. Mixer The mixer offers parallel configuration of the speakers in the scene without having to find and select them individually. It also allows the triggering mode to be changed. See also 3D audio effect Maya (software) List of Maya plugins References External links SoundRenderer download page Multimedia and Data Software Solutions Multimedia software
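The plugin's internals are not documented here; as a rough illustration of the physics behind those effects, a minimal sketch (not SoundRenderer's actual code; the constant and function names are assumptions):

import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed constant)

def distance_delay(speaker, listener):
    # propagation delay in seconds: distance divided by the speed of sound
    return math.dist(speaker, listener) / SPEED_OF_SOUND

def doppler_factor(radial_speed):
    # frequency scaling for a source moving toward (+) or away from (-) the listener
    return SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_speed)

def inverse_distance_gain(speaker, listener, ref_distance=1.0):
    # simple 1/r attenuation relative to a reference distance; a fuller model
    # would add frequency-dependent air absorption on top of this
    return min(1.0, ref_distance / max(math.dist(speaker, listener), 1e-6))

# a speaker 34.3 m from the listener is heard 0.1 s late
assert abs(distance_delay((34.3, 0.0, 0.0), (0.0, 0.0, 0.0)) - 0.1) < 1e-9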
SoundRenderer
Technology
249
7,133,473
https://en.wikipedia.org/wiki/Commutation%20matrix
In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix K(m,n) is the nm × mn permutation matrix which, for any m × n matrix A, transforms vec(A) into vec(A^T):

K(m,n) vec(A) = vec(A^T).

Here vec(A) is the mn × 1 column vector obtained by stacking the columns of A on top of one another:

vec(A) = (A_{1,1}, \ldots, A_{m,1}, A_{1,2}, \ldots, A_{m,2}, \ldots, A_{1,n}, \ldots, A_{m,n})^T,

where A = [A_{i,j}]. In other words, vec(A) is the vector obtained by vectorizing A in column-major order. Similarly, vec(A^T) is the vector obtained by vectorizing A in row-major order. The cycles and other properties of this permutation have been heavily studied for in-place matrix transposition algorithms. In the context of quantum information theory, the commutation matrix is sometimes referred to as the swap matrix or swap operator.

Properties
The commutation matrix is a special type of permutation matrix, and is therefore orthogonal. In particular, K(m,n) is the permutation matrix with a 1 in row \pi(k) of column k, where \pi is the permutation over \{1, \ldots, mn\} for which

\pi(i + m(j - 1)) = j + n(i - 1), \qquad i = 1, \ldots, m, \quad j = 1, \ldots, n.

The determinant of K(m,n) is (-1)^{\binom{m}{2}\binom{n}{2}}.

Replacing A with A^T in the definition of the commutation matrix shows that K(n,m) = (K(m,n))^T = (K(m,n))^{-1}. Therefore, in the special case of m = n the commutation matrix is an involution and symmetric.

The main use of the commutation matrix, and the source of its name, is to commute the Kronecker product: for every m × n matrix A and every r × q matrix B,

K(r,m) (A \otimes B) K(n,q) = B \otimes A.

This property is often used in developing the higher order statistics of Wishart covariance matrices. The case of n = q = 1 for the above equation states that for any column vectors v, w of sizes m, r respectively,

K(r,m) (v \otimes w) = w \otimes v.

This property is the reason that this matrix is referred to as the "swap operator" in the context of quantum information theory.

Two explicit forms for the commutation matrix are as follows: if e_{r,j} denotes the j-th canonical vector of dimension r (i.e. the vector with 1 in the j-th coordinate and 0 elsewhere) then

K(m,n) = \sum_{i=1}^{m} \sum_{j=1}^{n} (e_{m,i} e_{n,j}^T) \otimes (e_{n,j} e_{m,i}^T).

The commutation matrix may also be expressed as the following block matrix:

K(m,n) = \begin{bmatrix} K_{1,1} & \cdots & K_{1,n} \\ \vdots & \ddots & \vdots \\ K_{m,1} & \cdots & K_{m,n} \end{bmatrix},

where the p,q entry of the n × m block-matrix K_{i,j} is given by

K_{i,j}(p,q) = \begin{cases} 1 & \text{if } j = p \text{ and } i = q, \\ 0 & \text{otherwise.} \end{cases}

For example,

K(2,2) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.

Code
For both square and rectangular matrices of m rows and n columns, the commutation matrix can be generated by the code below.

Python

import numpy as np

def comm_mat(m, n):
    # determine permutation applied by K
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    # apply this permutation to the rows (i.e. to each column) of identity matrix and return result
    return np.eye(m * n)[w, :]

Alternatively, a version without imports:

# Kronecker delta
def delta(i, j):
    return int(i == j)

def comm_mat(m, n):
    # determine permutation applied by K
    v = [m * j + i for i in range(m) for j in range(n)]
    # apply this permutation to the rows (i.e. to each column) of identity matrix
    I = [[delta(i, j) for j in range(m * n)] for i in range(m * n)]
    return [I[i] for i in v]

MATLAB

function P = com_mat(m, n)
% determine permutation applied by K
A = reshape(1:m*n, m, n);
v = reshape(A', 1, []);
% apply this permutation to the rows (i.e. to each column) of identity matrix
P = eye(m*n);
P = P(v,:);

R

# Sparse matrix version
comm_mat = function(m, n){
  i = 1:(m * n)
  j = NULL
  for (k in 1:m) {
    j = c(j, m * 0:(n-1) + k)
  }
  Matrix::sparseMatrix(i = i, j = j, x = 1)
}

Example
Let A denote the following 3 × 2 matrix:

A = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}.

A has the following column-major and row-major vectorizations (respectively):

vec(A) = (1, 2, 3, 4, 5, 6)^T, \qquad vec(A^T) = (1, 4, 2, 5, 3, 6)^T.

The associated commutation matrix is

K = K(3,2) = \begin{bmatrix} 1 & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & 1 & \cdot & \cdot \\ \cdot & 1 & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & 1 & \cdot \\ \cdot & \cdot & 1 & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & 1 \end{bmatrix}

(where each \cdot denotes a zero). As expected, the following holds:

K vec(A) = vec(A^T).

References Jan R.
Magnus and Heinz Neudecker (1988), Matrix Differential Calculus with Applications in Statistics and Econometrics, Wiley. Linear algebra Matrices Articles with example Python (programming language) code Articles with example MATLAB/Octave code
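As a quick sanity check of the NumPy helper above against the worked example (a usage sketch; comm_mat is the function from the Code section):

import numpy as np

def comm_mat(m, n):
    # permutation sending vec(A) to vec(A^T), applied to the rows of the identity
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n)[w, :]

A = np.array([[1, 4], [2, 5], [3, 6]])    # the 3 x 2 matrix from the Example section
vec_A = A.ravel(order="F")                # column-major vectorization: (1, 2, 3, 4, 5, 6)
vec_At = A.T.ravel(order="F")             # vec(A^T): (1, 4, 2, 5, 3, 6)
K = comm_mat(3, 2)
assert np.array_equal(K @ vec_A, vec_At)  # K vec(A) = vec(A^T)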
Commutation matrix
Mathematics
1,047
1,358,654
https://en.wikipedia.org/wiki/Measurement%20tower
A measurement tower or measurement mast, also known as meteorological tower or meteorological mast (met tower or met mast), is a free-standing tower or mast which carries meteorological instruments, such as thermometers and instruments to measure wind speed. Measurement towers are an essential component of rocket launching sites, since exact wind conditions must be known before a rocket launch can be executed. Met masts are crucial in the development of wind farms, as precise knowledge of the wind speed is necessary to estimate how much energy will be produced and whether the turbines will survive at the site. Measurement towers are also used in other contexts, for instance near nuclear power stations, and by ASOS stations. Examples Meteorology Other measurement towers Aerial test facility Brück, Brück, Germany BREN Tower, Nevada Test Site, USA Wind farm development Before developers construct a wind farm, they first measure the wind resource on a prospective site by erecting temporary measurement towers. Typically these mount anemometers at a range of heights up to the hub height of the proposed wind turbines, and log the wind speed data at frequent intervals (e.g. every ten minutes) for at least one year and preferably two or more. The data allow the developer to determine if the site is economically viable for a wind farm (a minimal processing sketch follows below), and to choose wind turbines optimized for the local wind speed distribution. See also Automatic weather station#Mast Guyed mast Radio masts and towers Truss tower References Meteorological instrumentation and equipment Towers
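As an illustration of how the logged anemometer data feed such an assessment, a minimal processing sketch (the file name, column layout, and use of the cube of the wind speed as an energy proxy are assumptions for illustration; real assessments use full turbine power curves and distribution fits):

import numpy as np

# one year of 10-minute mean wind speeds (m/s) at hub height,
# e.g. exported from the met mast's data logger (hypothetical file and column)
speeds = np.loadtxt("met_mast_10min.csv", delimiter=",", usecols=1)

mean_speed = speeds.mean()
# available wind power scales with the cube of the speed, so the shape of the
# distribution matters, not just the mean; this ratio captures that effect
cube_factor = np.mean(speeds ** 3) / mean_speed ** 3
print(f"mean wind speed: {mean_speed:.2f} m/s, energy pattern factor: {cube_factor:.2f}")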
Measurement tower
Technology,Engineering
305
77,827,864
https://en.wikipedia.org/wiki/Povorcitinib
Povorcitinib is an investigational new drug that is being evaluated for the treatment of the skin conditions hidradenitis suppurativa and chronic prurigo. It is a JAK1 inhibitor. References Azetidines Benzamides Janus kinase inhibitors Pyrazoles Trifluoromethyl compounds Nitriles
Povorcitinib
Chemistry
73
26,955,958
https://en.wikipedia.org/wiki/Counterfeit%20electronic%20component
Counterfeit electronic components are electronic parts whose origin or quality is deliberately misrepresented. Counterfeiting of electronic components can infringe on the legitimate producer's trademark rights. The marketing of electronic components has been commoditized, making it easier for counterfeit parts to make their way into the supply chain. Trends According to a January 2010 study by the US Department of Commerce Bureau of Industry and Security, the number of counterfeit incidents reported grew from 3,868 in 2005 to 9,356 in 2008. 387 respondents to the survey cited the two most common types of counterfeit components: 'blatant' fakes and used products re-marked as higher grade. The World Semiconductor Trade Statistics estimate that the global total addressable market (TAM) for semiconductors is in excess of $200 billion. This increase in instances of counterfeit products entering the supply chain has been linked to globalization and to industry practices in China. On December 11, 2001, China was admitted to the WTO, which lifted the ban on exports by non-government owned and controlled business entities. In late 1989, the Basel Convention was adopted in Basel, Switzerland. Most developed countries have adopted this convention, with the major exception of the US. During this period, the United States has primarily exported its e-waste to China, where the e-waste is recycled. Counterfeiting techniques The alteration of existing units is done through sanding and re-marking, blacktopping and re-marking, or similar methods of concealing the original manufacturer. Other strategies involve device substitution and die salvaging, where cheaper or used components are passed off as new or more expensive ones. Manufacturing rejects may also be repurposed and sold as new, and component leads may be re-attached to give the illusion of a new, unused product. Packaging can also be relabeled. Avoidance strategies Some known counterfeiting-detecting strategies include: DNA marking – Botanical DNA as developed by Applied DNA Sciences and required by the DoD's Defense Logistics Agency for certain 'high-risk' microcircuits. X-ray inspection X-RF inspection – X-ray fluorescence spectroscopy can be used to confirm RoHS status. Decapsulation – By removing the external packaging on a semiconductor and exposing the semiconductor wafer, microscopic inspection of brand marks, trademarks, and laser die etching is possible. SAM (scanning acoustic microscope) Parametric testing, a.k.a. curve tracing Leak testing (gross leaks and fine leaks) of hermetically sealed components Stereo microscope, metallurgical microscope Solderability testing For military products: QPL – Qualified Product List QML – Qualified Manufacturers List QSLD – Qualified Suppliers List of Distributors QTSL – Qualified Testing Suppliers List Policies The G-19 Counterfeit Electronic Components Committee was formed to address the problem. In April 2009, SAE International released AS5553, Counterfeit Electronic Parts; Avoidance, Detection, Mitigation, and Disposition; the expanded revision AS5553A was implemented in January 2013. AS6081 was issued in November 2012 and adopted by the DoD; it requires purchased products to go through external visual inspections and radiological examinations. See also Capacitor plague Counterfeit consumer goods Supply-chain security References Electrical components Forgery
Counterfeit electronic component
Technology,Engineering
671
37,914,029
https://en.wikipedia.org/wiki/Stochastic%20Eulerian%20Lagrangian%20method
In computational fluid dynamics, the Stochastic Eulerian Lagrangian Method (SELM) is an approach to capture essential features of fluid-structure interactions subject to thermal fluctuations while introducing approximations which facilitate analysis and the development of tractable numerical methods. SELM is a hybrid approach utilizing an Eulerian description for the continuum hydrodynamic fields and a Lagrangian description for elastic structures. Thermal fluctuations are introduced through stochastic driving fields. Approaches are also introduced for discretizing the stochastic fields of the SPDEs so that the resulting numerical methods take discretization artifacts into account and maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics. In the SELM fluid-structure equations typically used, the Eulerian fluid velocity field evolves under viscous stresses, thermal forcing, and a fluid-structure coupling force, while the Lagrangian structure configurations evolve under the coupling force, conservative forces, and their own thermal forcing. The pressure p is determined by the incompressibility condition for the fluid. Coupling operators transfer information between the Eulerian and Lagrangian degrees of freedom: one operator interpolates the fluid velocity to the structures, and a companion operator spreads structure forces to the fluid. The structure configurations are described by composite vectors of the full set of Lagrangian coordinates, with a potential energy for a configuration of the structures, stochastic driving fields accounting for thermal fluctuations, and Lagrange multipliers imposing constraints, such as local rigid body deformations. To ensure that dissipation occurs only through the coupling and not as a consequence of the interconversion between the two descriptions, the force-spreading operator is required to be the adjoint of the velocity-interpolation operator. Thermal fluctuations are introduced through Gaussian random fields with mean zero and a covariance structure dictated by fluctuation-dissipation balance. To obtain simplified descriptions and efficient numerical methods, approximations in various limiting physical regimes have been considered to remove dynamics on small time-scales or inertial degrees of freedom. In different limiting regimes, the SELM framework can be related to the immersed boundary method, accelerated Stokesian dynamics, and the arbitrary Lagrangian Eulerian method. The SELM approach has been shown to yield stochastic fluid-structure dynamics that are consistent with statistical mechanics. In particular, the SELM dynamics have been shown to satisfy detailed-balance for the Gibbs–Boltzmann ensemble. Different types of coupling operators have also been introduced, allowing for descriptions of structures involving generalized coordinates and additional translational or rotational degrees of freedom. For numerically discretizing the SELM SPDEs, general methods were also introduced for deriving numerical stochastic fields that take discretization artifacts into account to maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics. SELM methods have been used for simulations of viscoelastic fluids and soft materials, particle inclusions within curved fluid interfaces, and other microscopic systems and engineered devices. See also Immersed boundary method Stokesian dynamics Volume of fluid method Level-set method Marker-and-cell method References Software : Numerical Codes and Simulation Packages Mango-Selm : Stochastic Eulerian Lagrangian and Immersed Boundary Methods, 3D Simulation Package, (Python interface, LAMMPS MD Integration), P. Atzberger, UCSB Fluid mechanics Computational fluid dynamics Numerical differential equations
Stochastic Eulerian Lagrangian method
Physics,Chemistry,Engineering
616
54,590,538
https://en.wikipedia.org/wiki/Stankoprom
Stankoprom () is a Russian designer and manufacturer of machine tools based in Moscow. It was established in 2013 on the initiative of the Ministry of Industry and Trade of Russia and Rostec State Corporation, as part of Russia's import substitution strategy to reduce the country's reliance on foreign-made machine tools. It includes scientific centers as well as manufacturing plants. Stankoprom is part of the state-owned holding company Rostec, and it incorporates 14 machine tool manufacturers. History Stankoprom Holding was established in 2013 on the initiative of the Ministry of Industry and Trade of Russia and Rostec State Corporation. Within Rostec, Stankoprom acts as the parent organization for the corporation's machine-tool enterprises. Operation Stankoprom JSC, within the framework of Rostec State Corporation, has the status of a center for technological audit of technological equipment purchased by the corporation's organizations, as well as ensuring centralized supplies of machine tool products to the corporation's enterprises. In October 2014 the Russian government decided to appoint Stankoprom as an engineering competence center and the locomotive of the process of introducing domestic machine tools into production. In 2014 Stankoprom and the representative company of the German Siemens concern in Russia and Central Asia signed an agreement on cooperation in the development of complex high-precision machines, as well as the implementation of technical re-equipment projects for domestic enterprises. According to Sergey Makarov, CEO of Stankoprom, one of the main goals of the company is to create a joint venture with Siemens "with the mandatory transfer of the most modern machine tool technologies and localization of production in Russia." In 2018, from the report of Prosecutor General Yuri Chaika on the state of law and order in Russia, it became known that the Stankoprom holding had not made a single domestic machine tool in four years, thereby disrupting the program to create serial production of machine tool products, for which a large amount of money had been allocated. In this regard, a criminal case was opened on the fact of embezzlement by fraud in a large amount of budgetary funds. In May 2021 it became known that an agreement had been signed on the localization of production of mobile turning and milling complexes that have no analogues in Russia between Stankoprom and the German manufacturer TRAWEMA GMBH. The agreement implied a full range of production, from design to the manufacture of finished products, on the basis of the "VNIIINSTRUMENT" ("ВНИИИНСТРУМЕНТ") enterprise, which is part of the Stankoprom holding. Due to sanctions, this agreement was not implemented. Structure Structure of the company: Scientific Centers Vniialmaz Vniiautogenmash Vniti Em Vniiinstrument Mikron Ulyanovsky Niat Machine Manufacturing Savelovo Machine Building Plant Neftehimautomatika Remos-PM Tools Manufacturing Instrumental Plant-PM Trade and Engineering Foreign Trade Enterprise Stankoimport (Llc) Foreign Trade Enterprise Stankoimport (Ojsc) RT-Stankoinstrument References External links Official website Rostec Manufacturing companies established in 2013 Industrial machine manufacturers Russian brands
Stankoprom
Engineering
657
53,986,519
https://en.wikipedia.org/wiki/NGC%20523
NGC 523, also known as Arp 158 from the Arp catalog, is a spiral galaxy located in the constellation Andromeda. It was discovered separately by William Herschel on 13 September 1784, and by Heinrich d'Arrest on 13 August 1862. d'Arrest's discovery was listed as NGC 523, while Herschel's was listed as NGC 537; the two are one and the same. John Dreyer noted in the New General Catalogue that NGC 523 is a double nebula. In September 2001 a type Ia supernova, SN 2001en, was discovered in NGC 523. See also Spiral galaxy List of NGC objects (1–1000) Andromeda (constellation) References External links SEDS Andromeda (constellation) Barred spiral galaxies 0523 158 Discoveries by William Herschel 005268
NGC 523
Astronomy
170
372,426
https://en.wikipedia.org/wiki/Production%20equipment%20control
Production equipment control involves production equipment that resides on the shop floor of a manufacturing company, whose purpose is to produce goods of a wanted quality when provided with production resources of a required quality. In modern production lines the production equipment is fully automated using industrial control methods and involves limited unskilled labour participation. Modern production equipment consists of mechatronic modules that are integrated according to a control architecture. The most widely known architectures involve hierarchy, polyarchy, heterarchy, and hybrid forms. The methods for achieving a technical effect are described by control algorithms, which may or may not utilize formal methods in their design. Industrial equipment Formal methods
Production equipment control
Engineering
130
3,078
https://en.wikipedia.org/wiki/Altair
Altair is the brightest star in the constellation of Aquila and the twelfth-brightest star in the night sky. It has the Bayer designation Alpha Aquilae, which is Latinised from α Aquilae and abbreviated Alpha Aql or α Aql. Altair is an A-type main-sequence star with an apparent visual magnitude of 0.77 and is one of the vertices of the Summer Triangle asterism; the other two vertices are marked by Deneb and Vega. It is located at a distance of about 16.7 light-years from the Sun. Altair is currently in the G-cloud—a nearby interstellar cloud, an accumulation of gas and dust. Altair rotates rapidly, with a velocity at the equator of approximately 286 km/s. This is a significant fraction of the star's estimated breakup speed of 400 km/s. A study with the Palomar Testbed Interferometer revealed that Altair is not spherical, but is flattened at the poles due to its high rate of rotation. Other interferometric studies with multiple telescopes, operating in the infrared, have imaged and confirmed this phenomenon. Nomenclature α Aquilae (Latinised to Alpha Aquilae) is the star's Bayer designation. The traditional name Altair has been used since medieval times. It is an abbreviation of the Arabic phrase Al-Nisr Al-Ṭa'ir, "the flying eagle". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Altair for this star. It is now so entered in the IAU Catalog of Star Names. Physical characteristics Along with β Aquilae and γ Aquilae, Altair forms the well-known line of stars sometimes referred to as the Family of Aquila or Shaft of Aquila. Altair is a type-A main-sequence star with about 1.8 times the mass of the Sun and 11 times its luminosity. It is thought to be a young star close to the zero age main sequence at about 100 million years old, although previous estimates gave an age closer to one billion years old. Altair rotates rapidly, with a rotational period of under eight hours; for comparison, the equator of the Sun makes a complete rotation in a little more than 25 days, but Altair's rotation is similar to, and slightly faster than, those of Jupiter and Saturn. Like those two planets, its rapid rotation causes the star to be oblate; its equatorial diameter is over 20 percent greater than its polar diameter. Satellite measurements made in 1999 with the Wide Field Infrared Explorer showed that the brightness of Altair fluctuates slightly, varying by just a few thousandths of a magnitude with several different periods less than 2 hours. As a result, it was identified in 2005 as a Delta Scuti variable star. Its light curve can be approximated by adding together a number of sine waves, with periods that range between 0.8 and 1.5 hours. It is a weak source of coronal X-ray emission, with the most active sources of emission being located near the star's equator. This activity may be due to convection cells forming at the cooler equator. Rotational effects The angular diameter of Altair was measured interferometrically by R. Hanbury Brown and his co-workers at Narrabri Observatory in the 1960s. They found a diameter of 3 milliarcseconds. Although Hanbury Brown et al. realized that Altair would be rotationally flattened, they had insufficient data to experimentally observe its oblateness.
Later, using infrared interferometric measurements made by the Palomar Testbed Interferometer in 1999 and 2000, Altair was found to be flattened. This work was published by G. T. van Belle, David R. Ciardi and their co-authors in 2001. Theory predicts that, owing to Altair's rapid rotation, its surface gravity and effective temperature should be lower at the equator, making the equator less luminous than the poles. This phenomenon, known as gravity darkening or the von Zeipel effect, was confirmed for Altair by measurements made by the Navy Precision Optical Interferometer in 2001, and analyzed by Ohishi et al. (2004) and Peterson et al. (2006). Also, A. Domiciano de Souza et al. (2005) verified gravity darkening using the measurements made by the Palomar and Navy interferometers, together with new measurements made by the VINCI instrument at the VLTI. Altair is one of the few stars for which a direct image has been obtained. In 2006 and 2007, J. D. Monnier and his coworkers produced an image of Altair's surface from 2006 infrared observations made with the MIRC instrument on the CHARA array interferometer; this was the first time the surface of any main-sequence star, apart from the Sun, had been imaged. The false-color image was published in 2007. The equatorial radius of the star was estimated to be 2.03 solar radii, and the polar radius 1.63 solar radii—a 25% increase of the stellar radius from pole to equator. The polar axis is inclined by about 60° to the line of sight from the Earth. Etymology, mythology and culture The term Al Nesr Al Tair appeared in Al Achsasi al Mouakket's catalogue, which was translated into Latin as Vultur Volans. This name was applied by the Arabs to the asterism of Altair, β Aquilae and γ Aquilae and probably goes back to the ancient Babylonians and Sumerians, who called Altair "the eagle star". The spelling Atair has also been used. Medieval astrolabes of England and Western Europe depicted Altair and Vega as birds. The Koori people of Victoria also knew Altair as Bunjil, the wedge-tailed eagle, and β and γ Aquilae are his two wives the black swans. The people of the Murray River knew the star as Totyerguil. The Murray River was formed when Totyerguil the hunter speared Otjout, a giant Murray cod, who, when wounded, churned a channel across southern Australia before entering the sky as the constellation Delphinus. In Chinese belief, the asterism consisting of Altair, β Aquilae and γ Aquilae is known as Hé Gǔ (河鼓; lit. "river drum"). The Chinese name for Altair is thus Hé Gǔ èr (河鼓二; lit. "river drum two", meaning the "second star of the drum at the river"). However, Altair is better known by its other names: Qiān Niú Xīng (牽牛星 / 牵牛星) or Niú Láng Xīng (牛郎星), translated as the cowherd star. These names are an allusion to a love story, The Cowherd and the Weaver Girl, in which Niulang (represented by Altair) and his two children (represented by β Aquilae and γ Aquilae) are separated by the Milky Way from Zhinu (represented by Vega), who is respectively their wife and mother. They are only permitted to meet once a year, when magpies form a bridge to allow them to cross the Milky Way. The people of Micronesia called Altair Mai-lapa, meaning "big/old breadfruit", while the Māori people called this star Poutu-te-rangi, meaning "pillar of heaven". In Western astrology, the star was ill-omened, portending danger from reptiles. This star is one of the asterisms used by Bugis sailors for navigation, called bintoéng timoro, meaning "eastern star".
A group of Japanese scientists sent a radio signal to Altair in 1983 with the hopes of contacting extraterrestrial life. NASA announced Altair as the name of the Lunar Surface Access Module (LSAM) on December 13, 2007. The Russian-made Beriev Be-200 Altair seaplane is also named after the star. Visual companions The bright primary star has the multiple star designation WDS 19508+0852A and has several faint visual companion stars, WDS 19508+0852B, C, D, E, F and G. All are much more distant than Altair and not physically associated. See also Lists of stars List of brightest stars List of nearest bright stars Historical brightest stars List of most luminous stars Notes References External links Star with Midriff Bulge Eyed by Astronomers, JPL press release, July 25, 2001. Spectrum of Altair Imaging the Surface of Altair, University of Michigan news release detailing the CHARA array direct imaging of the stellar surface in 2007. PIA04204: Altair, NASA. Image of Altair from the Palomar Testbed Interferometer. Altair, SolStation. Secrets of Sun-like star probed, BBC News, June 1, 2007. Astronomers Capture First Images of the Surface Features of Altair , Astromart.com Image of Altair from Aladin. Aquila (constellation) A-type main-sequence stars 4 Aquilae, 53 Aquilae, Alpha 187642 097649 7557 Delta Scuti variables Altair BD+08 4236 G-Cloud Astronomical objects known since antiquity 0768 TIC objects
Altair
Astronomy
1,966
1,318,592
https://en.wikipedia.org/wiki/172%20%28number%29
172 (one hundred [and] seventy-two) is the natural number following 171 and preceding 173. In mathematics 172 is a part of a near-miss for being a counterexample to Fermat's last theorem, as 135^3 + 138^3 = 172^3 − 1 (that is, 2,460,375 + 2,628,072 = 5,088,447, one less than 172^3 = 5,088,448). This is only the third near-miss of this form, two cubes adding to one less than a third cube. It is also a "thickened cube number", half an odd cube (7^3 = 343) rounded up to the next integer: 343/2 = 171.5, which rounds up to 172. See also 172 (disambiguation) References Integers
172 (number)
Mathematics
124
3,838,839
https://en.wikipedia.org/wiki/Shattuckite
Shattuckite is a copper silicate hydroxide mineral with formula Cu5(SiO3)4(OH)2. It crystallizes in the orthorhombic-dipyramidal crystal system and usually occurs in a granular massive form and also as fibrous acicular crystals. It is closely allied to plancheite in structure and appearance. Shattuckite is a relatively rare copper silicate mineral. It was first discovered in 1915 in the copper mines of Bisbee, Arizona, specifically the Shattuck Mine (hence the name). It is a secondary mineral that forms from the alteration of other secondary minerals. At the Shattuck Mine, it forms pseudomorphs after malachite. A pseudomorph is an atom-by-atom replacement of a crystal structure by another crystal structure, but with little alteration of the outward shape of the original crystal. It is sometimes used as a gemstone. Gallery References Copper(II) minerals Inosilicates Orthorhombic minerals Minerals in space group 61 Gemstones
Shattuckite
Physics
219
48,432,365
https://en.wikipedia.org/wiki/Leccinum%20albostipitatum
Leccinum albostipitatum is a species of bolete fungus in the family Boletaceae. This fungus is commonly found in Europe, where it grows in association with poplar. It was described as new to science in 2005. References Fungi described in 2005 Fungi of Europe albostipitatum Taxa named by Machiel Noordeloos Fungus species
Leccinum albostipitatum
Biology
75
27,953,209
https://en.wikipedia.org/wiki/WALP%20peptide
WALP peptides are a class of synthesized, membrane-spanning α-helices composed of tryptophan (W), alanine (A), and leucine (L) amino acids. They are designed to study properties of proteins in lipid membranes such as orientation, extent of insertion, and hydrophobic mismatch. Significance The transmembrane region of many integral membrane proteins consists of one or more alpha helices. The orientations and interactions of these helices directly affect cell signaling and molecular transport across the bilayer. The hydrophobic environment of the phospholipid tails in turn modulates the position and structure of such domains and thus may influence protein function. Conversely, the bilayer itself can (locally) change the thickness of its hydrocarbon region to interact optimally with hydrophobic regions of a transmembrane protein (a.k.a. hydrophobic matching). WALPs provide an effective model for studying such interactions because of their systematic design of a core of hydrophobic, alternating alanine and leucine regions. This core is readily manipulated by extending or decreasing the number of amino acids. Another key feature is the presence of "anchoring" residues at the ends of the helix, which are tryptophan residues in the WALP versions. Substituting charged residues, such as lysine, for the anchoring tryptophan residues yields "KALP" peptides. This class of model peptides has proved useful for studying the impact of changes in lipid composition on peptide insertion. Following detailed experimental studies by various techniques, the WALP and related peptides have become commonly used model systems in computational biology. Responses to lipid environment When hydrophobic mismatch occurs, WALPs are known to tilt in the bilayer. The extent of this tilt is affected up to a certain point by an entropy contribution that arises from the helix's presence in the bilayer and then by more specific helix-lipid interactions. When charged residues are substituted for the anchoring residues, these charged amino acids prefer a higher position, farther from the interior of the lipid bilayer, in order to maintain their energetically favorable interaction with water. This interaction thus promotes a smaller angle of tilt. References Peptides Membrane biology
WALP peptide
Chemistry
464
24,748,949
https://en.wikipedia.org/wiki/Colorimeter%20%28chemistry%29
A colorimeter is a device used in colorimetry that measures the absorbance of particular wavelengths of light by a specific solution. It is commonly used to determine the concentration of a known solute in a given solution by the application of the Beer–Lambert law, which states that the concentration of a solute is proportional to the absorbance. Construction The essential parts of a colorimeter are: a light source (often an ordinary low-voltage filament lamp); an adjustable aperture; a set of colored filters; a cuvette to hold the working solution; a detector (usually a photoresistor) to measure the transmitted light; a meter to display the output from the detector. In addition, there may be: a voltage regulator, to protect the instrument from fluctuations in mains voltage; a second light path, cuvette and detector. This enables comparison between the working solution and a "blank", consisting of pure solvent, to improve accuracy. There are many commercialized colorimeters as well as open source versions with construction documentation for education and for research. Filters Changeable optical filters are used in the colorimeter to select the wavelength which the solute absorbs the most, in order to maximize accuracy. The usual wavelength range is from 400 to 700 nm. If it is necessary to operate in the ultraviolet range then some modifications to the colorimeter are needed. In modern colorimeters the filament lamp and filters may be replaced by several light-emitting diodes (LEDs) of different colors. Cuvettes In a manual colorimeter the cuvettes are inserted and removed by hand. An automated colorimeter (as used in an AutoAnalyzer) is fitted with a flowcell through which solution flows continuously. Output The output from a colorimeter may be displayed by an analogue or digital meter and may be shown as transmittance (a linear scale from 0 to 100%) or as absorbance (a logarithmic scale from zero to infinity). The useful range of the absorbance scale is from 0 to 2, but it is desirable to keep within the range 0–1 because above 1 the results become unreliable due to scattering of light. In addition, the output may be sent to a chart recorder, data logger, or computer. See also Spectronic 20 Spectrophotometer Lovibond Colorimeter Notes References The Nuffield Foundation, March 30, 2003. "Colour." Encyclopædia Britannica Online. Encyclopædia Britannica Inc. (2011). Accessed 17 November 2011. "Colorimetry." Encyclopædia Britannica Online. Encyclopædia Britannica Inc. (2011). Accessed 17 November 2011. Orion Colorimetry Theory. The Technical Edge. Scientific instruments Color Optical instruments Spectroscopy Laboratory equipment
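A worked instance of the Beer–Lambert relation as used here (a minimal sketch; the absorptivity and the meter reading are assumed values for illustration):

def concentration_from_absorbance(absorbance, molar_absorptivity, path_cm=1.0):
    # Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l)
    return absorbance / (molar_absorptivity * path_cm)

# a dye with an assumed molar absorptivity of 1.5e4 L/(mol cm) in a 1 cm cuvette:
# a reading of A = 0.30 corresponds to c = 0.30 / 1.5e4 = 2e-5 mol/L
c = concentration_from_absorbance(0.30, 1.5e4)
assert abs(c - 2.0e-5) < 1e-12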
Colorimeter (chemistry)
Physics,Chemistry,Technology,Engineering
601
2,854,329
https://en.wikipedia.org/wiki/Oxymetholone
Oxymetholone, sold under the brand names Anadrol and Anapolon among others, is an androgen and anabolic steroid (AAS) medication which is used primarily in the treatment of anemia. It is also used to treat osteoporosis, HIV/AIDS wasting syndrome, and to promote weight gain and muscle growth in certain situations. It is taken by mouth. Side effects of oxymetholone include increased sexual desire as well as symptoms of masculinization like acne, increased hair growth, and voice changes. It can also cause liver damage. The drug is a synthetic androgen and anabolic steroid and hence is an agonist of the androgen receptor (AR), the biological target of androgens like testosterone and dihydrotestosterone (DHT). It has strong anabolic effects and weak androgenic effects. Oxymetholone was first described in 1959 and was introduced for medical use in 1961. It is used mostly in the United States. In addition to its medical use, oxymetholone is used to improve physique and performance. The drug is a controlled substance in many countries and so non-medical use is generally illicit. Medical uses The primary clinical applications of oxymetholone include treatment of anemia and osteoporosis, as well as stimulating muscle growth in malnourished or underdeveloped patients. However, in the United States, the only remaining FDA-approved indication is the treatment of anemia. Following the introduction of oxymetholone, nonsteroidal drugs such as epoetin alfa were developed and shown to be more effective as a treatment for anemia and osteoporosis without the side effects of oxymetholone. The drug remained available despite this and eventually found a new use in treating HIV/AIDS wasting syndrome. Presented most commonly as a 50 mg tablet, oxymetholone has been said to be one of the "strongest" and "most powerful" AAS available for medical use. Correspondingly, there is also a high risk of side effects. Oxymetholone is highly effective in promoting extensive gains in body mass, mostly by greatly improving protein synthesis. For this reason, it is often used by bodybuilders and athletes. Non-medical uses Oxymetholone is used for physique- and performance-enhancing purposes by competitive athletes, bodybuilders, and powerlifters. Side effects The common side effects of oxymetholone include depression, lethargy, headache, swelling, fast and excessive weight gain, priapism, changes in skin color, urination problems, nausea, vomiting, stomach pain (if taken on an empty stomach), loss of appetite, jaundice, breast swelling in men, feeling restless or excited, insomnia, and diarrhea. In women, side effects also include acne, changes in menstrual periods, voice deepening, hair growth on the chin or chest, pattern hair loss, enlarged clitoris, and changes in libido. Because of its 17α-alkylated structure, oxymetholone is hepatotoxic. Long-term use of the drug can cause a variety of serious ailments, including hepatitis, liver cancer, and cirrhosis; therefore periodic liver function tests are recommended for those taking oxymetholone. Pharmacology Pharmacodynamics Like other AAS, oxymetholone is an agonist of the androgen receptor (AR). It is not a substrate for 5α-reductase (as it is already 5α-reduced) and is a poor substrate for 3α-hydroxysteroid dehydrogenase (3α-HSD), and therefore shows a high ratio of anabolic to androgenic activity. As a DHT derivative, oxymetholone is not a substrate for aromatase and hence cannot be aromatized into estrogenic metabolites.
However, uniquely among DHT derivatives, oxymetholone is nonetheless associated with relatively high estrogenicity, and is known to have the potential to produce estrogenic side effects such as gynecomastia (rarely) and water retention. It has been suggested that this may be due to direct binding to and activation of the estrogen receptor by oxymetholone. Oxymetholone does not possess any significant progestogenic activity. Pharmacokinetics There is limited information available on the pharmacokinetics of oxymetholone. It appears to be well-absorbed with oral administration. Oxymetholone has very low affinity for human serum sex hormone-binding globulin (SHBG), less than 5% of that of testosterone and less than 1% of that of DHT. The drug is metabolized in the liver by oxidation at the C2 position, reduction at the C3 position, hydroxylation at the C17 position, and conjugation. The C2 hydroxymethylene group of oxymetholone can be cleaved to form mestanolone (17α-methyl-DHT), which may contribute to the effects of oxymetholone. The elimination half-life of oxymetholone is unknown. Oxymetholone and its metabolites are eliminated in the urine. Chemistry Oxymetholone, also known as 2-hydroxymethylene-17α-methyl-4,5α-dihydrotestosterone (2-hydroxymethylene-17α-methyl-DHT) or as 2-hydroxymethylene-17α-methyl-5α-androstan-17β-ol-3-one, is a synthetic androstane steroid and a 17α-alkylated derivative of DHT. History Oxymetholone was first described in a 1959 paper by scientists from Syntex. It was introduced for medical use by Syntex and Imperial Chemical Industries in the United Kingdom under the brand name Anapolon by 1961. Oxymetholone was also introduced under the brand names Adroyd (Parke-Davis) by 1961 and Anadrol (Syntex) by 1962. The drug was marketed in the United States in the early 1960s. Society and culture Generic names Oxymetholone is the generic name of the drug and its INN, USAN, BAN, and JAN, while oxymétholone is its DCF. Brand names Oxymetholone has been marketed under a variety of brand names including Anadrol, Anadroyd, Anapolon, Anasterona, Anasteronal, Anasterone, Androlic, Androyd, Hemogenin, Nastenon, Oxitoland, Oxitosona, Oxyanabolic, Oxybolone, Protanabol, Roboral, Synasterobe, Synasteron, and Zenalosyn. Availability United States Oxymetholone is one of the few AAS that remains available for medical use in the United States. The others (as of August 2023) are testosterone, testosterone cypionate, testosterone enanthate, testosterone undecanoate, methyltestosterone, fluoxymesterone, and nandrolone. Other countries The availability of oxymetholone is fairly limited and seems to be scattered into isolated markets in Europe, Asia, and North and South America. It is known to be available in Turkey, Greece, Moldova, Iran, Thailand, Brazil, and Paraguay. At least historically, it has also been available in Canada, the United Kingdom, Belgium, the Netherlands, Spain, Poland, the UAE, Israel, Hong Kong, and India. Legal status Oxymetholone, along with other AAS, is a schedule III controlled substance in the United States under the Controlled Substances Act. References Further reading 3β-Hydroxysteroid dehydrogenase inhibitors Anabolic–androgenic steroids Androstanes Hepatotoxins Ketones Synthetic estrogens Tertiary alcohols
Oxymetholone
Chemistry
1,699
38,728,858
https://en.wikipedia.org/wiki/Distance%20between%20two%20parallel%20lines
The distance between two parallel lines in the plane is the minimum distance between any two points, one on each line.

Formula and proof
Because the lines are parallel, the perpendicular distance between them is a constant, so it does not matter which point is chosen to measure the distance. Given the equations of two non-vertical parallel lines

y = mx + b_1
y = mx + b_2,

the distance between the two lines is the distance between the two intersection points of these lines with the perpendicular line

y = -\frac{x}{m}.

This distance can be found by first solving the linear systems

\begin{cases} y = mx + b_1 \\ y = -\frac{x}{m} \end{cases} \quad \text{and} \quad \begin{cases} y = mx + b_2 \\ y = -\frac{x}{m} \end{cases}

to get the coordinates of the intersection points. The solutions to the linear systems are the points

\left( -\frac{m b_1}{m^2 + 1}, \frac{b_1}{m^2 + 1} \right) \quad \text{and} \quad \left( -\frac{m b_2}{m^2 + 1}, \frac{b_2}{m^2 + 1} \right).

The distance between the points is

d = \sqrt{ \left( \frac{m b_1 - m b_2}{m^2 + 1} \right)^2 + \left( \frac{b_1 - b_2}{m^2 + 1} \right)^2 },

which reduces to

d = \frac{|b_2 - b_1|}{\sqrt{m^2 + 1}}.

When the lines are given by

ax + by + c_1 = 0
ax + by + c_2 = 0,

the distance between them can be expressed as

d = \frac{|c_2 - c_1|}{\sqrt{a^2 + b^2}}.

See also Distance from a point to a line References Abstand In: Schülerduden – Mathematik II. Bibliographisches Institut & F. A. Brockhaus, 2004, pp. 17-19 (German) Hardt Krämer, Rolf Höwelmann, Ingo Klemisch: Analytische Geometrie und Lineare Algebra. Diesterweg, 1988, p. 298 (German) External links Florian Modler: Vektorprodukte, Abstandsaufgaben, Lagebeziehungen, Winkelberechnung – Wann welche Formel?, pp. 44-59 (German) A. J. Hobson: "JUST THE MATHS" - UNIT NUMBER 8.5 - VECTORS 5 (Vector equations of straight lines), pp. 8-9 Euclidean geometry Distance
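A short worked instance of the slope-intercept formula above (numbers chosen for illustration):

\[
y = 2x + 3, \quad y = 2x + 7: \qquad d = \frac{|7 - 3|}{\sqrt{2^2 + 1}} = \frac{4}{\sqrt{5}} \approx 1.79.
\]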
Distance between two parallel lines
Physics,Mathematics
320
14,429,281
https://en.wikipedia.org/wiki/List%20of%20IEEE%20Milestones
The following list of the Institute of Electrical and Electronics Engineers (IEEE) milestones represents key historical achievements in electrical and electronic engineering. Prior to 1800 1751 – Book Experiments and Observations on Electricity by Benjamin Franklin 1757–1775 – Benjamin Franklin's Work in London 1799 – Alessandro Volta's Electrical Battery Invention 1800–1850 1804 – Francisco Salvá Campillo's Electric Telegraph 1820–1827 – The Birth of Electrodynamics 1828–1837 – Pavel Schilling's Pioneering Contribution to Practical Telegraphy 1836 – Nicholas Callan's Pioneering Contributions to Electrical Science and Technology 1838 – Demonstration of Practical Telegraphy 1850–1870 1852 – Electric Fire alarm system 1860–1871 – Maxwell's equations 1860–1863 – First Studies on Ring Armature for Direct-Current Dynamos 1861 – Transcontinental Telegraph 1861–1867 – Standardisation of the Ohm 1866 – Landing of the Transatlantic Cable 1866 – County Kerry Transatlantic Cable Stations 1870–1890 1876 – First Intelligible Voice Transmission over Electric Wire 1876 – Thomas Alva Edison Historic Site at Menlo Park 1876 – First Distant Speech Transmission in Canada 1882 – Vulcan Street Plant 1882 – First Central Station in South Carolina 1882 – Pearl Street Station 1884 – First AIEE Technical Meeting 1885 – Galileo Ferraris's Rotating Fields and Early Induction Motors 1886 – Alternating Current Electrification, Great Barrington, Massachusetts, by William Stanley, Jr. 1886 – First Generation and Experimental Proof of Electromagnetic Waves 1886–1888 – Electric Lighting of the Kingdom of Hawaii 1887 – Thomas A. Edison West Orange Laboratories and Factories 1887 – Weston Meters, first portable current and voltage meters 1888 – Richmond Union Passenger Railway 1889 – Power System of Boston's Rapid Transit 1889 – First Exploration and Proof of Liquid Crystals 1890–1900 1890 – Discovery of Radioconduction by Édouard Branly, making use of a coherer 1890 – Keage Power Station, Japan's First Commercial Hydroelectric Plant 1891 – Ames Hydroelectric Generating Plant 1893 – Mill Creek No. 1 Hydroelectric Plant 1893 – Birth and Growth of Primary and Secondary Battery Industries in Japan 1894 – First Millimeter-wave Communication Experiments by Jagadish Chandra Bose 1895 – Adams Hydroelectric Generating Plant 1895 – Popov's Contribution to the Development of Wireless Communication 1895 – Guglielmo Marconi's Experiments in Wireless telegraphy 1895 – Electrification by Baltimore and Ohio Railroad 1895 – Krka-Šibenik Electric Power System 1895 – Folsom Powerhouse three-phase system 1896 – Budapest Metroline No. 
1 1897 – Chivilingo Hydroelectric Plant 1898 – Decew Falls Hydro-Electric Plant 1898 – Rheinfelden Hydroelectric Power Plant 1898 – French Transatlantic Telegraph Cable 1899–1902 – First Operational Use of Wireless Telegraphy in the Anglo-Boer War 1899 – Calcutta Electric Supply Corp 1900–1920 1900 – Georgetown Steam Hydro Generating Plant 1901 – Transmission of Transatlantic Radio Signals 1901 – Reception of Transatlantic Radio Signals 1901 – Early Developments in Remote-Control by Leonardo Torres Quevedo 1901–1902 – Rationalization of Units 1901–1905 – String Galvanometer 1902 – Poulsen-Arc Radio Transmitter 1903 – Vučje Hydroelectric Plant 1904 – Alexanderson Radio Alternator 1904 – Fleming valve 1904 – Radar Predecessor 1906 – Pinawa Hydroelectric Power Project 1906 – First Wireless Radio Broadcast by Reginald Fessenden 1906 – Grand Central Terminal Electrification 1907 – Alternating-Current Electrification of the New York, New Haven and Hartford Railroad 1909 – Shoshone Transmission Line 1909 – World's First Reliable High Voltage Power Fuse 1911 – Discovery of superconductivity 1914 – Panama Canal Electrical and Control Installations 1915–1918 – Invention of Sonar 1916 – Czochralski Process 1920–1930 1920 – Westinghouse Radio Station KDKA (AM) 1920 – Funkerberg Königs Wusterhausen first radio broadcast in Germany 1921–1923 – Piezoelectric Oscillator 1921 – RCA Central, 220 kW transoceanic radio facility 1922 – Neutrodyne Circuit 1924 – Directive Shortwave Antenna (Yagi–Uda antenna) 1924–1941 – Development of electronic television 1924 – Enrico Fermi's major contribution to semiconductor statistics 1925 – Bell Telephone Laboratories 1926 – First Public Demonstration of Television 1928 – One-way police radio communication 1928 – Raman Effect 1929 – Shannon Scheme for the electrification of the Irish Free State 1929 – Largest private (DC) generating plant in the U.S.A. 1929 – Yosami Radio Transmitting Station 1929 – First blind takeoff, flight, and landing; using designated radio and aeronautical instrumentation 1930–1940 1930–1945 – Development of Ferrite Materials and their applications 1931 – Invention of Stereo Sound Reproduction 1932 – First Breaking of Enigma Code by the Team of Polish Cipher Bureau 1933 – Two-Way Police Radio Communication 1933 – Invention of a Temperature-Insensitive Quartz Oscillation Plate 1934 – Long-Range Shortwave Voice Transmissions from Byrd's Antarctic Expedition 1937 – Westinghouse Atom Smasher 1938 – Zenit Parabolic Reflector L-Band Pulsed Radar 1939 – Atanasoff–Berry Computer 1939–1945 – Code-breaking at Bletchley Park during World War II 1939 – Single-element Unidirectional Microphone – Shure Unidyne 1939 – Claude Shannon, development of Information Theory 1939–1949 – Development of the Cavity Magnetron 1940–1950 1940 – FM Police Radio Communication 1940–1945 – MIT Radiation Laboratory 1940–1946 – Loran, long range navigation 1941 – Opana Radar Site 1942–1945 – US Naval Computing Machine Laboratory 1944–1959 – Whirlwind Computer, Cambridge, Massachusetts 1944–1959 – Harvard Mark 1 Computer 1945 – Merrill Wheel-Balancing System 1945 – Rincón del Bonete Plant and Transmission System 1946 – Electronic Numerical Integrator and Computer (ENIAC) 1946–1953 – Monochrome-Compatible Electronic Color Television 1946 – Detection of Radar Signals Reflected from the Moon 1947 – Invention of the First Transistor at Bell Telephone Laboratories, Inc.
1947 – Invention of Holography 1948 – Birth of the Barcode 1948 – The Discovery of the Principle of Self-Complementarity in Antennas and the Mushiake Relationship 1948 – First Atomic Clock 1948–1951 – Manchester University "Baby" Computer and its Derivatives 1950–1960 1950–1969 – Electronic Technology for Space Rocket Launches 1950 – First External Cardiac Pacemaker 1951 – Manufacture of Transistors 1951 – Experimental Breeder Reactor I 1951–1958 – SAGE-Semi-Automatic Ground Environment 1951–1952 – A-0 Compiler and Initial Development of Automatic Programming 1953 – First Television Broadcast in Western Canada 1954 – Gotland High Voltage Direct Current Link 1955 – WEIZAC Computer 1956 – RAMAC 1956 – The First Submarine Transatlantic Telephone Cable System (TAT-1) 1956–1963 – Kurobe River No. 4 Hydropower Plant 1956 – Ampex Videotape Recorder 1956 – Birth of Silicon Valley 1957–1958 – First Wearable Cardiac Pacemaker 1957 – SCR/Thyristor 1957–1962 – Atlas Computer and the Invention of Virtual Memory 1958 – First Semiconductor Integrated Circuit (IC) by Jack Kilby 1958 – Star of Laufenburg Interconnection 1958 – The Trans-Canada Microwave System 1959 – Semiconductor planar process by Jean Hoerni and silicon integrated circuit by Robert Noyce 1959 – Commercialization and industrialization of photovoltaic cells by Sharp Corporation 1960–1970 1961–1984 – IBM Thomas J. Watson Research Center 1960 – TIROS I Television Infrared Observation Satellite 1960 – First Working Laser 1961–1964 – First Optical Fiber Laser and Amplifier 1962–1967 – Object-oriented programming 1962 – Stanford Linear Accelerator Center 1962 – Alouette-ISIS Satellite Program 1962 – First Transatlantic Television Signal via Satellite 1962 – First Transatlantic Transmission of a Television Signal via Satellite 1962 – First Transatlantic Reception of a Television Signal via Satellite 1962–1967 – Pioneering Work on the Quartz Electronic Wristwatch at Centre Electronique Horloger, Switzerland 1962 – Mercury spacecraft MA-6, Col. John Glenn piloted the Mercury Friendship 7 spacecraft in the first FAI-legal completed human-orbital flight on 20 February 1962. 1962–1972 – Grumman Lunar Module 1962–1972 – Apollo Guidance Computer 1962–1968 – First Geographic Information System 1962 – Semiconductor Laser 1963 – Taum Sauk Pumped-Storage Electric Power Plant 1963 – NAIC/Arecibo Radiotelescope 1963 – First Transpacific Reception of a Television (TV) Signal via Satellite 1963 – ASCII 1964 – Mount Fuji Radar System 1964 – Tokaido Shinkansen (Bullet Train) 1964–1973 – Pioneering Work on Electronic Calculators by Sharp Corporation 1964 – TPC-1 Transpacific Cable System 1964 – High-definition television System 1964 – BASIC Computer Language 1965–1984 – Alvin Deep-Sea Research Submersible 1965 – First 735 kV AC Transmission System 1965–1971 – Railroad Ticketing Examining System (developed by OMRON of Japan) 1965 – Dadda multiplier 1965 – Moore's Law 1965–1978 – Development of Computer Graphics and Visualization Techniques 1966 – Interactive Video Games 1966 – Shakey, the first mobile robot to be able to reason about its actions 1966 – DIALOG Online Search System 1967 – First Astronomical Observations Using Very Long Baseline Interferometry 1968 – CERN Experimental Instrumentation 1968 – Liquid-crystal display by George H.
Heilmeier 1968 – Public Demonstration of Online Systems and Personal Computing 1969 – Electronic Quartz Wristwatch, Seiko Quartz-Astron 35SQ 1969 – Birth of the Internet 1969 – Inception of the ARPANET 1969–1975 – Invention of Public-key Cryptography 1969 – Apollo 11 Lunar Laser Ranging Experiment (LURE) 1969 – Parkes Radiotelescope 1969–1995 – Mode S Air Traffic Control Radar Beacon System 1970–1980 1970 – World's First Low-Loss Optical Fiber for Telecommunications 1971–1978 – The first word processor for the Japanese Language, JW-10 1969–1970 – SPICE Circuit Simulation Program 1971 – Demonstration of the ALOHA Packet Radio Data Network 1971 – First Computerized Tomography (CT) X-ray Scanner 1971–1977 – Development of the Commercial Laser Printer 1972 – Nelson River HVDC Transmission System 1972 – Development of the HP-35, the First Handheld Scientific Calculator 1972 – Eel River High Voltage Direct Current Converter Station 1972 – First Practical Field Emission Electron Microscope 1972 – SHAKEY: The World’s First Mobile Intelligent Robot 1972 – Polymer Self-Regulating Heat-Tracing Cable 1972–1989 – Gravitational-Wave Antenna 1972–1987 – Deep Space Station 43 1972–1983 – The Xerox Alto Establishes Personal Networked Computing 1973–1985 – Superconducting Magnet System for the Fermilab Tevatron Accelerator/Collider 1973 – The First Two-Dimensional Nuclear Magnetic Resonance Image (MRI) 1973–1985 – Ethernet Local Area Network (LAN) 1974 – First 500 MeV Proton Beam from the TRIUMF Cyclotron 1974–1982 – First Real-Time Speech Communication on Packet Networks 1974 – The CP/M Microcomputer Operating System 1974 – Transmission Control Protocol (TCP) Enables the Internet 1975 – Line Spectrum Pair (LSP) for high-compression speech coding 1975 – Gapless Metal Oxide Surge Arrester (MOSA) for electric power systems 1975 – Handheld Digital Camera 1976 – Development of VHS, a World Standard for Home Video Recording 1976–1978 – The Floating Gate EEPROM 1977 – Lempel–Ziv Data Compression Algorithm 1977 – Vapor-phase Axial Deposition Method for Mass Production of High-quality Optical Fiber 1977 – Perpendicular Magnetic Recording 1978 – Speak & Spell, the First Use of a Digital Signal Processing IC for Speech Generation 1978 – First Digitally Processed Image from a Spaceborne Synthetic Aperture Radar 1978 – First Demonstration of a Fibre Bragg Grating 1979 – Compact Disc Audio Player 1979 – 20-inch Diameter Photomultiplier Tubes 1979 – Amorphous Silicon Thin Film Field-Effect Transistor Switches for Liquid Crystal Displays 1979 – HEMT (high-electron-mobility transistor) 1980 to present 1980 – International Standardization of Group 3 Facsimile 1980–1982 – First RISC (Reduced Instruction-Set Computing) Microprocessor 1980 – Outdoor large-scale color display system 1980 – μPD7720 programmable digital signal processor chip 1980–1981 – Inverter-Driven Air Conditioner 1980–1999 – Origin of the IEEE 802 Family of Networking Standards 1981 – 16-Bit Monolithic Digital-to-analog converter (DAC) for Digital Audio 1981 – Map-Based Automotive navigation system 1981–1988 – The Development of RenderMan for Photorealistic Graphics 1982 – Nobeyama 45-m Telescope 1982 – Human Rescue Enabled by Space Technology 1982 – First Large-Scale Fingerprint ID 1982 – Commercialization of Multilayer Ceramic Capacitors with Nickel Electrodes 1984 – First Direct-broadcast satellite Service 1984 – The MU (Middle and Upper atmosphere) radar 1984–1989 – Active Shielding of Superconducting Magnets 1984–1993 – MPEG Multimedia 
Integrated Circuits 1984 – TRON Real-time Operating System Family 1984–1996 – Development of 193-nm Projection Photolithography 1985 – Toshiba T1100, a Pioneering Contribution to the Development of Laptop PC 1985 – Emergency Warning Code Signal Broadcasting System 1985 – Multiple Technologies on a Chip 1985 – IEEE Standard 754 for Binary Floating-Point Arithmetic 1986 – Fiber Optic Connectors 1987 – High-Temperature Superconductivity 1987 – SPARC RISC Architecture 1987 – Superconductivity at 93 Kelvin 1987 – WaveLAN, Precursor of Wi-Fi 1987–1995 – MTI Portable Satellite Communication Terminals 1988 – Sharp 14-Inch Thin Film Transistor Liquid-Crystal Display (TFT-LCD) for TV 1988 – Virginia Smith High-Voltage Direct-Current Converter Station 1988 – Trans-Atlantic Telephone Fiber-optic Submarine Cable, TAT-8 1988 – First Robotic Control from Human Brain Signals 1989 – Development of CDMA for Cellular Communications 1994 – Giant Metrewave Radio Telescope 1994 – QR (Quick Response) Code 1996–1998 – PageRank and the Birth of Google 1996 – Large-Scale Commercialization of a CDMA Cellular Communication System Notes References External links Map of IEEE Milestone plaques Electrical-engineering-related lists History of electrical engineering Milestones Electrical Electrical and electronic engineering
List of IEEE Milestones
Engineering
2,900
23,833,593
https://en.wikipedia.org/wiki/Gymnopilus%20earlei
Gymnopilus earlei is a species of mushroom in the family Hymenogastraceae. Description The cap is in diameter. Habitat and distribution Gymnopilus earlei has been found on coconut logs in Jamaica from October to November. See also List of Gymnopilus species References External links Gymnopilus earlei at Index Fungorum earlei Fungi of North America Taxa named by William Alphonso Murrill Fungus species
Gymnopilus earlei
Biology
94
5,312,299
https://en.wikipedia.org/wiki/Impedance%20cardiography
Impedance cardiography (ICG) is a non-invasive technology that measures the total electrical conductivity of the thorax and its changes over time in order to continuously process a number of cardiodynamic parameters, such as stroke volume (SV), heart rate (HR), cardiac output (CO), ventricular ejection time (VET), and pre-ejection period. It works by detecting the impedance changes caused by a high-frequency, low-magnitude current flowing through the thorax between two additional pairs of electrodes located outside of the measured segment. The sensing electrodes also detect the ECG signal, which is used as a timing clock of the system. Introduction Impedance cardiography (ICG), also referred to as electrical impedance plethysmography (EIP) or Thoracic Electrical Bioimpedance (TEB), has been researched since the 1940s. NASA helped develop the technology in the 1960s. The use of impedance cardiography in psychophysiological research was pioneered by the publication of an article by Miller and Horvath in 1978. Subsequently, the recommendations of Miller and Horvath were confirmed by a standards group in 1990. A comprehensive list of references is available at ICG Publications. With ICG, the placement of four dual disposable sensors on the neck and chest is used to transmit and detect electrical and impedance changes in the thorax, which are used to measure and calculate cardiodynamic parameters. Process Four pairs of electrodes are placed at the neck and the diaphragm level, delineating the thorax. A high-frequency, low-magnitude current is transmitted through the chest, in a direction parallel with the spine, from the outside pairs. The current seeks the path of least resistance: the blood-filled aorta (the systolic phase signal) and both the superior and inferior vena cava (the diastolic phase signal, mostly related to respiration). The inside pairs, placed at the anatomic landmarks delineating the thorax, sense the impedance signals and the ECG signal. ICG measures the baseline impedance (resistance) to this current. With each heartbeat, blood volume and velocity in the aorta change, and ICG measures the corresponding change in impedance and its timing. ICG attributes the changes in impedance to (a) the volumetric expansion of the aorta (this is the main difference between ICG and electrical cardiometry) and (b) the alignment of erythrocytes as a function of blood velocity. ICG uses the baseline and changes in impedance to measure and calculate hemodynamic parameters. Hemodynamics Hemodynamics is a subchapter of cardiovascular physiology, which is concerned with the forces generated by the heart and the resulting motion of blood through the cardiovascular system. These forces demonstrate themselves to the clinician as paired values of blood flow and blood pressure measured simultaneously at the output node of the left heart. Hemodynamics is a fluidic counterpart to Ohm's law in electronics: pressure is equivalent to voltage, flow to current, vascular resistance to electrical resistance, and myocardial work to power. The relationship between the instantaneous values of aortic blood pressure and blood flow through the aortic valve over one heartbeat interval and their mean values is depicted in Fig.1. The instantaneous values may be used in research; in clinical practice, their mean values, MAP and SV, are adequate. Blood flow parameters Systemic (global) blood flow parameters are (a) the blood flow per heartbeat, the Stroke Volume, SV [ml/beat], and (b) the blood flow per minute, the Cardiac Output, CO [l/min]. 
There is a clear relationship between these blood flow parameters: CO[l/min] = (SV[ml] × HR[bpm])/1000 {Eq.1} where HR is the Heart Rate frequency (beats per minute, bpm). Since the normal value of CO is proportional to the body mass it has to perfuse, one "normal" value of SV and CO for all adults cannot exist. All blood flow parameters have to be indexed. The accepted convention is to index them by the Body Surface Area, BSA [m2], using the DuBois & DuBois formula, a function of height and weight: BSA[m2] = W^0.425[kg] × H^0.725[cm] × 0.007184 {Eq.2} The resulting indexed parameters are the Stroke Index, SI (ml/beat/m2), defined as SI[ml/beat/m2] = SV[ml]/BSA[m2] {Eq.3} and the Cardiac Index, CI (l/min/m2), defined as CI[l/min/m2] = CO[l/min]/BSA[m2] {Eq.4} These indexed blood flow parameters exhibit typical ranges: for the Stroke Index, 35 < SI_typical < 65 ml/beat/m2; for the Cardiac Index, 2.8 < CI_typical < 4.2 l/min/m2. Eq.1 for indexed parameters then changes to CI[l/min/m2] = (SI[ml/beat/m2] × HR[bpm])/1000 {Eq.1a} Oxygen transport The primary function of the cardiovascular system is the transport of oxygen: blood is the vehicle, oxygen is the cargo. The task of the healthy cardiovascular system is to provide adequate perfusion to all organs and to maintain a dynamic equilibrium between oxygen demand and oxygen delivery. In a healthy person, the cardiovascular system always increases blood flow in response to increased oxygen demand. In a hemodynamically compromised person, when the system is unable to satisfy increased oxygen demand, the blood flow to organs lower on the oxygen delivery priority list is reduced and these organs may, eventually, fail. Digestive disorders, male impotence, tiredness, sleepwalking, and environmental temperature intolerance are classic examples of a low-flow state resulting in reduced blood flow. Modulators SI variability and MAP variability are accomplished through the activity of hemodynamic modulators. The conventional cardiovascular physiology terms for the hemodynamic modulators are preload, contractility and afterload. They deal with (a) the inertial filling forces of blood return into the atrium (preload), which stretch the myocardial fibers, thus storing energy in them, (b) the force by which the heart muscle fibers shorten, thus releasing the energy stored in them in order to expel part of the blood in the ventricle into the vasculature (contractility), and (c) the forces the pump has to overcome in order to deliver a bolus of blood into the aorta with each contraction (afterload). The level of preload is currently assessed either from the PAOP (pulmonary artery occluded pressure) in a catheterized patient, or from the EDI (end-diastolic index) by use of ultrasound. Contractility is not routinely assessed; quite often inotropy and contractility are interchanged as equal terms. Afterload is assessed from the SVRI value. Rather than using the terms preload, contractility and afterload, the preferential terminology and methodology in per-beat hemodynamics is to use the terms for the actual hemodynamic modulating tools, which either the body utilizes or the clinician has in his toolbox to control the hemodynamic state: The preload and the Frank-Starling (mechanically) induced level of contractility are modulated by variation of intravascular volume (volume expansion or volume reduction/diuresis). 
Pharmacological modulation of contractility is performed with cardioactive inotropic agents (positive or negative inotropes) being present in the blood stream and affecting the rate of contraction of myocardial fibers. The afterload is modulated by varying the caliber of sphincters at the input and output of each organ, thus the vascular resistance, with the vasoactive pharmacological agents (vasoconstrictors or vasodilators and/or ACE Inhibitors and/or ARBs)(ACE = Angiotensin-converting-enzyme; ARB = Angiotensin-receptor-blocker). Afterload also increases with increasing blood viscosity, however, with the exception of extremely hemodiluted or hemoconcentrated patients, this parameter is not routinely considered in clinical practice. With the exception of volume expansion, which can be accomplished only by physical means (intravenous or oral intake of fluids), all other hemodynamic modulating tools are pharmacological, cardioactive or vasoactive agents. The measurement of CI and its derivatives allow clinicians to make timely patient assessment, diagnosis, prognosis, and treatment decisions. It has been well established that both trained and untrained physicians alike are unable to estimate cardiac output through physical assessment alone. Invasive monitoring Clinical measurement of cardiac output has been available since the 1970s. However, this blood flow measurement is highly invasive, utilizing a flow-directed, thermodilution catheter (also known as the Swan-Ganz catheter), which represents significant risks to the patient. In addition, this technique is costly (several hundred dollars per procedure) and requires a skilled physician and a sterile environment for catheter insertion. As a result, it has been used only in very narrow strata (less than 2%) of critically ill and high-risk patients in whom the knowledge of blood flow and oxygen transport outweighed the risks of the method. In the United States, it is estimated that at least two million pulmonary artery catheter monitoring procedures are performed annually, most often in peri-operative cardiac and vascular surgical patients, decompensated heart failure, multi-organ failure, and trauma. Noninvasive monitoring In theory, a noninvasive way to monitor hemodynamics would provide exceptional clinical value because data similar to invasive hemodynamic monitoring methods could be obtained with much lower cost and no risk. While noninvasive hemodynamic monitoring can be used in patients who previously required an invasive procedure, the largest impact can be made in patients and care environments where invasive hemodynamic monitoring was neither possible nor worth the risk or cost. Because of its safety and low cost, the applicability of vital hemodynamic measurements could be extended to significantly more patients, including outpatients with chronic diseases. ICG has even been used in extreme conditions such as outer space and a Mt. Everest expedition. Heart failure, hypertension, pacemaker, and dyspnea patients are four conditions in which outpatient noninvasive hemodynamic monitoring can play an important role in the assessment, diagnosis, prognosis, and treatment. Some studies have shown ICG cardiac output is accurate, while other studies have shown it is inaccurate. Use of ICG has been shown to improve blood pressure control in resistant hypertension when used by both specialists and general practitioners. ICG has also been shown to predict worsening status in heart failure. 
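To make Eq.1 through Eq.4 concrete, here is a minimal Python sketch of the indexed-parameter arithmetic described above. It is an illustration only, not code from any ICG device; the function names and the patient values are hypothetical.

```python
def dubois_bsa(weight_kg, height_cm):
    """Body Surface Area (m2) by the DuBois & DuBois formula, Eq.2."""
    return 0.007184 * weight_kg**0.425 * height_cm**0.725

def indexed_flow(sv_ml, hr_bpm, weight_kg, height_cm):
    """Return (SI, CI): Stroke Index (Eq.3) and Cardiac Index (Eq.4)."""
    bsa = dubois_bsa(weight_kg, height_cm)
    si = sv_ml / bsa                        # ml/beat/m2
    ci = (sv_ml * hr_bpm) / 1000.0 / bsa    # l/min/m2, via Eq.1
    return si, ci

# Hypothetical adult: SV = 80 ml, HR = 70 bpm, 75 kg, 180 cm.
si, ci = indexed_flow(80, 70, 75, 180)
print(f"SI = {si:.1f} ml/beat/m2, CI = {ci:.2f} l/min/m2")
# Prints roughly SI = 41.1, CI = 2.88 -- inside the typical ranges quoted above.
```

The same SV of 80 ml yields a normal CI for this body size but would indicate a low-flow state in a much larger patient, which is exactly why the indexing convention exists.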
ICG Parameters The electrical and impedance signals are processed to determine fiducial points, which are then utilized to measure and calculate hemodynamic parameters, such as cardiac output, stroke volume, systemic vascular resistance, thoracic fluid content, acceleration index, and systolic time ratio. References External links http://bomed.us/teb.html Diagnostic cardiology Impedance measurements Medical equipment Measuring instruments Electrophysiology
Impedance cardiography
Physics,Technology,Engineering,Biology
2,413
21,131,668
https://en.wikipedia.org/wiki/Costume%20coordination
Costume coordination is a method of dressing actors, employees, or any person or group for theatrical productions and any venue requiring a fully realized character. It consists of pulling or renting existing stock clothing and costumes, altering them as needed for use as stage clothes in a theatrical production, and overseeing their use, cleaning, and eventual return to storage or to the rental company. Just as with costume design, the costume coordinator creates the overall appearance of the characters, but with the use of on-hand items, including accessories. Sometimes coordinators may have a small budget to augment the existing stock or alter it for production needs. Many theatres with smaller budgets regularly reuse existing stock, especially older companies with large costume warehouses. Coordination is also a staple of community theatre because it requires less time and effort, and it is the usual way for schools to costume student performers from stored costumes that were donated or previously purchased. Coordination of costumes is also required at theme parks and festivals, which need performers and dancers to have a consistent appearance or to be maintained as originally designed. Celebrity costumes Costumes are made for film and theater, but they are also made to show off popular looks to fans and friends on the red carpet and in other glamorous settings. As noted in Insider articles, celebrity women tend to dress, or half dress, in this manner when they are not filming or in production. Typical occasions include the red carpet, engagement photos, music awards, the VMAs, the Venice Film Festival, and the Grammys. This is not a bashing statement; it is an honoring one. Celebrity women are constantly recognized for what they are wearing outside of the professional and business atmosphere. They are praised for their glam and style, and they become more iconic through their stunning appearances at these elegant events. These outfits and designs are still, in some way, costumes because they make up a certain character. As mentioned in Jorgensen's book, the famous designer Edith Head said that a celebrity wears different costumes to become someone different. The clothing chosen by designers for celebrities is a gift of celebration, style, and beauty. Costume trends The fashion and costumes used in movies and theater have a real effect on fashion shows and fans. Examples are seen on social media and common blogs. Some films are enjoyable because of their genres, but others are memorable because of their costumes. Many specific designs and costumes used in production remain valued by, and inspiring to, people today. Actresses such as Keira Knightley and Audrey Hepburn have worn designs in films that are, or have been, high fashion in reality. A few movies to consider are Cinderella, where much patience and dedication went into the amounts of fabric used and the number of designers who worked on the dress; Mad Max: Fury Road, whose designer received an Oscar for creating the costume of the character Imperator Furiosa; and Atonement, where the green dress worn in the film was meant to illustrate jealousy and temptation. One other film would be Kill Bill, where the character The Bride played a strong role and needed an even stronger costume to go with it. There are plenty of other movies and famous designers to read about, along with the different awards won for costumes. Movie costumes can definitely become a trend in reality. 
For example, Marilyn Monroe famously wore a white halterneck dress while standing over a subway vent in one film, and the dress became popular because of how beautiful it was and how well it was worn. It also left a message telling women "how to really wear it." Costumes in film or on stage have the ability to affect fashion in society. To continue the list of top designs: in The Matrix, the characters wore trench coats and shades to build a serious look, and Factory Girl is another film that shows exaggeration in the costumes of its time frame. One famous and popular film is Cleopatra, for which the designer Renié made very creative and unique costumes. The designer did not use historically accurate costumes to tell the story; what made the film interesting was the fun that the designers and directors put into it. Famous designers There are many famous designers who remain behind the scenes and are not always recognized for their work, but some have made it into the spotlight. Many designers began sewing when they were young, starting with small creations. Edith Head, for example, is now known as "The Greatest Costume Designer." In the press there are plenty of designers who have books written about them and websites devoted to their designs. Edith Head designed not only for herself but for many other top celebrities such as Grace Kelly. Sandy Powell is another well-known designer who talks about the particular work of a designer and the pressure involved: there are many designs for a designer to complete, there is not always enough time, and sketches must be made to actors' and directors' preferences. In Head's words, actors and actresses use their fashion and costumes as a "camouflage" to indicate that they are a different person every time they are seen. Another designer is Kate Carin, who does much of her work in South Africa. She has her own website featuring her designs in films such as Saints and Strangers, Cape Town, The 51st State, Strongbow, The Book of Negroes, and more. The website discusses Carin's designs along with her relationships with directors; this connection between designer and director is frequently noted in other research as well. It also covers her versatile style, as she is able to design costumes for commercials as well as for period movies that require research. Reading costumes There is a great deal of production and preparation behind a film, so it is important to know how to read its costumes. There are popular costumes from the 1300s to the 1500s as well as modern-day costumes; studying them shows what certain costumes contain and what they mean, and points out specific ways to educate oneself about how costumes are used. Deborah Landis, director of a costume organization, suggests helping students and teachers appreciate costume design more. Her article digs a little deeper, particularly focusing on media literacy and observation, and discusses the collaborations between costume designers, directors, and cinematographers. The article is similar to "Role of Importance" in differentiating between fashion and costume designers, but it has more depth. It begins by saying that fashion designers have labels to sell their designs, while costume designers have no label and simply make characters. Again, there is a lot of research costume designers do to build a convincing character: visiting places that are still standing, learning the habits of a culture, collecting photos, and so on. 
Costume designers use portraits to match their sketches. Landis continues after each section to explain what a costume designer does, along with providing assignments that teachers can give students. For many people, costumes have importance. Costumes are made to invent a character in a film or play. Popular productions like "Romeo and Juliet", "Hamlet", and so on are very collaborative, and people constantly describe each significant style and fabric that a character wears throughout the film. These films are known for their heavy leather, lacquered red silk, and all the other things embedded into a costume to represent those ages. According to this book, a great deal goes into a costume: you have to select the fabric, cut out the costume, make sure it fits, and age the costume, which means making it look older by spraying it down with different substances or painting over it. After the costumes are designed, they are presented to the director for approval. Along with most costumes there are props to finish off a personality, such as knives, pistols, or crowns, depending on a character's role. These decisions are made by both the director and the costume designer to give characters their purposes in stories. One book that discusses Renaissance costumes at length is "Settings and Costumes of the Modern Stage". This book does not just give information and photos about costumes; it also covers the stage and the setup that help bring characters to life. This information is helpful because it gives costume designers credit for their designs. It discusses the connection between stage and costume, to help readers understand how costumes match the story and plot. The book has more pictures than words, but it helps create an understanding of a scene, story, and character, showing many photos and descriptions of plays such as Hamlet, Marriage, and Resurrection. The authors, Simonson and Komisarjevsky, also briefly mention the way the New York stage has shifted certain settings toward more glamorous fairy tales (Simonson, Lee 1966). As mentioned above, there should be more appreciation for the work put into costumes. Even the simplest costumes in different genres shape a character and tell a story. In science fiction it makes sense for directors and designers to dress up characters to create a fantasy and make it easier to relate to. In the film Tron: Legacy, the colors of the costumes play big roles in the film and within the characters; these colors are specifically used to distinguish between protagonist and antagonist. In Scott Pilgrim vs. the World, costumes mark the enemy characters, the seven evil exes. The costumes used in Robin Hood were chosen to connect to a certain time period of heroes formed in medieval times. Costumes in films are used to enhance the reality of the narration, not to define it. Simple outfits worn in film also tell some type of story or secret. Designers do not always dress their characters in new fabric; costumes are often rented from existing stock to fit characters in movies or plays. The use of colors and patterns in costumes makes bold statements. Before a movie or play is produced, designers read the script to identify the era in which the piece takes place; this helps them get started on the first sketch. They create costume charts that label what characters are wearing in specific scenes. 
The final sketch is presented after the first has been approved, and it shows more vivid and unique features through the use of color. All of this sounds very similar to the work of a fashion designer, and some people are unable to identify the difference between costume and fashion designers. Costume design is made for characters in stories, while fashion is for a person's own style. Costume designers work under limited time and assigned constraints on creativity; their work involves greater expenses and requires research and knowledge of culture and history. References Further reading Stagecraft Film production Theatrical occupations
Costume coordination
Engineering
2,121
9,936,460
https://en.wikipedia.org/wiki/Dole%20effect
The Dole effect, named after Malcolm Dole, describes an inequality in the ratio of the heavy isotope 18O (a "standard" oxygen atom with two additional neutrons) to the lighter 16O, measured in the atmosphere and seawater. This ratio is usually denoted δ18O. It was noticed in 1935 that air contained more 18O than seawater; this difference was quantified as 23.5‰ in 1975 and refined to 23.88‰ in 2005. The imbalance arises mainly as a result of respiration in plants and in animals. Due to the thermodynamics of isotope reactions, respiration removes the lighter—hence more reactive—16O in preference to 18O, increasing the relative amount of 18O in the atmosphere. The inequality is balanced by photosynthesis. Photosynthesis emits oxygen with the same isotopic composition (i.e. the ratio between 18O and 16O) as the water (H2O) used in the reaction, which is independent of the atmospheric ratio. Thus when atmospheric 18O levels are high enough, photosynthesis will act as a reducing factor. However, as a complicating factor, the degree of fractionation (i.e. change in isotope ratio) occurring due to photosynthesis is not entirely dependent on the water drawn up by the plant, as fractionation can occur as a result of preferential evaporation of and other small but significant processes. Use of the Dole effect Since evaporation causes oceanic and terrestrial waters to have a different ratio of 18O to 16O, the Dole effect reflects the relative importance of land-based and marine photosynthesis. The complete removal of land-based productivity would result . The stability (to within 0.5‰) of the atmospheric 18O to 16O ratio with respect to sea surface waters since the last interglacial (the last 130 000 years), as derived from ice cores, suggests that terrestrial and marine productivity have varied together during this time period. Millennial variations of the Dole effect were found to be related to abrupt climate change events in the North Atlantic region during the last 60 kyr (1 kyr = 1000 years). High correlations of the Dole effect with speleothem δ18O, an indicator of monsoon precipitation, suggest that it is subject to changes in low-latitude terrestrial productivity. Orbital-scale variations of the Dole effect, characterized by periods of 20–100 kyr, respond strongly to Earth's orbital eccentricity and precession, but not obliquity. The Dole effect can also be applied as a tracer in sea water, with slight variations in chemistry being used to track a discrete "parcel" of water and determine its age. See also Isotopes of oxygen References External links "The atmospheric oxygen cycle" at the American Geophysical Union Oxygen Photosynthesis Paleoclimatology
Dole effect
Chemistry,Biology
601
900,160
https://en.wikipedia.org/wiki/Internal%20wave
Internal waves are gravity waves that oscillate within a fluid medium, rather than on its surface. To exist, the fluid must be stratified: the density must change (continuously or discontinuously) with depth/height due to changes, for example, in temperature and/or salinity. If the density changes over a small vertical distance (as in the case of the thermocline in lakes and oceans or an atmospheric inversion), the waves propagate horizontally like surface waves, but do so at slower speeds as determined by the density difference of the fluid below and above the interface. If the density changes continuously, the waves can propagate vertically as well as horizontally through the fluid. Internal waves, also called internal gravity waves, go by many other names depending upon the fluid stratification, generation mechanism, amplitude, and influence of external forces. If propagating horizontally along an interface where the density rapidly decreases with height, they are specifically called interfacial (internal) waves. If the interfacial waves are of large amplitude they are called internal solitary waves or internal solitons. If moving vertically through the atmosphere where substantial changes in air density influence their dynamics, they are called anelastic (internal) waves. If generated by flow over topography, they are called Lee waves or mountain waves. If the mountain waves break aloft, they can result in strong warm winds at the ground known as Chinook winds (in North America) or Foehn winds (in Europe). If generated in the ocean by tidal flow over submarine ridges or the continental shelf, they are called internal tides. If they evolve slowly compared to the Earth's rotational frequency so that their dynamics are influenced by the Coriolis effect, they are called inertia gravity waves or, simply, inertial waves. Internal waves are usually distinguished from Rossby waves, which are influenced by the change of Coriolis frequency with latitude. Visualization of internal waves An internal wave can readily be observed in the kitchen by slowly tilting back and forth a bottle of salad dressing - the waves exist at the interface between oil and vinegar. Atmospheric internal waves can be visualized by wave clouds: at the wave crests air rises and cools in the relatively lower pressure, which can result in water vapor condensation if the relative humidity is close to 100%. Clouds that reveal internal waves launched by flow over hills are called lenticular clouds because of their lens-like appearance. Less dramatically, a train of internal waves can be visualized by rippled cloud patterns described as herringbone sky or mackerel sky. The outflow of cold air from a thunderstorm can launch large amplitude internal solitary waves at an atmospheric inversion. In northern Australia, these result in Morning Glory clouds, used by some daredevils to glide along like a surfer riding an ocean wave. Satellites over Australia and elsewhere reveal these waves can span many hundreds of kilometers. Undulations of the oceanic thermocline can be visualized by satellite because the waves increase the surface roughness where the horizontal flow converges, and this increases the scattering of sunlight (as in the image at the top of this page showing waves generated by tidal flow through the Strait of Gibraltar). Buoyancy, reduced gravity and buoyancy frequency According to Archimedes' principle, the weight of an immersed object is reduced by the weight of fluid it displaces. 
This holds for a fluid parcel of density ρ surrounded by an ambient fluid of density ρ0. Its weight per unit volume is g(ρ − ρ0), in which g is the acceleration of gravity. Dividing by a characteristic density, ρ00, gives the definition of the reduced gravity: g′ = g(ρ − ρ0)/ρ00. If ρ > ρ0, g′ is positive though generally much smaller than g. Because water is much more dense than air, the displacement of water by air from a surface gravity wave feels nearly the full force of gravity (g′ ≈ g). The displacement of the thermocline of a lake, which separates warmer surface from cooler deep water, feels the buoyancy force expressed through the reduced gravity. For example, the density difference between ice water and room temperature water is 0.002 times the characteristic density of water. So the reduced gravity is 0.2% that of gravity. It is for this reason that internal waves move in slow-motion relative to surface waves. Whereas the reduced gravity is the key variable describing buoyancy for interfacial internal waves, a different quantity is used to describe buoyancy in a continuously stratified fluid whose density varies with height as ρ0(z). Suppose a water column is in hydrostatic equilibrium and a small parcel of fluid with density ρ0(z0) is displaced vertically by a small distance Δz. The buoyant restoring force results in a vertical acceleration, given by d²(Δz)/dt² = −g[ρ0(z0) − ρ0(z0 + Δz)]/ρ00. This is the spring equation whose solution predicts oscillatory vertical displacement about z0 in time with frequency given by the buoyancy frequency: N = [−(g/ρ00)(dρ0/dz)]^(1/2). The above argument can be generalized to predict the frequency, ω, of a fluid parcel that oscillates along a line at an angle Θ to the vertical: ω = N cos Θ. This is one way to write the dispersion relation for internal waves whose lines of constant phase lie at an angle Θ to the vertical. In particular, this shows that the buoyancy frequency is an upper limit of allowed internal wave frequencies. Mathematical modeling of internal waves The theory for internal waves differs in the description of interfacial waves and vertically propagating internal waves. These are treated separately below. Interfacial waves In the simplest case, one considers a two-layer fluid in which a slab of fluid with uniform density ρ1 overlies a slab of fluid with uniform density ρ2. Arbitrarily the interface between the two layers is taken to be situated at z = 0. The fluids in the upper and lower layers are assumed to be irrotational. So the velocity in each layer is given by the gradient of a velocity potential, and the potential itself satisfies Laplace's equation: ∇²φ = 0. Assuming the domain is unbounded and two-dimensional (in the x–z plane), and assuming the wave is periodic in x with wavenumber k > 0, the equations in each layer reduce to a second-order ordinary differential equation in z. Insisting on bounded solutions, the velocity potential in each layer is φ1 = A e^(−kz) cos(kx − ωt) in the upper layer and φ2 = −A e^(kz) cos(kx − ωt) in the lower layer, with A the amplitude of the wave and ω its angular frequency. In deriving this structure, matching conditions have been used at the interface requiring continuity of mass and pressure. These conditions also give the dispersion relation: ω² = g′k/2, in which the reduced gravity g′ is based on the density difference between the upper and lower layers: g′ = g(ρ2 − ρ1)/ρ00, with ρ00 = (ρ1 + ρ2)/2 the characteristic density and g the Earth's gravity. 
Note that the dispersion relation is the same as that for deep water surface waves by setting g′ = 2g, the limit in which the upper layer is negligibly dense (air over water). Internal waves in uniformly stratified fluid The structure and dispersion relation of internal waves in a uniformly stratified fluid is found through the solution of the linearized conservation of mass, momentum, and internal energy equations assuming the fluid is incompressible and the background density varies by a small amount (the Boussinesq approximation). Assuming the waves are two dimensional in the x-z plane, the respective equations are ∂u/∂x + ∂w/∂z = 0, ρ00 ∂u/∂t = −∂p/∂x, ρ00 ∂w/∂t = −∂p/∂z − ρg, and ∂ρ/∂t = −w dρ0/dz, in which ρ is the perturbation density, p is the pressure, and (u, w) is the velocity. The ambient density changes linearly with height as given by ρ0(z), and ρ00, a constant, is the characteristic ambient density. Solving the four equations in four unknowns for a wave of the form e^(i(kx + mz − ωt)) gives the dispersion relation ω² = N²k²/(k² + m²) = N² cos²Θ, in which N is the buoyancy frequency and Θ = tan⁻¹(m/k) is the angle of the wavenumber vector to the horizontal, which is also the angle formed by lines of constant phase to the vertical. The phase velocity and group velocity found from the dispersion relation predict the unusual property that they are perpendicular and that the vertical components of the phase and group velocities have opposite sign: if a wavepacket moves upward to the right, the crests move downward to the right. Internal waves in the ocean Most people think of waves as a surface phenomenon, which acts between water (as in lakes or oceans) and the air. Where low density water overlies high density water in the ocean, internal waves propagate along the boundary. They are especially common over the continental shelf regions of the world oceans and where brackish water overlies salt water at the outlet of large rivers. There is typically little surface expression of the waves, aside from slick bands that can form over the trough of the waves. Internal waves are the source of a curious phenomenon called dead water, first reported in 1893 by the Norwegian oceanographer Fridtjof Nansen, in which a boat may experience strong resistance to forward motion in apparently calm conditions. This occurs when the ship is sailing on a layer of relatively fresh water whose depth is comparable to the ship's draft. This causes a wake of internal waves that dissipates a huge amount of energy. Properties of internal waves Internal waves typically have much lower frequencies and higher amplitudes than surface gravity waves because the density differences (and therefore the restoring forces) within a fluid are usually much smaller. Wavelengths vary from centimetres to kilometres with periods of seconds to hours respectively. The atmosphere and ocean are continuously stratified: potential density generally increases steadily downward. Internal waves in a continuously stratified medium may propagate vertically as well as horizontally. The dispersion relation for such waves is curious: For a freely-propagating internal wave packet, the direction of propagation of energy (group velocity) is perpendicular to the direction of propagation of wave crests and troughs (phase velocity). An internal wave may also become confined to a finite region of altitude or depth, as a result of varying stratification or wind. Here, the wave is said to be ducted or trapped, and a vertically standing wave may form, where the vertical component of group velocity approaches zero. A ducted internal wave mode may propagate horizontally, with parallel group and phase velocity vectors, analogous to propagation within a waveguide. 
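The perpendicularity of phase and group velocity implied by the dispersion relation ω = N cos Θ above can be checked numerically. Below is a small Python sketch; the buoyancy frequency and wavenumbers are assumed, loosely ocean-like values, not figures from this article.

```python
import numpy as np

N = 0.01  # assumed buoyancy frequency, rad/s (typical thermocline order of magnitude)

k = 2 * np.pi / 1000.0   # horizontal wavenumber, 1/m
m = 2 * np.pi / 500.0    # vertical wavenumber, 1/m
K2 = k**2 + m**2

omega = N * k / np.sqrt(K2)          # dispersion relation: omega = N k / |K|

# Phase velocity points along the wavenumber vector (k, m).
c_phase = omega / K2 * np.array([k, m])

# Group velocity is the gradient of omega with respect to (k, m),
# computed analytically from the dispersion relation.
c_group = N / K2**1.5 * np.array([m**2, -k * m])

print(np.dot(c_phase, c_group))   # ~0: phase and group velocity are perpendicular
print(c_phase[1], c_group[1])     # vertical components have opposite signs
```

The printed vertical components carry opposite signs, reproducing the statement that crests move downward while the wavepacket's energy moves upward, or vice versa.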
At large scales, internal waves are influenced both by the rotation of the Earth and by the stratification of the medium. The frequencies of these geophysical wave motions vary from a lower limit of the Coriolis frequency (inertial motions) up to the Brunt–Väisälä frequency, or buoyancy frequency (buoyancy oscillations). Above the Brunt–Väisälä frequency, there may be evanescent internal wave motions, for example those resulting from partial reflection. Internal waves at tidal frequencies are produced by tidal flow over topography/bathymetry, and are known as internal tides. Similarly, atmospheric tides arise from, for example, non-uniform solar heating associated with diurnal motion. Onshore transport of planktonic larvae Cross-shelf transport, the exchange of water between coastal and offshore environments, is of particular interest for its role in delivering meroplanktonic larvae to often disparate adult populations from shared offshore larval pools. Several mechanisms have been proposed for the cross-shelf transport of planktonic larvae by internal waves. The prevalence of each type of event depends on a variety of factors including bottom topography, stratification of the water body, and tidal influences. Internal tidal bores Similarly to surface waves, internal waves change as they approach the shore. As the ratio of wave amplitude to water depth becomes such that the wave “feels the bottom,” water at the base of the wave slows down due to friction with the sea floor. This causes the wave to become asymmetrical and the face of the wave to steepen, and finally the wave will break, propagating forward as an internal bore. Internal waves are often formed as tides pass over a shelf break. The largest of these waves are generated during spring tides and those of sufficient magnitude break and progress across the shelf as bores. These bores are evidenced by rapid, step-like changes in temperature and salinity with depth, the abrupt onset of upslope flows near the bottom and packets of high frequency internal waves following the fronts of the bores. The arrival of cool, formerly deep water associated with internal bores into warm, shallower waters corresponds with drastic increases in phytoplankton and zooplankton concentrations and changes in plankter species abundances. Additionally, while both surface waters and those at depth tend to have relatively low primary productivity, thermoclines are often associated with a chlorophyll maximum layer. These layers in turn attract large aggregations of mobile zooplankton that internal bores subsequently push inshore. Many taxa can be almost absent in warm surface waters, yet plentiful in these internal bores. Surface slicks While internal waves of higher magnitudes will often break after crossing over the shelf break, smaller trains will proceed across the shelf unbroken. At low wind speeds these internal waves are evidenced by the formation of wide surface slicks, oriented parallel to the bottom topography, which progress shoreward with the internal waves. Waters above an internal wave converge and sink in its trough and upwell and diverge over its crest. The convergence zones associated with internal wave troughs often accumulate oils and flotsam that occasionally progress shoreward with the slicks. These rafts of flotsam can also harbor high concentrations of larvae of invertebrates and fish an order of magnitude higher than the surrounding waters. Predictable downwellings Thermoclines are often associated with chlorophyll maximum layers. 
Internal waves represent oscillations of these thermoclines and therefore have the potential to transfer these phytoplankton rich waters downward, coupling benthic and pelagic systems. Areas affected by these events show higher growth rates of suspension feeding ascidians and bryozoans, likely due to the periodic influx of high phytoplankton concentrations. Periodic depression of the thermocline and associated downwelling may also play an important role in the vertical transport of planktonic larvae. Trapped cores Large steep internal waves containing trapped, reverse-oscillating cores can also transport parcels of water shoreward. These non-linear waves with trapped cores had previously been observed in the laboratory and predicted theoretically. These waves propagate in environments characterized by high shear and turbulence and likely derive their energy from waves of depression interacting with a shoaling bottom further upstream. The conditions favorable to the generation of these waves are also likely to suspend sediment along the bottom as well as plankton and nutrients found along the benthos in deeper water. References Footnotes Other External links Discussion and videos of internal waves made by an oscillating cylinder. Atlas of Oceanic Internal Waves - Global Ocean Associates Atmospheric dynamics Fluid dynamics Waves Water waves
Internal wave
Physics,Chemistry,Engineering
2,960
35,831,840
https://en.wikipedia.org/wiki/Sippewissett%20microbial%20mat
The Sippewissett microbial mat is a microbial mat in the Sippewissett Salt Marsh located along the lower eastern Buzzards Bay shoreline of Cape Cod, about 5 miles north of Woods Hole and 1 mile southwest of West Falmouth, Massachusetts, in the United States. The marsh has two regions, the Great Sippewissett Marsh to the north and Little Sippewissett Marsh to the south, separated from each other by a narrow tongue of land (Saconesset Hills). The marsh extends into an estuary in which the intertidal zone provides a dynamic environment that supports a diverse ecology, including threatened and endangered species such as the roseate tern (Sterna dougallii). The ecology of the salt marsh is based in and supported by the microbial mats which cover the ground of the marsh. Description The Sippewissett Salt Marsh houses a diverse, laminated intertidal microbial mat around 1 cm thick. The mat is characterized by regular influx of sea water, high amounts of sulfide and iron, and the production of methane. The mat contains four or five distinctly colored layers. The color of each layer can be attributed to the microbial community composition and the biogeochemical processes they perform at each of the layers. The mats are often coated by green macro- and microalgae that adhere to the surface. The top, green-brown layer is composed of cyanobacteria and diatom species. A blue-green intermediate layer is formed by Oscillatoria species. Purple sulfur bacteria are found in the pink central layer. Below the pink layer, an orange-black layer is formed predominantly by a single species of purple sulfur bacteria, Thiocapsa pfennigii, and spirochetes. The thin, bottom layer is made up of green sulfur bacteria belonging to the genus Prosthecochloris, though this layer is not always present. Below the mat are iron sulfide-rich sediments and remnants of decaying mats. Structure Green layer The top 1 mm of the green layer is often gold due to the dominant cyanobacteria and diatom species. Specific cyanobacteria identified are Lyngbya, a sheeted cyanobacterium; Nostoc and Phormidium, which are filamentous cyanobacteria; and Spirulina spp. Diatom species identified include Navicula. Below this top gold layer, the green layer extends another 5 mm and is dominated by Lyngbya and Oscillatoria species. The green layer is also composed of green sulfur bacteria which oxidize sulfur during their growth and are strict photolithotrophs. Pink layer The pink layer extends 3 mm below the green layer. The color is due to the presence of carotenoids, which are the primary pigments of the phototrophic purple sulfur bacteria. Amoebobacter, Thiocapsa, Chromatium, and Thiocystis are among the species of purple sulfur bacteria identified. Purple sulfur bacteria can use a number of different electron donors for their anaerobic phototrophic growth, including hydrogen sulfide, sulfur, thiosulfate, and molecular hydrogen. Their diverse use of many electron donors makes this layer stand out in the microbial mat community. Black layer The bottom layer makes up the lower 2 mm of the mat before the depth drops below the chemocline. The black color is due to the high amounts of iron sulfide generated by the green sulfur-reducing bacteria. The layer consists mostly of green sulfur bacteria belonging to the genus Prosthecochloris, which are a small group of prosthecate bacteria containing many knobby projections. Organisms in this layer decompose organic matter formed by the upper layers, thus recycling the matter. 
Gray layer The thin, bottommost layer lies below the chemocline and contains fewer organisms than the slightly thicker black layer. The gray color is due to the presence of pyrite. Here, the empty shells of diatoms can be found. Microbial species here are dominated by methylotrophic methanogens which generate the methane observed in the salt marsh. This layer is not active year-round; the organisms are largely dormant in the winter. Metabolism The metabolisms of the organisms throughout each layer of the microbial mat are tightly coupled to each other and play important roles in providing nutrients for the plants and animals that live in the marsh. The cyanobacteria and diatom algae present in the mat are aerobic photoautotrophs whose energy is derived from the light with oxygen as the electron acceptor, and they use hydrogen gas and iron as electron donors. Purple sulfur bacteria are anaerobic or microaerophilic photoautotrophs, and use hydrogen sulfide, sulfur, thiosulfate, and molecular hydrogen as electron donors. Spirochaetes in the orange-black layer are chemoheterotrophic and use iron as an electron donor. Research The Sippewissett Salt Marsh has served as a hallmark for studies done on estuarine environments. Scientists at the Woods Hole Oceanographic Institution, the Boston University Marine Program, and the Marine Biological Laboratory have been studying the Great Sippewissett Salt Marsh extensively since 1970 to gain a better understanding of microbial diversity and the effects these microbes have on geochemical cycling and nutrient cycling for other organisms. The Sippewissett salt marsh is of particular importance for research, as it is one of the few generally undisturbed salt marshes in New England. References External links Microbial Diversity Course 1997, MBL, Woods Hole. Microbial Diversity 1997 (copyright) Elke Jaspers and Rolf Schauder. May 17, 2012 Overmann and Garcia-Pichel, 2005. The Phototrophic Way of Life. May 17, 2012.
Sippewissett microbial mat
Environmental_science
1,213
39,758,769
https://en.wikipedia.org/wiki/Long-term%20support
Long-term support (LTS) is a product lifecycle management policy in which a stable release of computer software is maintained for a longer period of time than the standard edition. The term is typically reserved for open-source software, where it describes a software edition that is supported for months or years longer than the software's standard edition. Short-term support (STS) is a term that distinguishes the support policy for the software's standard edition. STS software has a comparatively short life cycle, and may be afforded new features that are omitted from the LTS edition to avoid potentially compromising the stability or compatibility of the LTS release. Characteristics LTS applies the tenets of reliability engineering to the software development process and software release life cycle. Long-term support extends the period of software maintenance; it also alters the type and frequency of software updates (patches) to reduce the risk, expense, and disruption of software deployment, while promoting the dependability of the software. It does not necessarily imply technical support. At the beginning of a long-term support period, the software developers impose a feature freeze: They make patches to correct software bugs and vulnerabilities, but do not introduce new features that may cause regression. The software maintainer either distributes patches individually, or packages them in maintenance releases, point releases, or service packs. At the conclusion of the support period, the product either reaches end-of-life, or receives a reduced level of support for a period of time (e.g., high-priority security patches only). Rationale Before upgrading software, a decision-maker might consider the risk and cost of the upgrade. As software developers add new features and fix software bugs, they may introduce new bugs or break old functionality. When such a flaw occurs in software, it is called a regression. Two ways that a software publisher or maintainer can reduce the risk of regression are to release major updates less frequently, and to allow users to test an alternate, updated version of the software. LTS software applies these two risk-reduction strategies. The LTS edition of the software is published in parallel with the STS (short-term support) edition. Since major updates to the STS edition are published more frequently, it offers LTS users a preview of changes that might be incorporated into the LTS edition when those changes are judged to be of sufficient quality. While using older versions of software may avoid the risks associated with upgrading, it may introduce the risk of losing support for the old software. Long-term support addresses this by assuring users and administrators that the software will be maintained for a specific period of time, and that updates selected for publication will carry a significantly reduced risk of regression. The maintainers of LTS software only publish updates that either have low IT risk or that reduce IT risk (such as security patches). Patches for LTS software are published with the understanding that installing them is less risky than not installing them. Software with separate LTS versions This table only lists software that have a specific LTS version in addition to a normal release cycle. Many projects, such as CentOS, provide a long period of support for every release. 1. The support period for Ubuntu's parent distribution, Debian, is one year after the release of the next stable version. 
Since Debian 6.0 "Squeeze", LTS support (bug fixes and security patches) was added to all version releases. The total LTS support time is generally around 5 years for every version. Due to the irregular release cycle of Debian, support times might vary from that average and the LTS support is done not by the Debian team but by a separate group of volunteers. See also References Further reading Computer security procedures Product lifecycle management Reliability engineering Software maintenance Software quality
Long-term support
Engineering
775
46,818,444
https://en.wikipedia.org/wiki/Ceratocystis%20corymbiicola
Ceratocystis corymbiicola is a plant pathogen, affecting Australian Eucalyptus species. It was first isolated from tree wounds and nitidulid beetles associated with these wounds. References Further reading External links MycoBank Microascales Fungi described in 2012 Fungus species
Ceratocystis corymbiicola
Biology
57
15,245,959
https://en.wikipedia.org/wiki/KCND1
Potassium voltage-gated channel, Shal-related subfamily, member 1 (KCND1), also known as Kv4.1, is a human gene. Voltage-gated potassium (Kv) channels represent the most complex class of voltage-gated ion channels from both functional and structural standpoints. Their diverse functions include regulating neurotransmitter release, heart rate, insulin secretion, neuronal excitability, epithelial electrolyte transport, smooth muscle contraction, and cell volume. Four sequence-related potassium channel genes - shaker, shaw, shab, and shal - have been identified in Drosophila, and each has been shown to have human homolog(s). This gene encodes a member of the potassium channel, voltage-gated, shal-related subfamily, members of which form voltage-activated A-type potassium ion channels and are prominent in the repolarization phase of the action potential. This gene is expressed at moderate levels in all tissues analyzed, with lower levels in skeletal muscle. See also Voltage-gated potassium channel References Further reading Ion channels
KCND1
Chemistry
231
45,337
https://en.wikipedia.org/wiki/Nash%20equilibrium
In game theory, the Nash equilibrium is the most commonly used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy (holding all other players' strategies fixed). The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly. If each player has chosen a strategy (an action plan based on what has happened so far in the game) and no player can increase their own expected payoff by changing their strategy while the other players keep theirs unchanged, then the current set of strategy choices constitutes a Nash equilibrium. If two players Alice and Bob choose strategies A and B, (A, B) is a Nash equilibrium if Alice has no other strategy available that does better than A at maximizing her payoff in response to Bob choosing B, and Bob has no other strategy available that does better than B at maximizing his payoff in response to Alice choosing A. In a game in which Carol and Dan are also players, (A, B, C, D) is a Nash equilibrium if A is Alice's best response to (B, C, D), B is Bob's best response to (A, C, D), and so forth. Nash showed that there is a Nash equilibrium, possibly in mixed strategies, for every finite game. Applications Game theorists use Nash equilibrium to analyze the outcome of the strategic interaction of several decision makers. In a strategic interaction, the outcome for each decision-maker depends on the decisions of the others as well as their own. The simple insight underlying Nash's idea is that one cannot predict the choices of multiple decision makers if one analyzes those decisions in isolation. Instead, one must ask what each player would do taking into account what the player expects the others to do. Nash equilibrium requires that one's choices be consistent: no player wishes to undo their decision given what the others are deciding. The concept has been used to analyze hostile situations such as wars and arms races (see prisoner's dilemma), and also how conflict may be mitigated by repeated interaction (see tit-for-tat). It has also been used to study to what extent people with different preferences can cooperate (see battle of the sexes), and whether they will take risks to achieve a cooperative outcome (see stag hunt). It has been used to study the adoption of technical standards, and also the occurrence of bank runs and currency crises (see coordination game). Other applications include traffic flow (see Wardrop's principle), how to organize auctions (see auction theory), the outcome of efforts exerted by multiple parties in the education process, regulatory legislation such as environmental regulations (see tragedy of the commons), natural resource management, analysing strategies in marketing, penalty kicks in football (see matching pennies), robot navigation in crowds, energy systems, transportation systems, evacuation problems and wireless communications. History Nash equilibrium is named after American mathematician John Forbes Nash Jr. The same idea was used in a particular application in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, each of several firms chooses how much output to produce to maximize its profit. The best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium.
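To make the Cournot example concrete, here is a minimal Python sketch (all numbers are illustrative assumptions, not from the article) of a symmetric two-firm Cournot duopoly with linear inverse demand P = a - b(q1 + q2) and constant marginal cost c. Iterating each firm's best response converges to the Cournot-Nash outputs, a concrete instance of the best-response dynamics mentioned next.

```python
# Cournot duopoly: price P = a - b*(q1 + q2), constant marginal cost c.
# Firm i's profit is q_i * (P - c). Setting d(profit)/dq_i = 0 gives the
# best response to the rival's output: q_i = (a - c - b*q_j) / (2*b).
a, b, c = 100.0, 1.0, 10.0   # assumed demand/cost parameters

def best_response(q_rival):
    return max(0.0, (a - c - b * q_rival) / (2 * b))

q1 = q2 = 0.0
for _ in range(50):                 # iterate best responses until they settle
    q1, q2 = best_response(q2), best_response(q1)

# Analytic Cournot-Nash output for comparison: (a - c) / (3*b) per firm.
print(q1, q2, (a - c) / (3 * b))    # all three are approximately 30.0
```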
Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. Cournot did not use the idea in any other applications, however, or define it generally. The modern concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible pure strategies (which might put 100% of the probability on one pure strategy; such pure strategies are a subset of mixed strategies). The concept of a mixed-strategy equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book The Theory of Games and Economic Behavior, but their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The contribution of Nash in his 1951 article "Non-Cooperative Games" was to define a mixed-strategy Nash equilibrium for any game with a finite set of actions and prove that at least one (mixed-strategy) Nash equilibrium must exist in such a game. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, "an equilibrium point is an n-tuple such that each player's mixed strategy maximizes [their] payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others." Putting the problem in this framework allowed Nash to employ the Kakutani fixed-point theorem in his 1950 paper to prove existence of equilibria. His 1951 paper used the simpler Brouwer fixed-point theorem for the same purpose. Game theorists have discovered that in some circumstances Nash equilibrium makes invalid predictions or fails to make a unique prediction. They have proposed many solution concepts ('refinements' of Nash equilibria) designed to rule out implausible Nash equilibria. One particularly important issue is that some Nash equilibria may be based on threats that are not 'credible'. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. However, subsequent refinements and extensions of Nash equilibrium share the main insight on which Nash's concept rests: the equilibrium is a set of strategies such that each player's strategy is optimal given the choices of the others. Definitions Nash equilibrium A strategy profile is a set of strategies, one for each player. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing their strategy. To see what this means, imagine that each player is told the strategies of the others. Suppose then that each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, can I benefit by changing my strategy?" If any player's answer is "Yes", then that set of strategies is not a Nash equilibrium. But if every player prefers not to switch (or is indifferent between switching and not) then the strategy profile is a Nash equilibrium. Thus, each strategy in a Nash equilibrium is a best response to the other players' strategies in that equilibrium. Formally, let Si be the set of all possible strategies for player i, where i = 1, ..., N.
Let s = (s1, ..., sN) be a strategy profile, a set consisting of one strategy for each player, where s−i denotes the strategies of all the players except i. Let ui(s) be player i's payoff as a function of the strategies. The strategy profile s* is a Nash equilibrium if, for every player i and every strategy si ∈ Si, ui(si*, s−i*) ≥ ui(si, s−i*). A game can have more than one Nash equilibrium. Even if the equilibrium is unique, it might be weak: a player might be indifferent among several strategies given the other players' choices. It is unique and called a strict Nash equilibrium if the inequality is strict so one strategy is the unique best response: ui(si*, s−i*) > ui(si, s−i*) for every si ≠ si*. The strategy set Si can be different for different players, and its elements can be a variety of mathematical objects. Most simply, a player might choose between two strategies, e.g. Si = {Yes, No}. Or the strategy set might be a finite set of conditional strategies responding to other players. Or it might be an infinite set, a continuum or unbounded, e.g. Si = {Price} such that Price is a non-negative real number. Nash's existence proofs assume a finite strategy set, but the concept of Nash equilibrium does not require it. Variants Pure/mixed equilibrium A game can have a pure-strategy or a mixed-strategy Nash equilibrium. In the latter, not every player always plays the same strategy. Instead, there is a probability distribution over different strategies. Strict/non-strict equilibrium Suppose that in the Nash equilibrium, each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, would I suffer a loss by changing my strategy?" If every player's answer is "Yes", then the equilibrium is classified as a strict Nash equilibrium. If instead, for some player, there is exact equality between the strategy in Nash equilibrium and some other strategy that gives exactly the same payout (i.e. the player is indifferent between switching and not), then the equilibrium is classified as a weak or non-strict Nash equilibrium. Equilibria for coalitions The Nash equilibrium defines stability only in terms of individual player deviations. In cooperative games such a concept is not convincing enough. Strong Nash equilibrium allows for deviations by every conceivable coalition. Formally, a strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way that benefits all of its members. However, the strong Nash concept is sometimes perceived as too "strong" in that the environment allows for unlimited private communication. In fact, strong Nash equilibrium has to be Pareto efficient. As a result of these requirements, strong Nash is too rare to be useful in many branches of game theory. However, in games such as elections with many more players than possible outcomes, it can be more common than a stable equilibrium. A refined Nash equilibrium known as coalition-proof Nash equilibrium (CPNE) occurs when players cannot do better even if they are allowed to communicate and make "self-enforcing" agreements to deviate. Every correlated strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE. Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions smaller than a specified size, k. CPNE is related to the theory of the core.
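The definition above can be checked mechanically for finite games. Below is a minimal Python sketch (not from the article); the payoffs are those of the coordination game discussed later, with an assumed miscoordination payoff of 1 for both players.

```python
import itertools

# Two-player coordination game: (A,A) pays 4 to each, (B,B) pays 2 to each;
# the miscoordination payoff of 1 is an assumption for illustration.
STRATS = ["A", "B"]
PAYOFF = {  # (row strategy, column strategy) -> (u1, u2)
    ("A", "A"): (4, 4), ("A", "B"): (1, 1),
    ("B", "A"): (1, 1), ("B", "B"): (2, 2),
}

def is_nash(profile):
    """True if no player gains by a unilateral deviation: ui(s*) >= ui(si, s*-i)."""
    s1, s2 = profile
    u1, u2 = PAYOFF[profile]
    if any(PAYOFF[(d, s2)][0] > u1 for d in STRATS):  # player 1 deviates
        return False
    if any(PAYOFF[(s1, d)][1] > u2 for d in STRATS):  # player 2 deviates
        return False
    return True

for p in itertools.product(STRATS, repeat=2):
    print(p, is_nash(p))   # (A,A) and (B,B) are the two pure equilibria
```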
Existence Nash's existence theorem Nash proved that if mixed strategies (where a player chooses probabilities of using various pure strategies) are allowed, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium, which might be a pure strategy for each player or might be a probability distribution over strategies for each player. Nash equilibria need not exist if the set of choices is infinite and non-compact. For example: A game where two players simultaneously name a number and the player naming the larger number wins does not have a NE, as the set of choices is not compact because it is unbounded. Each of two players chooses a real number strictly less than 5 and the winner is whoever has the biggest number; no biggest number strictly less than 5 exists (if the number could equal 5, the Nash equilibrium would have both players choosing 5 and tying the game). Here, the set of choices is not compact because it is not closed. However, a Nash equilibrium exists if the set of choices is compact with each player's payoff continuous in the strategies of all the players. Rosen's existence theorem Rosen extended Nash's existence theorem in several ways. He considers an n-player game, in which the strategy of each player i is a vector si in the Euclidean space Rmi. Denote m := m1 + ... + mn; so a strategy-tuple is a vector in Rm. Part of the definition of a game is a subset S of Rm such that the strategy-tuple must be in S. This means that the actions of players may potentially be constrained based on actions of other players. A common special case of the model is when S is a Cartesian product of convex sets S1, ..., Sn, such that the strategy of player i must be in Si. This represents the case that the actions of each player i are constrained independently of other players' actions. If the following conditions hold: S is convex, closed and bounded; Each payoff function ui is continuous in the strategies of all players, and concave in si for every fixed value of s−i. Then a Nash equilibrium exists. The proof uses the Kakutani fixed-point theorem. Rosen also proves that, under certain technical conditions which include strict concavity, the equilibrium is unique. Nash's result refers to the special case in which each Si is a simplex (representing all possible mixtures of pure strategies), and the payoff functions of all players are bilinear functions of the strategies. Rationality The Nash equilibrium may sometimes appear non-rational in a third-person perspective. This is because a Nash equilibrium is not necessarily Pareto optimal. Nash equilibrium may also have non-rational consequences in sequential games because players may "threaten" each other with threats they would not actually carry out. For such games the subgame perfect Nash equilibrium may be more meaningful as a tool of analysis. Examples Coordination game The coordination game is a classic two-player, two-strategy game, as shown in the example payoff matrix to the right. There are two pure-strategy equilibria, (A,A) with payoff 4 for each player and (B,B) with payoff 2 for each. The combination (B,B) is a Nash equilibrium because if either player unilaterally changes their strategy from B to A, their payoff will fall from 2 to 1. A famous example of a coordination game is the stag hunt. Two players may choose to hunt a stag or a rabbit, the stag providing more meat (4 utility units, 2 for each player) than the rabbit (1 utility unit).
The caveat is that the stag must be cooperatively hunted, so if one player attempts to hunt the stag, while the other hunts the rabbit, the stag hunter will totally fail, for a payoff of 0, whereas the rabbit hunter will succeed, for a payoff of 1. The game has two equilibria, (stag, stag) and (rabbit, rabbit), because a player's optimal strategy depends on their expectation on what the other player will do. If one hunter trusts that the other will hunt the stag, they should hunt the stag; however if they think the other will hunt the rabbit, they too will hunt the rabbit. This game is used as an analogy for social cooperation, since much of the benefit that people gain in society depends upon people cooperating and implicitly trusting one another to act in a manner corresponding with cooperation. Driving on a road against an oncoming car, and having to choose either to swerve on the left or to swerve on the right of the road, is also a coordination game. For example, with payoffs 10 meaning no crash and 0 meaning a crash, the coordination game can be defined with the following payoff matrix: In this case there are two pure-strategy Nash equilibria, when both choose to either drive on the left or on the right. If we admit mixed strategies (where a pure strategy is chosen at random, subject to some fixed probability), then there are three Nash equilibria for the same case: two we have seen from the pure-strategy form, where the probabilities are (0%, 100%) for player one, (0%, 100%) for player two; and (100%, 0%) for player one, (100%, 0%) for player two respectively. We add another where the probabilities for each player are (50%, 50%). Network traffic An application of Nash equilibria is in determining the expected flow of traffic in a network. Consider the graph on the right. If we assume that a number of "cars" are traveling from the start node to the end node, what is the expected distribution of traffic in the network? This situation can be modeled as a "game" where every traveler has a choice of 3 strategies, each strategy being one of the three routes from the start node to the end node. The "payoff" of each strategy is the travel time of the corresponding route, which depends on the number of cars traveling on each of its edges. Thus, payoffs for any given strategy depend on the choices of the other players, as is usual. However, the goal, in this case, is to minimize travel time, not maximize it. Equilibrium will occur when the time on all paths is exactly the same. When that happens, no single driver has any incentive to switch routes, since it can only add to their travel time. For the graph on the right, if, for example, 100 cars are travelling from the start node to the end node, then equilibrium will occur when 25 drivers travel via the first route, 50 via the second, and 25 via the third. Every driver now has a total travel time of 3.75 (to see this, note that a total of 75 cars take the first shared edge, and likewise, 75 cars take the second). Notice that this distribution is not, actually, socially optimal. If the 100 cars agreed that 50 travel via the first route and the other 50 via the third, then travel time for any single car would actually be 3.5, which is less than 3.75. This is also the Nash equilibrium if the path connecting the two intermediate nodes is removed, which means that adding another possible route can decrease the efficiency of the system, a phenomenon known as Braess's paradox. Competition game This can be illustrated by a two-player game in which both players simultaneously choose an integer from 0 to 3 and they both win the smaller of the two numbers in points.
In addition, if one player chooses a larger number than the other, then they have to give up two points to the other. This game has a unique pure-strategy Nash equilibrium: both players choosing 0 (highlighted in light red). Any other strategy can be improved by a player switching their number to one less than that of the other player. In the adjacent table, if the game begins at the green square, it is in player 1's interest to move to the purple square and it is in player 2's interest to move to the blue square. Although it would not fit the definition of a competition game, if the game is modified so that the two players win the named amount if they both choose the same number, and otherwise win nothing, then there are 4 Nash equilibria: (0,0), (1,1), (2,2), and (3,3). Nash equilibria in a payoff matrix There is an easy numerical way to identify Nash equilibria on a payoff matrix. It is especially helpful in two-person games where players have more than two strategies. In this case formal analysis may become too long. This rule does not apply to the case where mixed (stochastic) strategies are of interest. The rule goes as follows: if the first payoff number, in the payoff pair of the cell, is the maximum of the column of the cell and if the second number is the maximum of the row of the cell then the cell represents a Nash equilibrium. We can apply this rule to a 3×3 matrix: Using the rule, we can very quickly (much faster than with formal analysis) see that the Nash equilibria cells are (B,A), (A,B), and (C,C). Indeed, for cell (B,A), 40 is the maximum of the first column and 25 is the maximum of the second row. For (A,B), 25 is the maximum of the second column and 40 is the maximum of the first row; the same applies for cell (C,C). For other cells, either one or both of the duplet members are not the maximum of the corresponding rows and columns. This said, the actual mechanics of finding equilibrium cells are straightforward: find the maximum of a column and check if the second member of the pair is the maximum of the row. If these conditions are met, the cell represents a Nash equilibrium. Check all columns this way to find all NE cells. An N×N matrix may have between 0 and N×N pure-strategy Nash equilibria. Stability The concept of stability, useful in the analysis of many kinds of equilibria, can also be applied to Nash equilibria. A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold: the player who did not change has no better strategy in the new circumstance, and the player who did change is now playing a strictly worse strategy. If these conditions are both met, then a player with the small change in their mixed strategy will return immediately to the Nash equilibrium. The equilibrium is said to be stable. If condition one does not hold then the equilibrium is unstable. If only condition one holds then there are likely to be an infinite number of optimal strategies for the player who changed. In the "driving game" example above there are both stable and unstable equilibria. The equilibria involving mixed strategies with 100% probabilities are stable. If either player changes their probabilities slightly, they will both be at a disadvantage, and their opponent will have no reason to change their strategy in turn. The (50%,50%) equilibrium is unstable.
If either player changes their probabilities (which would neither benefit nor damage the expectation of the player who did the change, if the other player's mixed strategy is still (50%,50%)), then the other player immediately has a better strategy at either (0%, 100%) or (100%, 0%). Stability is crucial in practical applications of Nash equilibria, since the mixed strategy of each player is not perfectly known, but has to be inferred from the statistical distribution of their actions in the game. In this case unstable equilibria are very unlikely to arise in practice, since any minute change in the proportions of each strategy seen will lead to a change in strategy and the breakdown of the equilibrium. Finally, in the 1980s, building on such ideas, Mertens-stable equilibria were introduced as a solution concept. Mertens stable equilibria satisfy both forward induction and backward induction. In a game theory context stable equilibria now usually refer to Mertens stable equilibria. Occurrence If a game has a unique Nash equilibrium and is played among players under certain conditions, then the NE strategy set will be adopted. Sufficient conditions to guarantee that the Nash equilibrium is played are: The players all will do their utmost to maximize their expected payoff as described by the game. The players are flawless in execution. The players have sufficient intelligence to deduce the solution. The players know the planned equilibrium strategy of all of the other players. The players believe that a deviation in their own strategy will not cause deviations by any other players. There is common knowledge that all players meet these conditions, including this one. So, not only must each player know the other players meet the conditions, but also they must know that they all know that they meet them, and know that they know that they know that they meet them, and so on. Where the conditions are not met Examples of game theory problems in which these conditions are not met: The first condition is not met if the game does not correctly describe the quantities a player wishes to maximize. In this case there is no particular reason for that player to adopt an equilibrium strategy. For instance, the prisoner's dilemma is not a dilemma if either player is happy to be jailed indefinitely. Intentional or accidental imperfection in execution. For example, a computer capable of flawless logical play facing a second flawless computer will result in equilibrium. Introduction of imperfection will lead to its disruption either through loss to the player who makes the mistake, or through negation of the common knowledge criterion leading to possible victory for the player. (An example would be a player suddenly putting the car into reverse in the game of chicken, ensuring a no-loss no-win scenario). In many cases, the third condition is not met because, even though the equilibrium must exist, it is unknown due to the complexity of the game, for instance in Chinese chess. Or, if known, it may not be known to all players, as when playing tic-tac-toe with a small child who desperately wants to win (meeting the other criteria). The criterion of common knowledge may not be met even if all players do, in fact, meet all the other criteria. Players wrongly distrusting each other's rationality may adopt counter-strategies to expected irrational play on their opponents’ behalf. This is a major consideration in "chicken" or an arms race, for example. Where the conditions are met In his Ph.D.
dissertation, John Nash proposed two interpretations of his equilibrium concept, with the objective of showing how equilibrium points can be connected with observable phenomena. This idea was formalized by R. Aumann and A. Brandenburger, 1995, Epistemic Conditions for Nash Equilibrium, Econometrica, 63, 1161-1180, who interpreted each player's mixed strategy as a conjecture about the behaviour of other players and showed that if the game and the rationality of players is mutually known and these conjectures are commonly known, then the conjectures must be a Nash equilibrium (a common prior assumption is needed for this result in general, but not in the case of two players; in this case, the conjectures need only be mutually known). A second interpretation, which Nash referred to as the mass action interpretation, is less demanding on players. For a formal result along these lines, see Kuhn, H. et al., 1996, "The Work of John Nash in Game Theory", Journal of Economic Theory, 69, 153–185. Due to the limited conditions in which NE can actually be observed, they are rarely treated as a guide to day-to-day behaviour, or observed in practice in human negotiations. However, as a theoretical concept in economics and evolutionary biology, the NE has explanatory power. The payoff in economics is utility (or sometimes money), and in evolutionary biology it is gene transmission; both are the fundamental bottom line of survival. Researchers who apply game theory in these fields claim that strategies failing to maximize these for whatever reason will be competed out of the market or environment, which are ascribed the ability to test all strategies. This conclusion is drawn from the "stability" theory above. In these situations the assumption that the strategy observed is actually a NE has often been borne out by research. NE and non-credible threats The Nash equilibrium is a superset of the subgame perfect Nash equilibrium. The subgame perfect equilibrium, in addition to the Nash equilibrium condition, requires that the strategy also be a Nash equilibrium in every subgame of that game. This eliminates all non-credible threats, that is, strategies that contain non-rational moves in order to make the counter-player change their strategy. The image to the right shows a simple sequential game that illustrates the issue with subgame imperfect Nash equilibria. In this game player one chooses left (L) or right (R), which is followed by player two being called upon to be kind (K) or unkind (U) to player one. However, player two only stands to gain from being unkind if player one goes left. If player one goes right, the rational player two would de facto be kind to her/him in that subgame. However, the non-credible threat of being unkind at 2(2) is still part of the blue (L, (U,U)) Nash equilibrium. Therefore, if rational behavior can be expected by both parties, the subgame perfect Nash equilibrium may be a more meaningful solution concept when such dynamic inconsistencies arise. Proof of existence Proof using the Kakutani fixed-point theorem Nash's original proof (in his thesis) used Brouwer's fixed-point theorem (e.g., see below for a variant). This section presents a simpler proof via the Kakutani fixed-point theorem, following Nash's 1950 paper (he credits David Gale with the observation that such a simplification is possible). To prove the existence of a Nash equilibrium, let ri(σ−i) be the best response of player i to the strategies of all other players.
Here, σ ∈ Σ, where Σ = Σ1 × ... × ΣN, is a mixed-strategy profile in the set of all mixed strategies and ui is the payoff function for player i. Define a set-valued function r: Σ → 2^Σ such that r(σ) = r1(σ−1) × ... × rN(σ−N). The existence of a Nash equilibrium is equivalent to r having a fixed point. Kakutani's fixed point theorem guarantees the existence of a fixed point if the following four conditions are satisfied. Σ is compact, convex, and nonempty. r(σ) is nonempty. r is upper hemicontinuous. r(σ) is convex. Condition 1 is satisfied from the fact that each Σi is a simplex and thus Σ is compact. Convexity follows from players' ability to mix strategies. Σ is nonempty as long as players have strategies. Conditions 2 and 3 are satisfied by way of Berge's maximum theorem. Because ui is continuous and Σ is compact, ri(σ−i) is non-empty and upper hemicontinuous. Condition 4 is satisfied as a result of mixed strategies. Suppose σi, σi′ ∈ ri(σ−i); then λσi + (1 − λ)σi′ ∈ ri(σ−i), i.e. if two strategies maximize payoffs, then a mix between the two strategies will yield the same payoff. Therefore, there exists a fixed point in r and a Nash equilibrium. When Nash made this point to John von Neumann in 1949, von Neumann famously dismissed it with the words, "That's trivial, you know. That's just a fixed-point theorem." (See Nasar, 1998, p. 94.) Alternate proof using the Brouwer fixed-point theorem We have a game G = (N, A, u) where N is the number of players and A = A1 × ... × AN is the action set for the players. All of the action sets Ai are finite. Let Δ = Δ1 × ... × ΔN denote the set of mixed strategies for the players. The finiteness of the Ai ensures the compactness of Δ. We can now define the gain functions. For a mixed strategy σ ∈ Δ, we let the gain for player i on action a ∈ Ai be Gaini(σ, a) = max(0, ui(a, σ−i) − ui(σ)). The gain function represents the benefit a player gets by unilaterally changing their strategy. We now define g = (g1, ..., gN) where gi(σ)(a) = σi(a) + Gaini(σ, a) for σ ∈ Δ and a ∈ Ai. We see that Σa∈Ai gi(σ)(a) = 1 + Σa∈Ai Gaini(σ, a) > 0. Next we define: fi(σ)(a) = gi(σ)(a) / Σb∈Ai gi(σ)(b). It is easy to see that each fi(σ) is a valid mixed strategy in Δi. It is also easy to check that each fi(σ)(a) is a continuous function of σ, and hence f = (f1, ..., fN) is a continuous function. As the cross product of a finite number of compact convex sets, Δ is also compact and convex. Applying the Brouwer fixed point theorem to f and Δ we conclude that f has a fixed point in Δ, call it σ*. We claim that σ* is a Nash equilibrium in G. For this purpose, it suffices to show that Gaini(σ*, a) = 0 for all players i and all actions a ∈ Ai. This simply states that each player gains no benefit by unilaterally changing their strategy, which is exactly the necessary condition for a Nash equilibrium. Now assume that the gains are not all zero. Therefore, there exist a player i and an action a ∈ Ai such that Gaini(σ*, a) > 0. Then Σa∈Ai Gaini(σ*, a) > 0. So let C = 1 + Σa∈Ai Gaini(σ*, a) > 1. Also we shall denote Gaini(σ*) as the gain vector indexed by actions in Ai. Since σ* is the fixed point we have σ*i = fi(σ*), hence C·σ*i = σ*i + Gaini(σ*) and Gaini(σ*) = (C − 1)·σ*i. Since C > 1 we have that Gaini(σ*) is some positive scaling of the vector σ*i. Now we claim that σ*i(a)·(ui(a, σ*−i) − ui(σ*)) = σ*i(a)·Gaini(σ*, a) for every action a ∈ Ai. To see this, first if Gaini(σ*, a) > 0 then this is true by definition of the gain function. Now assume that Gaini(σ*, a) = 0. By our previous statements we have that σ*i(a) = (1/(C − 1))·Gaini(σ*, a) = 0, and so the left term is zero, giving us that the entire expression is 0 as needed. So we finally have that 0 = Σa∈Ai σ*i(a)·(ui(a, σ*−i) − ui(σ*)) = Σa∈Ai σ*i(a)·Gaini(σ*, a) = (C − 1)·Σa∈Ai σ*i(a)^2 > 0, where the first equality holds because the σ*i-weighted average of the payoffs ui(a, σ*−i) is exactly ui(σ*), and the last inequality follows since σ*i is a non-zero vector. But this is a clear contradiction, so all the gains must indeed be zero. Therefore, σ* is a Nash equilibrium for G as needed. Computing Nash equilibria If a player A has a dominant strategy sA then there exists a Nash equilibrium in which A plays sA. In the case of two players A and B, there exists a Nash equilibrium in which A plays sA and B plays a best response to sA. If sA is a strictly dominant strategy, A plays sA in all Nash equilibria. If both A and B have strictly dominant strategies, there exists a unique Nash equilibrium in which each plays their strictly dominant strategy.
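The "column/row maximum" rule from the payoff-matrix section above mechanizes directly. Here is a short Python sketch for an arbitrary bimatrix game; the payoff numbers are invented for illustration rather than taken from the article's 3×3 example.

```python
# Pure-strategy Nash equilibria by the column/row maximum rule:
# cell (i, j) is an equilibrium if u1[i][j] is the largest first payoff in
# column j and u2[i][j] is the largest second payoff in row i.
u1 = [[3, 1, 0],          # row player's payoffs (invented numbers)
      [2, 2, 4],
      [0, 3, 1]]
u2 = [[2, 0, 1],          # column player's payoffs (invented numbers)
      [1, 3, 2],
      [4, 1, 3]]

rows, cols = len(u1), len(u1[0])
equilibria = []
for i in range(rows):
    for j in range(cols):
        col_max = max(u1[r][j] for r in range(rows))   # row player's best reply
        row_max = max(u2[i][c] for c in range(cols))   # column player's best reply
        if u1[i][j] == col_max and u2[i][j] == row_max:
            equilibria.append((i, j))
print(equilibria)   # prints [(0, 0)] for these numbers
```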
In games with mixed-strategy Nash equilibria, the probability of a player choosing any particular (pure) strategy can be computed by assigning a variable to each strategy that represents a fixed probability for choosing that strategy. In order for a player to be willing to randomize, their expected payoff for each (pure) strategy should be the same. In addition, the sum of the probabilities for each strategy of a particular player should be 1. This creates a system of equations from which the probabilities of choosing each strategy can be derived. Examples In the matching pennies game, player A loses a point to B if A and B play the same strategy and wins a point from B if they play different strategies. To compute the mixed-strategy Nash equilibrium, assign A the probability p of playing H and 1 − p of playing T, and assign B the probability q of playing H and 1 − q of playing T. Making each player indifferent between their two pure strategies yields p = 1/2 and q = 1/2. Thus, a mixed-strategy Nash equilibrium in this game is for each player to randomly choose H or T with p = 1/2 and q = 1/2. Oddness of equilibrium points In 1971, Robert Wilson came up with the "oddness theorem", which says that "almost all" finite games have a finite and odd number of Nash equilibria. In 1993, Harsanyi published an alternative proof of the result. "Almost all" here means that any game with an infinite or even number of equilibria is very special in the sense that if its payoffs were even slightly randomly perturbed, with probability one it would have an odd number of equilibria instead. The prisoner's dilemma, for example, has one equilibrium, while the battle of the sexes has three: two pure and one mixed, and this remains true even if the payoffs change slightly. The free money game is an example of a "special" game with an even number of equilibria. In it, two players have to both vote "yes" rather than "no" to get a reward and the votes are simultaneous. There are two pure-strategy Nash equilibria, (yes, yes) and (no, no), and no mixed strategy equilibria, because the strategy "yes" weakly dominates "no". "Yes" is as good as "no" regardless of the other player's action, but if there is any chance the other player chooses "yes" then "yes" is the best reply. Under a small random perturbation of the payoffs, however, the probability that any two payoffs would remain tied, whether at 0 or some other number, is vanishingly small, and the game would have either one or three equilibria instead. See also Notes References Bibliography Game theory textbooks Dixit, Avinash, Susan Skeath and David Reiley. Games of Strategy. W.W. Norton & Company. (Third edition in 2009.) An undergraduate text. Fudenberg, Drew and Jean Tirole (1991) Game Theory. MIT Press. Lucid and detailed introduction to game theory in an explicitly economic context. Morgenstern, Oskar and John von Neumann (1947) The Theory of Games and Economic Behavior. Princeton University Press. Original Nash papers Nash, John (1950) "Equilibrium points in n-person games" Proceedings of the National Academy of Sciences 36(1):48-49. Nash, John (1951) "Non-Cooperative Games" The Annals of Mathematics 54(2):286-295. Other references Mehlmann, A. (2000) The Game's Afoot! Game Theory in Myth and Paradox, American Mathematical Society. Nasar, Sylvia (1998), A Beautiful Mind, Simon & Schuster.
Aviad Rubinstein: "Hardness of Approximation Between P and NP", ACM, ISBN 978-1-947487-23-9 (May 2019), DOI: https://doi.org/10.1145/3241304. (Explains why computing a Nash equilibrium is a hard problem.) External links Complete Proof of Existence of Nash Equilibria Simplified Form and Related Results Game theory equilibrium concepts Fixed points (mathematics) 1951 in economic history
Nash equilibrium
Mathematics
7,562
29,343,046
https://en.wikipedia.org/wiki/Cleanliness%20suitability
Cleanliness suitability describes the suitability of operating materials and ventilation and air conditioning components for use in cleanrooms where the air cleanliness and other parameters are controlled by way of technical regulations. Tests are carried out to determine this. Trends such as the miniaturization of structures as well as increased levels of reliability in technology, research and science require controlled “clean” manufacturing environments. The task of such environments is to minimize influences which could damage the products concerned. The cleanroom environments created by filtering the air were originally developed for the fields of microelectronics and microsystem technology but are now used in a wide range of other high technology sectors such as photovoltaics and the automotive industry. Depending upon the industry and process concerned, different factors may have a damaging influence on a product, e.g.: Particles, in microelectronics such as the semiconductor industry and especially biotic particles in life science industries such as pharmaceutics, bio-engineering and medical technology (cleanroom suitability) Molecular contamination (outgassing), especially in microelectronics such as the semiconductor industry Electrostatic discharge phenomena (ESD), especially in microelectronics such as the semiconductor industry Resistance to cleaning and disinfection agents, especially in life science industries such as pharmaceutics Surface interaction, especially in life science industries such as pharmaceutics, bio-engineering and medical technology Cleanability, especially in life science industries such as pharmaceutics, bio-engineering and medical technology Microbicidity, especially in life science industries such as pharmaceutics, bio-engineering and medical technology The following factors may be responsible for contamination: The cleanroom itself: Staff, although this is becoming less relevant as more and more staff are banned from working in critical areas The use of manufacturing equipment, which is increasing as more and more automated solutions are being implemented. Often in direct contact with the product, manufacturing equipment and the materials used in their construction form a further important contamination factor in a clean production environment. References Cleaning Life sciences industry Cleanroom technology
Cleanliness suitability
Chemistry,Biology
433
55,335,877
https://en.wikipedia.org/wiki/Ceriporia%20amazonica
Ceriporia amazonica is a species of crust fungus in the family Irpicaceae. Found in Brazil, it was described as new to science in 2014. The fungus is characterized by its salmon-coloured pore surface with angular pores numbering 1–3 per millimetre, and small ellipsoid spores (measuring 2–3 μm) that are among the smallest in genus Ceriporia. The type locality is Amapá National Forest, in the Brazilian Amazon, for which the species is named. References Fungi described in 2014 Fungi of Brazil Irpicaceae Taxa named by Leif Ryvarden Fungus species
Ceriporia amazonica
Biology
127
77,684,881
https://en.wikipedia.org/wiki/Sodium%20ferrate
Sodium ferrate is a chemical compound with the formula Na2FeO4. It is a sodium salt of ferric acid that is very difficult to obtain. In most iron compounds, the metal has an oxidation state of +2 or +3. Ferric acid, with an oxidation state of +6, is extremely unstable and does not exist under normal conditions. Therefore, its salts, such as sodium ferrate, also tend to be unstable. Due to its high oxidation state, FeO42- is a potent oxidizing agent. Synthesis The synthesis of sodium ferrate(VI) appears to be very delicate due to the instability of ferrate resulting from its high oxidizing power. The methods to synthesize ferrate(VI) are: thermal, chemical and electrochemical. The thermal method usually requires high temperatures (about 800 °C) and habitually has a low efficiency (50%). The chemical method is multiphase and requires a large number of chemical compounds. The electrochemical method, compared to the other two methods mentioned, has advantages such as the product purity, low solvent demand and the use of the electron, which is known as a clean oxidant. Wet chemistry oxidation In this methodology, a solution containing Fe(III) is oxidized in the presence of NaOH and converted to Fe(VI)O42-. However, this compound degrades rapidly, so additional steps such as "sequestration", washing and drying processes are necessary to obtain a more stable product. Another drawback encountered with this methodology is related to the isolation and acquisition of the dry product from the corresponding solution, due to the high solubility of Na2FeO4 in a saturated NaOH solution. By modifying the production procedure in which chlorine gas is passed through a NaOH-saturated solution of trivalent iron, a dry compound containing 41.38% of Na2FeO4 can be obtained. The wet oxidation method has been extensively used by several researchers to produce solid or liquid ferrate, especially sodium and potassium (VI) ferrate (Na2FeO4 and K2FeO4). Generally, it employs: either ferrous (FeII) or ferric (FeIII) salts as the source of iron ions; calcium or sodium hypochlorite (Ca(ClO)2, NaClO), sodium thiosulfate (Na2S2O3) or chlorine (Cl2) as oxidizing agents; and, finally, sodium hydroxide, sodium carbonate (NaOH, Na2CO3) or potassium hydroxide (KOH) to increase the pH of the solution. Electrochemistry The electrochemical method requires either the use of an anion dissolved in an electrolysis cell containing a strong alkaline solution (NaOH or KOH) or an inert electrode in an Fe(III) solution with an electric current producing the oxidation of iron to Fe(VI). The basic principle is shown in equations 1-4, balanced for mass and charge. Anode reaction: Fe0(s) + 8OH−(aq) → FeO42-(aq) + 4H2O(l) + 6e− (1) Cathode reaction: 6H2O(l) + 6e− → 3H2(g) + 6OH−(aq) (2) Overall reactions: Fe0(s) + 2OH−(aq) + 2H2O(l) → FeO42-(aq) + 3H2(g) (3) FeO42-(aq) + 2Na+(aq) → Na2FeO4(aq) (4) The first electrochemical synthesis of ferrate(VI) was carried out around 1841, which is one of the easiest routes to obtain sodium ferrate from solutions without impurities. Later, researchers have performed several experiments in different alkaline environments with various NaOH concentrations, different current densities, temperature, and electrolysis intervals. It was found that increasing temperature could increase the oxidation efficiency, but this behavior is only applicable up to a certain temperature (about 60 °C).
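Since stray atoms or charge are easy to miss in redox bookkeeping, a short script can verify that a reaction such as equation 3 above is balanced. The species encoding below is a hand-transcription of the corrected overall reaction and is included only as a sanity-check sketch.

```python
from collections import Counter

# Each side is a list of (stoichiometric coefficient, element counts, charge),
# transcribed from the overall electrochemical reaction (equation 3):
#   Fe + 2 OH-  + 2 H2O  ->  FeO4^2-  +  3 H2
left = [(1, {"Fe": 1}, 0), (2, {"O": 1, "H": 1}, -1), (2, {"H": 2, "O": 1}, 0)]
right = [(1, {"Fe": 1, "O": 4}, -2), (3, {"H": 2}, 0)]

def totals(side):
    atoms, charge = Counter(), 0
    for coeff, elems, q in side:
        for el, n in elems.items():
            atoms[el] += coeff * n
        charge += coeff * q
    return atoms, charge

print(totals(left))   # (Counter({'H': 6, 'O': 4, 'Fe': 1}), -2)
print(totals(right))  # same atom counts and charge, so the equation balances
```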
The intensity of the electric current, the material of the anode electrode, and the type and concentration of the electrolyte significantly affect the production of ferrate (VI). Large amounts of carbon in the anode electrode can also increase the efficiency of ferrate (VI) production. Efficiencies above 70% can be achieved using iron or silver electrodes containing 0.9% carbon. The best ferrate (VI) production data have been obtained using a 99.99% pure iron electrode at temperatures around 30 - 60 °C using alternating current (AC). Dry oxidation Currently, two methodologies are known for the dry oxidation of sodium ferrate: The first involves the oxidation of sodium peroxide at 370 °C in the absence of carbon dioxide. The result of this methodology is the production of FeO54-, which immediately hydrolyses in water to FeO42-, a tetrahedral ion in solution, while adopting a red-violet colour, as shown in equation 5. FeO54-(aq) + H2O(l) → FeO42-(aq) + 2OH−(aq) (5) The second is based on heating the residues of the galvanizing process together with iron oxide in a furnace at a temperature of up to 800 °C. The galvanisation residues and iron oxide in combination with sodium peroxide are melted and immediately cooled to produce sodium ferrate (VI), as illustrated in equation 6 below: Fe2O3(s) + 3Na2O2(s) → 2Na2FeO4(s) + Na2O(s) (6) Both methods are dangerous and difficult to handle due to the use of high temperatures and therefore the possible risk of explosions. Properties The physical properties of this compound can be described as similar to those of potassium ferrate: a dark crystalline solid that dissolves in water to form a reddish-violet solution. However, sodium ferrate has less viscosity than potassium ferrate. It is difficult to isolate in the solid state by traditional crystallisation methods, such as precipitation by heating/cooling, vapor diffusion, antisolvent, etc., due to the ease with which it decomposes. Regarding its chemical properties, sodium ferrate is a very strong oxidant, stronger and more reactive than potassium ferrate. Its redox potential in acid medium reaches 2.2 V, which is stronger than commonly used compounds for water treatment such as ozone (2.08 V), hydrogen peroxide (1.78 V) or potassium permanganate (1.68 V). In addition, it can also act as a coagulant for unwanted pollution compounds in wastewater, causing them to precipitate as large particles without decomposing into toxic compounds. Applications Due to its properties and the fact that it does not generate environmentally toxic by-products, sodium ferrate can be used in the water treatment process. In water treatment it can act as: Oxidant agent: promoting the oxidation of organic species in metal complexes. Coagulator: allows removal of inorganic pollution compounds such as heavy metals, inorganic salts, trace elements and metal complexes. Disinfectant: destroys human pathogens including viruses, spores, bacteria and protozoa. In addition, sodium ferrate can also remove the colour, odour and oils of polymers and plastics, making it a suitable compound for recycling as well as an alternative to traditional processes such as aeration or spreading. Handling Sodium ferrate and its decomposition products are non-toxic. However, sodium ferrate in solid state should not be kept in contact with flammable organic compounds. Sodium ferrate in solid state should be stored in a dark space, without access to air. Ideally, it should be stored in a vacuum or under an inert gas.
Its solutions can be handled under normal conditions, but should be stored cold and not for long periods of time. References Ferrates Sodium compounds Oxidizing agents
Sodium ferrate
Chemistry
1,698
12,570,345
https://en.wikipedia.org/wiki/30%20Vulpeculae
30 Vulpeculae is a binary star system in the northern constellation of Vulpecula, located mid-way between Epsilon Cygni and a diamond-shaped asterism in Delphinus. It is visible to the naked eye as a faint, orange-hued point of light with an apparent visual magnitude of 4.91. The system is located approximately 350 light years away from the Sun based on parallax, and is drifting further away with a mean radial velocity of +30 km/s. The system has a relatively high proper motion, traversing the celestial sphere at the rate of 0.186 arc seconds per annum. The variable radial velocity of this system was announced in 1922 by W. W. Campbell. It is a single-lined spectroscopic binary system with a measured orbital period and an eccentricity of 0.38. The projected semimajor axis a·sin i, where a is the semimajor axis and i is the orbital inclination, provides a lower bound on the true semimajor axis. The visible component is an aging giant star with a stellar classification of K1 III and an estimated age of 4.20 billion years. Having exhausted the supply of hydrogen at its core, the star has expanded to 22 times the Sun's radius. It has 1.55 times the mass of the Sun and is radiating 173 times the Sun's luminosity from its swollen photosphere at an effective temperature of 4,498 K. References K-type giants Spectroscopic binaries Vulpecula Durchmusterung objects Vulpeculae, 30 197752 102388 7939
30 Vulpeculae
Astronomy
335
681,962
https://en.wikipedia.org/wiki/Coupling%20constant
In physics, a coupling constant or gauge coupling parameter (or, more simply, a coupling), is a number that determines the strength of the force exerted in an interaction. Originally, the coupling constant related the force acting between two static bodies to the "charges" of the bodies (i.e. the electric charge for electrostatic and the mass for Newtonian gravity) divided by the distance squared, r2, between the bodies; thus: G in F = G·m1·m2/r2 for Newtonian gravity and ke in F = ke·q1·q2/r2 for electrostatic. This description remains valid in modern physics for linear theories with static bodies and massless force carriers. A modern and more general definition uses the Lagrangian L (or equivalently the Hamiltonian H) of a system. Usually, L (or H) of a system describing an interaction can be separated into a kinetic part T and an interaction part V: L = T − V (or H = T + V). In field theory, V always contains terms with 3 fields or more, expressing for example that an initial electron (field 1) interacts with a photon (field 2) producing the final state of the electron (field 3). In contrast, the kinetic part T always contains only two fields, expressing the free propagation of an initial particle (field 1) into a later state (field 2). The coupling constant determines the magnitude of the V part with respect to the T part (or between two sectors of the interaction part if several fields that couple differently are present). For example, the electric charge of a particle is a coupling constant that characterizes an interaction with two charge-carrying fields and one photon field (hence the common Feynman diagram with two arrows and one wavy line). Since photons mediate the electromagnetic force, this coupling determines how strongly electrons feel such a force, and has its value fixed by experiment. By looking at the QED Lagrangian, one sees that indeed, the charge e sets the proportionality between the kinetic term and the interaction term. A coupling plays an important role in dynamics. For example, one often sets up hierarchies of approximation based on the importance of various coupling constants. In the motion of a large lump of magnetized iron, the magnetic forces may be more important than the gravitational forces because of the relative magnitudes of the coupling constants. However, in classical mechanics, one usually makes these decisions directly by comparing forces. Another important example of the central role played by coupling constants is that they are the expansion parameters for first-principle calculations based on perturbation theory, which is the main method of calculation in many branches of physics. Fine-structure constant Couplings arise naturally in a quantum field theory. A special role is played in relativistic quantum theories by couplings that are dimensionless; i.e., are pure numbers. An example of such a dimensionless constant is the fine-structure constant, α = e2/(4πε0ℏc), where e is the charge of an electron, ε0 is the permittivity of free space, ℏ is the reduced Planck constant and c is the speed of light. This constant is proportional to the square of the coupling strength of the charge of an electron to the electromagnetic field. Gauge coupling In a non-abelian gauge theory, the gauge coupling parameter, g, appears in the Lagrangian as −(1/(4g2))·Gμν·Gμν (where G is the gauge field tensor) in some conventions. In another widely used convention, G is rescaled so that the coefficient of the kinetic term is 1/4 and g appears in the covariant derivative.
This should be understood to be similar to a dimensionless version of the elementary charge, defined as e/√(ε0ℏc) = √(4πα). Weak and strong coupling In a quantum field theory with a coupling g, if g is much less than 1, the theory is said to be weakly coupled. In this case, it is well described by an expansion in powers of g, called perturbation theory. If the coupling constant is of order one or larger, the theory is said to be strongly coupled. An example of the latter is the hadronic theory of strong interactions (which is why it is called strong in the first place). In such a case, non-perturbative methods need to be used to investigate the theory. In quantum field theory, the dimension of the coupling plays an important role in the renormalizability property of the theory, and therefore on the applicability of perturbation theory. If the coupling is dimensionless in the natural units system (i.e. ℏ = 1, c = 1), like in QED, QCD, and the weak interaction, the theory is renormalizable and all the terms of the expansion series are finite (after renormalization). If the coupling is dimensionful, as e.g. in gravity (the Newton constant GN), the Fermi theory (the Fermi constant GF) or the chiral perturbation theory of the strong force (the pion decay constant), then the theory is usually not renormalizable. Perturbation expansions in the coupling might still be feasible, albeit within limitations, as most of the higher order terms of the series will be infinite. Running coupling One may probe a quantum field theory at short times or distances by changing the wavelength or momentum, k, of the probe used. With a high frequency (i.e., short time) probe, one sees virtual particles taking part in every process. This apparent violation of the conservation of energy may be understood heuristically by examining the uncertainty relation ΔE·Δt ≥ ℏ/2, which virtually allows such violations at short times. The foregoing remark only applies to some formulations of quantum field theory, in particular, canonical quantization in the interaction picture. In other formulations, the same event is described by "virtual" particles going off the mass shell. Such processes renormalize the coupling and make it dependent on the energy scale, μ, at which one probes the coupling. The dependence of a coupling g(μ) on the energy-scale is known as "running of the coupling". The theory of the running of couplings is given by the renormalization group, though it should be kept in mind that the renormalization group is a more general concept describing any sort of scale variation in a physical system (see the full article for details). Phenomenology of the running of a coupling The renormalization group provides a formal way to derive the running of a coupling, yet the phenomenology underlying that running can be understood intuitively. As explained in the introduction, the coupling constant sets the magnitude of a force which behaves with distance r as 1/r2. The 1/r2 dependence was first explained by Faraday as the decrease of the force flux: at a point B distant by r from the body A generating a force, this one is proportional to the field flux going through an elementary surface S perpendicular to the line AB. As the flux spreads uniformly through space, it decreases according to the solid angle sustaining the surface S. In the modern view of quantum field theory, the 1/r2 comes from the expression in position space of the propagator of the force carriers.
For relatively weakly-interacting bodies, as is generally the case in electromagnetism or gravity or the nuclear interactions at short distances, the exchange of a single force carrier is a good first approximation of the interaction between the bodies, and classically the interaction will obey a 1/r2 law (note that if the force carrier is massive, there is an additional exponential e−mr dependence). When the interactions are more intense (e.g. the charges or masses are larger, or r is smaller) or happen over briefer time spans (smaller t), more force carriers are involved or particle pairs are created, see Fig. 1, resulting in the break-down of the 1/r2 behavior. The classical equivalent is that the field flux does not propagate freely in space any more but e.g. undergoes screening from the charges of the extra virtual particles, or interactions between these virtual particles. It is convenient to separate the first-order 1/r2 law from this extra r-dependence. This latter is then accounted for by being included in the coupling, which then becomes r-dependent (or equivalently μ-dependent). Since the additional particles involved beyond the single force carrier approximation are always virtual, i.e. transient quantum field fluctuations, one understands why the running of a coupling is a genuine quantum and relativistic phenomenon, namely an effect of the high-order Feynman diagrams on the strength of the force. Since a running coupling effectively accounts for microscopic quantum effects, it is often called an effective coupling, in contrast to the bare coupling (constant) present in the Lagrangian or Hamiltonian. Beta functions In quantum field theory, a beta function, β(g), encodes the running of a coupling parameter, g. It is defined by the relation β(g) = μ·∂g/∂μ, where μ is the energy scale of the given physical process. If the beta functions of a quantum field theory vanish, then the theory is scale-invariant. The coupling parameters of a quantum field theory can flow even if the corresponding classical field theory is scale-invariant. In this case, the non-zero beta function tells us that the classical scale-invariance is anomalous. QED and the Landau pole If a beta function is positive, the corresponding coupling increases with increasing energy. An example is quantum electrodynamics (QED), where one finds by using perturbation theory that the beta function is positive. In particular, at low energies, α ≈ 1/137, whereas at the scale of the Z boson, about 90 GeV, one measures α ≈ 1/127. Moreover, the perturbative beta function tells us that the coupling continues to increase, and QED becomes strongly coupled at high energy. In fact the coupling apparently becomes infinite at some finite energy. This phenomenon was first noted by Lev Landau, and is called the Landau pole. However, one cannot expect the perturbative beta function to give accurate results at strong coupling, and so it is likely that the Landau pole is an artifact of applying perturbation theory in a situation where it is no longer valid. The true scaling behaviour of α at large energies is not known. QCD and asymptotic freedom In non-abelian gauge theories, the beta function can be negative, as first found by Frank Wilczek, David Politzer and David Gross. An example of this is the beta function for quantum chromodynamics (QCD), and as a result the QCD coupling decreases at high energies. Furthermore, the coupling decreases logarithmically, a phenomenon known as asymptotic freedom (the discovery of which was awarded with the Nobel Prize in Physics in 2004).
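These two behaviors can be made concrete with the standard one-loop running formulas: for QED with a single electron loop, α(μ) = α(me)/(1 − (2α(me)/3π)·ln(μ/me)), and for QCD the expression quoted just below, αs(k2) ≈ 1/(β0·ln(k2/Λ2)) with β0 = (33 − 2nf)/(12π). The following Python sketch uses ballpark input values (α(me) ≈ 1/137, Λ ≈ 0.2 GeV, nf = 5) that are assumptions for illustration, not numbers from this article.

```python
import math

ALPHA_ME = 1 / 137.036        # fine-structure constant at the electron mass
ME = 0.000511                 # electron mass in GeV
LAMBDA_QCD = 0.2              # assumed QCD scale in GeV
NF = 5                        # assumed number of active quark flavors

def alpha_qed(mu):
    """One-loop QED running with a single electron loop (grows with energy)."""
    return ALPHA_ME / (1 - (2 * ALPHA_ME / (3 * math.pi)) * math.log(mu / ME))

def alpha_qcd(mu):
    """One-loop QCD running: alpha_s = 1 / (beta0 * ln(mu^2 / Lambda^2))."""
    beta0 = (33 - 2 * NF) / (12 * math.pi)
    return 1 / (beta0 * math.log(mu**2 / LAMBDA_QCD**2))

for mu in (1.0, 91.2, 1000.0):            # scales in GeV (91.2 ~ Z boson mass)
    print(f"mu = {mu:7.1f} GeV   alpha_QED = {alpha_qed(mu):.5f}   "
          f"alpha_s = {alpha_qcd(mu):.4f}")
# alpha_QED creeps up with mu while alpha_s falls (asymptotic freedom); the
# QED denominator vanishes at an enormous scale, the (unphysical) Landau pole.
```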
QCD and asymptotic freedom

In non-abelian gauge theories, the beta function can be negative, as first found by Frank Wilczek, David Politzer and David Gross. An example of this is the beta function of quantum chromodynamics (QCD); as a result, the QCD coupling decreases at high energies. Furthermore, the coupling decreases logarithmically, a phenomenon known as asymptotic freedom (the discovery of which was awarded the Nobel Prize in Physics in 2004). The coupling decreases approximately as

$$\alpha_{\rm s}(k^2) \equiv \frac{g_{\rm s}^2(k^2)}{4\pi} \approx \frac{1}{\beta_0 \ln(k^2/\Lambda^2)},$$

where $k$ is the energy of the process involved, Λ is the QCD scale (see below) and $\beta_0$ is a constant first computed by Wilczek, Gross and Politzer. Conversely, the coupling increases with decreasing energy. This means that the coupling becomes large at low energies, and one can no longer rely on perturbation theory.

Hence, the actual value of the coupling constant is only defined at a given energy scale. In QCD, the Z boson mass scale is typically chosen, providing a value of the strong coupling constant of $\alpha_{\rm s}(M_Z^2) = 0.1179 \pm 0.0010$. In 2023, ATLAS measured $\alpha_{\rm s}(M_Z^2) = 0.1183 \pm 0.0009$, the most precise measurement so far. The most precise measurements stem from lattice QCD calculations, studies of tau-lepton decay, and the reinterpretation of the transverse momentum spectrum of the Z boson.

QCD scale

In quantum chromodynamics (QCD), the quantity Λ is called the QCD scale. The value is $\Lambda_{\overline{\rm MS}}^{(3)} = 332 \pm 17$ MeV for three "active" quark flavors, viz. when the energy–momentum involved in the process allows the production of only the up, down and strange quarks, but not the heavier quarks. This corresponds to energies below 1.275 GeV. At higher energy, Λ is smaller, e.g. $\Lambda_{\overline{\rm MS}}^{(5)} = 210 \pm 14$ MeV above the bottom quark mass of about 5 GeV. The meaning of the minimal subtraction (MS) scheme scale $\Lambda_{\rm MS}$ is given in the article on dimensional transmutation. The proton-to-electron mass ratio is primarily determined by the QCD scale.

String theory

A remarkably different situation exists in string theory, since it includes a dilaton. An analysis of the string spectrum shows that this field must be present, either in the bosonic string or the NS–NS sector of the superstring. Using vertex operators, it can be seen that exciting this field is equivalent to adding a term to the action in which a scalar field couples to the Ricci scalar. This field is therefore an entire function's worth of coupling constants. These coupling constants are not pre-determined, adjustable, or universal parameters; they depend on space and time in a way that is determined dynamically. Sources that describe the string coupling as if it were fixed are usually referring to its vacuum expectation value, which is free to have any value in the bosonic theory, where there is no superpotential.

See also
Canonical quantization, renormalization and dimensional regularization
Quantum field theory, especially quantum electrodynamics and quantum chromodynamics
Gluon field, Gluon field strength tensor

References

External links
The Nobel Prize in Physics 2004 – Information for the Public
Department of Physics and Astronomy of the Georgia State University – Coupling Constants for the Fundamental Forces
An Introduction to Quantum Field Theory, by M.E. Peskin and D.V. Schroeder

Quantum field theory
Quantum mechanics
Statistical mechanics
Renormalization group
Coupling constant
Physics
2,734
55,407,740
https://en.wikipedia.org/wiki/Gas%20immersion%20laser%20doping
Gas immersion laser doping (GILD) is a method of doping a semiconductor material such as silicon. In the case of doping silicon with boron to create a P-type semiconductor, a thin silicon wafer is placed in a containment chamber and immersed in a boron-containing gas. A pulsed laser is directed at the wafer, causing localised melting and subsequent recrystallisation of the wafer material; while the silicon is molten, boron atoms from the gas diffuse into the melted regions. The result of this process is a silicon wafer doped with boron impurities, i.e. a P-type semiconductor.

References

Further reading

Semiconductor device fabrication
Gas immersion laser doping
Physics,Materials_science
139