of a single linear progression. == Punctuated equilibrium == The theory of punctuated equilibrium, developed by Stephen Jay Gould and Niles Eldredge and first presented in 1972, is often mistakenly drawn into the discussion of transitional fossils. This theory, however, pertains only to well-documented transitions within taxa or between closely related taxa over a geologically short period of time. These transitions, usually traceable in the same geological outcrop, often show small jumps in morphology between extended periods of morphological stability. To explain these jumps, Gould and Eldredge envisaged comparatively long periods of genetic stability separated by periods of rapid evolution. Gould made the following observation concerning creationist misuse of his work to deny the existence of transitional fossils: Since we proposed punctuated equilibria to explain trends, it is infuriating to be quoted again and again by creationists—whether through design or stupidity, I do not know—as admitting that the fossil record includes no transitional forms. The punctuations occur at the level of species; directional trends (on the staircase model) are rife at the higher level of transitions within major groups. == See also == Crocoduck Evidence of common descent Missing link Speciation == References == == Sources == == External links == Lloyd, Robin (11 February 2009). "Fossils Reveal Truth About Darwin's Theory". LiveScience. Ogden UT: Purch. Retrieved 19 May 2015. Hunt, Kathleen (17 March 1997). "Transitional Vertebrate Fossils FAQ". TalkOrigins Archive. Houston, TX: The TalkOrigins Foundation, Inc. Retrieved 19 May 2015. "Tiktaalik roseae". Chicago, IL: University of Chicago. Archived from the original on 12 November 2011. Retrieved 19 May 2015. "Whales Tohorā". Wellington, New Zealand: Museum of New Zealand Te Papa Tongarewa. Retrieved 19 May 2015. Hutchinson, John R. (22 January 1998). "Are Birds Really Dinosaurs?". DinoBuzz.
Berkeley, CA: University of California Museum of Paleontology. Retrieved 19 May 2015.
{ "page_id": 331755, "source": null, "title": "Transitional fossil" }
A spectrochemical series is a list of ligands ordered by ligand "strength", and a list of metal ions based on oxidation number, group and element. For a metal ion, the ligands modify the difference in energy Δ between the d orbitals, called the ligand-field splitting parameter in ligand field theory, or the crystal-field splitting parameter in crystal field theory. The splitting parameter is reflected in the ion's electronic and magnetic properties, such as its spin state, and in optical properties, such as its color and absorption spectrum. == Spectrochemical series of ligands == The spectrochemical series was first proposed in 1938 based on the results of absorption spectra of cobalt complexes. A partial spectrochemical series listing ligands from small Δ to large Δ is given below. (For a table, see the ligand page.) I− < Br− < S2− < SCN− (S–bonded) < Cl− < NO3− < N3− < F− < OH− < C2O42− < H2O < NCS− (N–bonded) < CH3CN < py (pyridine) < NH3 < en (ethylenediamine) < bipy (2,2'-bipyridine) < phen (1,10-phenanthroline) < NO2− (N–bonded) < PPh3 (triphenylphosphine) < CN− < CO Weak field ligands: H2O, F−, Cl−, OH− Strong field ligands: CO, CN−, NH3, PPh3 Ligands at the left end of this spectrochemical series are generally regarded as weak-field ligands: they cannot force pairing of electrons within the 3d level and thus form outer-orbital octahedral complexes that are high spin. Ligands at the right end of the series are stronger: they force pairing of electrons within the 3d level, forming inner-orbital octahedral complexes that are low spin, and hence are called strong-field or low-spin ligands. However, it is known that "the spectrochemical series is essentially backwards from what it should be for a reasonable prediction based on the assumptions of crystal field theory." This deviation from crystal field theory
highlights the weakness of its assumption of purely ionic bonds between metal and ligand. The order of the spectrochemical series can be derived from the understanding that ligands are frequently classified by their donor or acceptor abilities. Some, like NH3, are σ bond donors only, with no orbitals of appropriate symmetry for π bonding interactions. Bonding of these ligands to metals is relatively simple, using only the σ bonds to create relatively weak interactions. Another example of a σ bonding ligand would be ethylenediamine; however, ethylenediamine has a stronger effect than ammonia, generating a larger ligand field split, Δ. Ligands that have occupied p orbitals are potentially π donors. These types of ligands tend to donate these electrons to the metal along with the σ bonding electrons, exhibiting stronger metal-ligand interactions and an effective decrease of Δ. Halide ligands are primary examples of π donor ligands, along with OH−. When ligands have vacant π* and d orbitals of suitable energy, there is the possibility of pi backbonding, and the ligands may be π acceptors. This addition to the bonding scheme increases Δ. Ligands such as CN− and CO do this very effectively. == Spectrochemical series of metals == Metal ions can also be arranged in order of increasing Δ; this order is largely independent of the identity of the ligand. Mn2+ < Ni2+ < Co2+ < Fe2+ < V2+ < Fe3+ < Cr3+ < V3+ < Co3+ In general, it is not possible to say whether a given ligand will exert a strong field or a weak field on a given metal ion. However, when we consider the metal ion, the following two useful trends are observed: Δ increases with increasing oxidation number, and Δ increases down a group. == See also == Nephelauxetic effect – Term in the chemistry
of transition metals == References == Zumdahl, Steven S. Chemical Principles, 5th ed. Boston: Houghton Mifflin Company, 2005, pp. 550–551 and 957–964. Shriver, D. F.; Atkins, P. W. Inorganic Chemistry, 3rd ed. Oxford University Press, 2001, pp. 227–236. Huheey, James E.; Keiter, Ellen A.; Keiter, Richard L. Inorganic Chemistry: Principles of Structure and Reactivity, 4th ed. HarperCollins College Publishers, 1993, pp. 405–408.
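As a rough illustration of how the series is used, the sketch below ranks ligands by their position in the list above and guesses a spin state for an octahedral complex. Two simplifying assumptions (not from the article): rank in the series is treated as a stand-in for Δ, and the weak/strong-field cutoff is placed just after H2O; the real outcome also depends on the metal, its oxidation state, and the pairing energy.

```python
# The ligand ordering is taken from the article's partial series.
SERIES = [
    "I-", "Br-", "S2-", "SCN-", "Cl-", "NO3-", "N3-", "F-", "OH-",
    "C2O4^2-", "H2O", "NCS-", "CH3CN", "py", "NH3", "en", "bipy",
    "phen", "NO2-", "PPh3", "CN-", "CO",
]

def field_strength_rank(ligand: str) -> int:
    """Position in the series: a higher rank loosely means a larger Δ."""
    return SERIES.index(ligand)

def likely_spin_state(ligand: str) -> str:
    """Very rough guess: ligands up to H2O are treated as weak-field
    (high spin), ligands after H2O as strong-field (low spin).
    This cutoff is an illustrative assumption only."""
    if field_strength_rank(ligand) <= SERIES.index("H2O"):
        return "high spin"
    return "low spin"

print(likely_spin_state("F-"))   # weak-field ligand -> high spin
print(likely_spin_state("CN-"))  # strong-field ligand -> low spin
```

Note that this toy cutoff reproduces the article's weak-field (H2O, F−, Cl−, OH−) and strong-field (CO, CN−, NH3, PPh3) examples, but it is only a lookup, not a prediction from theory.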
{ "page_id": 3936237, "source": null, "title": "Spectrochemical series" }
Epi-lipoxins are trihydroxy (i.e. containing 3 hydroxyl residues) metabolites of arachidonic acid. They are 15R-epimers of their lipoxin counterparts; that is, the epi-lipoxins, 15-epi-lipoxin A4 (15-epi-LxA4) and 15-epi-lipoxin B4 (15-epi-LxB4), differ from their respective lipoxin A4 (LxA4) and lipoxin B4 (LxB4) epimers in that their 15-hydroxy residue has R rather than S chirality. Formulae for these lipoxins (Lx) are: LxA4: 5S,6R,15S-trihydroxy-7E,9E,11Z,13E-eicosatetraenoic acid LxB4: 5S,14R,15S-trihydroxy-6E,8Z,10E,12E-eicosatetraenoic acid 15-epi-LxA4: 5S,6R,15R-trihydroxy-7E,9E,11Z,13E-eicosatetraenoic acid 15-epi-LxB4: 5S,14R,15R-trihydroxy-6E,8Z,10E,12E-eicosatetraenoic acid The two epi-Lx's, as well as the two Lx's, are nonclassic eicosanoids that, like other members of the specialized pro-resolving mediators class of autocoids, form during and act to resolve inflammatory responses. Synthesis of the lipoxins typically involves a lipoxygenase enzyme, which acts to add a 15S-hydroxyl residue to the lipoxin precursor, arachidonic acid, whereas synthesis of the epi-lipoxins involves aspirin-pretreated cyclooxygenase 2 or a cytochrome P450 enzyme, which adds a 15R-hydroxyl residue to arachidonic acid. In acknowledgement of the role played by aspirin-treated cyclooxygenase 2 in forming these products, the epi-lipoxins are sometimes termed ATLs, which stands for aspirin-triggered lipoxins. The counter-regulatory role of the epi-lipoxins in serving as stop signals for diverse inflammatory responses is detailed in the lipoxin article. == See also == Lipoxins Specialized pro-resolving mediators == References ==
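The stereochemical relationship described above (each epi-lipoxin differs from its parent lipoxin only at carbon 15, S becoming R) can be checked mechanically. The following minimal sketch encodes the stereodescriptors from the formulae listed above; the dictionary layout is an illustrative choice, not standard nomenclature software.

```python
# Stereocenters (carbon position -> R/S descriptor), taken directly
# from the four formulae given in the text.
STEREOCENTERS = {
    "LxA4":        {5: "S", 6: "R", 15: "S"},
    "15-epi-LxA4": {5: "S", 6: "R", 15: "R"},
    "LxB4":        {5: "S", 14: "R", 15: "S"},
    "15-epi-LxB4": {5: "S", 14: "R", 15: "R"},
}

def differing_centers(a: str, b: str) -> dict:
    """Return the stereocenters at which two lipoxins differ."""
    ca, cb = STEREOCENTERS[a], STEREOCENTERS[b]
    return {pos: (ca[pos], cb[pos]) for pos in ca if ca[pos] != cb[pos]}

# Each epi form differs from its parent only at C15 (S -> R):
print(differing_centers("LxA4", "15-epi-LxA4"))  # {15: ('S', 'R')}
print(differing_centers("LxB4", "15-epi-LxB4"))  # {15: ('S', 'R')}
```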
{ "page_id": 14028784, "source": null, "title": "Epi-lipoxin" }
Count Apollos Apollosovich Musin-Pushkin (Russian: Аполло́с Аполло́сович Му́син-Пу́шкин; February 17, 1760 – April 18, 1805) was a Russian chemist and plant collector. He led a botanical expedition to the Caucasus in 1802 with his friend, the botanist Friedrich August Marschall von Bieberstein. In 1797, he was elected a foreign member of the Royal Swedish Academy of Sciences. He was a member of the Russian mining board and developed several new methods of refining and processing platinum. The genus Puschkinia commemorates his name. == References ==
{ "page_id": 1183728, "source": null, "title": "Apollo Mussin-Pushkin" }
Aavo Sirk (born 1945) is an Estonian physicist. He has worked at the National Institute of Chemical Physics and Biophysics. He was a signatory of the Letter of 40 intellectuals. In 2006, he was awarded the Order of the National Coat of Arms, IV class. == References ==
{ "page_id": 67178482, "source": null, "title": "Aavo Sirk" }
A kleptoprotein is a protein which is not encoded in the genome of the organism that uses it, but is instead obtained through the diet from a prey organism. Importantly, a kleptoprotein must remain functional and mostly or entirely undigested, distinguishing it from proteins that are digested for nutrition, which are destroyed and rendered non-functional in the process. This phenomenon was first reported in the bioluminescent fish Parapriacanthus, which has specialized light organs adapted for counter-illumination but obtains the luciferase enzyme within these organs from bioluminescent ostracods, such as Cypridina noctiluca and Vargula hilgendorfii. == See also == Kleptoplasty == References ==
{ "page_id": 62787574, "source": null, "title": "Kleptoprotein" }
Debasisa Mohanty (born 30 November 1966) is an Indian computational biologist, bioinformatician and a staff scientist at the National Institute of Immunology, India. Known for his studies on structure and function prediction of proteins, genome analysis and computer simulation of biomolecular systems, Mohanty is an elected fellow of all three major Indian science academies, namely the Indian Academy of Sciences, the Indian National Science Academy and the National Academy of Sciences, India. The Department of Biotechnology of the Government of India awarded him the National Bioscience Award for Career Development, one of the highest Indian science awards, for his contributions to biosciences in 2009. == Biography == Born on 30 November 1966, Debasisa Mohanty earned a postgraduate degree (MSc) in physics from the Indian Institute of Technology, Kanpur in 1988 and did his doctoral studies at the Molecular Biophysics Unit of the Indian Institute of Science to secure a PhD in computational biophysics in 1994. Subsequently, he completed his post-doctoral work, first at the Hebrew University of Jerusalem and, later, at the Scripps Research Institute. On his return to India, he joined the National Institute of Immunology, India (NII), where he serves as a Grade VII staff scientist and hosts a number of research scholars at his laboratory. He currently holds the position of director at NII. At NII, he also supervises the activities of RiPPMiner (a bioinformatics resource for deciphering chemical structures of RiPPs) and the Bioinformatics Centre. Mohanty resides at the NII Campus, along Aruna Asaf Ali Marg in New Delhi. == Legacy == Mohanty's research focus is in the fields of computational biology and bioinformatics, and he is known to have developed computational methods for predicting the substrate specificity of proteins as well as to have identified biosynthetic pathways. His work has assisted in widening the understanding of the function of
putative proteins in genomes and the protein interaction networks in newly sequenced genomes. His studies have been documented by way of a number of articles; ResearchGate, an online repository of scientific articles, has listed 108 of them. Mohanty was a member of the national organizing committee of the International Conference in Bioinformatics (INCOB) held in 2006 in India and has delivered invited speeches at various conferences, which included the seminar series on Proteomics and bioinformatics of the Regional Centre for Biotechnology held in 2013, the Symposium on Accelerating Biology 2017: Delivering Precision of the Centre for Development of Advanced Computing (C-DAC) held in January 2017 in Pune, and the Symposium on Functional Genomics organized by the Indraprastha Institute of Information Technology in Delhi in December 2017. == Awards and honors == Mohanty received the Samanta Chandrashekhar Award of the Orissa Bigyan Academy in 2005 and the Rajib Goyal Young Scientist Prize in Life Sciences of Kurukshetra University in 2007. The National Academy of Sciences, India elected him as a fellow the next year, and the Department of Biotechnology (DBT) of the Government of India awarded him the National Bioscience Award for Career Development, one of the highest Indian science awards, in 2009. He became an elected fellow of the Indian Academy of Sciences in 2012 and of the Indian National Science Academy in 2013. == Selected bibliography == Agrawal, Priyesh; Khater, Shradha; Gupta, Money; Sain, Neetu; Mohanty, Debasisa (3 July 2017). "RiPPMiner: a bioinformatics resource for deciphering chemical structures of RiPPs based on prediction of cleavage and cross-links". Nucleic Acids Research. 45 (W1): W80 – W88. doi:10.1093/nar/gkx408. ISSN 0305-1048. PMC 5570163. PMID 28499008. Sharma, Chhaya; Mohanty, Debasisa (2018). "Sequence- and structure-based analysis of proteins involved in miRNA biogenesis". Journal of Biomolecular Structure and Dynamics. 36 (1): 139–151.
doi:10.1080/07391102.2016.1269687. PMID
27928938. S2CID 11253090. Khater, Shradha; Anand, Swadha; Mohanty, Debasisa (2016). "In silico methods for linking genes and secondary metabolites: The way forward". Synthetic and Systems Biotechnology. 1 (2): 80–88. doi:10.1016/j.synbio.2016.03.001. PMC 5640692. PMID 29062931. == See also == == Notes == == References == == Further reading == D. Mohanty (21 January 2018). "In silico identification of novel biosynthetic pathways by knowledge-based prediction of protein structure/function" (Presentation). Indian Institute of Science. == External links == "Mohanty on PubMed". NCBI.
{ "page_id": 56365055, "source": null, "title": "Debasisa Mohanty" }
Several chromosome regions have been defined by convenience and convention in order to talk about gene loci. The largest regions on each chromosome are the short arm p and the long arm q, separated by a narrow region near the center called the centromere. Other specific regions have also been defined, some of which are similarly found on every chromosome, while others are only present in certain chromosomes. Named regions include: Arms (p and q) Centromere Kinetochore Telomere Subtelomere Satellite chromosome or trabant NOR region During cell division, the molecules that compose chromosomes (DNA and proteins) undergo a condensation process (chromatin condensation) that forms a small, compact complex called a chromatid. The complexes containing the duplicated DNA molecules, the sister chromatids, are attached to each other by the centromere (where the kinetochore assembles). If the chromosome is submetacentric (one arm long and the other short), then the centromere divides each chromosome into two regions: the smaller one, which is the p region, and the bigger one, the q region. The sister chromatids will be distributed to each daughter cell at the end of the cell division. If, however, the chromosome is isobrachial (centromere at the centre and arms of equal length), the p and q distinction is meaningless. At either end of a chromosome is a telomere, a cap of DNA that protects the rest of the chromosome from damage. The telomere consists of repetitive, non-coding DNA, so enzymatic damage there does not affect coding regions. The areas of the p and q regions close to the telomeres are the subtelomeres, or subtelomeric regions. The areas closer to the centromere are the pericentromeric regions. Finally, the interstitial regions are the parts of the p and q regions that are close to neither the
centromere nor the telomeres, but are roughly in the middle of p or q. == See also == Satellite chromosome == References ==
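The p/q arm naming described above underlies standard cytogenetic locus notation such as 17q21.31 (chromosome 17, long arm, band 21.31). Below is a minimal parser sketch; the example loci and the exact regular expression are illustrative assumptions and only cover the simple chromosome-arm-band form, not the full ISCN notation.

```python
import re

# chromosome (1-22, X, Y), arm (p or q), optional band/sub-band
LOCUS_RE = re.compile(r"^(\d{1,2}|X|Y)([pq])(\d+(?:\.\d+)?)?$")

def parse_locus(locus: str) -> dict:
    """Split a simple cytogenetic locus string into its named parts."""
    m = LOCUS_RE.match(locus)
    if not m:
        raise ValueError(f"not a simple cytogenetic locus: {locus!r}")
    chrom, arm, band = m.groups()
    return {
        "chromosome": chrom,
        "arm": "short (p)" if arm == "p" else "long (q)",
        "band": band,  # None if only the arm is given
    }

print(parse_locus("17q21.31"))
print(parse_locus("Xp11"))
```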
{ "page_id": 2101248, "source": null, "title": "Chromosome regions" }
The molecular formula C11H13NS (molar mass: 191.29 g/mol, exact mass: 191.076871 u) may refer to: 2-APBT 3-APBT 4-APBT 5-APBT 6-APBT 7-APBT
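The quoted molar mass can be reproduced from standard atomic weights. The sketch below uses IUPAC-style atomic weights rounded to a few decimals (an assumption of this example), so tiny discrepancies with the quoted figures are expected; it handles simple Hill-style formulae only.

```python
import re

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def molar_mass(formula: str) -> float:
    """Sum atomic weights for a simple formula like 'C11H13NS'."""
    total = 0.0
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_WEIGHT[symbol] * (int(count) if count else 1)
    return total

print(round(molar_mass("C11H13NS"), 2))  # 191.29, matching the article
```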
{ "page_id": 79171585, "source": null, "title": "C11H13NS" }
The molecular formula C16H18N2O7S2 (molar mass: 414.453 g/mol) may refer to: Sulbenicillin Temocillin
{ "page_id": 24711175, "source": null, "title": "C16H18N2O7S2" }
Cell migration is a central process in the development and maintenance of multicellular organisms. Tissue formation during embryonic development, wound healing and immune responses all require the orchestrated movement of cells in particular directions to specific locations. Cells often migrate in response to specific external signals, including chemical signals and mechanical signals. Errors during this process have serious consequences, including intellectual disability, vascular disease, tumor formation and metastasis. An understanding of the mechanism by which cells migrate may lead to the development of novel therapeutic strategies for controlling, for example, invasive tumour cells. Due to the highly viscous environment (low Reynolds number), cells need to continuously produce forces in order to move. Cells achieve active movement by very different mechanisms. Many less complex prokaryotic organisms (and sperm cells) use flagella or cilia to propel themselves. Eukaryotic cell migration typically is far more complex and can consist of combinations of different migration mechanisms. It generally involves drastic changes in cell shape which are driven by the cytoskeleton. Two very distinct migration scenarios are crawling motion (most commonly studied) and blebbing motility. A paradigmatic example of crawling motion is the case of fish epidermal keratocytes, which have been extensively used in research and teaching. == Cell migration studies == The migration of cultured cells attached to a surface or in 3D is commonly studied using microscopy. As cell movement is very slow, a few μm/minute, time-lapse microscopy videos of the migrating cells are recorded and played back at increased speed. Such videos (Figure 1) reveal that the leading cell front is very active, with a characteristic behavior of successive contractions and expansions. It is generally accepted that the leading front is the main motor that pulls the cell forward.
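The "low Reynolds number" claim above can be checked with a quick order-of-magnitude estimate. The cell size and fluid properties below are assumed, generic values for a cell in a water-like medium (the ~1 μm/min speed is the figure quoted in the text).

```python
# Re = rho * v * L / mu, a dimensionless ratio of inertial to viscous forces.
rho = 1000.0    # fluid density, kg/m^3 (water, assumed)
mu = 1.0e-3     # dynamic viscosity, Pa*s (water, assumed)
L = 10e-6       # characteristic cell size, m (~10 um, assumed)
v = 1e-6 / 60   # speed, m/s (~1 um/min, as quoted in the text)

Re = rho * v * L / mu
print(f"Re = {Re:.1e}")  # on the order of 1e-7: inertia is negligible
```

With Re around 10^-7, viscous forces dominate completely, which is why a cell stops moving the instant it stops generating force.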
=== Common features === The processes underlying mammalian cell migration are believed to be
consistent with those of (non-spermatozooic) locomotion. Observations in common include: cytoplasmic displacement at leading edge (front) laminar removal of dorsally accumulated debris toward trailing edge (back) The latter feature is most easily observed when aggregates of a surface molecule are cross-linked with a fluorescent antibody or when small beads become artificially bound to the front of the cell. Other eukaryotic cells are observed to migrate similarly. The amoeba Dictyostelium discoideum is useful to researchers because its cells consistently exhibit chemotaxis in response to cyclic AMP, move more quickly than cultured mammalian cells, and have a haploid genome that simplifies the process of connecting a particular gene product with its effect on cellular behaviour. == Molecular processes of migration == There are two main theories for how the cell advances its front edge: the cytoskeletal model and the membrane flow model. It is possible that both underlying processes contribute to cell extension. === Cytoskeletal model (A) === === Leading edge === Experimentation has shown that there is rapid actin polymerisation at the cell's front edge. This observation has led to the hypothesis that the formation of actin filaments "pushes" the leading edge forward and is the main motile force for advancing the cell's front edge. In addition, cytoskeletal elements are able to interact extensively and intimately with a cell's plasma membrane. === Trailing edge === Other cytoskeletal components (like microtubules) have important functions in cell migration. It has been found that microtubules act as "struts" that counteract the contractile forces that are needed for trailing edge retraction during cell movement. When microtubules in the trailing edge of the cell are dynamic, they are able to remodel to allow retraction. When dynamics are suppressed, microtubules cannot remodel and, therefore, oppose the contractile forces.
The morphology of cells with suppressed microtubule dynamics indicates that cells can
extend the front edge (polarized in the direction of movement), but have difficulty retracting their trailing edge. On the other hand, high drug concentrations, or microtubule mutations that depolymerize the microtubules, can restore cell migration but there is a loss of directionality. It can be concluded that microtubules act both to restrain cell movement and to establish directionality. === Membrane flow model (B) === The leading edge at the front of a migrating cell is also the site at which membrane from internal membrane pools is returned to the cell surface at the end of the endocytic cycle. This suggests that extension of the leading edge occurs primarily by addition of membrane at the front of the cell. If so, the actin filaments that form there might stabilize the added membrane so that a structured extension, or lamella, is formed — rather than a bubble-like structure (or bleb) at its front. For a cell to move, it is necessary to bring a fresh supply of "feet" (proteins called integrins, which attach a cell to the surface on which it is crawling) to the front. It is likely that these feet are endocytosed toward the rear of the cell and brought to the cell's front by exocytosis, to be reused to form new attachments to the substrate. In the case of Dictyostelium amoebae, three conditional temperature sensitive mutants which affect membrane recycling block cell migration at the restrictive (higher) temperature; they provide additional support for the importance of the endocytic cycle in cell migration. Furthermore, these amoebae move quite quickly — about one cell length in ~5 mins. If they are regarded as cylindrical (which is roughly true whilst chemotaxing), this would require them to recycle the equivalent of one cell surface area each 5 mins, which is approximately what is
measured. === Mechanistic basis of amoeboid migration === Adhesive crawling is not the only migration mode exhibited by eukaryotic cells. Importantly, several cell types — Dictyostelium amoebae, neutrophils, metastatic cancer cells and macrophages — have been found to be capable of adhesion-independent migration. Historically, the physicist E. M. Purcell theorized (in 1977) that under conditions of low Reynolds number fluid dynamics, which apply at the cellular scale, rearward surface flow could provide a mechanism for microscopic objects to swim forward. After some decades, experimental support for this model of cell movement was provided when it was discovered (in 2010) that amoeboid cells and neutrophils are both able to chemotax towards a chemo-attractant source whilst suspended in an isodense medium. It was subsequently shown, using optogenetics, that cells migrating in an amoeboid fashion without adhesions exhibit plasma membrane flow towards the cell rear that may propel cells by exerting tangential forces on the surrounding fluid. Polarized trafficking of membrane-containing vesicles from the rear to the front of the cell helps maintain cell size. Rearward membrane flow was also observed in Dictyostelium discoideum cells. These observations provide strong support for models of cell movement which depend on a rearward cell surface membrane flow (Model B, above). The migration of supracellular clusters has also been found to be supported by a similar mechanism of rearward surface flow. === Collective biomechanical and molecular mechanism of cell motion === Based on some mathematical models, recent studies hypothesize a novel biological model for collective biomechanical and molecular mechanism of cell motion. It is proposed that microdomains weave the texture of cytoskeleton and their interactions mark the location for formation of new adhesion sites. 
According to this model, microdomain signaling dynamics organizes cytoskeleton and its interaction with substratum. As microdomains trigger and maintain active polymerization of actin
filaments, their propagation and zigzagging motion on the membrane generate a highly interlinked network of curved or linear filaments oriented at a wide spectrum of angles to the cell boundary. It is also proposed that microdomain interaction marks the formation of new focal adhesion sites at the cell periphery. Myosin interaction with the actin network then generates membrane retraction/ruffling, retrograde flow, and contractile forces for forward motion. Finally, continuous application of stress on the old focal adhesion sites could result in calcium-induced calpain activation and, consequently, the detachment of focal adhesions, which completes the cycle. == Polarity in migrating cells == Migrating cells have a polarity—a front and a back. Without it, they would move in all directions at once, i.e. spread. How this polarity is formulated at a molecular level inside a cell is unknown. In a cell that is meandering in a random way, the front can easily give way to become passive as some other region, or regions, of the cell form(s) a new front. In chemotaxing cells, the stability of the front appears enhanced as the cell advances toward a higher concentration of the stimulating chemical. From a biophysical perspective, polarity was explained in terms of a gradient in inner membrane surface charge between front regions and rear edges of the cell. This polarity is reflected at a molecular level by a restriction of certain molecules to particular regions of the inner cell surface. Thus, the phospholipid PIP3 and activated Ras, Rac, and CDC42 are found at the front of the cell, whereas Rho GTPase and PTEN are found toward the rear. It is believed that filamentous actins and microtubules are important for establishing and maintaining a cell's polarity. Drugs that destroy actin filaments have multiple and complex effects, reflecting the wide role that these filaments
play in many cell processes. It may be that, as part of the locomotory process, membrane vesicles are transported along these filaments to the cell's front. In chemotaxing cells, the increased persistence of migration toward the target may result from an increased stability of the arrangement of the filamentous structures inside the cell and determine its polarity. In turn, these filamentous structures may be arranged inside the cell according to how molecules like PIP3 and PTEN are arranged on the inner cell membrane. And where these are located appears in turn to be determined by the chemoattractant signals as these impinge on specific receptors on the cell's outer surface. Although microtubules have been known to influence cell migration for many years, the mechanism by which they do so has remained controversial. On a planar surface, microtubules are not needed for the movement, but they are required to provide directionality to cell movement and efficient protrusion of the leading edge. When present, microtubules retard cell movement when their dynamics are suppressed by drug treatment or by tubulin mutations. == Inverse problems in the context of cell motility == An area of research called inverse problems in cell motility has been established. This approach is based on the idea that behavioral or shape changes of a cell bear information about the underlying mechanisms that generate these changes. Reading cell motion, namely, understanding the underlying biophysical and mechanochemical processes, is of paramount importance. The mathematical models developed in these works determine some physical features and material properties of the cells locally through analysis of live cell image sequences and uses this information to make further inferences about the molecular structures, dynamics, and processes within the cells, such as the actin network, microdomains, chemotaxis, adhesion, and retrograde flow. == Cell migration disruption in pathological
conditions == Cell migration can be disrupted in some pathological states. For example, in conditions of high lipoperoxidation, actin has been shown to be post-translationally modified by the lipoperoxidation product 4-hydroxynonenal (4-HNE). This modification prevents the remodelling of the actin cytoskeleton, which is essential for cell motility. Additionally, another functional protein, coronin-1A, which stabilizes F-actin filaments, is also covalently modified by 4-HNE. These modifications may impair immune cell trans-endothelial migration or their phagocytic ability. Another motility-related mechanism was described: the failure of the MCP-1 receptor (CCR2, CD192), TNF receptor 1 (TNFR1, CD120a), and TNF receptor 2 (TNFR2, CD120b) on monocytes after exposure to pathophysiological concentrations (10 μM) of 4-HNE or after the phagocytosis of malarial pigment hemozoin. These immune cellular dysfunctions potentially lead to a decreased immune response in diseases characterized by high oxidative stress, such as malaria, cancer, metabolic syndrome, atherosclerosis, Alzheimer's disease, rheumatoid arthritis, neurodegenerative diseases, and preeclampsia. == See also == Cap formation Chemotaxis Collective cell migration Durotaxis Endocytic cycle Mouse models of breast cancer metastasis Neurophilic Protein dynamics == References == == External links == Cell Migration Gateway The Cell Migration Gateway is a comprehensive and regularly updated resource on cell migration The Cytoskeleton and Cell Migration A tour of images and videos by the J. V. Small lab in Salzburg and Vienna
{ "page_id": 2428938, "source": null, "title": "Cell migration" }
Leopold Blaschka (27 May 1822 – 3 July 1895) and his son Rudolf Blaschka (17 June 1857 – 1 May 1939) were glass artists from Dresden, Germany. They were known for their production of biological and botanical models, including glass sea creatures and Harvard University's Glass Flowers. == Family background == The Blaschka family's roots trace to Josephthal in Erzgebirge, Bohemia, a region known for processing glass, metals, and gems. Members of the Blaschka family worked in Venice, Bohemia, and Germany. Leopold referred to this history in an 1889 letter to Mary Lee Ware: Many people think that we have some secret apparatus by which we can squeeze glass suddenly into these forms, but it is not so. We have the touch. My son Rudolf has more than I have because he is my son and the touch increases in every generation. The only way to become a glass modeler of skill, I have often said to people, is to get a good great-grandfather who loved glass; then he is to have a son with like tastes; he is to be your grandfather. He in turn will have a son who must, as your father, be passionately fond of glass. You, as his son, can then try your hand, and it is your own fault if you do not succeed. But, if you do not have such ancestors, it is not your fault. Leopold was born in Český Dub, Bohemia, one of the three sons of Joseph Blaschke. Leopold himself would later Latinize the family name to Blaschka. He and his son were native to the Bohemian Czech-German borderland. Leopold was apprenticed to a goldsmith and gem cutter in Turnov, a town in the Liberec Region of today's Czech Republic. He then joined the family business which produced glass ornaments
{ "page_id": 25235466, "source": null, "title": "Leopold and Rudolf Blaschka" }
and glass eyes. Leopold developed a technique which he termed "glass-spinning", which permitted the construction of highly precise and detailed works in glass. He soon began to focus the business on manufacturing glass eyes. In 1846, Leopold married Caroline Zimmermann, and within four years their son Josef Augustin Blaschka was born. Caroline and Josef both died of cholera in 1850. A year later, Leopold's father died. Leopold "sought consolation in the natural world, sketching the plants in the countryside around his home." == Glass marine invertebrates == In 1853, Leopold travelled to the United States. While en route, the ship was delayed at sea for two weeks due to a lack of trade winds. During this time, Leopold studied and sketched local marine invertebrates, the glass-like transparency of their bodies intriguing him. He wrote: It is a beautiful night in May. Hopefully, we look out over the darkness of the sea, which is as smooth as a mirror; there emerges all around in various places a flash-like bundle of light beams, as if it is surrounded by thousands of sparks, that form true bundles of fire and of other bright lighting spots, and the seemingly mirrored stars. There emerges close before us a small spot in a sharp greenish light, which becomes ever larger and larger and finally becomes a bright shining sunlike figure. On his return to Český Dub, Leopold focused on producing glass eyes, costume ornaments, lab equipment, and other goods and specialty items whose production was expected of master lampworkers. He married his second wife, Caroline Riegel, in 1854. In his free time, he created glass models of plants. These would eventually become the basis of the Ware Collection of Blaschka Glass Models of Plants, also known as the Glass Flowers, which were collected many years
later. During this period, Blaschka did not make any money producing the models. Eventually, however, the models attracted the attention of Prince Camille de Rohan, who arranged to meet with Leopold at Sychrov Castle in 1857. Prince Camille, an enthusiast of natural sciences, commissioned Leopold to craft 100 glass orchids for his private collection. In 1862, "the prince exhibited about 100 models of orchids and other exotic plants, which he displayed on two artificial tree trunks in his palace in Prague." This royal commission brought Blaschka's craft to the attention of Professor Ludwig Reichenbach, then director of the Natural History Museum in Dresden. Professor Reichenbach admired the botanical models and convinced Leopold to try creating glass models of marine invertebrates. In the nineteenth century, the dominant method of displaying preserved marine invertebrates was wet-preservation, which involved taking a live specimen and placing it in a sealed jar, usually filled with alcohol. This killed the specimen, which frequently decomposed beyond recognition. Initially, the designs for these models were based on drawings in books, but Leopold was soon able to use his earlier drawings to produce models of other species. His reputation spread quickly. Demand for the models pushed Leopold to further the training of his son and apprentice, Rudolf Blaschka. A year after the success of the glass sea anemones, the family moved to Dresden to give young Rudolf better educational opportunities. == Belgium == In 1886, Edouard Van Beneden, founder of the Institute of Zoology, ordered 77 Blaschka models to illustrate zoology lessons. Some of these models are still on display at the Aquarium-Museum in Liège. == Contact with Harvard == By 1880, Rudolf was assisting his father in producing the glass models, including the production of 131 glass sea creature models for the Boston
Society of Natural History Museum (now the Museum of Science). These models, along with the ones purchased by Harvard's Museum of Comparative Zoology, were seen by Professor George Lincoln Goodale, who was in the process of establishing and building the Harvard Botanical Museum's collection. In 1886, Goodale traveled to Dresden to meet with the Blaschkas and request a series of glass botanical models for Harvard. Some reports claim that Goodale saw a few glass orchids in the room where they met, surviving from the work two decades earlier. Although initially reluctant, Leopold eventually agreed to send test models to the U.S. Although the models arrived badly damaged by U.S. Customs, Goodale appreciated their craftsmanship and showed them widely. Goodale was convinced that Blaschka's glass art was a worthy investment for Harvard, which was a global centre for the study of botany. At that time, botanical specimens were almost entirely showcased as dried, pressed and labeled specimens called "specimina exsiccata" (dried specimens), but this presented a number of problems. Pressed plant specimens were two-dimensional and tended to lose their color and form, making them difficult to use as accurate teaching tools. Dried specimens were also quite heavy and bulky, making their transport and storage expensive. Having already seen the intact Blaschka models at Harvard, Professor Goodale decided to commission the glass flowers. To fund the expensive enterprise, Goodale approached former student Mary Lee Ware and her mother, Elizabeth C. Ware, already funders of Harvard's botany department. Mary convinced her mother to underwrite the consignment of the glass models, and in 1887, the Blaschkas contracted half of their time to producing the models for Harvard, with the remaining time dedicated to making marine invertebrate models. However, in 1890, the Blaschkas insisted that it was impossible for them to craft the botanical models for half the year
and do the sea creatures during the other half, declaring that they “must give up either one or the other." To resolve this, the Blaschkas signed an exclusive ten-year contract with Harvard to make glass flowers for 8,800 marks per year. New arrangements were also made to send the models directly to Harvard, where museum staff - possibly including Elizabeth Hodges Clark - could open them safely under the observation of Customs staff. Their models showcased a range of plant specimens. In total, up to 164 taxonomic families and a diversity of plant part morphologies, including flowers, leaves, fruits, and roots, were created. Some were shown during pollination by insects; others were diseased in various ways. Goodale noted that the activity of the Blaschkas was "greatly increased by their exclusive devotion to a single line of work." Writing in the Annual Reports of the President and Treasurer of Harvard College 1890-1891, Goodale explained: It has been only within a comparatively short time that I have discovered the cause of the great reluctance of the elder Blaschka to the undertaking at the outset. It appears upon inquiry that he had constructed a few models of plants before beginning the preparation of the animal models to which he owes his wide celebrity; but these models of plants were, he thought, not appreciated by the persons for whom he had made them. The first set of models passed through various vicissitudes, and finally found a home in the Natural History Museum in Liège, where they were at last destroyed by fire. The artist did not have courage to undertake the experiment again, when he was succeeding so well with his animal models. He regards it as a pleasant turn in his fortunes which permits him to devote all of his time to the subject of
his earliest studies. == Production of Glass Flowers == Claims arose that Leopold and his son were using secret methods to make their glass models. These claims were refuted by Leopold himself. Blaschka stated "One cannot hurry glass. It will take its own time. If we try to hasten it beyond its limits, it resists and no longer obeys us. We have to humor it." The Blaschkas used a mixture of clear and colored glass, sometimes supported with wire, to produce their models. Many pieces were painted by Rudolf. In order to represent plants which were not native to the Dresden area, father and son studied foreign plant collections at Pillnitz Palace and the Dresden Botanical Garden. They also grew some from seeds sent from the United States. In 1892, Rudolf was sent on a trip to the Caribbean and the U.S. to study additional plants, making extensive drawings and notes. At this point, the number of glass models sent annually to Harvard was approximately 120. Leopold died in 1895 while Rudolf was on a second trip to the U.S. Rudolf continued to work alone, but production slowed. By the early twentieth century, he found that he was unable to buy high quality glass and began making his own. This was confirmed by Mary Lee Ware during her 1908 visit to Rudolf. In a letter she later wrote to the second director of the Botanical Museum, Professor Oakes Ames, she observed how "one change in the character of [Rudolf's] work and, consequently in the time necessary to accomplish results since I was last here, is very noteworthy. At that time […] he bought most of his glass and was just beginning to make some, and his finishes were in paint. Now he himself makes a large part of the glass
and all the enamels, which he powders to use as paint." In addition to funding and visiting the project, Mary took an active role in its progress, going so far as to personally unpack each model and make arrangements for Rudolf's fieldwork in the U.S. and Jamaica. Ames was less passionate about the Glass Flowers than his predecessor had been. However, he soon requested what he referred to as "Economic Botany", asking Rudolf to make glass models of olives and grapes. This eventually evolved into a series of glass fruit models in both rotting and edible condition. Ames continued to exchange letters with Mary Lee Ware discussing the project, and commented on the decline in the quality and speed of production with Rudolf's age, expressing concern about whether Blaschka could continue to produce models of satisfactory quality. Rudolf continued making models for Harvard until 1938. By then 80 years old, he announced his retirement. Neither he nor his father Leopold had taken on an apprentice, and Rudolf left no successor, as he and his wife Frieda had no children. In total, Leopold and Rudolf made approximately 4,400 models for Harvard, 780 of which showed species at life-size. As of 2016, fewer than 75 per cent of the models are on regular display at the Harvard Museum of Natural History in the Ware Collection. Older exhibitions contained up to 3,000 models, but this number was reduced during renovations of the museum's collections. Unlike the glass sea creatures, which were "a profitable global mail-order business", the Glass Flowers were commissioned solely for Harvard. == Legacy == Over the course of their collective lives, Leopold and Rudolf crafted as many as ten thousand glass marine invertebrate models and 4,400 botanical models, the most famous being Harvard's Glass Flowers. The Blaschka studio survived the bombing of
Dresden in World War II and, in 1993, the Corning Museum of Glass and Harvard Museum of Natural History jointly purchased the remaining Blaschka studio materials from Frieda Blaschka's niece, Gertrud Pones. The Pisa Charterhouse, which houses the Museum of Natural History of the University of Pisa, has a collection of 51 Blaschka glass marine invertebrates. Leopold and Rudolf and their spouses are buried together in the Hosterwitz cemetery in Dresden. == See also == Robert Brendel The Ware Collection of Blaschka Glass Models of Plants, Harvard Museum of Natural History Blaschka Collection, Museum of Natural History of the University of Pisa == References == == External links == The Story of Rudolf and Leopold Blaschka, video The Blaschka Archives, held by the Rakow Library of the Corning Museum of Glass. Blaschka collection at Natural History Museum, London "Sea creatures of the deep - the Blaschka Glass models" National Museum of Wales. 15 May 2007. "The Glass Flowers". Corning Museum of Glass. 18 October 2011. Retrieved 5 June 2014. Out of the Teeming Sea: Cornell Collection of Blaschka Invertebrate Models
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form

p_i ∝ exp(−ε_i / kT)

where p_i is the probability of the system being in state i, exp is the exponential function, ε_i is the energy of that state, and the constant kT of the distribution is the product of the Boltzmann constant k and the thermodynamic temperature T. The symbol ∝ denotes proportionality (see § The distribution for the proportionality constant).

The term system here has a wide meaning; it can range from a collection of a 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.

The ratio of the probabilities of two states is known as the Boltzmann factor and characteristically depends only on the states' energy difference:

p_i / p_j = exp((ε_j − ε_i) / kT)

The Boltzmann distribution is named after Ludwig Boltzmann, who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium". The distribution was
{ "page_id": 4107, "source": null, "title": "Boltzmann distribution" }
later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.

The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell–Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell–Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution.

== The distribution ==

The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and the temperature of the system to which the distribution is applied. It is given as

p_i = (1/Q) exp(−ε_i / kT) = exp(−ε_i / kT) / Σ_{j=1}^{M} exp(−ε_j / kT)

where: exp is the exponential function, p_i is the probability of state i, ε_i is the energy of state i, k is the Boltzmann constant, T is the absolute temperature of the system, M is the number of all states accessible to the system of interest, and Q (denoted by some authors by Z) is the normalization denominator, which is the canonical partition function

Q = Σ_{j=1}^{M} exp(−ε_j / kT)

It results from the constraint that the probabilities of all accessible states must add up to 1.

Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy

S(p_1, p_2, …, p_M) = −Σ_{i=1}^{M} p_i log₂ p_i

subject to the normalization constraint Σ p_i = 1 and the constraint that Σ p_i ε_i equals a particular mean energy value, except for two special cases. (These special cases occur when the mean value is either the minimum or maximum of the energies ε_i. In these cases, the entropy-maximizing distribution is a limit of Boltzmann distributions where T approaches zero from above or below, respectively.)

The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms, the partition function values can be found in the NIST Atomic Spectra Database.

The distribution shows that states with lower energy will always have a higher probability of being occupied than states with higher energy. It can also give us the quantitative relationship between the probabilities of two states being occupied. The ratio of probabilities for states i and j is given as

p_i / p_j = exp((ε_j − ε_i) / kT)

where p_i is the probability of state i, p_j is the probability of state j, ε_i is the energy of state i, and ε_j is the energy of state j. The corresponding ratio of populations of energy levels must also take their degeneracies into account.

The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability
that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is, the fraction of particles that occupy state i:

p_i = N_i / N

where N_i is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state i as a function of the energy of that state is

N_i / N = exp(−ε_i / kT) / Σ_{j=1}^{M} exp(−ε_j / kT)

This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find whether this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such
as whether it is caused by an allowed or a forbidden transition.

The softmax function commonly used in machine learning is related to the Boltzmann distribution:

(p_1, …, p_M) = softmax[−ε_1/kT, …, −ε_M/kT]

== Generalized Boltzmann distribution ==

A distribution of the form

Pr(ω) ∝ exp[ Σ_{η=1}^{n} X_η x_η^{(ω)} / (k_B T) − E^{(ω)} / (k_B T) ]

is called the generalized Boltzmann distribution by some authors. The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, grand canonical ensemble, and isothermal–isobaric ensemble. It is usually derived from the principle of maximum entropy, but there are other derivations. The generalized Boltzmann distribution has the following properties: It is the only distribution for which the entropy as defined by the Gibbs entropy formula matches the entropy as defined in classical thermodynamics. It is the only distribution that is mathematically consistent with the fundamental thermodynamic relation where state functions are described by ensemble averages.

== In statistical mechanics ==

The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble.
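The Boltzmann probabilities above can be computed directly; numerically it is standard to shift the energies before exponentiating, which is exactly the stabilization used in softmax implementations since the shift cancels in the normalization. A minimal Python sketch (the function and variable names are mine, not from any particular library):

```python
import math

def boltzmann_probabilities(energies_j, temperature_k, k_b=1.380649e-23):
    """p_i = exp(-e_i/kT) / Q for a list of state energies in joules."""
    kt = k_b * temperature_k
    # Shift by the minimum energy before exponentiating for numerical
    # stability; the shift cancels in the normalization (softmax trick).
    e_min = min(energies_j)
    weights = [math.exp(-(e - e_min) / kt) for e in energies_j]
    q = sum(weights)  # partition function of the shifted energies
    return [w / q for w in weights]

# Two-level system: ground state and a state 0.05 eV above it, at 300 K.
EV = 1.602176634e-19  # joules per electronvolt
energies = [0.0, 0.05 * EV]
p = boltzmann_probabilities(energies, 300.0)

# The ratio p_0/p_1 depends only on the energy difference (Boltzmann factor).
kt = 1.380649e-23 * 300.0
assert math.isclose(p[0] / p[1], math.exp((energies[1] - energies[0]) / kt))
```

As expected, the lower-energy state comes out more probable, and the probability ratio reproduces the Boltzmann factor exactly.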
Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects:

Canonical ensemble (general case): The canonical ensemble gives the probabilities of the various
possible states of a closed system of fixed volume, in thermal equilibrium with a heat bath. The canonical ensemble has a state probability distribution with the Boltzmann form.

Statistical frequencies of subsystems' states (in a non-interacting collection): When the system of interest is a collection of many non-interacting copies of a smaller subsystem, it is sometimes useful to find the statistical frequency of a given subsystem state among the collection. The canonical ensemble has the property of separability when applied to such a collection: as long as the non-interacting subsystems have fixed composition, then each subsystem's state is independent of the others and is also characterized by a canonical ensemble. As a result, the expected statistical frequency distribution of subsystem states has the Boltzmann form.

Maxwell–Boltzmann statistics of classical gases (systems of non-interacting particles): In particle systems, many particles share the same space and regularly change places with each other; the single-particle state space they occupy is a shared space. Maxwell–Boltzmann statistics give the expected number of particles found in a given single-particle state, in a classical gas of non-interacting particles at equilibrium. This expected number distribution has the Boltzmann form.

Although these cases have strong similarities, it is helpful to distinguish them, as they generalize in different ways when the crucial assumptions are changed: When a system is in thermodynamic equilibrium with respect to both energy exchange and particle exchange, the requirement of fixed composition is relaxed and a grand canonical ensemble is obtained rather than a canonical ensemble. On the other hand, if both composition and energy are fixed, then a microcanonical ensemble applies instead.
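The separability property can be checked numerically: for two non-interacting subsystems, the canonical distribution over joint states factorizes into the subsystems' own Boltzmann distributions. A small Python sketch, working in units where kT = 1 and using made-up energy levels:

```python
import itertools
import math

def boltzmann(energies, kt):
    """Boltzmann probabilities for a list of energies at temperature kT."""
    w = [math.exp(-e / kt) for e in energies]
    q = sum(w)  # partition function
    return [x / q for x in w]

kt = 1.0                # work in units where kT = 1
e_a = [0.0, 0.7, 1.3]   # energy levels of subsystem A (arbitrary values)
e_b = [0.0, 0.4]        # energy levels of subsystem B

p_a = boltzmann(e_a, kt)
p_b = boltzmann(e_b, kt)

# Canonical ensemble for the composite non-interacting system:
# the energy of a joint state (i, j) is simply e_a[i] + e_b[j].
joint_energies = [ea + eb for ea, eb in itertools.product(e_a, e_b)]
p_joint = boltzmann(joint_energies, kt)

# Separability: the joint Boltzmann distribution factorizes, so each
# subsystem is itself described by a canonical (Boltzmann) distribution.
for idx, (i, j) in enumerate(itertools.product(range(3), range(2))):
    assert math.isclose(p_joint[idx], p_a[i] * p_b[j])
```

The factorization holds because exp(−(ε_a + ε_b)/kT) = exp(−ε_a/kT)·exp(−ε_b/kT) and the joint partition function is the product of the subsystem partition functions.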
If the subsystems within a collection do interact with each other, then the expected frequencies of subsystem states no longer follow a Boltzmann distribution, and may not even have an analytical solution. The canonical
ensemble can however still be applied to the collective states of the entire system considered as a whole, provided the entire system is in thermal equilibrium. With quantum gases of non-interacting particles in equilibrium, the number of particles found in a given single-particle state does not follow Maxwell–Boltzmann statistics, and there is no simple closed-form expression for quantum gases in the canonical ensemble. In the grand canonical ensemble, the state-filling statistics of quantum gases are described by Fermi–Dirac statistics or Bose–Einstein statistics, depending on whether the particles are fermions or bosons, respectively. == In mathematics == In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning, it is called a log-linear model. In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, restricted Boltzmann machine, energy-based models and deep Boltzmann machine. In deep learning, the Boltzmann machine is considered to be one of the unsupervised learning models. As the number of nodes in a Boltzmann machine increases, implementation in real-time applications becomes critically difficult, so a different type of architecture, the restricted Boltzmann machine, was introduced. == In economics == The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries. The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.
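The correspondence with the multinomial logit model can be made concrete: identifying a choice's utility with a negative energy in units of kT, the logit choice probabilities are exactly Boltzmann probabilities. A short sketch with illustrative numbers (the names are mine):

```python
import math

def logit_choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(u_i) / sum_j exp(u_j)."""
    m = max(utilities)  # shift for numerical stability
    w = [math.exp(u - m) for u in utilities]
    s = sum(w)
    return [x / s for x in w]

# Identifying utility with negative energy in units of kT, u_i = -e_i/kT,
# the logit choice probabilities coincide with Boltzmann probabilities.
energies_over_kt = [0.0, 1.0, 2.5]
p_logit = logit_choice_probabilities([-e for e in energies_over_kt])

w = [math.exp(-e) for e in energies_over_kt]
z = sum(w)
p_boltz = [x / z for x in w]

assert all(math.isclose(a, b) for a, b in zip(p_logit, p_boltz))
```

This is the same softmax identity noted earlier, restated in the discrete-choice vocabulary: higher utility (lower energy) means a more probable choice (state).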
== See also == Bose–Einstein statistics Fermi–Dirac statistics Negative temperature Softmax function == References ==
Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy, and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt. Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly, the minerals that are the target of oxidation are pyrite and arsenopyrite. The second category is leaching of sulphide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper.

== Process ==

Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including Acidithiobacillus ferrooxidans (formerly known as Thiobacillus ferrooxidans) and Acidithiobacillus thiooxidans (formerly known as Thiobacillus thiooxidans). As a general principle, in one proposed method of bacterial leaching known as indirect leaching, Fe³⁺ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe³⁺ from Fe²⁺. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS₂) by oxidising the sulfur and metal (in this case ferrous iron, Fe²⁺) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal.

Pyrite leaching (FeS₂): In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe³⁺), which in turn is reduced to give ferrous ion (Fe²⁺):

(1) FeS₂ + 6 Fe³⁺ + 3 H₂O ⟶ 7 Fe²⁺ + S₂O₃²⁻ + 6 H⁺ (spontaneous)

The ferrous ion is then oxidized by bacteria using oxygen:

(2) 4 Fe²⁺ + O₂ + 4 H⁺ ⟶ 4 Fe³⁺ + 2 H₂O (iron oxidizers)

Thiosulfate is also oxidized by bacteria to give sulfate:

(3) S₂O₃²⁻ + 2 O₂ + H₂O ⟶ 2 SO₄²⁻ + 2 H⁺ (sulfur oxidizers)

The ferric ion produced in reaction (2) oxidizes more sulfide as in reaction (1), closing the cycle and giving the net reaction:

(4) 2 FeS₂ + 7 O₂ + 2 H₂O ⟶ 2 Fe²⁺ + 4 SO₄²⁻ + 4 H⁺

The net products of the reaction are soluble ferrous sulfate and sulfuric acid.

The microbial oxidation process occurs at the cell membrane of the bacteria. The electrons pass into the cells and are used in biochemical processes to produce energy for the bacteria while reducing oxygen to water. The critical reaction is the oxidation of sulfide by ferric iron. The main role of the bacterial step is the regeneration of this reactant.

The process for copper is very similar, but the efficiency and kinetics depend on the copper mineralogy. The most efficient minerals are supergene minerals such as chalcocite (Cu₂S) and covellite (CuS). The main copper mineral chalcopyrite (CuFeS₂) is not leached very efficiently, which is why the dominant copper-producing technology remains flotation, followed by smelting and refining. The leaching of CuFeS₂ follows
{ "page_id": 4111, "source": null, "title": "Bioleaching" }
the two stages of being dissolved and then further oxidised, with Cu²⁺ ions being left in solution.

Chalcopyrite leaching:

(1) CuFeS₂ + 4 Fe³⁺ ⟶ Cu²⁺ + 5 Fe²⁺ + 2 S⁰ (spontaneous)

(2) 4 Fe²⁺ + O₂ + 4 H⁺ ⟶ 4 Fe³⁺ + 2 H₂O (iron oxidizers)

(3) 2 S⁰ + 3 O₂ + 2 H₂O ⟶ 2 SO₄²⁻ + 4 H⁺ (sulfur oxidizers)

Net reaction:

(4) CuFeS₂ + 4 O₂ ⟶ Cu²⁺ + Fe²⁺ + 2 SO₄²⁻

In general, sulfides are first oxidized to elemental sulfur, whereas disulfides are oxidized to give thiosulfate, and the processes above can be applied to other sulfidic ores. Bioleaching of non-sulfidic ores such as pitchblende also uses ferric iron as an oxidant (e.g., UO₂ + 2 Fe³⁺ ⟶ UO₂²⁺ + 2 Fe²⁺). In this case, the sole purpose of the bacterial step is the regeneration of Fe³⁺. Sulfidic iron ores can be added to speed up the process and provide a source of iron. Bioleaching of non-sulfidic ores by layering of waste sulfides and elemental sulfur, colonized by Acidithiobacillus spp., has been accomplished, which provides a strategy for accelerated leaching of materials that do not contain sulfide minerals.

== Further processing ==

The dissolved copper (Cu²⁺) ions are removed from the solution by ligand exchange solvent extraction,
which leaves other ions in the solution. The copper is removed by bonding to a ligand, which is a large molecule consisting of a number of smaller groups, each possessing a lone electron pair. The ligand-copper complex is extracted from the solution using an organic solvent such as kerosene: Cu2+(aq) + 2LH(organic) → CuL2(organic) + 2H+(aq) The ligand donates electrons to the copper, producing a complex - a central metal atom (copper) bonded to the ligand. Because this complex has no charge, it is no longer attracted to polar water molecules and dissolves in the kerosene, which is then easily separated from the solution. Because the initial reaction is reversible, its direction is determined by pH. Adding concentrated acid reverses the equation, and the copper ions go back into an aqueous solution. Then the copper is passed through an electro-winning process to increase its purity: An electric current is passed through the resulting solution of copper ions. Because copper ions have a 2+ charge, they are attracted to the negative cathodes and collect there. The copper can also be concentrated and separated by displacing the copper with Fe from scrap iron: Cu2+(aq) + Fe(s) → Cu(s) + Fe2+(aq) The electrons lost by the iron are taken up by the copper. Copper is the oxidising agent (it accepts electrons), and iron is the reducing agent (it loses electrons). Traces of precious metals such as gold may be left in the original solution. Treating the mixture with sodium cyanide in the presence of free oxygen dissolves the gold. The gold is removed from the solution by adsorbing it (taking it up on the surface) onto charcoal. == With fungi == Several species of fungi can be used for bioleaching. Fungi can be grown on many different substrates, such as electronic scrap, catalytic converters, and
fly ash from municipal waste incineration. Experiments have shown that two fungal strains (Aspergillus niger, Penicillium simplicissimum) were able to mobilize Cu and Sn by 65%, and Al, Ni, Pb, and Zn by more than 95%. Aspergillus niger can produce organic acids such as citric acid, so this form of leaching does not rely on microbial oxidation of the metal but rather uses microbial metabolism as a source of acids that directly dissolve the metal. == Feasibility == === Economic feasibility === Bioleaching is in general simpler, and therefore cheaper, to operate and maintain than traditional processes, since fewer specialists are needed to run complex chemical plants. Low metal concentrations are not a problem for the bacteria, which simply ignore the waste that surrounds the metals, attaining extraction yields of over 90% in some cases. These microorganisms actually gain energy by breaking down minerals into their constituent elements; the company simply collects the ions out of the solution after the bacteria have finished. Bioleaching can be used to extract metals from low-concentration ores, such as gold ores, that are too poor for other technologies, and it can partially replace the extensive crushing and grinding that makes a conventional process prohibitively costly and energy-intensive, because the lower cost of bacterial leaching outweighs the time it takes to extract the metal. High-concentration ores, such as copper ores, are more economical to smelt than to bioleach, owing to the slow speed of the bacterial leaching process compared to smelting; this slowness introduces a significant delay in cash flow for new mines. Nonetheless, at the largest copper mine in the world, Escondida in Chile, the process seems to be favorable. Setting up a bioleaching operation is also very expensive, and many companies, once started, cannot keep up with demand and
end up in debt. === In space === In 2020, scientists showed, in an experiment with different gravity environments on the ISS, that microorganisms could be employed to mine useful elements from basaltic rocks via bioleaching in space. == Environmental impact == The process is more environmentally friendly than traditional extraction methods. For the company, this can translate into profit, since the necessary limiting of sulfur dioxide emissions during smelting is expensive. Less landscape damage occurs, since the bacteria involved grow naturally, and the mine and surrounding area can be left relatively untouched. As the bacteria breed in the conditions of the mine, they are easily cultivated and recycled. Toxic chemicals are nevertheless sometimes produced in the process. Sulfuric acid and H+ ions that have been formed can leak into the ground and surface water, turning it acidic and causing environmental damage. Heavy-metal ions such as iron, zinc, and arsenic leak during acid mine drainage. When the pH of this solution rises, as a result of dilution by fresh water, these ions precipitate, forming "Yellow Boy" pollution. For these reasons, a bioleaching setup must be carefully planned, since the process can lead to a biosafety failure. Unlike other methods, bioheap leaching cannot be quickly stopped once started, because leaching would still continue with rainwater and natural bacteria. Projects like the Finnish Talvivaara mine proved to be environmentally and economically disastrous. == See also == Phytomining == References == == Further reading == T. A. Fowler and F. K. Crundwell – "Leaching of zinc sulfide with Thiobacillus ferrooxidans" Brandl H. (2001) "Microbial leaching of metals". In: Rehm H. J. (ed.) Biotechnology, Vol. 10. Wiley-VCH, Weinheim, pp. 191–224 Watling, H. R. (2006). "The bioleaching of sulphide minerals with emphasis on copper sulphides — A review". Hydrometallurgy. 84 (1–2): 81. Bibcode:2006HydMe..84...81W. doi:10.1016/j.hydromet.2006.05.001. Olson, G. J.; Brierley,
J. A.; Brierley, C. L. (2003). "Bioleaching review part B". Applied Microbiology and Biotechnology. 63 (3): 249–57. doi:10.1007/s00253-003-1404-6. PMID 14566430. S2CID 24078490. Rohwerder, T.; Gehrke, T.; Kinzler, K.; Sand, W. (2003). "Bioleaching review part A". Applied Microbiology and Biotechnology. 63 (3): 239–248. doi:10.1007/s00253-003-1448-7. PMID 14566432. S2CID 25547087.
KREEP, an acronym built from the letters K (the atomic symbol for potassium), REE (rare-earth elements), and P (for phosphorus), is a geochemical component of some lunar impact breccias and basaltic rocks. Its most significant feature is an enhanced concentration of the majority of so-called "incompatible" elements (those that are concentrated in the liquid phase during magma crystallization) and of the heat-producing elements, namely radioactive uranium, thorium, and potassium (due to the presence of radioactive 40K). == Typical composition == The typical composition of KREEP includes about one percent, by mass, of potassium and phosphorus oxides, 20 to 25 parts per million of rubidium, and a concentration of the element lanthanum 300 to 350 times that found in carbonaceous chondrites. Most of the potassium, phosphorus, and rare-earth elements in KREEP basalts are incorporated in grains of the phosphate minerals apatite and merrillite. == Possible origin == Indirectly, it has been deduced that the origin of KREEP is tied to the origin of the Moon, which is now commonly thought to be the result of a rocky object the size of Mars striking the Earth about 4.5 billion (4.5×10⁹) years ago. This collision threw a large amount of broken rock into orbit around the Earth, which ultimately gathered together to form the Moon. Given the high energy such a collision would involve, it has been deduced that a large portion of the Moon would have been liquefied, forming a lunar magma ocean. As the crystallization of this liquid rock proceeded, minerals such as olivine and pyroxene precipitated and sank to the bottom to form the lunar mantle. After solidification was about 75% complete, the mineral anorthositic plagioclase began to crystallize, and because of its low density it floated, forming a solid crust. Hence, elements that
{ "page_id": 200719, "source": null, "title": "KREEP" }
are usually incompatible (i.e., those that usually partition in the liquid phase) would have been progressively concentrated into the magma. Thus a KREEP-rich magma was formed that was sandwiched at first between the crust and mantle. The evidence for these processes comes from the highly anorthositic composition of the crust of the lunar highlands, as well as the presence of the rocks rich in KREEP. == Lunar Prospector measurements == Before the Lunar Prospector mission, it was commonly thought that these KREEP materials had been formed in a widespread layer beneath the crust. However, the measurements from the gamma-ray spectrometer on board this satellite showed that the KREEP-containing rocks are primarily concentrated underneath the Oceanus Procellarum and the Mare Imbrium, a unique lunar geological province that is now known as the Procellarum KREEP Terrane. Basins far from this province that dug deeply into the crust (and possibly the mantle), such as the Mare Crisium, the Mare Orientale, and the South Pole–Aitken basin, show little or no enhancement of KREEP within their rims or ejecta. The enhancement of heat-producing radioactive elements within the crust (and/or the mantle) of the Procellarum KREEP Terrane is almost certainly responsible for the longevity and intensity of mare volcanism on the nearside of the Moon. == Possible use in lunar colonization == KREEP might be of interest in lunar mining if a lunar base were to be established. Potassium and phosphorus are important for plant growth (NPK fertilizer is used on Earth), whereas uranium and thorium are potential fuels for nuclear power. However, the relatively low concentrations of the desired materials compared to earthbound ores may make extraction difficult. == See also == Geology of the Moon Lunar mare Lunar Prospector Moon Lunar resources == References == == External links ==
Moon articles in Planetary Science Research Discoveries, including articles about KREEP
A viability assay is an assay designed to determine the ability of organs, cells, or tissues to maintain or recover a state of survival. Viability can be distinguished from the all-or-nothing states of life and death by the use of a quantifiable index that ranges between 0 and 1 or, equivalently, between 0% and 100%. Viability can be observed through the physical properties of cells, tissues, and organs, including mechanical activity; motility, as with spermatozoa and granulocytes; the contraction of muscle tissue or cells; mitotic activity; and more. Viability assays provide a more precise basis for measuring an organism's level of vitality and can lead to more findings than the simple distinction between living and nonliving. These techniques can be used to assess the success of cell culture or cryopreservation techniques, the toxicity of substances, or the effectiveness of substances in mitigating the effects of toxic substances. == Common methods == Though simple visual observation of viability can be useful, it is difficult to measure the viability of an organism or its parts thoroughly from physical properties alone. A variety of common assay protocols are therefore used for more rigorous observation of viability. Tetrazolium reduction: One useful way to locate and measure viability is a tetrazolium reduction assay, in which metabolically active cells reduce a tetrazolium salt to a colored formazan product, distinguishing viable cells in a specimen. Resazurin reduction: Resazurin reduction assays work much like tetrazolium assays, except that viable cells reduce the redox dye resazurin to report cell viability. Protease viability marker: One can look at protease function in specimens to
{ "page_id": 28643345, "source": null, "title": "Viability assay" }
target viability in cells; this practice is known as the protease viability marker assay. Protease activity ceases once a cell dies, so this technique draws a clear-cut line in determining cell viability. ATP: ATP is a common energy molecule familiar to many researchers, and ATP assays are a well-known technique for determining the viability of cells by assessing ATP content with the firefly luciferase reaction. Sodium–potassium ratio: Another kind of assay examines the ratio of potassium to sodium in cells as an index of viability. If the cells do not have high intracellular potassium and low intracellular sodium, then (1) the cell membrane may not be intact, and/or (2) the sodium–potassium pump may not be operating well. Cytolysis or membrane leakage: This category includes the lactate dehydrogenase assay. Such assays detect a stable enzyme common to all cells that is readily released when cell membranes are no longer intact. Examples of this type of assay include propidium iodide, trypan blue, and 7-aminoactinomycin D (7-AAD). Mitochondrial activity or caspase: Resazurin and formazan (MTT/XTT) can assay for various stages of the apoptosis process that foreshadow cell death. Functional: Assays of cell function are highly specific to the types of cells being assayed. For example, motility is a widely used assay of sperm cell function. Gamete survival can generally be used to assay fertility. Red blood cells have been assayed in terms of deformability, osmotic fragility, hemolysis, ATP level, and hemoglobin content. For transplantable whole organs, the ultimate assay is the ability to sustain life after transplantation, an assay which is not helpful in preventing transplantation of non-functional organs. Genomic
and proteomic: Cells can be assayed for activation of stress pathways using DNA microarrays and protein chips. Flow cytometry: Automation allows for analysis of thousands of cells per second. As with many kinds of viability assays, quantitative measures of physiological function do not indicate whether damage repair and recovery is possible. An assay of the ability of a cell line to adhere and divide may be more indicative of incipient damage than membrane integrity. === Frogging and tadpoling === "Frogging" is a viability assay method in which serial dilutions prepared in liquid are pinned onto an agar plate. Its limitations include that it does not account for total viability and that it is not particularly sensitive for low-viability samples; however, it is known for its speed. "Tadpoling", a method developed after "frogging", is similar, but the test cells are diluted in liquid and then kept in liquid throughout the examination. Tadpoling can measure culture viability accurately, which is its main distinction from frogging. == List of viability assay methods == Calcein AM Clonogenic assay Ethidium homodimer assay Evans blue Fluorescein diacetate hydrolysis/Propidium iodide staining (FDA/PI staining) Flow cytometry Formazan-based assays (MTT/XTT) Green fluorescent protein Lactate dehydrogenase (LDH) Methyl violet Neutral red uptake (vital stain) Propidium iodide, DNA stain that can differentiate necrotic, apoptotic and normal cells. Resazurin TUNEL assay == See also == Cytotoxicity Vital stain == References == == Further reading ==
Hydroperoxides or peroxols are compounds of the form ROOH, containing the hydroperoxy functional group (−OOH), where R stands for any group, typically organic. Hydroperoxide also refers to the hydroperoxide anion (−OOH) and its salts; the neutral hydroperoxyl radical (•OOH) consists of an unbonded hydroperoxy group. When R is organic, the compounds are called organic hydroperoxides. Such compounds are a subset of organic peroxides, which have the formula ROOR. Organic hydroperoxides can either intentionally or unintentionally initiate explosive polymerisation in materials with unsaturated chemical bonds. == Properties == The O−O bond length in peroxides is about 1.45 Å, and the R−O−O angles (R = H, C) are about 110° (water-like). Characteristically, the C−O−O−H dihedral angles are about 120°. The O−O bond is relatively weak, with a bond dissociation energy of 45–50 kcal/mol (190–210 kJ/mol), less than half the strengths of C−C, C−H, and C−O bonds. Hydroperoxides are typically more volatile than the corresponding alcohols: tert-BuOOH (b.p. 36 °C) vs tert-BuOH (b.p. 82–83 °C) CH3OOH (b.p. 46 °C) vs CH3OH (b.p. 65 °C) cumene hydroperoxide (b.p. 153 °C) vs cumyl alcohol (b.p. 202 °C) == Miscellaneous reactions == Hydroperoxides are mildly acidic, with pKa values ranging from 11.5 for CH3OOH to 13.1 for Ph3COOH. Hydroperoxides can be reduced to alcohols with lithium aluminium hydride, as described in this idealized equation: 4 ROOH + LiAlH4 → LiAlO2 + 2 H2O + 4 ROH This reaction is the basis of methods for the analysis of organic peroxides. Another way to evaluate the content of peracids and peroxides is volumetric titration with alkoxides such as sodium ethoxide. Phosphite esters and tertiary phosphines also effect reduction: ROOH + PR3 → OPR3 + ROH == Uses == === Precursors to epoxides === "The single most important synthetic application of alkyl hydroperoxides is without doubt
{ "page_id": 3411988, "source": null, "title": "Hydroperoxide" }
the metal-catalysed epoxidation of alkenes." In the Halcon process, tert-butyl hydroperoxide (TBHP) is employed for the production of propylene oxide. Of specialized interest, chiral epoxides are prepared using hydroperoxides as reagents in the Sharpless epoxidation. === Production of cyclohexanone and caprolactone === Hydroperoxides are intermediates in the production of many organic compounds in industry, for example in the cobalt-catalyzed oxidation of cyclohexane to cyclohexanone: C6H12 + O2 → (CH2)5C=O + H2O Drying oils, as found in many paints and varnishes, function via the formation of hydroperoxides. ==== Hock processes ==== Compounds with allylic and benzylic C−H bonds are especially susceptible to oxygenation. Such reactivity is exploited industrially on a large scale for the production of phenol by the cumene process, or Hock process, named for its cumene and cumene hydroperoxide intermediates. Such reactions rely on radical initiators that react with oxygen to form an intermediate that abstracts a hydrogen atom from a weak C−H bond. The resulting radical binds O2 to give hydroperoxyl (ROO•), which then continues the cycle of H-atom abstraction. == Formation == === By autoxidation === The most important peroxides, in a commercial sense, are produced by autoxidation, the direct reaction of O2 with a hydrocarbon. Autoxidation is a radical reaction that begins with the abstraction of an H atom from a relatively weak C−H bond. Important compounds made in this way include tert-butyl hydroperoxide, cumene hydroperoxide and ethylbenzene hydroperoxide: R−H + O2 → R−OOH Autoxidation is also observed with common ethers, such as diethyl ether, diisopropyl ether, tetrahydrofuran, and 1,4-dioxane. An illustrative product is diethyl ether peroxide. Such compounds can result in a serious explosion when distilled. To minimize this problem, commercial samples of THF are often inhibited with butylated hydroxytoluene (BHT).
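The radical chemistry in this section traces back to the weak O−O bond quoted under Properties. As a quick unit check, the 45–50 kcal/mol dissociation energy converts to the quoted 190–210 kJ/mol range (a minimal sketch; 1 kcal = 4.184 kJ, the thermochemical calorie):

```python
# Convert the O-O bond dissociation energies quoted under Properties
# (45-50 kcal/mol) into kJ/mol, confirming the 190-210 kJ/mol figure.
KCAL_TO_KJ = 4.184  # thermochemical calorie

def kcal_to_kj(kcal_per_mol: float) -> float:
    """Convert a molar energy from kcal/mol to kJ/mol."""
    return kcal_per_mol * KCAL_TO_KJ

low, high = kcal_to_kj(45), kcal_to_kj(50)
print(round(low, 1), round(high, 1))  # 188.3 209.2
```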
Distillation of THF to dryness is avoided because the explosive peroxides concentrate in the
residue. Although ether hydroperoxides often form adventitiously (i.e., by autoxidation), they can be prepared in high yield by the acid-catalyzed addition of hydrogen peroxide to vinyl ethers: C2H5OCH=CH2 + H2O2 → C2H5OCH(OOH)CH3 === From hydrogen peroxide === Many industrial peroxides are produced using hydrogen peroxide. Reactions with aldehydes and ketones yield a series of compounds depending on conditions. Specific reactions include addition of hydrogen peroxide across the C=O double bond: R2C=O + H2O2 → R2C(OH)OOH In some cases, these hydroperoxides convert to give cyclic diperoxides: [R2C(O2H)]2O2 → [R2C]2(O2)2 + 2 H2O Addition of this initial adduct to a second equivalent of the carbonyl: R2C=O + R2C(OH)OOH → [R2C(OH)]2O2 Further replacement of alcohol groups: [R2C(OH)]2O2 + 2 H2O2 → [R2C(O2H)]2O2 + 2 H2O Triphenylmethanol reacts with hydrogen peroxide to give the unusually stable hydroperoxide Ph3COOH. === Naturally occurring hydroperoxides === Many hydroperoxides are derived from fatty acids, steroids, and terpenes. The biosynthesis of these species is effected extensively by enzymes. == Inorganic hydroperoxides == Although hydroperoxide often refers to a class of organic compounds, many inorganic or metallo-organic compounds are hydroperoxides. One example involves sodium perborate, a commercially important bleaching agent with the formula Na2[[(HO)2B]2(O2)2]. It acts by hydrolysis to give a boron hydroperoxide: [[(HO)2B]2(O2)2]2− + 2 H2O ⇌ 2 [(HO)3B(OOH)]− This hydroperoxide then releases hydrogen peroxide: [(HO)3B(OOH)]− + H2O ⇌ B(OH)4− + H2O2 Several metal hydroperoxide complexes have been characterized by X-ray crystallography; for example, triphenylsilicon and triphenylgermanium hydroperoxides can be obtained by reaction of the corresponding chlorides with an excess of hydrogen peroxide in the presence of base.
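Returning to the acidity figures given under Miscellaneous reactions, the quoted pKa values determine how much of a hydroperoxide exists as its conjugate anion at a given pH. A minimal sketch using the Henderson–Hasselbalch relation (the pH of 12.0 is an illustrative choice, not a value from the text):

```python
# Fraction of a hydroperoxide ROOH deprotonated to ROO- at a given pH,
# from the Henderson-Hasselbalch relation: f = 1 / (1 + 10**(pKa - pH)).
# The pKa values are those quoted in the text; pH 12.0 is illustrative.
def anion_fraction(pKa: float, pH: float) -> float:
    """Equilibrium fraction present as the conjugate base ROO-."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for name, pKa in [("CH3OOH", 11.5), ("Ph3COOH", 13.1)]:
    print(f"{name}: {anion_fraction(pKa, 12.0):.1%} anionic at pH 12")
```

At strongly alkaline pH both compounds are largely deprotonated, which is consistent with the volumetric titration by alkoxide bases mentioned above.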
Some form by the reaction of metal hydrides with oxygen gas: LnM−H + O2 → LnM−O−O−H (Ln refers to the other ligands bound to the metal) Some transition metal dioxygen complexes abstract H atoms (and sometimes protons) to give hydroperoxides: LnM(O2) + H• → LnMOOH == References ==
The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. Various cosmological models based on the Big Bang concept explain a broad range of phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The uniformity of the universe, known as the horizon and flatness problems, is explained through cosmic inflation: a phase of accelerated expansion during the earliest stages. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.787±0.020 billion years ago, which is considered the age of the universe. Extrapolating this cosmic expansion backward in time using the known laws of physics, the models describe an extraordinarily hot and dense primordial universe. Physics lacks a widely accepted theory that can model the earliest conditions of the Big Bang. As the universe expanded, it cooled sufficiently to allow the formation of subatomic particles, and later atoms. These primordial elements—mostly hydrogen, with some helium and lithium—then coalesced under the force of gravity aided by dark matter, forming early stars and galaxies. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to a concept called dark energy. The concept of an expanding universe was first developed scientifically by the physicist Alexander Friedmann in 1922 with the mathematical derivation of the Friedmann equations. The earliest empirical observation of an expanding universe is known as Hubble's law, published in work by physicist Edwin Hubble in 1929, which showed that galaxies are receding from Earth at speeds proportional to their distance.
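The age figure above can be recovered by integrating the Friedmann equation for a flat universe. A minimal numerical sketch, assuming round present-day parameters (H0 ≈ 67.4 (km/s)/Mpc, matter density Ωm ≈ 0.32 including dark matter, dark energy ΩΛ ≈ 0.68) and neglecting radiation:

```python
import math

# Flat-universe Friedmann equation, H(a) = H0 * sqrt(Om/a**3 + OL),
# integrated to give the age of the universe:
#   t0 = integral from a=0 to a=1 of da / (a * H(a)).
# Parameter values are assumed round numbers, not precise fits.
H0_KM_S_MPC = 67.4      # assumed Hubble constant, (km/s)/Mpc
MPC_KM = 3.0857e19      # kilometres in one megaparsec
GYR_S = 3.156e16        # seconds in one gigayear
OM, OL = 0.32, 0.68     # matter (incl. dark matter) and dark energy

def age_gyr(n_steps: int = 100_000) -> float:
    """Age of a flat matter + dark-energy universe in Gyr (midpoint rule)."""
    H0 = H0_KM_S_MPC / MPC_KM          # convert to 1/s
    da = 1.0 / n_steps
    total = 0.0
    for i in range(n_steps):
        a = (i + 0.5) * da             # midpoint of each scale-factor step
        H = H0 * math.sqrt(OM / a**3 + OL)
        total += da / (a * H)
    return total / GYR_S

print(round(age_gyr(), 2))  # roughly 13.7
```

With these inputs the integration returns roughly 13.7 billion years, close to the quoted 13.787 billion; the small difference reflects the rounded parameters and the neglected radiation term.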
Independent of Friedmann's work, and independent of Hubble's
{ "page_id": 4116, "source": null, "title": "Big Bang" }
observations, physicist Georges Lemaître proposed in 1931 that the universe emerged from a "primeval atom", introducing the modern notion of the Big Bang. In 1964, the CMB was discovered. Over the next few years, measurements showed this radiation to be uniform across the sky, with a spectrum matching that of thermal radiation, both consistent with the Big Bang models' high temperatures and densities in the distant past. By the late 1960s, most cosmologists were convinced that the competing steady-state model of cosmic evolution was incorrect. There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. These include the unequal abundances of matter and antimatter known as baryon asymmetry, the detailed nature of the dark matter surrounding galaxies, and the origin of dark energy. == Features of the models == === Assumptions === Big Bang cosmology models depend on three major assumptions: the universality of physical laws, the cosmological principle, and that the matter content can be modeled as a perfect fluid. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic, appearing the same in all directions regardless of location. A perfect fluid has no viscosity; its pressure is proportional to its density. These ideas were initially taken as postulates, but later efforts were made to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10⁻⁵. The key physical law behind these models, general relativity, has passed stringent tests on the scale of the Solar System and binary stars. The
cosmological principle has been confirmed to a level of 10⁻⁵ via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995. === Expansion prediction === The cosmological principle dramatically simplifies the equations of general relativity, giving the Friedmann–Lemaître–Robertson–Walker metric to describe the geometry of the universe and, with the assumption of a perfect fluid, the Friedmann equations giving the time dependence of that geometry. The only parameter at this level of description is the mass–energy density: the geometry of the universe and its expansion are a direct consequence of its density. All of the major features of Big Bang cosmology are related to these results. === Mass–energy density === In Big Bang cosmology, the mass–energy density controls the shape and evolution of the universe. By combining astronomical observations with known laws of thermodynamics and particle physics, cosmologists have worked out the components of the density over the lifespan of the universe. In the current universe, luminous matter (the stars, planets, and so on) makes up less than 5% of the density; dark matter accounts for 27% and dark energy for the remaining 68%. === Horizons === An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach Earth. This places a limit, or a past horizon, on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant
objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric that describes the expansion of the universe. Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well. === Thermalization === Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to find out whether a process in the very early universe has reached thermal equilibrium is the ratio between the rate of the process (usually rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other. == Timeline == According to the Big Bang models, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling. === Singularity === Existing theories of physics cannot tell us about the moment of the Big Bang. Extrapolation of the expansion of the universe backwards in time using only general relativity yields a gravitational singularity with infinite density and temperature at a finite time in the past, but the meaning of this extrapolation in the context of the Big Bang is unclear. Moreover, classical gravitational theories are expected to be inadequate to describe
physics under these conditions. Quantum gravity effects are expected to be dominant during the Planck epoch, when the temperature of the universe was close to the Planck scale (around 10³² K or 10²⁸ eV). Even below the Planck scale, undiscovered physics could greatly influence the expansion history of the universe. The Standard Model of particle physics is only tested up to temperatures of order 10¹⁷ K (10 TeV) in particle colliders, such as the Large Hadron Collider. Moreover, new physical phenomena decoupled from the Standard Model could have been important before the time of neutrino decoupling, when the temperature of the universe was only about 10¹⁰ K (1 MeV). === Inflation and baryogenesis === The earliest phases of the Big Bang are subject to much speculation, given the lack of available data. In the most common models, the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period up to 10⁻⁴³ seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces (the electromagnetic, strong nuclear, weak nuclear, and gravitational forces) were unified as one. In this stage, the characteristic scale length of the universe was the Planck length, 1.6×10⁻³⁵ m, and the universe consequently had a temperature of approximately 10³² K. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at 10⁻⁴³ seconds, when gravitation separated from the other forces as the universe's temperature fell. At approximately 10⁻³⁷ seconds into the expansion, a phase transition caused a cosmic inflation, during which the universe grew exponentially, unconstrained
by the light-speed invariance, and temperatures dropped by a factor of 100,000. This concept is motivated by the flatness problem, in which the density of matter and energy is very close to the critical density needed to produce a flat universe; that is, the shape of the universe has no overall geometric curvature due to gravitational influence. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were "frozen in" by inflation, becoming amplified into the seeds that would later form the large-scale structure of the universe. At a time around 10⁻³⁶ seconds, the electroweak epoch began when the strong nuclear force separated from the other forces, with only the electromagnetic force and weak nuclear force remaining unified. Inflation stopped locally at around 10⁻³³ to 10⁻³² seconds, with the observable universe's volume having increased by a factor of at least 10⁷⁸. Reheating followed as the inflaton field decayed, until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons, of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe. === Cooling === The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10⁻¹²
seconds. After about 10⁻¹¹ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in 10⁸ of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos). A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei. As the universe cooled, the rest energy density of matter came to gravitationally dominate over that of the photon and neutrino radiation at a time of about 50,000 years. At a time of about 380,000 years, the universe cooled enough that electrons and nuclei combined into neutral atoms (mostly hydrogen) in an event called recombination. This process made the previously opaque universe transparent, and the photons that last scattered during this epoch comprise the cosmic microwave background. === Structure formation === After the recombination epoch, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and
the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter (CDM), warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well fit by a Lambda-CDM model in which dark matter is assumed to be cold. This CDM is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. === Cosmic acceleration === Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space. Observations suggest that 73% of the total energy density of the present-day universe is in this form. When the universe was very young it was likely infused with dark energy, but with everything closer together, gravity predominated, braking the expansion. Eventually, after billions of years of expansion, the declining density of matter relative to the density of dark energy allowed the expansion of the universe to begin to accelerate. Dark energy in its simplest formulation is modeled by a cosmological constant term in the Einstein field equations of general relativity, but its composition and mechanism are unknown. More generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theory. All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the Lambda-CDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would
describe the situation prior to approximately 10⁻¹⁵ seconds. Understanding this earliest of eras in the history of the universe is one of the greatest unsolved problems in physics. == Concept history == === Etymology === English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." However, it did not catch on until the 1970s. It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Helge Kragh writes that the evidence for the claim that it was meant as a pejorative is "unconvincing", and mentions a number of indications that it was not a pejorative. A primordial singularity is sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase. The term itself has been argued to be a misnomer because it evokes an explosion. The argument is that whereas an explosion suggests expansion into a surrounding space, the Big Bang only describes the intrinsic expansion of the contents of the universe. Another issue pointed out by Santhosh Mathew is that bang implies sound, which is not an important feature of the model. However, an attempt to find a more suitable alternative was not successful. According to Timothy Ferris: The term 'big bang' was coined with derisive intent by Fred Hoyle, and its endurance testifies to Sir Fred's creativity and wit. Indeed, the term survived an international competition in which three judges
— the television science reporter Hugh Downs, the astronomer Carl Sagan, and myself — sifted through 13,099 entries from 41 countries and concluded that none was apt enough to replace it. No winner was declared, and like it or not, we are stuck with 'big bang'. === Before the name === Early cosmological models developed from observations of the structure of the universe and from theoretical considerations. In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding in contrast to the static universe model advocated by Albert Einstein at that time. In 1924, American astronomer Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Starting that same year, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the 100-inch (2.5 m) Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recessional velocity—now known as Hubble's law. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the recession of the nebulae was due to the expansion of the universe. He inferred the relation that Hubble would later observe, given the cosmological
principle. In 1931, Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence. In the 1920s and 1930s, almost every major cosmologist preferred an eternal steady-state universe, and several complained that the beginning of time implied by an expanding universe imported religious concepts into physics; this objection was later repeated by supporters of the steady-state theory. This perception was enhanced by the fact that the originator of the expanding universe concept, Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, disagreed: If the world has begun with a single quantum, the notions of space and time would altogether fail to have any meaning at the beginning; they would only begin to have a sensible meaning when the original quantum had been divided into a sufficient number of quanta. If this suggestion is correct, the beginning of the world happened a little before the beginning of space and time. During the 1930s, other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard C. Tolman) and Fritz Zwicky's tired light hypothesis. After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady-state model, whereby new matter would be created as the universe seemed to expand. In this model the universe
is roughly the same at any point in time. The other was Lemaître's expanding universe theory, advocated and developed by George Gamow, who used it to develop a theory for the abundance of chemical elements in the universe, and whose associates, Ralph Alpher and Robert Herman, predicted the cosmic background radiation. === As a named model === Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over steady state. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe. In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers where they showed that mathematical singularities were an inevitable initial condition of relativistic models of the Big Bang. Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a breakthrough in theoretical work on resolving certain outstanding theoretical problems in the Big Bang models with the introduction of an epoch of rapid expansion in the early universe he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were over the precise values of the Hubble constant and the matter density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe). Significant progress in Big Bang cosmology has been made since the late 1990s as a result of
advances in telescope technology as well as the analysis of data from satellites such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating. == Observational evidence == The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the cosmic microwave background, large-scale structure, and Hubble's law. The earliest and most direct observational evidence of the validity of the theory are the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), discovery and measurement of the cosmic microwave background and the relative abundances of light elements produced by Big Bang nucleosynthesis (BBN). More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang models. Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are unsolved problems in physics. === Hubble's law and the expansion of the universe === Observations
of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed: v = H₀D, where v is the recessional velocity of the galaxy or other distant object, D is the proper distance to the object, and H₀ is Hubble's constant, measured by WMAP to be 70.4 (+1.3, −1.4) km/s/Mpc. At this rate, for example, a galaxy at a proper distance of 100 Mpc recedes at roughly 7,000 km/s. Hubble's law implies that the universe is uniformly expanding everywhere. This cosmic expansion was predicted from general relativity by Friedmann in 1922 and Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang model as developed by Friedmann, Lemaître, Robertson, and Walker. The theory requires the relation v = HD to hold at all times, where D is the proper distance, v is the recessional velocity, and v, H, and D all vary as the universe expands (hence we write H₀ to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of
as the Doppler shift corresponding to the recession velocity v. For distances comparable to the size of the observable universe, the attribution of the cosmological redshift becomes more ambiguous, although its interpretation as a kinematic Doppler shift remains the most natural one. An unexplained discrepancy in the determination of the Hubble constant is known as the Hubble tension. Techniques based on observation of the CMB suggest a lower value of this constant compared to the quantity derived from measurements based on the cosmic distance ladder. === Cosmic microwave background radiation === In 1964, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the Big Bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s, the radiation was found to be approximately consistent with a blackbody spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded the 1978 Nobel Prize in Physics. The surface of last scattering corresponding to emission of the CMB occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles. At around 372±14 kyr, the mean free path for a photon became long enough to reach the present day and the universe became transparent. In 1989, NASA launched COBE, which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10⁴, and measured a residual temperature
of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10⁵. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results. During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001, several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies. In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon-based cosmic microwave background experiments are ongoing. === Abundance of primordial elements === Using Big Bang models, it is possible to calculate the expected concentration of the isotopes helium-4 (4He), helium-3 (3He), deuterium (2H), and lithium-7 (7Li) in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by abundance) are about 0.25 for 4He:H, about 10⁻³ for 2H:H, about 10⁻⁴ for 3He:H, and about 10⁻⁹ for 7Li:H. The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant
for 4He, and off by a factor of two for 7Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products, should have more helium than deuterium or more deuterium than 3He, and in constant ratios, too.: 182–185 === Galactic evolution and distribution === Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current Big Bang models. A combination of observations and theory suggest that the first quasars and galaxies formed within a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters. Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures, agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory. === Primordial gas clouds === In 2011, astronomers found what they believe
to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. Although the analysis was sensitive to carbon, oxygen, and silicon, these three elements were not detected in these two clouds. Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN. === Other lines of evidence === The age of the universe as estimated from the Hubble expansion and the CMB is now in agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. It is also in agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all estimates agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concern as to whether these objects had enough time to form so early in the ΛCDM model. The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic
time, making precise measurements difficult. === Future observations === Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe from less than a second after the Big Bang. == Problems and related issues in physics == As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang models. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflation theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven. What follows is a list of the mysterious aspects of the Big Bang concept still under intense investigation by cosmologists and astrophysicists. === Baryon asymmetry === It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. Both matter and antimatter were much more abundant than today, with a tiny asymmetry of only one part in 10 billion. The matter and antimatter collided and annihilated, leaving only the residual amount of matter. Today observations suggest that the universe, including its most distant parts, is made almost entirely of normal matter with very little antimatter. If matter and antimatter were in complete symmetry, then annihilation would result in only photons and virtually no matter at all, which is obviously not what is observed. A process called baryogenesis was hypothesized to
account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-symmetry and CP-symmetry are violated, and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry. === Dark energy === Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, cosmological models require that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure (baryon acoustic oscillations) as a cosmic ruler. Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than
1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant. The dark energy component of the universe has been explained by theorists using a variety of competing theories including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. A cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy, and the one naively predicted from Planck units. === Dark matter === During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the anisotropies in the CMB, the galaxy rotation problem, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters. Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter
particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway. Additionally, there are outstanding problems associated with the currently favored cold dark matter model which include the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations. === Horizon problem === The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature.: 191 A resolution to this apparent inconsistency is offered by inflation theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation.: 180 Heisenberg's uncertainty principle predicts that during the inflationary
phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. These fluctuations served as the seeds for all the current structures in the universe.: 207 Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been confirmed by measurements of the CMB.: sec 6 A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended. === Magnetic monopoles === The magnetic monopole objection was raised in the late 1970s. Grand unified theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness. === Flatness problem === The flatness problem (also known as the oldness problem) is an observational problem associated with the FLRW metric. The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be flat. Observations indicate the universe is consistent with being flat. The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness
might be the Planck time, 10^−43 seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10^14 of its critical value, or it would not exist as it does today.

== Misconceptions ==

One of the common misconceptions about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at the Big Bang is described, it refers to the size of the observable universe, and not the entire universe. Another common misconception is that the Big Bang must be understood as the expansion of space and not in terms of the contents of space exploding apart. In fact, either description can be accurate. The expansion of space (implied by the FLRW metric) is only a mathematical convention, corresponding to a choice of coordinates on spacetime. There is no generally covariant sense in which space expands. The recession speeds associated with Hubble's law are not velocities in a relativistic sense (for example, they are not related to the spatial components of 4-velocities). Therefore, it is not remarkable that according to Hubble's law, galaxies farther than the Hubble distance recede faster than the speed of light. Such recession speeds do not correspond to faster-than-light travel. Many popular accounts attribute the cosmological redshift to the expansion
of space. This can be misleading because the expansion of space is only a coordinate choice. The most natural interpretation of the cosmological redshift is that it is a Doppler shift.

== Implications ==

Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture.

=== Pre–Big Bang cosmology ===

The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom" while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, if the laws of nature were to come into existence in a random way, inflation models show that some combinations of them are far more probable than others, partly explaining why our universe is rather stable. Another possible explanation for the stability of the universe is a hypothetical multiverse, which assumes that every possible universe exists and that thinking species can emerge only in those universes stable enough to support them. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created. The Big Bang theory is built upon the equations of classical general relativity, which are not expected to be valid at the origin of cosmic time, as the temperature of the universe approaches the Planck scale. Correcting this will require the development of a correct treatment
of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang. While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony". Some speculative proposals in this regard, each of which entails untested hypotheses, are: The simplest models, in which the Big Bang was caused by quantum fluctuations. That scenario had very little chance of happening, but, according to the totalitarian principle, even the most improbable event will eventually happen. It took place instantly, in our perspective, due to the absence of perceived time before the Big Bang. Emergent Universe models, which feature a low-activity past-eternal era before the Big Bang, resembling ancient ideas of a cosmic egg and birth of the world out of primordial chaos. Models in which the whole of spacetime is finite, including the Hartle–Hawking no-boundary condition. For these cases, the Big Bang does represent the limit of time but without a singularity. In such a case, the universe is self-sufficient. Brane cosmology models, in which inflation is due to the movement of branes in string theory; the pre-Big Bang model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic model, a variant of the ekpyrotic model in which collisions occur periodically. In the latter model the Big Bang was preceded by a Big Crunch and the universe cycles from one process to the other. Eternal inflation, in which universal inflation ends locally here and there in a random fashion, each end-point leading to a bubble
universe, expanding from its own big bang. This is sometimes referred to as pre-big bang inflation. Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse. === Ultimate fate of the universe === Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain
together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip. === Religious and philosophical interpretations === As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous. == See also == == Notes == == References == === Citations === === Bibliography === === Further reading === Alpher, Ralph A.; Herman, Robert (August 1988). "Reflections on Early Work on 'Big Bang' Cosmology". Physics Today. 41 (8): 24–34. Bibcode:1988PhT....41h..24A. doi:10.1063/1.881126. Barrow, John D. (1994). The Origin of the Universe. Science Masters. London: Weidenfeld & Nicolson. ISBN 978-0-297-81497-9. LCCN 94006343. OCLC 490957073. Block, David L. (2012). "Georges Lemaître and Stigler's Law of Eponymy". In Holder, Rodney D.; Mitton, Simon (eds.). A Hubble Eclipse: Lemaitre and Censorship. Astrophysics and Space Science Library. Vol. 395. Heidelberg; New York: Springer. pp. 89–96. arXiv:1106.3928v2. Bibcode:2012ASSL..395...89B. doi:10.1007/978-3-642-32254-9_8. ISBN 978-3-642-32253-2. LCCN 2012956159. OCLC 839779611. S2CID 119205665. Davies, Paul (1992). The Mind of God: The Scientific Basis for a Rational World. New York: Simon & Schuster. ISBN 978-0-671-71069-9. LCCN 91028606. OCLC 59940452. Farrell, John (2005). The Day Without Yesterday: Lemaître, Einstein, and the Birth of Modern Cosmology. New York: Thunder's Mouth Press. ISBN 978-1-56025-660-1. LCCN 2006272995. OCLC 61672162. d'Inverno, Ray (1992). Introducing Einstein's Relativity. 
Oxford, UK; New York: Clarendon Press; Oxford University Press. ISBN 978-0-19-859686-8. LCCN 91024894. OCLC 554124256. Lineweaver, Charles
H.; Davis, Tamara M. (March 2005). "Misconceptions about the Big Bang" (PDF). Scientific American. Vol. 292, no. 3. pp. 36–45. Archived (PDF) from the original on 9 October 2019. Retrieved 23 December 2019. Martínez-Delgado, David, ed. (2013). Local Group Cosmology. Cambridge, UK: Cambridge University Press. ISBN 978-1-107-02380-2. LCCN 2013012345. OCLC 875920635. "Lectures presented at the XX Canary Islands Winter School of Astrophysics, held in Tenerife, Spain, November 17–18, 2008." Mather, John C.; Boslough, John (1996). The Very First Light: The True Inside Story of the Scientific Journey Back to the Dawn of the Universe (1st ed.). New York: Basic Books. ISBN 978-0-465-01575-7. LCCN 96010781. OCLC 34357391. Riordan, Michael; Zajc, William A. (May 2006). "The First Few Microseconds" (PDF). Scientific American. Vol. 294, no. 5. pp. 34–41. Bibcode:2006SciAm.294e..34R. doi:10.1038/scientificamerican0506-34a. Archived (PDF) from the original on 30 November 2014. Singh, Simon (2005) [First U.S. edition published 2004]. Big Bang: The Origin of the Universe (Harper Perennial; illustrated ed.). New York, New York: Harper Perennial. ISBN 978-0007162215. Weinberg, Steven (1993) [Originally published 1977]. The First Three Minutes: A Modern View of the Origin of the Universe (Updated ed.). New York: Basic Books. ISBN 978-0-465-02437-7. LCCN 93232406. OCLC 488469247. 1st edition is available from the Internet Archive. Retrieved 23 December 2019. Woolfson, Michael (2013). Time, Space, Stars & Man: The Story of Big Bang (2nd ed.). London: Imperial College Press. ISBN 978-1-84816-933-3. LCCN 2013371163. OCLC 835115510. 
== External links == Once Upon a Universe Archived 22 June 2020 at the Wayback Machine – STFC funded project explaining the history of the universe in easy-to-understand language "Big Bang Cosmology" – NASA/WMAP Science Team "The Big Bang" – NASA Science "Big Bang, Big Bewilderment" – Big bang model with animated graphics by Johannes Koelman "The Trouble With "The Big Bang"" – A rash of recent articles
illustrates a longstanding confusion over the famous term – by Sabine Hossenfelder
In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model "learns". In the adaptive control literature, the learning rate is commonly referred to as gain. In setting a learning rate, there is a trade-off between the rate of convergence and overshooting. While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method. The learning rate is related to the step length determined by inexact line search in quasi-Newton methods and related optimization algorithms. == Learning rate schedule == Initial rate can be left as system default or can be selected using a range of techniques. A learning rate schedule changes the learning rate during learning and is most often changed between epochs/iterations. This is mainly done with two parameters: decay and momentum. There are many different learning rate schedules but the
most common are time-based, step-based, and exponential. Decay serves to settle the learning in a good minimum and avoid oscillations, a situation that may arise when a too-high constant learning rate makes the learning jump back and forth over a minimum; it is controlled by a hyperparameter. Momentum is analogous to a ball rolling down a hill; we want the ball to settle at the lowest point of the hill (corresponding to the lowest error). Momentum both speeds up the learning (increasing the learning rate) when the error cost gradient is heading in the same direction for a long time and also avoids local minima by 'rolling over' small bumps. Momentum is controlled by a hyperparameter analogous to a ball's mass, which must be chosen manually: too high and the ball will roll over minima which we wish to find; too low and it will not fulfil its purpose. The formula for factoring in the momentum is more complex than for decay but is most often built into deep learning libraries such as Keras. Time-based learning schedules alter the learning rate depending on the learning rate of the previous iteration. Factoring in the decay, the mathematical formula for the learning rate is:

η_{n+1} = η_n / (1 + d·n)

where η is the learning rate, d is a decay parameter, and n is the iteration step. Step-based learning schedules change the learning rate according to some predefined steps. The decay application formula is here defined as:

η_n = η_0 · d^⌊(1+n)/r⌋

where η_n is the learning rate at iteration n, η_0 is the initial learning rate, d is how much the learning rate should change at each drop (0.5 corresponds to a halving), and r is the drop rate, or how often the rate should be dropped (10 corresponds to a drop every 10 iterations). The floor function (⌊…⌋) rounds its argument down to the nearest integer, so the rate stays constant between drops. Exponential learning schedules are similar to step-based, but instead of discrete steps a decreasing exponential function is used. The mathematical formula for factoring in the decay is:

η_n = η_0 · e^(−d·n)

where d is a decay parameter.

== Adaptive learning rate ==

The issue with learning rate schedules is that they all depend on hyperparameters that must be manually chosen for each given learning session and may vary greatly depending on the problem at hand or the model used. To combat this, there are many different types of adaptive gradient descent algorithms such as Adagrad, Adadelta, RMSprop, and Adam, which are generally built into deep learning libraries such as Keras.

== See also ==

== References ==

== Further reading ==

Géron, Aurélien (2017). "Gradient Descent". Hands-On Machine Learning with Scikit-Learn and TensorFlow. O'Reilly. pp. 113–124. ISBN 978-1-4919-6229-9.
Plagianakos, V. P.; Magoulas, G. D.; Vrahatis, M. N. (2001). "Learning Rate Adaptation in Stochastic Gradient Descent". Advances in Convex Analysis and Global Optimization. Kluwer. pp. 433–444. ISBN 0-7923-6942-4.

== External links ==

de Freitas, Nando (February 12, 2015). "Optimization". Deep Learning Lecture 6. University of Oxford – via YouTube.
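The time-based, step-based, and exponential schedules described above can be sketched as follows. This is a minimal illustration; the hyperparameter values used in the example are arbitrary, not canonical defaults.

```python
import math

# Minimal sketches of the three learning rate schedules described above.
# Hyperparameter values in the example below are arbitrary illustrations.

def time_based(eta0: float, d: float, n: int) -> float:
    """Apply the recurrence eta_{k+1} = eta_k / (1 + d*k) for k = 0..n-1."""
    eta = eta0
    for k in range(n):
        eta /= 1 + d * k
    return eta

def step_based(eta0: float, d: float, r: int, n: int) -> float:
    """eta_n = eta0 * d**floor((1 + n) / r): drop by a factor d every r iterations."""
    return eta0 * d ** math.floor((1 + n) / r)

def exponential(eta0: float, d: float, n: int) -> float:
    """eta_n = eta0 * exp(-d * n): smooth exponential decay."""
    return eta0 * math.exp(-d * n)

# A step-based schedule that halves a 0.1 starting rate every 10 iterations:
rates = [step_based(0.1, 0.5, 10, n) for n in range(30)]
```

With d = 0.5 and r = 10, the step-based rate is 0.1 for iterations 0–8, 0.05 for iterations 9–18, and 0.025 for iterations 19–28, matching the floor-function formula above.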
A belly cast is a three-dimensional plaster sculpture of a woman's pregnant abdomen as a keepsake of her pregnancy. It can also be known as a belly mask, pregnancy belly cast, a pregnant plaster cast, or prenatal cast. Belly casts are most often made toward the end of the third trimester of pregnancy, though a series of casts may also be made during the pregnancy. They are made by preparing the skin with a coating of Vaseline or a similar lubricant and adding strips of wet plaster gauze over the abdomen to make the cast. Some women also cast their breasts, arms, hands and thighs into a full torso sculpture. The plaster sets in about 20–30 minutes but some fast-setting strips set in five minutes. Once set, the cast is gently removed by the mother using a wriggling motion. It takes one or two days for the cast to dry completely. Gesso can be painted onto the mask after drying to stop it absorbing moisture from the air. Gesso also works as a surface primer for painting and preserving it. The cast can be decorated with any number of finishes or designs including the baby's hand and foot prints, or left in its natural state. Most popular decorations include painting and decoupage, while some belly casts are mosaicked. == References == == External links == Examples Of Belly Casts Make Your Own Belly Cast - Instructions
Hydrodesulfurization (HDS), also called hydrotreatment or hydrotreating, is a catalytic chemical process widely used to remove sulfur (S) from natural gas and from refined petroleum products, such as gasoline or petrol, jet fuel, kerosene, diesel fuel, and fuel oils. The purpose of removing the sulfur, and creating products such as ultra-low-sulfur diesel, is to reduce the sulfur dioxide (SO2) emissions that result from using those fuels in automotive vehicles, aircraft, railroad locomotives, ships, gas or oil burning power plants, residential and industrial furnaces, and other forms of fuel combustion. Another important reason for removing sulfur from the naphtha streams within a petroleum refinery is that sulfur, even in extremely low concentrations, poisons the noble metal catalysts (platinum and rhenium) in the catalytic reforming units that are subsequently used to upgrade the octane rating of the naphtha streams. The industrial hydrodesulfurization processes include facilities for the capture and removal of the resulting hydrogen sulfide (H2S) gas. In petroleum refineries, the hydrogen sulfide gas is then subsequently converted into byproduct, sulfur (S) or sulfuric acid (H2SO4). In fact, the vast majority of the 64,000,000 metric tons of sulfur produced worldwide in 2005 was byproduct sulfur from refineries and other hydrocarbon processing plants. An HDS unit in the petroleum refining industry is also often referred to as a hydrotreater and is the most common of the processing units found in a modern refinery. There are more than 1600 active hydrotreating units across more than 600 refineries globally with a combined capacity in excess of 400 million barrels per day (including all forms of hydrotreating but excluding hydrocracking and reforming processes). 
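The lead's motivation (cutting SO2 emissions by removing fuel sulfur) can be made concrete with a rough sketch: combustion converts each sulfur atom to one SO2 molecule (S + O2 → SO2), so the emitted SO2 mass is the fuel's sulfur mass times M(SO2)/M(S) ≈ 2. The 500 ppm and 15 ppm figures below are illustrative example sulfur levels, not values taken from this article.

```python
# Rough sketch: SO2 produced when fuel-bound sulfur is burned (S + O2 -> SO2).
# Assumes complete conversion of fuel sulfur to SO2; input ppm levels are
# illustrative examples only.

M_S = 32.06    # g/mol, sulfur
M_SO2 = 64.06  # g/mol, sulfur dioxide (32.06 + 2 * 16.00)

def so2_per_tonne_fuel(sulfur_ppm_wt: float) -> float:
    """Grams of SO2 emitted per metric tonne of fuel burned."""
    fuel_g = 1.0e6                            # one metric tonne in grams
    sulfur_g = fuel_g * sulfur_ppm_wt * 1e-6  # ppm by weight -> grams of S
    return sulfur_g * (M_SO2 / M_S)

# Cutting a diesel's sulfur content from 500 ppm to 15 ppm:
before = so2_per_tonne_fuel(500)  # ~999 g SO2 per tonne of fuel
after = so2_per_tonne_fuel(15)    # ~30 g SO2 per tonne of fuel
```

The roughly 2:1 mass ratio means every gram of sulfur left in the fuel becomes about two grams of SO2 at the stack.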
== History == Although some reactions involving catalytic hydrogenation of organic substances were already known, the property of finely divided nickel to catalyze the fixation of hydrogen on hydrocarbon (ethylene, benzene) double bonds
was discovered by the French chemist Paul Sabatier in 1897. Through this work, he found that unsaturated hydrocarbons in the vapor phase could be converted into saturated hydrocarbons by using hydrogen and a catalytic metal, laying the foundation of the modern catalytic hydrogenation process. Soon after Sabatier's work, a German chemist, Wilhelm Normann, found that catalytic hydrogenation could be used to convert unsaturated fatty acids or glycerides in the liquid phase into saturated ones. He was awarded a patent in Germany in 1902 and in Britain in 1903, which was the beginning of what is now a worldwide industry. In the mid-1950s, the first noble metal catalytic reforming process (the Platformer process) was commercialized. At the same time, the catalytic hydrodesulfurization of the naphtha feed to such reformers was also commercialized. In the decades that followed, various proprietary catalytic hydrodesulfurization processes, such as the one depicted in the flow diagram below, have been commercialized. Currently, virtually all of the petroleum refineries worldwide have one or more HDS units. By 2006, miniature microfluidic HDS units had been implemented for treating JP-8 jet fuel to produce clean feed stock for a fuel cell hydrogen reformer. By 2007, this had been integrated into an operating 5 kW fuel cell generation system. == Process chemistry == Hydrogenation is a class of chemical reactions in which the net result is the addition of hydrogen (H). Hydrogenolysis is a type of hydrogenation and results in the cleavage of the C-X chemical bond, where C is a carbon atom and X is a sulfur (S), nitrogen (N) or oxygen (O) atom. The net result of a hydrogenolysis reaction is the formation of C-H and H-X chemical bonds. Thus, hydrodesulfurization is a hydrogenolysis reaction. Using ethanethiol (C2H5SH), a sulfur compound present in some petroleum products, as an example,
the hydrodesulfurization reaction can be simply expressed as:

C2H5SH (ethanethiol) + H2 (hydrogen) ⟶ C2H6 (ethane) + H2S (hydrogen sulfide)

For the mechanistic aspects of this reaction and the catalysts used, see the section on catalysts and mechanisms.

== Process description ==

In an industrial hydrodesulfurization unit, such as in a refinery, the hydrodesulfurization reaction takes place in a fixed-bed reactor at elevated temperatures ranging from 300 to 400 °C and elevated pressures ranging from 30 to 130 atmospheres of absolute pressure, typically in the presence of a catalyst consisting of an alumina base impregnated with cobalt and molybdenum (usually called a CoMo catalyst). Occasionally, a combination of nickel and molybdenum (called NiMo) is used, in addition to the CoMo catalyst, for specific difficult-to-treat feed stocks, such as those containing a high level of chemically bound nitrogen. The image below is a schematic depiction of the equipment and the process flow streams in a typical refinery HDS unit. The liquid feed (at the bottom left in the diagram) is pumped up to the required elevated pressure and is joined by a stream of hydrogen-rich recycle gas. The resulting liquid-gas mixture is preheated by flowing through a heat exchanger. The preheated feed then flows through a fired heater where the feed mixture is totally vaporized and heated to the required elevated temperature before entering the reactor and flowing through a fixed bed of catalyst where the hydrodesulfurization reaction takes place. The hot reaction products are partially cooled by flowing through the heat exchanger where the reactor feed was preheated and then flow through a water-cooled heat exchanger before passing through the pressure controller (PC) and undergoing a pressure reduction down to about 3
to 5 atmospheres. The resulting mixture of liquid and gas enters the gas separator pressure vessel at about 35 °C and 3 to 5 atmospheres of absolute pressure. Most of the hydrogen-rich gas from the gas separator vessel is recycle gas, which is routed through an amine contactor for removal of the reaction product H2S that it contains. The H2S-free hydrogen-rich gas is then recycled back for reuse in the reactor section. Any excess gas from the gas separator vessel joins the sour gas from the stripping of the reaction product liquid. The liquid from the gas separator vessel is routed through a reboiled stripper distillation tower. The bottoms product from the stripper is the final desulfurized liquid product from hydrodesulfurization unit. The overhead sour gas from the stripper contains hydrogen, methane, ethane, hydrogen sulfide, propane, and, perhaps, some butane and heavier components. That sour gas is sent to the refinery's central gas processing plant for removal of the hydrogen sulfide in the refinery's main amine gas treating unit and through a series of distillation towers for recovery of propane, butane and pentane or heavier components. The residual hydrogen, methane, ethane, and some propane is used as refinery fuel gas. The hydrogen sulfide removed and recovered by the amine gas treating unit is subsequently converted to elemental sulfur in a Claus process unit or to sulfuric acid in a wet sulfuric acid process or in the conventional Contact Process. Note that the above description assumes that the HDS unit feed contains no olefins. If the feed does contain olefins (for example, the feed is a naphtha derived from a refinery fluid catalytic cracker (FCC) unit), then the overhead gas from the HDS stripper may also contain some ethene, propene, butenes and pentenes, or heavier components. The amine solution to and
from the recycle gas contactor comes from and is returned to the refinery's main amine gas treating unit.

== Sulfur compounds in refinery HDS feedstocks ==

The refinery HDS feedstocks (naphtha, kerosene, diesel oil, and heavier oils) contain a wide range of organic sulfur compounds, including thiols, thiophenes, organic sulfides and disulfides, and many others. These organic sulfur compounds are products of the degradation of sulfur-containing biological components present during the natural formation of the fossil fuel, petroleum crude oil. When the HDS process is used to desulfurize a refinery naphtha, it is necessary to remove the total sulfur down to the parts-per-million range or lower in order to prevent poisoning the noble metal catalysts in the subsequent catalytic reforming of the naphthas. When the process is used for desulfurizing diesel oils, the latest environmental regulations in the United States and Europe, which require what is referred to as ultra-low-sulfur diesel (ULSD), demand very deep hydrodesulfurization. In the very early 2000s, the governmental regulatory limits for highway vehicle diesel were within the range of 300 to 500 ppm by weight of total sulfur. As of 2006, the total sulfur limit for highway diesel is in the range of 15 to 30 ppm by weight.

=== Thiophenes ===

A family of substrates that are particularly common in petroleum are the aromatic sulfur-containing heterocycles called thiophenes. Many kinds of thiophenes occur in petroleum, ranging from thiophene itself to more condensed derivatives, the benzothiophenes and dibenzothiophenes. Thiophene itself and its alkyl derivatives are easier to hydrogenolyse, whereas dibenzothiophenes, especially 4,6-dimethyldibenzothiophene, are considered the most challenging substrates. Benzothiophenes are midway between the simple thiophenes and dibenzothiophenes in their susceptibility to HDS.
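As a back-of-the-envelope illustration of the deep desulfurization discussed above, the sketch below computes the minimum (stoichiometric) hydrogen needed to take diesel from 500 ppm to 15 ppm sulfur, treating the sulfur as thiol-like so that one H2 is consumed per sulfur atom removed (R-SH + H2 → R-H + H2S). This is an idealized assumption; real units consume considerably more hydrogen because refractory thiophenic rings and aromatics are also hydrogenated.

```python
# Stoichiometric H2 demand for sulfur removal, assuming the simplest case of
# 1 mol H2 consumed per mol S removed (R-SH + H2 -> R-H + H2S).
# Real hydrotreaters consume more H2 (ring saturation, aromatics, etc.).

M_S = 32.06   # g/mol, sulfur
M_H2 = 2.016  # g/mol, hydrogen

def h2_for_desulfurization(feed_ppm: float, product_ppm: float,
                           tonnes_feed: float = 1.0) -> float:
    """Minimum kg of H2 to take `tonnes_feed` tonnes of fuel from
    feed_ppm to product_ppm sulfur by weight."""
    sulfur_removed_kg = (feed_ppm - product_ppm) * 1e-6 * tonnes_feed * 1000
    return sulfur_removed_kg * (M_H2 / M_S)

# 500 ppm -> 15 ppm over one tonne of diesel:
h2_kg = h2_for_desulfurization(500, 15)  # ~0.03 kg H2 per tonne
```

Even at this idealized lower bound, a refinery treating tens of thousands of tonnes of diesel per day needs a steady hydrogen supply, which is why HDS units are tightly integrated with the recycle-gas and amine systems described above.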
== Catalysts and mechanisms == The main HDS catalysts are based on molybdenum disulfide (MoS2) together with
smaller amounts of other metals. The nature of the sites of catalytic activity remains an active area of investigation, but it is generally assumed that the basal planes of the MoS2 structure are not relevant to catalysis; rather, the edges or rims of these sheets are. At the edges of the MoS2 crystallites, the molybdenum centre can stabilize a coordinatively unsaturated site (CUS), also known as an anion vacancy. Substrates, such as thiophene, bind to this site and undergo a series of reactions that result in both C-S scission and C=C hydrogenation. Thus, the hydrogen serves multiple roles: generation of an anion vacancy by removal of sulfide, hydrogenation, and hydrogenolysis. A simplified diagram for the cycle is shown: === Catalysts === Most metals catalyse HDS, but it is those at the middle of the transition metal series that are most active. Although not practical, ruthenium disulfide appears to be the single most active catalyst, but binary combinations of cobalt and molybdenum are also highly active. Aside from the basic cobalt-modified MoS2 catalyst, nickel and tungsten are also used, depending on the nature of the feed. For example, Ni-W catalysts are more effective for hydrodenitrogenation. === Supports === Metal sulfides are supported on materials with high surface areas. A typical support for HDS catalysts is γ-alumina. The support allows the more expensive catalyst to be more widely distributed, giving rise to a larger fraction of the MoS2 that is catalytically active. The interaction between the support and the catalyst is an area of intense interest, since the support is often not fully inert but participates in the catalysis. == Other uses == The basic hydrogenolysis reaction has a number of uses other than hydrodesulfurization. === Hydrodenitrogenation === The hydrogenolysis reaction is also used to reduce the nitrogen content of a petroleum stream in a process referred
{ "page_id": 3543062, "source": null, "title": "Hydrodesulfurization" }
to as hydrodenitrogenation (HDN). The process flow is the same as that for an HDS unit. Using pyridine (C5H5N), a nitrogen compound present in some petroleum fractionation products, as an example, the hydrodenitrogenation reaction has been postulated as occurring in three steps:

C5H5N (pyridine) + 5 H2 (hydrogen) ⟶ C5H11N (piperidine) + 2 H2 ⟶ C5H11NH2 (amylamine) + H2 ⟶ C5H12 (pentane) + NH3 (ammonia)

and the overall reaction may be simply expressed as:

C5H5N (pyridine) + 5 H2 (hydrogen) ⟶ C5H12 (pentane) + NH3 (ammonia)

Many HDS units for desulfurizing naphthas within petroleum refineries are actually simultaneously denitrogenating to some extent as well.

=== Saturation of olefins ===

The hydrogenolysis reaction may also be used to saturate or convert alkenes into alkanes. The process used is the same as for an HDS unit. As an example, the saturation of the olefin pentene can be simply expressed as:

C5H10 (pentene) + H2 (hydrogen) ⟶ C5H12 (pentane)

Some hydrogenolysis units within a petroleum refinery or a petrochemical plant may be used solely for the saturation of olefins or they may be used for simultaneously desulfurizing as well as denitrogenating and saturating olefins to some extent.

== See also ==

Claus process
Hydrogen pinch
Timeline of hydrogen technologies

== References ==

== External links ==

Criterion Catalysts Archived 2018-12-07 at the Wayback Machine (Hydroprocessing Catalyst Supplier)
Haldor Topsoe
(Catalyzing Your Business) Albemarle Catalyst Company (Petrochemical catalysts supplier) UOP-Honeywell (Engineering design and construction of large-scale, industrial HDS plants) Hydrogenation for Low Trans and High Conjugated Fatty Acids by E.S. Jang, M.Y. Jung, D.B. Min, Comprehensive Reviews in Food Science and Food Safety, Vol.1, 2005 Oxo Alcohols (Engineered and constructed by Aker Kvaerner) Catalysts and technology for Oxo-Alcohols
Cutan is one of two waxy biopolymers which occur in the cuticle of some plants. The other and better-known polymer is cutin. Cutan is believed to be a hydrocarbon polymer, whereas cutin is a polyester, but the structure and synthesis of cutan are not yet fully understood. Cutan is not present in as many plants as once thought; for instance, it is absent in Ginkgo. Cutan was first detected as a non-saponifiable component, resistant to de-esterification by alkaline hydrolysis, that increases in amount in the cuticles of some species, such as Clivia miniata, as they reach maturity, apparently replacing the cutin secreted in the early stages of cuticle development. Evidence that cutan is a hydrocarbon polymer comes from the fact that its flash pyrolysis products are a characteristic homologous series of paired alkanes and alkenes, and from 13C-NMR analysis of present-day and fossil plants. Cutan's preservation potential is much greater than that of cutin. Despite this, the low proportion of cutan found in fossilized cuticle shows that it is probably not the cause of the widespread preservation of cuticle in the fossil record. == References == == Further reading == Boom, A.; Sinninghe Damsté, J.S.; de Leeuw, J.W. (2005). "Cutan, a common aliphatic biopolymer in cuticles of drought-adapted plants". Organic Geochemistry. 36 (4): 595. Bibcode:2005OrGeo..36..595B. doi:10.1016/j.orggeochem.2004.10.017.
{ "page_id": 6033433, "source": null, "title": "Cutan (polymer)" }
The Oldstone Conference of 11 to 14 April 1949 was the third of three postwar conferences held to discuss quantum physics, arranged for the National Academy of Sciences by J. Robert Oppenheimer, who was again chairman. It followed the Shelter Island Conference of 1947 and the Pocono Conference of 1948. There were 24 participants; among the new participants were Robert Christy, Freeman Dyson (whose writings explained Feynman's ideas), George Placzek, and Hideki Yukawa. Held at Oldstone-on-the-Hudson in Peekskill, New York, the conference's main talking point was Richard Feynman's approach to quantum electrodynamics (QED); Feynman, then 30, had become the leading physicist of his generation. == See also == List of physics conferences == References == Gribbin, John & Mary (1997). Richard Feynman: A Life in Science. England: Viking Press. p. 118. ISBN 0-670-87245-8. Mehra, Jagdish (1994). The Beat of a Different Drum: The Life and Science of Richard Feynman. Oxford, England: Clarendon Press. pp. 278–279. ISBN 0-19-853948-7.
{ "page_id": 32313371, "source": null, "title": "Oldstone Conference" }
In zoology, deep-sea gigantism or abyssal gigantism is the tendency for species of deep-sea-dwelling animals to be larger than their shallower-water relatives across a large taxonomic range. Proposed explanations for this type of gigantism include necessary adaptation to colder temperatures, food scarcity, reduced predation pressure and increased dissolved oxygen concentrations in the deep sea. The harsh conditions and inhospitality of the underwater environment in general, as well as the inaccessibility of the abyssal zone to most human-made underwater vehicles, have hindered the study of this topic.

== Taxonomic range ==

In marine crustaceans, the trend of increasing size with depth has been observed in mysids, euphausiids, decapods, isopods, ostracods and amphipods. Non-arthropods in which deep-sea gigantism has been observed are cephalopods, cnidarians, and eels from the order Anguilliformes.

"Other [animals] attain under them gigantic proportions. It is especially certain crustacea which exhibit this latter peculiarity, but not all crustacea, for the crayfish like forms in the deep sea are of ordinary size. I have already referred to a gigantic Pycnogonid [sea spider] dredged by us. Louis Agassiz dredged a gigantic Isopod 11 inches [28 centimetres] in length. We also dredged a gigantic Ostracod."

– Henry Nottidge Moseley, 1880

Notable organisms that exhibit deep-sea gigantism include the big red jellyfish, the Stygiomedusa jellyfish, the giant isopod, the giant ostracod, the giant sea spider, the giant amphipod, the Japanese spider crab, the giant oarfish, the deepwater stingray, the seven-arm octopus, and a number of squid species: the colossal squid (up to 14 m in length), the giant squid (up to 12 m), Megalocranchia fisheri, the robust clubhook squid, the Dana octopus squid, the cockatoo squid, the giant warty squid, and the bigfin squids of the genus Magnapinna.

Deep-sea gigantism is not generally observed in the meiofauna (organisms that pass through a 1 mm (0.039 in) mesh), which
{ "page_id": 20975650, "source": null, "title": "Deep-sea gigantism" }
actually exhibit the reverse trend of decreasing size with depth.

== Explanations ==

=== Lower temperature ===

In crustaceans, it has been proposed that the explanation for the increase in size with depth is similar to that for the increase in size with latitude (Bergmann's rule): both trends involve increasing size with decreasing temperature. The trend with latitude has been observed in some of the same groups, both in comparisons of related species and within widely distributed species. Decreasing temperature is thought to result in increased cell size and increased life span (the latter also being associated with delayed sexual maturity), both of which lead to an increase in maximum body size (continued growth throughout life is characteristic of crustaceans). In Arctic and Antarctic seas, where there is a reduced vertical temperature gradient, there is also a reduced trend towards increased body size with depth, arguing against hydrostatic pressure being an important parameter.

Temperature does not appear to play a similar role in influencing the size of giant tube worms. Riftia pachyptila, which lives in hydrothermal vent communities at ambient temperatures of 2–30 °C, reaches lengths of 2.7 m, comparable to those of Lamellibrachia luymesi, which lives in cold seeps. The former, however, has rapid growth rates and a short life span of about two years, while the latter grows slowly and may live over 250 years.

=== Food scarcity ===

Food scarcity at depths greater than 400 m is also thought to be a factor, since a larger body size can improve the ability to forage for widely scattered resources. In organisms with planktonic eggs or larvae, another possible advantage is that larger offspring, with greater initial stored food reserves, can drift for greater distances. As an example of adaptations to this situation, giant isopods gorge on food when