Dataset columns: id (int64, 39 to 79M), url (string, 32–168 chars), text (string, 7–145k chars), source (string, 2–105 chars), categories (list, 1–6 items), token_count (int64, 3–32.2k), subcategories (list, 0–27 items)
616,670
https://en.wikipedia.org/wiki/Biochemical%20engineering
Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process. History For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs. Applications Biotechnology Biotechnology and biochemical engineering are closely related to each other as biochemical engineering can be considered a sub-branch of biotechnology. One of the primary focuses of biotechnology is in the medical field, where biochemical engineers work to design pharmaceuticals, artificial organs, biomedical devices, chemical sensors, and drug delivery systems. Biochemical engineers use their knowledge of chemical processes in biological systems in order to create tangible products that improve people's health. Specific areas of studies include metabolic, enzyme, and tissue engineering. The study of cell cultures is widely used in biochemical engineering and biotechnology due to its many applications in developing natural fuels, improving the efficiency in producing drugs and pharmaceutical processes, and also creating cures for disease. Other medical applications of biochemical engineering within biotechnology are genetics testing and pharmacogenomics. Food Industry Biochemical engineers primarily focus on designing systems that will improve the production, processing, packaging, storage, and distribution of food. Some commonly processed foods include wheat, fruits, and milk which undergo processes such as milling, dehydration, and pasteurization in order to become products that can be sold. There are three levels of food processing: primary, secondary, and tertiary. Primary food processing involves turning agricultural products into other products that can be turned into food, secondary food processing is the making of food from readily available ingredients, and tertiary food processing is commercial production of ready-to eat or heat-and-serve foods. 
Drying, pickling, salting, and fermenting foods were some of the oldest food processing techniques used to preserve food by preventing yeasts, molds, and bacteria to cause spoiling. Methods for preserving food have evolved to meet current standards of food safety but still use the same processes as the past. Biochemical engineers also work to improve the nutritional value of food products, such as in golden rice, which was developed to prevent vitamin A deficiency in certain areas where this was an issue. Efforts to advance preserving technologies can also ensure lasting retention of nutrients as foods are stored. Packaging plays a key role in preserving as well as ensuring the safety of the food by protecting the product from contamination, physical damage, and tampering. Packaging can also make it easier to transport and serve food. A common job for biochemical engineers working in the food industry is to design ways to perform all these processes on a large scale in order to meet the demands of the population. Responsibilities for this career path include designing and performing experiments, optimizing processes, consulting with groups to develop new technologies, and preparing project plans for equipment and facilities. Pharmaceuticals In the pharmaceutical industry, bioprocess engineering plays a crucial role in the large-scale production of biopharmaceuticals, such as monoclonal antibodies, vaccines, and therapeutic proteins. The development and optimization of bioreactors and fermentation systems are essential for the mass production of these products, ensuring consistent quality and high yields. For example, recombinant proteins like insulin and erythropoietin are produced through cell culture systems using genetically modified cells. The bioprocess engineer’s role is to optimize variables like temperature, pH, nutrient availability, and oxygen levels to maximize the efficiency of these systems. The growing field of gene therapy also relies on bioprocessing techniques to produce viral vectors, which are used to deliver therapeutic genes to patients. This involves scaling up processes from laboratory to industrial scale while maintaining safety and regulatory compliance . As the demand for biopharmaceutical products increases, advancements in bioprocess engineering continue to enable more sustainable and cost-effective manufacturing methods. Education Auburn University University of Georgia (Biochemical Engineering) Michigan Technological University McMaster University Technical University of Munich University of Natural Resources and Life Sciences, Vienna Keck Graduate Institute of Applied Life Sciences (KGI Amgen Bioprocessing Center) Kungliga Tekniska högskolan- KTH – Royal Institute of Technology (Dept. 
of Industrial Biotechnology) Queensland University of Technology (QUT) University of Cape Town (Centre for Bioprocess Engineering Research) SUNY-ESF (Bioprocess Engineering Program) Université de Sherbrooke University of British Columbia UC Berkeley UC Davis Savannah Technical College University of Illinois Urbana-Champaign (Integrated Bioprocessing Research Laboratory) University of Iowa (Chemical and Biochemical Engineering) University of Minnesota (Bioproducts and Biosystems Engineering) East Carolina University Jacob School of Biotechnology and Bioengineering, Allahabad, India Indian Institute of Technology, Varanasi Indian Institute of Technology Kharagpur Institute of Chemical Technology, Mumbai Jadavpur University Universidade Federal de Itajubá (UNIFEI) Universiti Malaysia Kelantan (UMK) Universidade Federal de São João del Rei-UFSJ Federal University of Technology – Paraná Universidade Federal do Paraná-UFPR São Paulo State University Universidade Federal do Pará-UFPA University of Louvain (UCLouvain) University of Stellenbosch North Carolina Agricultural and Technical State University North Carolina State University Virginia Tech Ege University/Turkey (Department of Bioengineering) National University of Costa Rica University of Brawijaya (Department of Agricultural Engineering) University of Indonesia University College London (Department of Biochemical Engineering) Universiti Teknologi Malaysia Universiti Kuala Lumpur Malaysian Institute of Chemical and Bioengineering Technology University of Zagreb, Faculty of food technology and biotechnology, Croatia Villanova University Wageningen University University College Dublin Obafemi Awolowo University University of Birmingham Universidad Autónoma de Coahuila (Facultad de Ciencias Biológicas) Silpakorn University Thailand Universiti Malaysia Perlis (UniMAP), School of Bioprocess Engineering (SBE) Technische Universität Berlin, Chair of Bioprocess Engineering University of Queensland Technical University of Denmark, Department of Chemical and Biochemical Engineering, BioEng Research Centre South Dakota School of Mines and Technology National Institute of Applied Science and Technology Tunis (Industrial Biology Engineering Program) Technical University Hamburg (TUHH) Mapua University Biochemical engineering is not a major offered by many universities and is instead an area of interest under the chemical engineering. 
The following universities are known to offer degrees in biochemical engineering: Brown University – Providence, RI Christian Brothers University – Memphis, TN Colorado School of Mines – Golden, CO Rowan University – Glassboro, NJ University of Colorado Boulder – Boulder, CO University of Georgia – Athens, GA University of California, Davis – Davis, CA University College London – London, United Kingdom University of Southern California – Los Angeles, CA University of Western Ontario – Ontario, Canada Indian Institute of Technology (BHU) Varanasi – Varanasi, UP Indian Institute of Technology Delhi – Delhi Institute of Technology Tijuana – México University of Baghdad, College of Engineering, Al-Khwarizmi Biochemical See also Biochemical engineering Biofuel from algae Biological hydrogen production (algae) Bioprocess Bioproducts engineering Bioproducts Bioreactor landfill Biosystems engineering Cell therapy Downstream (bioprocess) Electrochemical energy conversion Food engineering Industrial biotechnology Microbiology Moss bioreactor Photobioreactor Physical chemistry Unit operations Upstream (bioprocess) Use of biotechnology in pharmaceutical manufacturing References Shukla, A. A., Thömmes, J., & Hackl, M. (2012). Recent advances in downstream processing of therapeutic monoclonal antibodies. Biotechnology Advances, 30(3), 1548-1557. Walsh, G. (2018). Biopharmaceuticals: Biochemistry and Biotechnology (3rd ed.). Wiley.
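As a toy illustration of the bioreactor optimization work described in the Pharmaceuticals section above, the sketch below integrates the classic Monod growth model for a batch fermentation; the choice of model and every parameter value are illustrative assumptions for this sketch, not figures taken from the article.

```python
# Monod batch-fermentation sketch (all parameters are illustrative assumptions)
mu_max, Ks, Yxs = 0.4, 0.5, 0.5   # 1/h, g/L, g biomass produced per g substrate consumed
X, S, dt = 0.1, 20.0, 0.01        # initial biomass (g/L), initial substrate (g/L), time step (h)

for step in range(int(30 / dt)):          # simulate 30 hours
    mu = mu_max * S / (Ks + S)            # Monod specific growth rate
    dX = mu * X * dt                      # biomass growth over one step
    dS = -dX / Yxs                        # substrate consumed to make that biomass
    X, S = X + dX, max(S + dS, 0.0)

print(f"final biomass ≈ {X:.1f} g/L, residual substrate ≈ {S:.2f} g/L")
```

In practice a bioprocess engineer would fit such kinetic parameters to data and extend the balance equations with the temperature, pH, and oxygen-transfer terms mentioned above.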
Biochemical engineering
[ "Chemistry", "Engineering", "Biology" ]
1,919
[ "Biochemistry", "Chemical engineering", "Biological engineering", "Biochemical engineering" ]
616,901
https://en.wikipedia.org/wiki/Polyadenylation
Polyadenylation is the addition of a poly(A) tail to an RNA transcript, typically a messenger RNA (mRNA). The poly(A) tail consists of multiple adenosine monophosphates; in other words, it is a stretch of RNA that has only adenine bases. In eukaryotes, polyadenylation is part of the process that produces mature mRNA for translation. In many bacteria, the poly(A) tail promotes degradation of the mRNA. It, therefore, forms part of the larger process of gene expression. The process of polyadenylation begins as the transcription of a gene terminates. The 3′-most segment of the newly made pre-mRNA is first cleaved off by a set of proteins; these proteins then synthesize the poly(A) tail at the RNA's 3′ end. In some genes these proteins add a poly(A) tail at one of several possible sites. Therefore, polyadenylation can produce more than one transcript from a single gene (alternative polyadenylation), similar to alternative splicing. The poly(A) tail is important for the nuclear export, translation and stability of mRNA. The tail is shortened over time, and, when it is short enough, the mRNA is enzymatically degraded. However, in a few cell types, mRNAs with short poly(A) tails are stored for later activation by re-polyadenylation in the cytosol. In contrast, when polyadenylation occurs in bacteria, it promotes RNA degradation. This is also sometimes the case for eukaryotic non-coding RNAs. mRNA molecules in both prokaryotes and eukaryotes have polyadenylated 3′-ends, with the prokaryotic poly(A) tails generally shorter and fewer mRNA molecules polyadenylated. Background on RNA RNAs are a type of large biological molecules, whose individual building blocks are called nucleotides. The name poly(A) tail (for polyadenylic acid tail) reflects the way RNA nucleotides are abbreviated, with a letter for the base the nucleotide contains (A for adenine, C for cytosine, G for guanine and U for uracil). RNAs are produced (transcribed) from a DNA template. By convention, RNA sequences are written in a 5′ to 3′ direction. The 5′ end is the part of the RNA molecule that is transcribed first, and the 3′ end is transcribed last. The 3′ end is also where the poly(A) tail is found on polyadenylated RNAs. Messenger RNA (mRNA) is RNA that has a coding region that acts as a template for protein synthesis (translation). The rest of the mRNA, the untranslated regions, tune how active the mRNA is. There are also many RNAs that are not translated, called non-coding RNAs. Like the untranslated regions, many of these non-coding RNAs have regulatory roles. Nuclear polyadenylation Function In nuclear polyadenylation, a poly(A) tail is added to an RNA at the end of transcription. On mRNAs, the poly(A) tail protects the mRNA molecule from enzymatic degradation in the cytoplasm and aids in transcription termination, export of the mRNA from the nucleus, and translation. Almost all eukaryotic mRNAs are polyadenylated, with the exception of animal replication-dependent histone mRNAs. These are the only mRNAs in eukaryotes that lack a poly(A) tail, ending instead in a stem-loop structure followed by a purine-rich sequence, termed histone downstream element, that directs where the RNA is cut so that the 3′ end of the histone mRNA is formed. Many eukaryotic non-coding RNAs are always polyadenylated at the end of transcription. There are small RNAs where the poly(A) tail is seen only in intermediary forms and not in the mature RNA as the ends are removed during processing, the notable ones being microRNAs. 
But, for many long noncoding RNAs – a seemingly large group of regulatory RNAs that, for example, includes the RNA Xist, which mediates X chromosome inactivation – a poly(A) tail is part of the mature RNA. Mechanism The processive polyadenylation complex in the nucleus of eukaryotes works on products of RNA polymerase II, such as precursor mRNA. Here, a multi-protein complex (see components on the right) cleaves the 3′-most part of a newly produced RNA and polyadenylates the end produced by this cleavage. The cleavage is catalysed by the enzyme CPSF and occurs 10–30 nucleotides downstream of its binding site. This site often has the polyadenylation signal sequence AAUAAA on the RNA, but variants of it that bind more weakly to CPSF exist. Two other proteins add specificity to the binding to an RNA: CstF and CFI. CstF binds to a GU-rich region further downstream of CPSF's site. CFI recognises a third site on the RNA (a set of UGUAA sequences in mammals) and can recruit CPSF even if the AAUAAA sequence is missing. The polyadenylation signal – the sequence motif recognised by the RNA cleavage complex – varies between groups of eukaryotes. Most human polyadenylation sites contain the AAUAAA sequence, but this sequence is less common in plants and fungi. The RNA is typically cleaved before transcription termination, as CstF also binds to RNA polymerase II. Through a poorly understood mechanism (as of 2002), it signals for RNA polymerase II to slip off of the transcript. Cleavage also involves the protein CFII, though it is unknown how. The cleavage site associated with a polyadenylation signal can vary up to some 50 nucleotides. When the RNA is cleaved, polyadenylation starts, catalysed by polyadenylate polymerase. Polyadenylate polymerase builds the poly(A) tail by adding adenosine monophosphate units from adenosine triphosphate to the RNA, cleaving off pyrophosphate. Another protein, PAB2, binds to the new, short poly(A) tail and increases the affinity of polyadenylate polymerase for the RNA. When the poly(A) tail is approximately 250 nucleotides long the enzyme can no longer bind to CPSF and polyadenylation stops, thus determining the length of the poly(A) tail. CPSF is in contact with RNA polymerase II, allowing it to signal the polymerase to terminate transcription. When RNA polymerase II reaches a "termination sequence" (⁵'TTTATT3' on the DNA template and ⁵'AAUAAA3' on the primary transcript), the end of transcription is signaled. The polyadenylation machinery is also physically linked to the spliceosome, a complex that removes introns from RNAs. Downstream effects The poly(A) tail acts as the binding site for poly(A)-binding protein. Poly(A)-binding protein promotes export from the nucleus and translation, and inhibits degradation. This protein binds to the poly(A) tail prior to mRNA export from the nucleus and in yeast also recruits poly(A) nuclease, an enzyme that shortens the poly(A) tail and allows the export of the mRNA. Poly(A)-binding protein is exported to the cytoplasm with the RNA. mRNAs that are not exported are degraded by the exosome. Poly(A)-binding protein also can bind to, and thus recruit, several proteins that affect translation, one of these is initiation factor-4G, which in turn recruits the 40S ribosomal subunit. However, a poly(A) tail is not required for the translation of all mRNAs. Further, poly(A) tailing (oligo-adenylation) can determine the fate of RNA molecules that are usually not poly(A)-tailed (such as (small) non-coding (sn)RNAs etc.) 
and thereby induce their RNA decay. Deadenylation In eukaryotic somatic cells, the poly(A) tails of most mRNAs in the cytoplasm gradually get shorter, and mRNAs with shorter poly(A) tail are translated less and degraded sooner. However, it can take many hours before an mRNA is degraded. This deadenylation and degradation process can be accelerated by microRNAs complementary to the 3′ untranslated region of an mRNA. In immature egg cells, mRNAs with shortened poly(A) tails are not degraded, but are instead stored and translationally inactive. These short tailed mRNAs are activated by cytoplasmic polyadenylation after fertilisation, during egg activation. In animals, poly(A) ribonuclease (PARN) can bind to the 5′ cap and remove nucleotides from the poly(A) tail. The level of access to the 5′ cap and poly(A) tail is important in controlling how soon the mRNA is degraded. PARN deadenylates less if the RNA is bound by the initiation factors 4E (at the 5′ cap) and 4G (at the poly(A) tail), which is why translation reduces deadenylation. The rate of deadenylation may also be regulated by RNA-binding proteins. Additionally, RNA triple helix structures and RNA motifs such as the poly(A) tail 3’ end binding pocket retard deadenylation process and inhibit poly(A) tail removal. Once the poly(A) tail is removed, the decapping complex removes the 5′ cap, leading to a degradation of the RNA. Several other proteins are involved in deadenylation in budding yeast and human cells, most notably the CCR4-Not complex. Cytoplasmic polyadenylation There is polyadenylation in the cytosol of some animal cell types, namely in the germline, during early embryogenesis and in post-synaptic sites of nerve cells. This lengthens the poly(A) tail of an mRNA with a shortened poly(A) tail, so that the mRNA will be translated. These shortened poly(A) tails are often less than 20 nucleotides, and are lengthened to around 80–150 nucleotides. In the early mouse embryo, cytoplasmic polyadenylation of maternal RNAs from the egg cell allows the cell to survive and grow even though transcription does not start until the middle of the 2-cell stage (4-cell stage in human). In the brain, cytoplasmic polyadenylation is active during learning and could play a role in long-term potentiation, which is the strengthening of the signal transmission from a nerve cell to another in response to nerve impulses and is important for learning and memory formation. Cytoplasmic polyadenylation requires the RNA-binding proteins CPSF and CPEB, and can involve other RNA-binding proteins like Pumilio. Depending on the cell type, the polymerase can be the same type of polyadenylate polymerase (PAP) that is used in the nuclear process, or the cytoplasmic polymerase GLD-2. Alternative polyadenylation Many protein-coding genes have more than one polyadenylation site, so a gene can code for several mRNAs that differ in their 3′ end. The 3’ region of a transcript contains many polyadenylation signals (PAS). When more proximal (closer towards 5’ end) PAS sites are utilized, this shortens the length of the 3’ untranslated region (3' UTR) of a transcript. Studies in both humans and flies have shown tissue specific APA. With neuronal tissues preferring distal PAS usage, leading to longer 3’ UTRs and testis tissues preferring proximal PAS leading to shorter 3’ UTRs. Studies have shown there is a correlation between a gene's conservation level and its tendency to do alternative polyadenylation, with highly conserved genes exhibiting more APA. 
Similarly, highly expressed genes follow this same pattern. Ribo-sequencing data (sequencing of only mRNAs inside ribosomes) has shown that mRNA isoforms with shorter 3’ UTRs are more likely to be translated. Since alternative polyadenylation changes the length of the 3' UTR, it can also change which binding sites are available for microRNAs in the 3′ UTR. MicroRNAs tend to repress translation and promote degradation of the mRNAs they bind to, although there are examples of microRNAs that stabilise transcripts. Alternative polyadenylation can also shorten the coding region, thus making the mRNA code for a different protein, but this is much less common than just shortening the 3′ untranslated region. The choice of poly(A) site can be influenced by extracellular stimuli and depends on the expression of the proteins that take part in polyadenylation. For example, the expression of CstF-64, a subunit of cleavage stimulatory factor (CstF), increases in macrophages in response to lipopolysaccharides (a group of bacterial compounds that trigger an immune response). This results in the selection of weak poly(A) sites and thus shorter transcripts. This removes regulatory elements in the 3′ untranslated regions of mRNAs for defense-related products like lysozyme and TNF-α. These mRNAs then have longer half-lives and produce more of these proteins. RNA-binding proteins other than those in the polyadenylation machinery can also affect whether a polyadenylation site is used, as can DNA methylation near the polyadenylation signal. In addition, numerous other components involved in transcription, splicing or other mechanisms regulating RNA biology can affect APA. Tagging for degradation in eukaryotes For many non-coding RNAs, including tRNA, rRNA, snRNA, and snoRNA, polyadenylation is a way of marking the RNA for degradation, at least in yeast. This polyadenylation is done in the nucleus by the TRAMP complex, which maintains a tail that is around 4 nucleotides long to the 3′ end. The RNA is then degraded by the exosome. Poly(A) tails have also been found on human rRNA fragments, both the form of homopolymeric (A only) and heterpolymeric (mostly A) tails. In prokaryotes and organelles In many bacteria, both mRNAs and non-coding RNAs can be polyadenylated. This poly(A) tail promotes degradation by the degradosome, which contains two RNA-degrading enzymes: polynucleotide phosphorylase and RNase E. Polynucleotide phosphorylase binds to the 3′ end of RNAs and the 3′ extension provided by the poly(A) tail allows it to bind to the RNAs whose secondary structure would otherwise block the 3′ end. Successive rounds of polyadenylation and degradation of the 3′ end by polynucleotide phosphorylase allows the degradosome to overcome these secondary structures. The poly(A) tail can also recruit RNases that cut the RNA in two. These bacterial poly(A) tails are about 30 nucleotides long. In as different groups as animals and trypanosomes, the mitochondria contain both stabilising and destabilising poly(A) tails. Destabilising polyadenylation targets both mRNA and noncoding RNAs. The poly(A) tails are 43 nucleotides long on average. The stabilising ones start at the stop codon, and without them the stop codon (UAA) is not complete as the genome only encodes the U or UA part. Plant mitochondria have only destabilising polyadenylation. Mitochondrial polyadenylation has never been observed in either budding or fission yeast. 
While many bacteria and mitochondria have polyadenylate polymerases, they also have another type of polyadenylation, performed by polynucleotide phosphorylase itself. This enzyme is found in bacteria, mitochondria, plastids and as a constituent of the archaeal exosome (in those archaea that have an exosome). It can synthesise a 3′ extension where the vast majority of the bases are adenines. Like in bacteria, polyadenylation by polynucleotide phosphorylase promotes degradation of the RNA in plastids and likely also archaea. Evolution Although polyadenylation is seen in almost all organisms, it is not universal. However, the wide distribution of this modification and the fact that it is present in organisms from all three domains of life implies that the last universal common ancestor of all living organisms, it is presumed, had some form of polyadenylation system. A few organisms do not polyadenylate mRNA, which implies that they have lost their polyadenylation machineries during evolution. Although no examples of eukaryotes that lack polyadenylation are known, mRNAs from the bacterium Mycoplasma gallisepticum and the salt-tolerant archaean Haloferax volcanii lack this modification. The most ancient polyadenylating enzyme is polynucleotide phosphorylase. This enzyme is part of both the bacterial degradosome and the archaeal exosome, two closely related complexes that recycle RNA into nucleotides. This enzyme degrades RNA by attacking the bond between the 3′-most nucleotides with a phosphate, breaking off a diphosphate nucleotide. This reaction is reversible, and so the enzyme can also extend RNA with more nucleotides. The heteropolymeric tail added by polynucleotide phosphorylase is very rich in adenine. The choice of adenine is most likely the result of higher ADP concentrations than other nucleotides as a result of using ATP as an energy currency, making it more likely to be incorporated in this tail in early lifeforms. It has been suggested that the involvement of adenine-rich tails in RNA degradation prompted the later evolution of polyadenylate polymerases (the enzymes that produce poly(A) tails with no other nucleotides in them). Polyadenylate polymerases are not as ancient. They have separately evolved in both bacteria and eukaryotes from CCA-adding enzyme, which is the enzyme that completes the 3′ ends of tRNAs. Its catalytic domain is homologous to that of other polymerases. It is presumed that the horizontal transfer of bacterial CCA-adding enzyme to eukaryotes allowed the archaeal-like CCA-adding enzyme to switch function to a poly(A) polymerase. Some lineages, like archaea and cyanobacteria, never evolved a polyadenylate polymerase. Polyadenylate tails are observed in several RNA viruses, including Influenza A, Coronavirus, Alfalfa mosaic virus, and Duck Hepatitis A. Some viruses, such as HIV-1 and Poliovirus, inhibit the cell's poly-A binding protein (PABPC1) in order to emphasize their own genes' expression over the host cell's. History Poly(A)polymerase was first identified in 1960 as an enzymatic activity in extracts made from cell nuclei that could polymerise ATP, but not ADP, into polyadenine. Although identified in many types of cells, this activity had no known function until 1971, when poly(A) sequences were found in mRNAs. The only function of these sequences was thought at first to be protection of the 3′ end of the RNA from nucleases, but later the specific roles of polyadenylation in nuclear export and translation were identified. 
The polymerases responsible for polyadenylation were first purified and characterized in the 1960s and 1970s, but the large number of accessory proteins that control this process were discovered only in the early 1990s. See also SV40 References Further reading External links Gene expression Messenger RNA
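As a toy illustration of the signal geometry described in the Mechanism section above (the canonical AAUAAA polyadenylation signal with cleavage roughly 10–30 nucleotides downstream), the sketch below scans an RNA sequence for the hexamer; the example sequence and the helper function are hypothetical, not drawn from the article.

```python
def find_polyA_signals(rna: str, signal: str = "AAUAAA"):
    """Return (signal_position, cleavage_window) pairs for each canonical hexamer found."""
    hits = []
    start = rna.find(signal)
    while start != -1:
        # Cleavage typically occurs ~10-30 nt downstream of the signal (see Mechanism above)
        window = (start + len(signal) + 10, start + len(signal) + 30)
        hits.append((start, window))
        start = rna.find(signal, start + 1)
    return hits

# Hypothetical 3' end of a pre-mRNA (not a real transcript)
example = "GCUAGCAAUAAAGCUGUGUGUUUGUACCGUAGCAUGCAUGC"
print(find_polyA_signals(example))   # one hit at position 6 with its downstream cleavage window
```

Real polyadenylation-site prediction also weighs the weaker signal variants and the downstream GU-rich CstF element described above; this sketch only locates the canonical motif.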
Polyadenylation
[ "Chemistry", "Biology" ]
4,282
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
617,058
https://en.wikipedia.org/wiki/Transcription%20preinitiation%20complex
The preinitiation complex (abbreviated PIC) is a complex of approximately 100 proteins that is necessary for the transcription of protein-coding genes in eukaryotes and archaea. The preinitiation complex positions RNA polymerase II (Pol II) at gene transcription start sites, denatures the DNA, and positions the DNA in the RNA polymerase II active site for transcription. The minimal PIC includes RNA polymerase II and six general transcription factors: TFIIA, TFIIB, TFIID, TFIIE, TFIIF, and TFIIH. Additional regulatory complexes (such as the mediator coactivator and chromatin remodeling complexes) may also be components of the PIC. Preinitiation complexes are also formed during RNA Polymerase I and RNA Polymerase III transcription. Assembly (RNA Polymerase II) A classical view of PIC formation at the promoter involves the following steps: TATA binding protein (TBP, a subunit of TFIID) binds the promoter, creating a sharp bend in the promoter DNA. Animals have some TBP-related factors (TRF; TBPL1/TBPL2). They can replace TBP in some special contexts. TBP recruits TFIIA, then TFIIB, to the promoter. TFIIB recruits RNA polymerase II and TFIIF to the promoter. TFIIE joins the growing complex and recruits TFIIH which has protein kinase activity (phosphorylates RNA polymerase II within the CTD) and DNA helicase activity (unwinds DNA at promoter). It also recruits nucleotide-excision repair proteins. Subunits within TFIIH that have ATPase and helicase activity create negative superhelical tension in the DNA. Negative superhelical tension causes approximately one turn of DNA to unwind and form the transcription bubble. The template strand of the transcription bubble engages with the RNA polymerase II active site. RNA synthesis begins. After synthesis of ~10 nucleotides of RNA, and an obligatory phase of several abortive transcription cycles, RNA polymerase II escapes the promoter region to transcribe the remainder of the gene. An alternative hypothesis of PIC assembly postulates the recruitment of a pre-assembled "RNA polymerase II holoenzyme" directly to the promoter (composed of all, or nearly all GTFs and RNA polymerase II and regulatory complexes), in a manner similar to the bacterial RNA polymerase (RNAP). Other preinitiation complexes In Archaea Archaea have a preinitiation complex resembling that of a minimized Pol II PIC, with a TBP and an Archaeal transcription factor B (TFB, a TFIIB homolog). The assembly follows a similar sequence, starting with TBP binding to the promoter. An interesting aspect is that the entire complex is bound in an inverse orientation compared to those found in eukaryotic PIC. They also use TFE, a TFIIE homolog, which assists in transcription initiation but is not required. RNA Polymerase I (Pol I) Formation of the Pol I preinitiation complex requires the binding of selective factor 1 (SL1 or TIF-IB) to the core element of the rDNA promoter. SL1 is a complex composed of TBP and at least three TBP-associated factors (TAFs). For basal levels of transcription, only SL1 and the initiation-competent form of Pol I (Pol Iβ), characterized by RRN3 binding, are required. For activated transcription levels, UBTF (UBF) is also required. UBTF binds as a dimer to both the upstream control element (UCE) and core element of the rDNA promoter, bending the DNA to form an enhanceosome. SL1 has been found to stabilize the binding of UBTF to the rDNA promoter. The subunits of the Pol I PIC differ between organisms. 
RNA Polymerase III (Pol III) Pol III has three classes of initiation, which start with different factors recognizing different control elements but all converging on TFIIIB (similar to TFIIB-TBP; consists of TBP/TRF, a TFIIB-related factor, and a B″ unit) recruiting the Pol III preinitiation complex. The overall architecture resembles that of Pol II. Only TFIIIB needs to remain attached during elongation. References External links Descriptive image – biochem.ucl.ac.uk Gene expression
Transcription preinitiation complex
[ "Chemistry", "Biology" ]
932
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
9,542,388
https://en.wikipedia.org/wiki/Hypothalamic%E2%80%93pituitary%E2%80%93thyroid%20axis
The hypothalamic–pituitary–thyroid axis (HPT axis for short, a.k.a. thyroid homeostasis or thyrotropic feedback control) is part of the neuroendocrine system responsible for the regulation of metabolism and also responds to stress. As its name suggests, it depends upon the hypothalamus, the pituitary gland, and the thyroid gland. The hypothalamus senses low circulating levels of thyroid hormone (Triiodothyronine (T3) and Thyroxine (T4)) and responds by releasing thyrotropin-releasing hormone (TRH). The TRH stimulates the anterior pituitary to produce thyroid-stimulating hormone (TSH). The TSH, in turn, stimulates the thyroid to produce thyroid hormone until levels in the blood return to normal. Thyroid hormone exerts negative feedback control over the hypothalamus as well as anterior pituitary, thus controlling the release of both TRH from hypothalamus and TSH from anterior pituitary gland. The HPA, HPG, and HPT axes are three pathways in which the hypothalamus and pituitary direct neuroendocrine function. Physiology Thyroid homeostasis results from a multi-loop feedback system that is found in virtually all higher vertebrates. Proper function of thyrotropic feedback control is indispensable for growth, differentiation, reproduction and intelligence. Very few animals (e.g. axolotls and sloths) have impaired thyroid homeostasis that exhibits a very low set-point that is assumed to underlie the metabolic and ontogenetic anomalies of these animals. The pituitary gland secretes thyrotropin (TSH; Thyroid Stimulating Hormone) that stimulates the thyroid to secrete thyroxine (T4) and, to a lesser degree, triiodothyronine (T3). The major portion of T3, however, is produced in peripheral organs, e.g. liver, adipose tissue, glia and skeletal muscle by deiodination from circulating T4. Deiodination is controlled by numerous hormones and nerval signals including TSH, vasopressin and catecholamines. Both peripheral thyroid hormones (iodothyronines) inhibit thyrotropin secretion from the pituitary (negative feedback). Consequently, equilibrium concentrations for all hormones are attained. TSH secretion is also controlled by thyrotropin releasing hormone (thyroliberin, TRH), whose secretion itself is again suppressed by plasma T4 and T3 in CSF (long feedback, Fekete–Lechan loop). Additional feedback loops are ultrashort feedback control of TSH secretion (Brokken-Wiersinga-Prummel loop) and linear feedback loops controlling plasma protein binding. Recent research suggested the existence of an additional feedforward motif linking TSH release to deiodinase activity in humans. The existence of this TSH-T3 shunt could explain why deiodinase activity is higher in hypothyroid patients and why a minor fraction of affected individuals may benefit from substitution therapy with T3. Convergence of multiple afferent signals in the control of TSH release including but not limited to T3, cytokines and TSH receptor antibodies may be the reason for the observation that the relation between free T4 concentration and TSH levels deviates from a pure loglinear relation that has previously been proposed. Recent research suggests that ghrelin also plays a role in the stimulation of T4 production and the subsequent suppression of TSH directly and by negative feedback. Functional states of thyrotropic feedback control Euthyroidism: Normal thyroid function Hypothyroidism: Reduced thyroid function primary hypothyroidism: Feedback loop interrupted by low thyroid secretory capacity, e.g. 
after thyroid surgery or in case of autoimmune thyroiditis secondary hypothyroidism: Feedback loop interrupted at the level of the pituitary, e.g. in anterior pituitary failure tertiary hypothyroidism: Lacking stimulation by TRH, e.g. in hypothalamic failure, Pickardt–Fahlbusch syndrome or euthyroid sick syndrome. Hyperthyroidism: Inappropriately increased thyroid function primary hyperthyroidism: Inappropriate secretion of thyroid hormones, e.g. in case of Graves' disease. secondary hyperthyroidism: Rare condition, e.g. in case of TSH-producing pituitary adenoma or partial thyroid hormone resistance. Thyrotoxicosis: Oversupply of thyroid hormones, e.g. from overdosed exogenous levothyroxine supplementation. Low-T3 syndrome and high-T3 syndrome: Consequences of step-up hypodeiodination, e.g. in critical illness as an example of type 1 allostasis, or hyperdeiodination, as in type 2 allostasis, including posttraumatic stress disorder. Resistance to thyroid hormone: Feedback loop interrupted at the level of pituitary thyroid hormone receptors. Diagnostics Standard procedures cover the determination of serum levels of the following hormones: TSH (thyrotropin, thyroid stimulating hormone) Free T4 Free T3 For special conditions the following assays and procedures may be required: Total T4 Total T3 TBG TRH test Thyroid's secretory capacity (GT) Sum activity of peripheral deiodinases (GD) TSH Index (TSHI) See also Thyroid function tests Hypothalamic–pituitary–adrenal axis Hypothalamic–pituitary–gonadal axis Hypothalamic–neurohypophyseal system SimThyr, a free computer simulation for thyroid homeostasis in humans References Further reading Hormones of the hypothalamus-pituitary-thyroid axis Biomedical cybernetics Human homeostasis
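The classical log-linear relationship referred to above (the form from which the measured TSH–FT4 relation is said to deviate) can be sketched compactly; the coefficients α and β below are illustrative placeholders rather than values from this article:

\[
\log_{10}(\mathrm{TSH}) \approx \alpha - \beta \cdot \mathrm{FT_4}
\]

Under this approximation each fixed increment in free T4 suppresses TSH by a constant multiplicative factor; the work cited above argues that the many converging afferent signals make the real relation more complex.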
Hypothalamic–pituitary–thyroid axis
[ "Biology" ]
1,255
[ "Human homeostasis", "Homeostasis" ]
9,544,968
https://en.wikipedia.org/wiki/Coulomb%20damping
Coulomb damping is a type of constant mechanical damping in which the system's kinetic energy is absorbed via sliding friction (the friction generated by the relative motion of two surfaces that press against each other). Coulomb damping is a common damping mechanism that occurs in machinery. History Coulomb damping was so named because Charles-Augustin de Coulomb carried on research in mechanics. He later published a work on friction in 1781 entitled "Theory of Simple Machines" for an Academy of Sciences contest. Coulomb then gained much fame for his work with electricity and magnetism. Modes of Coulombian friction Coulomb damping absorbs energy with friction, which converts that kinetic energy into thermal energy, i.e. heat. Coulomb friction considers this under two distinct modes: either static or kinetic. Static friction occurs when two objects are not in relative motion, e.g. if both are stationary. The force exerted between the objects does not exceed, in magnitude, the product of the normal force N and the coefficient of static friction μ_s: F_s ≤ μ_s N. Kinetic friction, on the other hand, occurs when two objects are undergoing relative motion, as they slide against each other. The force exerted between the moving objects is equal in magnitude to the product of the normal force N and the coefficient of kinetic friction μ_k: F_k = μ_k N. Regardless of the mode, friction always acts to oppose the objects' relative motion. The normal force is taken perpendicularly to the direction of relative motion; under the influence of gravity, and in the common case of an object supported by a horizontal surface, the normal force is just the weight of the object itself. As there is no relative motion under static friction, no work is done, and hence no energy can be dissipated. An oscillating system is (by definition) only damped via kinetic friction. Illustration Consider a block of mass m that slides over a rough horizontal surface under the restraint of a spring with a spring constant k. The spring is attached to the block and mounted to an immobile object on the other end, allowing the block to be moved by the force of the spring −kx, where x is the horizontal displacement of the block from the position where the spring is unstretched. On a horizontal surface, the normal force is constant and equal to the weight of the block by Newton's third law, i.e. N = mg. As stated earlier, the friction force μ_k N acts to oppose the motion of the block. Once in motion, the block will oscillate horizontally back and forth around the equilibrium. Newton's second law states that the equation of motion of the block is m ẍ = −kx − μ_k m g sgn(ẋ). Above, ẋ and ẍ respectively denote the velocity and acceleration of the block. Note that the sign of the kinetic friction term depends on sgn(ẋ) (the direction the block is travelling in) but not on the speed. A real-life example of Coulomb damping occurs in large structures with non-welded joints such as airplane wings. Theory Coulomb damping dissipates energy constantly because of sliding friction. The magnitude of sliding friction is a constant value; independent of surface area, displacement or position, and velocity. The system undergoing Coulomb damping is periodic or oscillating and restrained by the sliding friction. Essentially, the object in the system is vibrating back and forth around an equilibrium point. A system being acted upon by Coulomb damping is nonlinear because the frictional force always opposes the direction of motion of the system, as stated earlier. And because there is friction present, the amplitude of the motion decreases or decays with time.
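The per-cycle loss of amplitude can be obtained with a standard energy argument (a textbook derivation, sketched here rather than quoted from the article): over each half cycle the block slides from one extreme displacement X_1 to the next extreme X_2 against the constant friction force μ_k m g, so equating the drop in stored spring energy to the work dissipated by friction gives

\[
\tfrac{1}{2} k X_1^2 - \tfrac{1}{2} k X_2^2 = \mu_k m g\,(X_1 + X_2)
\quad\Longrightarrow\quad
X_1 - X_2 = \frac{2 \mu_k m g}{k}.
\]

The amplitude therefore shrinks by 4μ_k m g / k every full cycle; dividing by the period 2π/ω_n gives the constant envelope slope quoted next.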
Under the influence of Coulomb damping, the amplitude decays linearly with a slope of −2μ_k g/(π ω_n), where ω_n is the natural frequency. The natural frequency is the number of oscillations the undamped system completes in a fixed time interval. It should also be noted that the frequency and the period of vibration do not change when the damping is constant, as in the case of Coulomb damping. The period τ is the amount of time between the repetition of phases during vibration. As time progresses, the sliding object slows and the distance it travels during these oscillations becomes smaller, until it reaches zero, the equilibrium point. The position where the object stops, or its equilibrium position, could potentially be at a completely different position than when initially at rest, because the system is nonlinear. Linear systems have only a single equilibrium point. See also Dry friction Viscous damping References External links Friction (Archived 2009-10-31) - Microsoft Encarta Online Encyclopedia 2006 Coulomb Damping - Science and Engineering Encyclopedia Mechanical vibrations
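A minimal numerical sketch of the block-and-spring system above (the parameter values are arbitrary illustrations, not taken from the article); successive peaks of the motion drop by roughly 4μ_k m g/k per cycle, confirming the linear decay of the envelope:

```python
import numpy as np

# Arbitrary illustrative parameters (not from the article)
m, k, mu, g = 1.0, 100.0, 0.05, 9.81    # mass, spring constant, friction coefficient, gravity
dt, x, v = 1e-4, 0.10, 0.0              # time step, initial displacement, initial velocity

peaks = []
prev_v = v
for step in range(int(5.0 / dt)):       # simulate 5 seconds
    a = (-k * x - mu * m * g * np.sign(v)) / m   # Coulomb (dry) friction opposes velocity
    v += a * dt                                  # semi-implicit Euler update
    x += v * dt
    if prev_v > 0 >= v:                  # velocity sign change: record a positive turning point
        peaks.append(abs(x))
    prev_v = v
    if abs(v) < 1e-3 and abs(k * x) < mu * m * g:
        break                            # spring can no longer overcome friction: motion stops

# Successive peaks drop by roughly 4*mu*m*g/k per cycle (about 0.0196 m here)
print(peaks[:5], 4 * mu * m * g / k)
```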
Coulomb damping
[ "Physics", "Engineering" ]
933
[ "Structural engineering", "Mechanics", "Mechanical vibrations" ]
9,549,222
https://en.wikipedia.org/wiki/Galactosylceramide
A galactosylceramide, or galactocerebroside is a type of cerebroside consisting of a ceramide with a galactose residue at the 1-hydroxyl moiety. The galactose is cleaved by galactosylceramidase. Galactosylceramide is a marker for oligodendrocytes in the brain, whether or not they form myelin. Additional images See also Alpha-Galactosylceramide Krabbe disease Myelin References External links CHEMBL110111 Glycolipids
Galactosylceramide
[ "Chemistry", "Biology" ]
124
[ "Carbohydrates", "Biotechnology stubs", "Glycolipids", "Biochemistry stubs", "Biochemistry", "Glycobiology" ]
9,552,096
https://en.wikipedia.org/wiki/Support%20polygon
For a rigid object in contact with a fixed environment and acted upon by gravity in the vertical direction, its support polygon is a horizontal region over which the center of mass must lie to achieve static stability. For example, for an object resting on a horizontal surface (e.g. a table), the support polygon is the convex hull of its "footprint" on the table. The support polygon succinctly represents the conditions necessary for an object to be at equilibrium under gravity. That is, if the object's center of mass lies over the support polygon, then there exists a set of forces over the region of contact that exactly counteracts the forces of gravity. Note that this is a necessary condition for stability, but not a sufficient one. Derivation Let the object be in contact at a finite number of points C_1, ..., C_N. At each point C_i, let FC_i be the set of forces that can be applied on the object at that point. Here, FC_i is known as the friction cone, and for the Coulomb model of friction, it is actually a cone with apex at the origin, extending to infinity in the normal direction of the contact. Let f_1, ..., f_N be the (unspecified) forces at the contact points. To balance the object in static equilibrium, the following Newton-Euler equations must be met on f_1, ..., f_N: the force balance Σ_i f_i + G = 0, the torque balance Σ_i C_i × f_i + CM × G = 0, and f_i ∈ FC_i for all i, where G is the force of gravity on the object and CM is its center of mass. The first two equations are the Newton-Euler equations, and the third requires all forces to be valid. If there is no set of forces that meet all these conditions, the object will not be in equilibrium. The second equation has no dependence on the vertical component of the center of mass, and thus if a solution exists for one position of CM, the same solution works for every position directly above or below it. Therefore, the set of all CM that have solutions to the above conditions is a set that extends infinitely in the up and down directions. The support polygon is simply the projection of this set on the horizontal plane. These results can easily be extended to different friction models and an infinite number of contact points (i.e. a region of contact). Properties Even though the word "polygon" is used to describe this region, in general it can be any convex shape with curved edges. The support polygon is invariant under translations and rotations about the gravity vector (that is, if the contact points and friction cones were translated and rotated about the gravity vector, the support polygon is simply translated and rotated). If the friction cones are convex cones (as they typically are), the support polygon is always a convex region. It is also invariant to the mass of the object (provided it is nonzero). If all contacts lie on a (not necessarily horizontal) plane, and the friction cones at all contacts contain the negative gravity vector −G, then the support polygon is the convex hull of the contact points projected onto the horizontal plane. References Classical mechanics
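A small sketch of the last property above (contacts lying on a plane, with friction cones containing −G, so the support polygon reduces to the convex hull of the projected contacts); the contact coordinates describe a hypothetical four-legged table and the helper function is an assumption of this sketch, not code from the article:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical contact points of a four-legged table (x, y, z), in metres
contacts = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [1.0, 0.6, 0.0],
                     [0.0, 0.6, 0.0]])

# Simple case: support polygon = convex hull of the contacts projected onto the horizontal plane
hull = ConvexHull(contacts[:, :2])

def center_of_mass_is_supported(cm_xy, hull, tol=1e-9):
    """True if the horizontal projection of the center of mass lies inside the support polygon."""
    # hull.equations rows are [a, b, c] with a*x + b*y + c <= 0 for points inside the hull
    return bool(np.all(hull.equations[:, :2] @ cm_xy + hull.equations[:, 2] <= tol))

print(center_of_mass_is_supported(np.array([0.5, 0.3]), hull))  # True: CM over the footprint
print(center_of_mass_is_supported(np.array([1.4, 0.3]), hull))  # False: not over the footprint, cannot balance
```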
Support polygon
[ "Physics" ]
594
[ "Mechanics", "Classical mechanics" ]
13,562,287
https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%201
Cyclin-dependent kinase 1 also known as CDK1 or cell division cycle protein 2 homolog is a highly conserved protein that functions as a serine/threonine protein kinase, and is a key player in cell cycle regulation. It has been highly studied in the budding yeast S. cerevisiae, and the fission yeast S. pombe, where it is encoded by genes cdc28 and cdc2, respectively. With its cyclin partners, Cdk1 forms complexes that phosphorylate a variety of target substrates (over 75 have been identified in budding yeast); phosphorylation of these proteins leads to cell cycle progression. Structure Cdk1 is a small protein (approximately 34 kilodaltons), and is highly conserved. The human homolog of Cdk1, CDK1, shares approximately 63% amino-acid identity with its yeast homolog. Furthermore, human CDK1 is capable of rescuing fission yeast carrying a cdc2 mutation. Cdk1 is comprised mostly by the bare protein kinase motif, which other protein kinases share. Cdk1, like other kinases, contains a cleft in which ATP fits. Substrates of Cdk1 bind near the mouth of the cleft, and Cdk1 residues catalyze the covalent bonding of the γ-phosphate to the oxygen of the hydroxyl serine/threonine of the substrate. In addition to this catalytic core, Cdk1, like other cyclin-dependent kinases, contains a T-loop, which, in the absence of an interacting cyclin, prevents substrate binding to the Cdk1 active site. Cdk1 also contains a PSTAIRE helix, which, upon cyclin binding, moves and rearranges the active site, facilitating Cdk1 kinase activities. Function When bound to its cyclin partners, Cdk1 phosphorylation leads to cell cycle progression. Cdk1 activity is best understood in S. cerevisiae, so Cdk1 S. cerevisiae activity is described here. In the budding yeast, initial cell cycle entry is controlled by two regulatory complexes, SBF (SCB-binding factor) and MBF (MCB-binding factor). These two complexes control G1/S gene transcription; however, they are normally inactive. SBF is inhibited by the protein Whi5; however, when phosphorylated by Cln3-Cdk1, Whi5 is ejected from the nucleus, allowing for transcription of the G1/S regulon, which includes the G1/S cyclins Cln1,2. G1/S cyclin-Cdk1 activity leads to preparation for S phase entry (e.g., duplication of centromeres or the spindle pole body), and a rise in the S cyclins (Clb5,6 in S. cerevisiae). Clb5,6-Cdk1 complexes directly lead to replication origin initiation; however, they are inhibited by Sic1, preventing premature S phase initiation. Cln1,2 and/or Clb5,6-Cdk1 complex activity leads to a sudden drop in Sic1 levels, allowing for coherent S phase entry. Finally, phosphorylation by M cyclins (e.g., Clb1, 2, 3 and 4) in complex with Cdk1 leads to spindle assembly and sister chromatid alignment. Cdk1 phosphorylation also leads to the activation of the ubiquitin-protein ligase APCCdc20, an activation which allows for chromatid segregation and, furthermore, degradation of M-phase cyclins. This destruction of M cyclins leads to the final events of mitosis (e.g., spindle disassembly, mitotic exit). Regulation Given its essential role in cell cycle progression, Cdk1 is highly regulated. Most obviously, Cdk1 is regulated by its binding with its cyclin partners. Cyclin binding alters access to the active site of Cdk1, allowing for Cdk1 activity; furthermore, cyclins impart specificity to Cdk1 activity. At least some cyclins contain a hydrophobic patch which may directly interact with substrates, conferring target specificity. 
Furthermore, cyclins can target Cdk1 to particular subcellular locations. In addition to regulation by cyclins, Cdk1 is regulated by phosphorylation. Phosphorylation of a conserved tyrosine (Tyr15 in humans) leads to inhibition of Cdk1; this phosphorylation is thought to alter ATP orientation, preventing efficient kinase activity. In S. pombe, for example, incomplete DNA synthesis may lead to stabilization of this phosphorylation, preventing mitotic progression. Wee1, conserved among all eukaryotes, phosphorylates Tyr15, whereas members of the Cdc25 family are phosphatases, counteracting this activity. The balance between the two is thought to help govern cell cycle progression. Wee1 is controlled upstream by Cdr1, Cdr2, and Pom1. Cdk1-cyclin complexes are also governed by direct binding of Cdk inhibitor proteins (CKIs). One such protein, already discussed, is Sic1. Sic1 is a stoichiometric inhibitor that binds directly to Clb5,6-Cdk1 complexes. Multisite phosphorylation of Sic1 by Cln1/2-Cdk1 is thought to time Sic1 ubiquitination and destruction, and by extension, the timing of S-phase entry. Only once Sic1 inhibition is overcome can Clb5,6-Cdk1 activity occur and S phase initiation begin. Interactions Cdk1 has been shown to interact with: BCL2, CCNB1, CCNE1, CDKN3, DAB2, FANCC, GADD45A, LATS1, LYN, P53, and UBC. See also E2F (E2F/pRb complexes) Hyperphosphorylation cdc25 Maturation promoting factor CDK cyclin A cyclin B cyclin D cyclin E Wee (cell cycle) Mastl References Further reading External links Cell cycle Proteins EC 2.7.11 Cell cycle regulators
Cyclin-dependent kinase 1
[ "Chemistry", "Biology" ]
1,370
[ "Biomolecules by chemical classification", "Signal transduction", "Cellular processes", "Molecular biology", "Proteins", "Cell cycle", "Cell cycle regulators" ]
13,563,938
https://en.wikipedia.org/wiki/Universal%20parabolic%20constant
The universal parabolic constant is a mathematical constant. It is defined as the ratio, for any parabola, of the arc length of the parabolic segment formed by the latus rectum to the focal parameter. The focal parameter is twice the focal length. The ratio is denoted P. In the diagram, the latus rectum is pictured in blue, the parabolic segment that it forms in red and the focal parameter in green. (The focus of the parabola is the point F and the directrix is the line L.) The value of P is √2 + ln(1 + √2) ≈ 2.29559. The circle and parabola are unique among conic sections in that they have a universal constant. The analogous ratios for ellipses and hyperbolas depend on their eccentricities. This means that all circles are similar and all parabolas are similar, whereas ellipses and hyperbolas are not. Derivation Take y = x²/(4f) as the equation of the parabola, where f is the focal length. The focal parameter is p = 2f and the semilatus rectum is ℓ = 2f. Properties P is a transcendental number. Proof. Suppose that P is algebraic. Then P − √2 = ln(1 + √2) must also be algebraic. However, by the Lindemann–Weierstrass theorem, e^(ln(1+√2)) = 1 + √2 would then be transcendental, which is not the case. Hence P is transcendental. Since P is transcendental, it is also irrational. Applications The average distance from a point randomly selected in the unit square to its center is P/6. Proof. Splitting the square into eight congruent triangles gives the average distance as 8∫₀^(1/2)∫₀^x √(x² + y²) dy dx = P/6. There is also an interesting geometrical reason why this constant appears in unit squares. The average distance between a center of a unit square and a point on the square's boundary is P/4. If we uniformly sample every point on the perimeter of the square, take line segments (drawn from the center) corresponding to each point, add them together by joining each line segment next to the other, scaling them down, the curve obtained is a parabola. References and footnotes Mathematical constants Conic sections Parabolas Real transcendental numbers
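The arc-length computation behind the Derivation section can be written out compactly (a standard calculus exercise, sketched here with the substitution t = x/(2f)):

\[
P = \frac{1}{p}\int_{-\ell}^{\ell}\sqrt{1+\left(\frac{dy}{dx}\right)^{2}}\,dx
  = \frac{1}{2f}\int_{-2f}^{2f}\sqrt{1+\frac{x^{2}}{4f^{2}}}\,dx
  = \int_{-1}^{1}\sqrt{1+t^{2}}\,dt
  = \sqrt{2}+\ln\!\left(1+\sqrt{2}\right).
\]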
Universal parabolic constant
[ "Mathematics" ]
396
[ "Mathematical constants", "Mathematical objects", "Numbers", "nan" ]
13,566,263
https://en.wikipedia.org/wiki/Dukhin%20number
The Dukhin number (Du) is a dimensionless quantity that characterizes the contribution of the surface conductivity to various electrokinetic and electroacoustic effects, as well as to the electrical conductivity and permittivity of fluid heterogeneous systems. The number was named after Stanislav and Andrei Dukhin. Overview It was introduced by Lyklema in “Fundamentals of Interface and Colloid Science”. A recent IUPAC Technical Report used this term explicitly and detailed several means of measurement in physical systems. The Dukhin number is the ratio of the surface conductivity Kσ to the fluid bulk electrical conductivity Km multiplied by the particle size a: Du = Kσ/(Km·a). There is another expression of this number that is valid when the surface conductivity is associated only with the motion of ions above the slipping plane in the double layer. In this case, the value of the surface conductivity depends on the ζ-potential, which leads to a second expression for the Dukhin number for a symmetrical electrolyte with equal ion diffusion coefficients, in which the parameter m characterizes the contribution of electro-osmosis to the motion of ions within the double layer, F is the Faraday constant, T is the absolute temperature, R is the gas constant, C is the bulk ion concentration, z is the ion valency, ζ is the electrokinetic potential, ε0 is the vacuum dielectric permittivity, εm is the fluid dielectric permittivity, η is the dynamic viscosity, and D is the diffusion coefficient. References Chemical mixtures Colloidal chemistry Condensed matter physics Soft matter
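A small numeric illustration of the defining ratio above; the particle size and conductivity values below are arbitrary examples, not figures taken from the article:

```python
# Dukhin number Du = K_sigma / (K_m * a), using illustrative (not sourced) values
K_sigma = 1.0e-9   # surface conductivity, S (siemens)
K_m     = 0.01     # bulk electrolyte conductivity, S/m (on the order of a ~1 mM salt solution)
a       = 100e-9   # particle radius, m

Du = K_sigma / (K_m * a)
print(f"Du = {Du:.2f}")   # Du ~ 1: surface conduction is as important as bulk conduction
```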
Dukhin number
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
302
[ "Colloidal chemistry", "Soft matter", "Phases of matter", "Materials science", "Colloids", "Surface science", "Chemical mixtures", "Condensed matter physics", "nan", "Matter" ]
13,566,984
https://en.wikipedia.org/wiki/Double%20layer%20%28surface%20science%29
In surface science, a double layer (DL, also called an electrical double layer, EDL) is a structure that appears on the surface of an object when it is exposed to a fluid. The object might be a solid particle, a gas bubble, a liquid droplet, or a porous body. The DL refers to two parallel layers of charge surrounding the object. The first layer, the surface charge (either positive or negative), consists of ions which are adsorbed onto the object due to chemical interactions. The second layer is composed of ions attracted to the surface charge via the Coulomb force, electrically screening the first layer. This second layer is loosely associated with the object. It is made of free ions that move in the fluid under the influence of electric attraction and thermal motion rather than being firmly anchored. It is thus called the "diffuse layer". Interfacial DLs are most apparent in systems with a large surface-area-to-volume ratio, such as a colloid or porous bodies with particles or pores (respectively) on the scale of micrometres to nanometres. However, DLs are important to other phenomena, such as the electrochemical behaviour of electrodes. DLs play a fundamental role in many everyday substances. For instance, homogenized milk exists only because fat droplets are covered with a DL that prevents their coagulation into butter. DLs exist in practically all heterogeneous fluid-based systems, such as blood, paint, ink and ceramic and cement slurry. The DL is closely related to electrokinetic phenomena and electroacoustic phenomena. Development of the (interfacial) double layer Helmholtz When an electronic conductor is brought in contact with a solid or liquid ionic conductor (electrolyte), a common boundary (interface) among the two phases appears. Hermann von Helmholtz was the first to realize that charged electrodes immersed in electrolyte solutions repel the co-ions of the charge while attracting counterions to their surfaces. Two layers of opposite polarity form at the interface between electrode and electrolyte. In 1853, he showed that an electrical double layer (DL) is essentially a molecular dielectric and stores charge electrostatically. Below the electrolyte's decomposition voltage, the stored charge is linearly dependent on the voltage applied. This early model predicted a constant differential capacitance independent from the charge density depending on the dielectric constant of the electrolyte solvent and the thickness of the double-layer. This model, while a good foundation for the description of the interface, does not consider important factors including diffusion/mixing of ions in solution, the possibility of adsorption onto the surface, and the interaction between solvent dipole moments and the electrode. Gouy–Chapman Louis Georges Gouy in 1910 and David Leonard Chapman in 1913 both observed that capacitance was not a constant and that it depended on the applied potential and the ionic concentration. The "Gouy–Chapman model" made significant improvements by introducing a diffuse model of the DL. In this model, the charge distribution of ions as a function of distance from the metal surface allows Maxwell–Boltzmann statistics to be applied. Thus the electric potential decreases exponentially away from the surface of the fluid bulk. Gouy-Chapman layers may bear special relevance in bioelectrochemistry. 
The observation of long-distance inter-protein electron transfer through the aqueous solution has been attributed to a diffuse region between redox partner proteins (cytochromes c and c1) that is depleted of cations in comparison to the solution bulk, thereby leading to reduced screening, electric fields extending several nanometers, and currents decreasing quasi-exponentially with distance at a rate of ~1 nm⁻¹. This region is termed the "Gouy-Chapman conduit" and is strongly regulated by phosphorylation, which adds one negative charge to the protein surface that disrupts cationic depletion and prevents long-distance charge transport. Similar effects are observed at the redox active site of photosynthetic complexes. Stern The Gouy-Chapman model fails for highly charged DLs. In 1924, Otto Stern suggested combining the Helmholtz model with the Gouy-Chapman model: in Stern's model, some ions adhere to the electrode as suggested by Helmholtz, giving an internal Stern layer, while some form a Gouy-Chapman diffuse layer. The Stern layer accounts for ions' finite size, and consequently an ion's closest approach to the electrode is on the order of the ionic radius. The Stern model has its own limitations, namely that it effectively treats ions as point charges, assumes all significant interactions in the diffuse layer are Coulombic, assumes the dielectric permittivity to be constant throughout the double layer, and assumes the fluid viscosity to be constant above the slipping plane. Grahame D. C. Grahame modified the Stern model in 1947. He proposed that some ionic or uncharged species can penetrate the Stern layer, although the closest approach to the electrode is normally occupied by solvent molecules. This could occur if ions lose their solvation shell as they approach the electrode. He called ions in direct contact with the electrode "specifically adsorbed ions". This model proposed the existence of three regions. The inner Helmholtz plane (IHP) passes through the centres of the specifically adsorbed ions. The outer Helmholtz plane (OHP) passes through the centres of solvated ions at the distance of their closest approach to the electrode. Finally, the diffuse layer is the region beyond the OHP. Bockris/Devanathan/Müller (BDM) In 1963, J. O'M. Bockris, M. A. V. Devanathan and Klaus Müller proposed the BDM model of the double-layer that included the action of the solvent in the interface. They suggested that the attached molecules of the solvent, such as water, would have a fixed alignment to the electrode surface. This first layer of solvent molecules displays a strong orientation to the electric field depending on the charge. This orientation has great influence on the permittivity of the solvent, which varies with field strength. The IHP passes through the centers of these molecules. Specifically adsorbed, partially solvated ions appear in this layer. The solvated ions of the electrolyte are outside the IHP, and the OHP passes through their centers. The diffuse layer is the region beyond the OHP. Trasatti/Buzzanca Further research with double layers on ruthenium dioxide films in 1971 by Sergio Trasatti and Giovanni Buzzanca demonstrated that the electrochemical behavior of these electrodes at low voltages with specifically adsorbed ions was like that of capacitors. The specific adsorption of the ions in this region of potential could also involve a partial charge transfer between the ion and the electrode. It was the first step towards understanding pseudocapacitance.
Conway Between 1975 and 1980, Brian Evans Conway conducted extensive fundamental and development work on ruthenium oxide electrochemical capacitors. In 1991, he described the difference between 'Supercapacitor' and 'Battery' behavior in electrochemical energy storage. In 1999, he coined the term supercapacitor to explain the increased capacitance by surface redox reactions with faradaic charge transfer between electrodes and ions. His "supercapacitor" stored electrical charge partially in the Helmholtz double-layer and partially as the result of faradaic reactions with "pseudocapacitance" charge transfer of electrons and protons between electrode and electrolyte. The working mechanisms of pseudocapacitors are redox reactions, intercalation and electrosorption. Marcus The physical and mathematical basics of electron charge transfer absent chemical bonds leading to pseudocapacitance was developed by Rudolph A. Marcus. Marcus Theory explains the rates of electron transfer reactions—the rate at which an electron can move from one chemical species to another. It was originally formulated to address outer sphere electron transfer reactions, in which two chemical species change only in their charge, with an electron jumping. For redox reactions without making or breaking bonds, Marcus theory takes the place of Henry Eyring's transition state theory which was derived for reactions with structural changes. Marcus received the Nobel Prize in Chemistry in 1992 for this theory. Mathematical description There are detailed descriptions of the interfacial DL in many books on colloid and interface science and microscale fluid transport. There is also a recent IUPAC technical report on the subject of interfacial double layer and related electrokinetic phenomena. As stated by Lyklema, "...the reason for the formation of a "relaxed" ("equilibrium") double layer is the non-electric affinity of charge-determining ions for a surface..." This process leads to the buildup of an electric surface charge, expressed usually in C/m2. This surface charge creates an electrostatic field that then affects the ions in the bulk of the liquid. This electrostatic field, in combination with the thermal motion of the ions, creates a counter charge, and thus screens the electric surface charge. The net electric charge in this screening diffuse layer is equal in magnitude to the net surface charge, but has the opposite polarity. As a result, the complete structure is electrically neutral. The diffuse layer, or at least part of it, can move under the influence of tangential stress. There is a conventionally introduced slipping plane that separates mobile fluid from fluid that remains attached to the surface. Electric potential at this plane is called electrokinetic potential or zeta potential (also denoted as ζ-potential). The electric potential on the external boundary of the Stern layer versus the bulk electrolyte is referred to as Stern potential. Electric potential difference between the fluid bulk and the surface is called the electric surface potential. Usually zeta potential is used for estimating the degree of DL charge. A characteristic value of this electric potential in the DL is 25 mV with a maximum value around 100 mV (up to several volts on electrodes). The chemical composition of the sample at which the ζ-potential is 0 is called the point of zero charge or the iso-electric point. It is usually determined by the solution pH value, since protons and hydroxyl ions are the charge-determining ions for most surfaces. 
Zeta potential can be measured using electrophoresis, electroacoustic phenomena, streaming potential, and electroosmotic flow. The characteristic thickness of the DL is the Debye length, κ⁻¹. It is inversely proportional to the square root of the ion concentration C. In aqueous solutions it is typically on the scale of a few nanometers, and the thickness decreases with increasing concentration of the electrolyte. The electric field strength inside the DL can be anywhere from zero to over 10⁹ V/m. These steep electric potential gradients are the reason for the importance of the DLs. The theory for a flat surface and a symmetrical electrolyte is usually referred to as the Gouy-Chapman theory. It yields a simple relationship between the electric charge in the diffuse layer σd and the Stern potential Ψd: σd = −√(8 ε0 εm R T C) · sinh(z F Ψd / (2 R T)), where εm is the relative permittivity of the medium, R is the gas constant, T the absolute temperature, z the ion valency and F the Faraday constant. There is no general analytical solution for mixed electrolytes, curved surfaces or even spherical particles. There is an asymptotic solution for spherical particles with weakly charged DLs. In the case when the electric potential over the DL is less than 25 mV, the so-called Debye-Hückel approximation holds. It yields the following expression for the electric potential Ψ in the spherical DL as a function of the distance r from the particle center, with a the particle radius: Ψ(r) = Ψd · (a/r) · exp(−κ(r − a)). There are several asymptotic models which play important roles in theoretical developments associated with the interfacial DL. The first one is the "thin DL". This model assumes that the DL is much thinner than the colloidal particle or capillary radius. This restricts the value of the Debye length and the particle radius as follows: κa ≫ 1. This model offers tremendous simplifications for many subsequent applications. The theory of electrophoresis is just one example. The theory of electroacoustic phenomena is another example. The thin DL model is valid for most aqueous systems because the Debye length is only a few nanometers in such cases. It breaks down only for nano-colloids in solutions with an ionic strength close to that of pure water. The opposing "thick DL" model assumes that the Debye length is larger than the particle radius: κa < 1. This model can be useful for some nano-colloids and non-polar fluids, where the Debye length is much larger. The last model introduces "overlapped DLs". This is important in concentrated dispersions and emulsions when distances between particles become comparable with the Debye length. Electrical double layers The electrical double layer (EDL) is the result of the variation of electric potential near a surface, and has a significant influence on the behaviour of colloids and other surfaces in contact with solutions or solid-state fast ion conductors. The primary difference between a double layer on an electrode and one on an interface is the mechanism of surface charge formation. With an electrode, it is possible to regulate the surface charge by applying an external electric potential. This application, however, is impossible in colloidal and porous double layers, because for colloidal particles, one does not have access to the interior of the particle to apply a potential difference. EDLs are analogous to the double layer in plasma. Differential capacitance EDLs have an additional parameter defining their characterization: differential capacitance. Differential capacitance, denoted as C, is described by the equation C = dσ/dψ, where σ is the surface charge and ψ is the electric surface potential.
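As a numerical illustration of the relations quoted above, the following Python sketch evaluates the Gouy–Chapman diffuse-layer charge for a given Stern potential and checks the thin-DL criterion κa for a hypothetical colloidal particle. All inputs are assumptions chosen for the example.

import numpy as np

# Constants (SI units)
eps0 = 8.854e-12   # vacuum permittivity, F/m
R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # temperature, K
F = 96485.0        # Faraday constant, C/mol
eps_m = 78.5       # relative permittivity of water (assumed)

# Assumed, illustrative inputs
C = 0.001 * 1000   # 1:1 electrolyte concentration: 0.001 mol/L expressed in mol/m^3
z = 1              # ion valency of the symmetric electrolyte
psi_d = 0.025      # assumed Stern potential, V
a = 100e-9         # assumed colloidal particle radius, m

# Gouy-Chapman relation between the diffuse-layer charge and the Stern potential
sigma_d = -np.sqrt(8 * eps0 * eps_m * R * T * C) * np.sinh(z * F * psi_d / (2 * R * T))
print(f"diffuse-layer charge ~ {sigma_d * 1000:.2f} mC/m^2")

# Debye length of the same electrolyte
kappa = np.sqrt(2 * F**2 * z**2 * C / (eps0 * eps_m * R * T))
print(f"Debye length ~ {1e9 / kappa:.1f} nm")

# Thin-DL criterion: kappa*a >> 1 means the DL is much thinner than the particle
print(f"kappa*a ~ {kappa * a:.1f} (thin-DL model applicable if this is >> 1)")

For these assumed values the Debye length is roughly 10 nm, so κa is about 10 and the thin-DL approximation is only marginally applicable; lowering the ionic strength toward that of pure water, or shrinking the particle, pushes the system toward the thick-DL regime.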
Electron transfer in electrical double layer The formation of the electrical double layer (EDL) has traditionally been assumed to be entirely dominated by ion adsorption and redistribution. Given that contact electrification between solids is dominated by electron transfer, Wang has suggested that the EDL is formed by a two-step process. In the first step, when the molecules in the solution first approach a virgin surface that has no pre-existing surface charges, the atoms/molecules in the solution may interact directly with the atoms on the solid surface and form a strong overlap of electron clouds. Electron transfer occurs first, making the "neutral" atoms on the solid surface charged, i.e., forming ions. In the second step, if ions such as H+ and OH– are present in the liquid, the loosely distributed negative ions in the solution are attracted to migrate toward the surface-bonded ions by electrostatic interactions, forming an EDL. Both electron transfer and ion transfer thus co-exist at the liquid-solid interface. See also Depletion region (structure of semiconductor junction) DLVO theory Electroosmotic pump Interface and colloid science Nanofluidics Poisson-Boltzmann equation Supercapacitor References Further reading External links The Electrical Double Layer Chemical mixtures Colloidal chemistry Surface science Electrochemistry Matter Soft matter
Double layer (surface science)
[ "Physics", "Chemistry", "Materials_science" ]
3,100
[ "Colloidal chemistry", "Soft matter", "Surface science", "Colloids", "Electrochemistry", "Chemical mixtures", "Condensed matter physics", "nan", "Matter" ]
13,567,507
https://en.wikipedia.org/wiki/Saxon%20Shore
The Saxon Shore () was a military command of the Late Roman Empire, consisting of a series of fortifications on both sides of the Channel. It was established in the late 3rd century and was led by the "Count of the Saxon Shore". In the late 4th century, his functions were limited to Britain, while the fortifications in Gaul were established as separate commands. Several well-preserved Saxon Shore forts survive in east and south-east England. Background During the latter half of the 3rd century, the Roman Empire faced a grave crisis: Weakened by civil wars, the rapid succession of short-lived emperors, and secession in the provinces, the Romans now faced new waves of attacks by barbarian tribes. Most of Britain had been part of the empire since the mid-1st century. It was protected from raids in the north by the Hadrianic and Antonine Walls, while a fleet of some size was also available. However, as the frontiers came under increasing external pressure, fortifications were built throughout the Empire in order to protect cities and guard strategically important locations. It is in this context that the forts of the Saxon Shore were constructed. Already in the 230s, under Severus Alexander, several units had been withdrawn from the northern frontier and garrisoned at locations in the south, and had built new forts at Brancaster and Caister-on-Sea in Norfolk and Reculver in Kent. Dover was already fortified in the early 2nd century, and the other forts in this group were constructed in the period between the 270s and 290s. Meaning of the term and role The only contemporary reference we possess that mentions the name "Saxon Shore" comes in the late 4th-century Notitia Dignitatum, which lists its commander, the Comes Litoris Saxonici per Britanniam ("Count of the Saxon Shore in Britain"), and gives the names of the sites under his command and their respective complements of military personnel. However, due to the absence of further evidence, theories have varied among scholars as to the exact meaning of the name, and also the nature and purpose of the chain of forts it refers to. Two interpretations were put forward as to the meaning of the adjective "Saxon": either a shore attacked by Saxons, or a shore settled by Saxons. Some argue that the latter hypothesis is supported by Eutropius, who states that during the 280s the sea along the coasts of Belgica and Armorica was "infested with Franks and Saxons", and that this was why Carausius was first put in charge of the fleet there. It also receives support from archaeological finds, as artefacts of a Germanic style have been found in burials, while there is evidence of the presence of Saxons in southern England and the northern coasts of Gaul around Boulogne-sur-Mer and Bayeux from the middle of the 5th century onwards. This, in turn, could mirror a well documented practice of deliberately settling Germanic tribes (Franks became foederati in 358 AD under Emperor Julian) to strengthen Roman defences. Nevertheless, the evidence for extensive Saxon settlement in Britain typically dates to the 5th century, later than the channel defences of the late 3rd and 4th century associated with the Saxon Shore. The other interpretation holds that the forts fulfilled a coastal defence role against seaborne invaders, mostly Saxons and Franks, and acted as bases for the naval units operating against them. 
This view is reinforced by the parallel chain of fortifications across the Channel on the northern coasts of Gaul, which complemented the British forts, suggesting a unified defensive system, although this could also be accounted for by the Saxons having been settled on both sides of the Channel, as the archaeological evidence presented earlier suggests. Other scholars, such as John Cotterill, however, consider the threat posed by Germanic raiders, at least in the 3rd and early 4th centuries, to be exaggerated. They interpret the construction of the forts at Brancaster, Caister-on-Sea and Reculver in the early 3rd century and their location at the estuaries of navigable rivers as pointing to a different role: fortified points for transport and supply between Britain and Gaul, without any relation (at least at that time) to countering seaborne piracy. This view is supported by contemporary references to the supplying of the army of Julian (at that time Caesar) with grain from Britain during his campaign in Gaul in 359, and to the forts' use as secure landing places by Count Theodosius during the suppression of the Great Conspiracy a few years later. Another theory, proposed by D.A. White, was that the extended system of large stone forts was disproportionate to any threat by seaborne Germanic raiders, and that it was actually conceived and constructed during the secession of Carausius and Allectus (the Carausian Revolt) in 289–296, and with an entirely different enemy in mind: they were to guard against an attempt at reconquest by the Empire. This view, although widely disputed, has found recent support from archaeological evidence at Pevensey, which dates the fort's construction to the early 290s. Whatever their original purpose, it is virtually certain that in the late 4th century the forts and their garrisons were employed in operations against Frankish and Saxon pirates. Britain was abandoned by Rome in 410, with Armorica following soon after. The forts on both sides continued to be inhabited in the following centuries, and in Britain in particular several continued in use well into the Anglo-Saxon period. The forts In Britain The nine forts mentioned in the Notitia Dignitatum for Britain are listed here, from north to south, with their garrisons. Branodunum (Brancaster, Norfolk). One of the earliest forts, dated to the 230s. It was built to guard the Wash approaches and is of a typical rectangular castrum layout. It was garrisoned by the Equites Dalmatae Brandodunenses, although evidence exists suggesting that its original garrison was the cohors I Aquitanorum. Gariannonum (Burgh Castle, Norfolk). Established between 260 and the mid-270s to guard the River Yare (Gariannus Fluvius), it was garrisoned by the Equites Stablesiani Gariannoneses. There is, however, some discussion as to whether this entry actually refers to the fort at Caister-on-Sea, which lies on the opposite bank of the same estuary from Burgh Castle. Othona (Bradwell-on-Sea, Essex). Garrisoned by the Numerus Fortensium. Regulbium (Reculver, Kent). Together with Brancaster one of the earliest forts, built in the 210s to guard the Thames estuary, it is likewise a castrum. It had been garrisoned by the cohors I Baetasiorum since the 3rd century. Rutupiae (Richborough, Kent), garrisoned by parts of the Legio II Augusta. Dubris (Dover Castle, Kent), garrisoned by the Milites Tungrecani. Portus Lemanis (Lympne, Kent), garrisoned by the Numerus Turnacensium. Anderitum (Pevensey Castle, East Sussex), garrisoned by the Numerus Abulcorum.
Portus Adurni (Portchester Castle, Hampshire), garrisoned by a Numerus Exploratorum. There are a few other sites that clearly belonged to the system of the British branch of the Saxon Shore (the so-called "Wash-Solent limes"), although they are not included in the Notitia, such as the forts at Walton Castle, Suffolk, which has by now sunk into the sea due to erosion, and at Caister-on-Sea in Norfolk. In the south, Carisbrooke Castle on the Isle of Wight and Clausentum (Bitterne, in modern Southampton) are also regarded as westward extensions of the fortification chain. Other sites probably connected to the Saxon Shore system are the sunken fort at Skegness, and the remains of possible signal stations at Thornham in Norfolk, Corton in Suffolk and Hadleigh in Essex. Further north on the coast, the precautions took the form of central depots at Lindum (Lincoln) and Malton with roads radiating to coastal signal stations. When an alert was relayed to the base, troops could be dispatched along the road. Further up the coast in North Yorkshire, a series of coastal watchtowers (at Huntcliff, Filey, Ravenscar, Goldsborough, and Scarborough) was constructed, linking the southern defences to the northern military zone of the Wall. Similar coastal fortifications are also found in Wales, at Cardiff and Caer Gybi. The only fort in this style in the northern military zone is Lancaster, Lancashire, built sometime in the mid-late 3rd century replacing an earlier fort and extramural community, which may reflect the extent of coastal protection on the north-west coast from invading tribes from Ireland. In Gaul The Notitia also includes two separate commands for the northern coast of Gaul, both of which belonged to the Saxon Shore system. However, when the list was compiled, in , Britain had been abandoned by Roman forces. The first command controlled the shores of the province Belgica Secunda (roughly between the estuaries of the Scheldt and the Somme), under the dux Belgicae Secundae with headquarters at Portus Aepatiaci: Marcae (unidentified location near Calais, possibly Marquise or Marck), garrisoned by the Equites Dalmatae. In the Notitia, together with Grannona, it is the only site on the Gallic shore to be explicitly referred to as lying in litore Saxonico. Locus Quartensis sive Hornensis (probably at the mouth of the Somme), the port of the classis Sambrica ("Fleet of the Somme") Portus Aepatiaci (possibly Étaples), garrisoned by the milites Nervii. Although not mentioned in the Notitia, the port of Gesoriacum or Bononia (Boulogne-sur-Mer), which until 296 was the main base of the Classis Britannica, would also have come under the dux Belgicae Secundae. To this group also belongs the Roman fort at Oudenburg in Belgium. Further west, under the dux tractus Armoricani et Nervicani, were mainly the coasts of Armorica, nowadays Normandy and Brittany. The Notitia lists the following sites: Grannona (disputed location, either at the mouths of the Seine or at Port-en-Bessin), the seat of the dux, garrisoned by the cohors prima nova Armoricana. In the Notitia, it is explicitly mentioned as lying in litore Saxonico. 
Rotomagus (Rouen), garrisoned by the milites Ursariensii Constantia (Coutances), garrisoned by the legio I Flavia Gallicana Constantia Abricantis (Avranches), garrisoned by the milites Dalmati Grannona (uncertain whether this is a different location than the first Grannona, perhaps Granville), garrisoned by the milites Grannonensii Aleto or Aletum (Aleth, near Saint-Malo), garrisoned by the milites Martensii Osismis (Brest), garrisoned by the milites Mauri Osismiaci Blabia (perhaps Hennebont), garrisoned by the milites Carronensii Benetis (possibly Vannes), garrisoned by the milites Mauri Beneti Manatias (Nantes), garrisoned by the milites superventores In addition, there are several other sites where a Roman military presence has been suggested. At Alderney, the fort known as "The Nunnery" is known to date to Roman times, and the settlement at Longy Common has been cited as evidence of a Roman military establishment, though the archaeological evidence there is, at best, scant. In popular culture In 1888, Alfred Church wrote a historical novel entitled The Count of the Saxon Shore. It is available online. The American band Saxon Shore takes its name from the region. The Saxon Shore is the fourth book in Jack Whyte's Camulod Chronicles. Since 1980, the "Saxon Shore Way" exists, a coastal footpath in Kent which passes by many of the forts. David Rudkin's play The Saxon Shore takes place near Hadrian's Wall as the Romans are withdrawing from Britain. References Notes Sources Cottrell, Leonard (1964). The Roman Forts of the Saxon Shore, London: HMSO. Myers John N.L. (1986) The English Settlements, Oxford University Press Strugnell, Kenneth Wenham (1973). Seagates to the Saxon Shore, Terence Dalton Ltd. External links The Saxon Shore forts on "Roman Britain" Sites of the Litus Saxonicum forts on Google Maps History of Pevensey Castle Fortifications in France Fortification lines 4th century in Roman Gaul Roman Britain Roman fortifications in England Roman fortifications in France Military history of the English Channel
Saxon Shore
[ "Engineering" ]
2,703
[ "Fortification lines", "Saxon Shore" ]
13,567,555
https://en.wikipedia.org/wiki/RAR-related%20orphan%20receptor%20alpha
RAR-related orphan receptor alpha (RORα), also known as NR1F1 (nuclear receptor subfamily 1, group F, member 1) is a nuclear receptor that in humans is encoded by the RORA gene. RORα participates in the transcriptional regulation of some genes involved in circadian rhythm. In mice, RORα is essential for development of cerebellum through direct regulation of genes expressed in Purkinje cells. It also plays an essential role in the development of type 2 innate lymphoid cells (ILC2) and mutant animals are ILC2 deficient. In addition, although present in normal numbers, the ILC3 and Th17 cells from RORα deficient mice are defective for cytokine production. Discovery The first three-human isoforms of RORα were initially cloned and characterized as nuclear receptors in 1994 by Giguère and colleagues, when their structure and function were first studied. In the early 2000s, various studies demonstrated that RORα displays rhythmic patterns of expression in a circadian cycle in the liver, kidney, retina, and lung. Of interest, it was around this time that RORα abundance was found to be circadian in the mammalian suprachiasmatic nucleus. RORα is necessary for normal circadian rhythms in mice, demonstrating its importance in chronobiology. Structure The protein encoded by this gene is a member of the NR1 subfamily of nuclear hormone receptors. In humans, 4 isoforms of RORα have been identified, which are generated via alternative splicing and promoter usage, and exhibit differential tissue-specific expression. The protein structure of RORα consists of four canonical functional groups: an N-terminal (A/B) domain, a DNA-binding domain containing two zinc fingers, a hinge domain, and a C-terminal ligand-binding domain. Within the ROR family, the DNA-binding domain is highly conserved, and the ligand-binding domain is only moderately conserved. Different isoforms of RORα have different binding specificities and strengths of transcriptional activity. Regulation of circadian rhythm The core mammalian circadian clock is a negative feedback loop which consists of Per1/Per2, Cry1/Cry2, Bmal1, and Clock. This feedback loop is stabilized through another loop involving the transcriptional regulation of Bmal1. Transactivation of Bmal1 is regulated through the upstream ROR/REV-ERB Response Element (RRE) in the Bmal1 promoter, to which RORα and REV-ERBα bind. This stabilizing regulatory loop itself is induced by the Bmal1/Clock heterodimer, which induces transcription of RORα and REV-ERBα. RORα, which activates transcription of Bmal1, and REV-ERBα, which represses transcription of Bmal1, compete to bind to the RRE. This feedback loop regulating the expression of Bmal1 is thought to stabilize the core clock mechanism, helping to buffer it against changes in the environment. Mechanism Specific association with ROR elements (RORE) in regulatory regions is necessary for RORα's function as a transcriptional activator. RORα achieves this by specific binding to a consensus core motif in RORE, RGGTCA. This interaction is possible through the association of RORα's first zinc finger with the core motif in the major groove, the P-box, and the association of its C-terminal extension with the AT-rich region in the 5’ region of RORE. Homology RORα, RORβ, and RORγ are all transcriptional activators recognizing ROR-response elements. ROR-alpha is expressed in a variety of cell types and is involved in regulating several aspects of development, inflammatory responses, and lymphocyte development. 
The RORα isoforms (RORα1 through RORα3) arise via alternative RNA processing, with RORα2 and RORα3 sharing an amino-terminal region different from RORα1. In contrast to RORα, RORβ is expressed in Central Nervous System (CNS) tissues involved in processing sensory information and in generating circadian rhythms while RORγ is critical in lymph node organogenesis and thymopoeisis. The DNA-binding domains of the DHR3 orphan receptor in Drosophila shows especially close homology within amino and carboxy regions adjacent to the second zinc finger region in RORα, suggesting that this group of residues is important for the proteins' functionalities. PDP1 and VRI in Drosophila regulate circadian rhythm's by competing for the same binding site, the VP box, similarly to how ROR and REV-ERB competitively bind to RRE. PDP1 and VRI constitute a feedback loop and are functional homologs of ROR and REV-ERB in mammals. Direct orthologs of this gene have been identified in mice and humans. Human cytochrome c pseudogene HC2 and RORα share overlapping genomic organization with the HC2 pseudogene located within the RORα2 transcription unit. The nucleotide and deduced amino acid sequences of cytochrome c-processed pseudogene are on the sense strand while those of the RORα2 amino-terminal exon are on the antisense strand. Interactions DNA: RORα binds to the P-box of the RORE. Co-activators: SRC-1, CBP, p300, TRIP-l, TRIP-230, transcription intermediary protein-1 (TIF-1), peroxisome proliferator-binding protein (PBP), and GRIP-1 physically interact with RORα. LXXLL motif: ROR interacts with SRC-1, GRIP-l, CBP, and p300 via the LXXLL (L=Leucine, X=any amino acid) motifs on these proteins. Ubiquitination: RORα is targeted for the proteasome by ubiquitination. A co-repressor, Hairless, stabilizes RORα by protecting it from this process, which also represses RORα. Sumoylation: UBE21/UBC9: Ubiquitin-conjugating enzyme I interacts with RORs, but its effect is not yet known. Phosphorylation: Phosphorylation of RORα1, which inhibits its transcriptional activity, is induced by Protein Kinase C. ERK2: Extracellular signal-regulated kinase-2 also phosphorylates RORα. ATXN1: ATXN1 and RORα form part of a protein complex in Purkinje cells. FOXP3: FOXP3 directly represses the transcriptional activity of RORs. NME1: ROR has been shown to specifically interact with NME1. NM23-2: NM23-2 is a nucleoside diphosphate kinase involved in organogenesis and differentiation. NM23-1: NM23-1 is the product of a tumor metastasis suppressor candidate gene. As a drug target Because RORα and REV-ERBα are nuclear receptors that share the same target genes and are involved in processes that regulate metabolism, development, immunity, and circadian rhythm, they show potential as drug targets. Synthetic ligands have a variety of potential therapeutic uses, and can be used to treat diseases such as diabetes, atherosclerosis, autoimmunity, and cancer. T0901317 and SR1001, two synthetic ligands, have been found to be RORα and RORγ inverse agonists that suppress reporter activity and have been shown to delay onset and clinical severity of multiple sclerosis and other Th17 cell-mediated autoimmune diseases. SR1078 has been discovered as a RORα and RORγ agonist that increases the expression of G6PC and FGF21, yielding the therapeutic potential to treat obesity and diabetes as well as cancer of the breast, ovaries, and prostate. SR3335 has also been discovered as a RORα inverse agonist. 
CGP 52608 See also RAR-related orphan receptor REV-ERBα Aromatase deficiency References Further reading External links Intracellular receptors Transcription factors
RAR-related orphan receptor alpha
[ "Chemistry", "Biology" ]
1,761
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
13,572,433
https://en.wikipedia.org/wiki/Gough%E2%80%93Joule%20effect
The Gough–Joule effect (a.k.a. Gow–Joule effect) is originally the tendency of elastomers to contract when heated if they are under tension. Elastomers that are not under tension do not see this effect. The term is also used more generally to refer to the dependence of the temperature of any solid on the mechanical deformation. This effect can be observed in nylon strings of classical guitars, whereby the string contracts as a result of heating. The effect is due to the decrease of entropy when long chain molecules are stretched. If an elastic band is first stretched and then subjected to heating, it will shrink rather than expand. This effect was first observed by John Gough in 1802, and was investigated further by James Joule in the 1850s, when it then became known as the Gough–Joule effect. Examples in Literature: Popular Science magazine, January 1972: "A stretched piece of rubber contracts when heated. In doing so, it exerts a measurable increase in its pull. This surprising property of rubber was first observed by James Prescott Joule about a hundred years ago and is known as the Joule effect." Rubber as an Engineering Material (book), by Khairi Nagdi: "The Joule effect is a phenomenon of practical importance that must be considered by machine designers. The simplest way of demonstrating this effect is to suspend a weight on a rubber band sufficient to elongate it at least 50%. When the stretched rubber band is warmed up by an infrared lamp, it does not elongate because of thermal expansion, as may be expected, but it retracts and lifts the weight." The effect is important in O-ring seal design, where the seals can be mounted in a peripherally compressed state in hot applications to prolong life. The effect is also relevant to rotary seals which can bind if the seal shrinks due to overheating. References External links O-ring Gland design notes, PSP Inc. A solar power science project using the Gow-Joule effect Elastomers Condensed matter physics Rubber properties James Prescott Joule
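The entropic explanation given above can be made quantitative with the ideal (freely jointed) chain model of rubber elasticity, in which the retractive force of a stretched chain is f = 3kBT·x/(N·b²) and is therefore proportional to the absolute temperature at fixed extension; at constant load, heating a stretched elastomer makes it pull back harder and contract. The Python sketch below uses purely illustrative chain parameters that are assumptions for the example, not values from the sources cited here.

kB = 1.381e-23   # Boltzmann constant, J/K

# Assumed ideal-chain parameters (illustrative only)
N = 1000         # number of statistical (Kuhn) segments per chain
b = 0.5e-9       # Kuhn segment length, m
x = 50e-9        # fixed end-to-end extension of the chain, m

# Entropic retractive force of an ideal chain: f = 3 kB T x / (N b^2)
for T in (280.0, 300.0, 320.0, 340.0):
    f = 3 * kB * T * x / (N * b**2)
    print(f"T = {T:.0f} K: retractive force ~ {f * 1e12:.2f} pN")

# The force rises linearly with temperature, so a band held at constant
# load shortens on heating, which is the Gough-Joule behaviour described above.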
Gough–Joule effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
434
[ "Materials science stubs", "Synthetic materials", "Phases of matter", "Materials science", "Elastomers", "Condensed matter physics", "Condensed matter stubs", "Matter" ]
174,311
https://en.wikipedia.org/wiki/Benjamin%20Thompson
Colonel Sir Benjamin Thompson, Count Rumford, FRS (26 March 1753 – 21 August 1814), was an American-born British military officer, scientist, inventor and nobleman. Born in Woburn, Massachusetts, he supported the Loyalist cause during the American War of Independence, commanding the King's American Dragoons during the conflict. After the war ended in 1783, Thompson moved to London, where he was recognised for his administrative talents and received a knighthood from George III in 1784. A prolific scientist and inventor, Thompson also created several new warship designs. He subsequently moved to the Electorate of Bavaria and entered into the employ of the Bavarian government, heavily reorganising the Bavarian Army. Thompson was rewarded for his efforts by being made an Imperial Count in 1792 before dying in Paris in 1814. Early years Thompson was born in rural Woburn, in the Province of Massachusetts Bay, on 26 March 1753; his birthplace is preserved as a museum. He was educated mainly at the village school, although he sometimes walked almost ten miles to Cambridge with the older Loammi Baldwin to attend lectures by Professor John Winthrop of Harvard College. At the age of 13 he was apprenticed to John Appleton, a merchant of nearby Salem. Thompson excelled at his trade and, coming into contact with refined and well-educated people for the first time, adopted many of their characteristics, including an interest in science. While recuperating in Woburn in 1769 from an injury, Thompson conducted his first experiments studying the nature of heat and began to correspond with Baldwin and others about them. Later that year he worked several months for a Boston shopkeeper and then apprenticed himself briefly, and unsuccessfully, to a doctor in Woburn. Thompson's prospects were dim in 1772, but in that year they changed abruptly. He met, charmed and married a rich and well-connected widow, an heiress named Sarah Rolfe (née Walker). Her father was a minister, and her late husband had left her property at Rumford, Province of New Hampshire, which lies within the modern city of Concord. They moved to Portsmouth, and through his wife's influence with the governor, he was appointed a major in the New Hampshire Militia. Their child (also named Sarah) was born in 1774. American Revolutionary War When the American Revolutionary War began, Thompson, by now a wealthy and influential landowner, came out in opposition to the uprising. He soon used his connections in the state militia to recruit and arm loyalists seeking to aid British forces fighting the rebels. This earned him the enmity of New Hampshire's Patriot faction; he was stripped of his command, and a mob attacked and burned Thompson's house. He fled to the British lines, abandoning his wife, as it turned out, permanently. Thompson became a political and military advisor to General Thomas Gage (to whom he was already passing information on the Americans), and later assisted Lord George Germain in the organization and provisioning of Loyalist units. In 1781, Thompson financed his own military unit, the King's American Dragoons, which primarily served on Long Island in 1782 and early 1783, where they earned local notoriety for demolishing a church and burial ground in order to erect Fort Golgotha in Huntington. While working with the British armies in America he conducted experiments to measure the force of gunpowder, the results of which were widely acclaimed when published in 1781 in the Philosophical Transactions of the Royal Society.
On the strength of this, he arrived in London at the end of the war with a reputation as an accomplished scientist. Bavarian maturity In 1785, he moved to Bavaria where he became an aide-de-camp to the Prince-elector Charles Theodore. He spent eleven years in Bavaria, reorganizing the army and establishing workhouses for the poor. He also invented Rumford's Soup, a soup for the poor, and established the cultivation of the potato in Bavaria. He studied methods of cooking, heating, and lighting, including the relative costs and efficiencies of wax candles, tallow candles, and oil lamps. On Prince Charles' behalf he created the Englischer Garten in Munich in 1789; it remains today and is known as one of the largest urban public parks in the world. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1789. For his efforts, in 1791 Thompson was made an Imperial Count, becoming Reichsgraf von Rumford. He took the name "Rumford" after the town of Rumford, New Hampshire, which was an older name for Concord where he had been married. Science and engineering Experiments on heat His experiments on gunnery and explosives led to an interest in heat. He devised a method for measuring the specific heat of a solid substance but was disappointed when Johan Wilcke published his parallel discovery first. Thompson next investigated the insulating properties of various materials, including fur, wool and feathers. He correctly appreciated that the insulating properties of these natural materials arise from the fact that they inhibit the convection of air. He then made the somewhat reckless, and incorrect, inference that air and, in fact, all gases, were perfect non-conductors of heat. He further saw this as evidence of the argument from design, contending that divine providence had arranged for fur on animals in such a way as to guarantee their comfort. In 1797, he extended his claim about non-conductivity to liquids. The idea raised considerable objections from the scientific establishment, John Dalton and John Leslie making particularly forthright attacks. Instrumentation far exceeding anything available in terms of accuracy and precision would have been needed to verify Thompson's claim. Again, he seems to have been influenced by his theological beliefs and it is likely that he wished to grant water a privileged and providential status in the regulation of human life. He is considered the founder of the sous-vide food preparation method owing to his experiment with a mutton shoulder. He described this method in one of his essays. Mechanical equivalent of heat Rumford's most important scientific work took place in Munich, and centred on the nature of heat, which he contended in "An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction" (1798) was not the caloric of then-current scientific thinking but a form of motion. Rumford had observed the frictional heat generated by boring cannon at the arsenal in Munich. Rumford immersed a cannon barrel in water and arranged for a specially blunted boring tool. He showed that the water could be boiled within roughly two and a half hours and that the supply of frictional heat was seemingly inexhaustible. Rumford confirmed that no physical change had taken place in the material of the cannon by comparing the specific heats of the material machined away and that remaining. Rumford argued that the seemingly indefinite generation of heat was incompatible with the caloric theory. 
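A rough estimate shows why the cannon-boring observation carried such weight. The article does not give the mass of water or the temperature rise involved, so the numbers in the short Python sketch below are assumptions for illustration only; with a few tens of kilograms of water brought to the boil in about two and a half hours, the implied steady frictional power is of the order of the output of the horses driving the borer.

# Illustrative estimate of the frictional power in a cannon-boring experiment.
# All input values are assumptions, not figures from the text.
m_water = 19.0        # assumed mass of water surrounding the barrel, kg
c_water = 4186.0      # specific heat capacity of water, J/(kg*K)
dT = 80.0             # assumed temperature rise, roughly 20 C to boiling, K
t = 2.5 * 3600.0      # assumed duration, s

heat = m_water * c_water * dT    # heat delivered to the water, J (losses ignored)
power = heat / t                 # implied average frictional power, W

print(f"heat delivered ~ {heat / 1e6:.1f} MJ")
print(f"average power ~ {power:.0f} W (about one horsepower, ~746 W)")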
He contended that the only thing communicated to the barrel was motion. Rumford made no attempt to further quantify the heat generated or to measure the mechanical equivalent of heat. Though this work met with a hostile reception, it was subsequently important in establishing the laws of conservation of energy later in the 19th century. Calorific and frigorific radiation He explained Pictet's experiment, which demonstrates the reflection of cold, by supposing that all bodies emit invisible rays, undulations in the ethereal fluid. He did experiments to support his theories of calorific and frigorific radiation and said the communication of heat was the net effect of calorific (hot) rays and frigorific (cold) rays and the rays emitted by the object. When an object absorbs radiation from a warmer object (calorific rays) its temperature rises, and when it absorbs radiation from a colder object (frigorific rays) its temperature falls. See note 8, "An enquiry concerning the nature of heat and the mode of its communication" Philosophical Transactions of the Royal Society, starting at page 112. Inventions and design improvements Thompson was an active and prolific inventor, developing improvements for chimneys, fireplaces and industrial furnaces, as well as inventing the double boiler, a kitchen range, and a coffee percolator roughly between 1810 and 1814. He invented a percolating coffee pot following his pioneering work with the Bavarian Army, where he improved the diet of the soldiers as well as their clothes. The Rumford fireplace created a sensation in London when he introduced the idea of restricting the chimney opening to increase the updraught, which was a much more efficient way to heat a room than earlier fireplaces. He and his workers modified fireplaces by inserting bricks into the hearth to make the side walls angled, and added a choke to the chimney to increase the speed of air going up the flue. The effect was to produce a streamlined air flow, so all the smoke would go up into the chimney rather than lingering and entering the room. It also had the effect of increasing the efficiency of the fire, and gave extra control of the rate of combustion of the fuel, whether wood or coal. Many fashionable London houses were modified to his instructions, and became smoke-free. Thompson became a celebrity when news of his success spread. His work was also very profitable, and much imitated when he published his analysis of the way chimneys worked. In many ways, he was similar to Benjamin Franklin, who also invented a new kind of heating stove. The retention of heat was a recurring theme in his work, as he is also credited with the invention of thermal underwear. Industrial furnaces Thompson also significantly improved the design of kilns used to produce quicklime, and Rumford furnaces were soon being constructed throughout Europe. The key innovation involved separating the burning fuel from the limestone, so that the lime produced by the heat of the furnace was not contaminated by ash from the fire. Light and photometry Rumford worked in photometry, the measurement of light. He made a photometer and introduced the standard candle, the predecessor of the candela, as a unit of luminous intensity. His standard candle was made from the oil of a sperm whale, to rigid specifications. 
He also published studies of "illusory" or subjective complementary colours, induced by the shadows created by two lights, one white and one coloured; these observations were cited and generalized by Michel-Eugène Chevreul as his "law of simultaneous colour contrast" in 1839. Later life After 1799, he divided his time between France and England. With Sir Joseph Banks, he established the Royal Institution of Great Britain in 1799. The pair chose Sir Humphry Davy as the first lecturer. The institution flourished and became world-famous as a result of Davy's pioneering research. His assistant, Michael Faraday, established the Institution as a premier research laboratory, and also justly famous for its series of public lectures popularizing science. That tradition continues to the present, and the Royal Institution Christmas lectures attract large audiences through their TV broadcasts. Thompson endowed the Rumford medals of the Royal Society and the American Academy of Arts and Sciences, and endowed the Rumford Chair of Physics at Harvard University. In 1803, he was elected a foreign member of the Royal Swedish Academy of Sciences, and as a member of the American Philosophical Society. After several affairs and a close friendship with Mary Temple, Viscountess Palmerston, in 1804, he married Marie-Anne Lavoisier, the widow of the great French chemist Antoine Lavoisier. (His American wife, Sarah—whom he had abandoned at the outbreak of the American Revolution—had died in 1792.) Thompson separated from his second wife after three years, but he settled in Paris and continued his scientific work until his death on 21 August 1814. Thompson is buried in the small cemetery of Auteuil in Paris, just across from Adrien-Marie Legendre. Upon his death, his daughter from his first marriage, Sarah Thompson, inherited his title as Countess Rumford. He was also known to have been a lover of George Germain, 1st Viscount Sackville. Honours Colonel, King's American Dragoons. Knighted, 1784. Count of the Holy Roman Empire, 1791. The crater Rumford on the Moon is named after him. Rumford baking powder (patented 1859) is named after him, having been invented by a former Rumford professor at Harvard University, Eben Norton Horsford (1818–1893), cofounder of the Rumford Chemical Works of East Providence, RI. Rumford Kitchen at the World's Fair in Chicago, 1893. A street in the inner city of Munich is named after him. Rumford Street (and the nearby Rumford Place) in Liverpool, England, are so named due to a soup kitchen established to Count Rumford's plan which formerly stood on land adjacent to Rumford Street. : Order of the White Eagle (1789). Bibliography An Essay on Chimney Fire-Places; With Proposals for Improving Them, to Save Fuel, to Render Dwelling-Houses More Comfortable and Salubrious, and Effectually to Prevent Chimnies from Smoking. Illustrated with Engravings, (1796). Collected Works of Count Rumford, Volume I, The Nature of Heat, (1968). Collected Works of Count Rumford, Volume II, Practical Applications of Heat, (1969). Collected Works of Count Rumford, Volume III, Devices and Techniques, (1969). Collected Works of Count Rumford, Volume IV, Light and Armament, (1970). Collected Works of Count Rumford, Volume V, Public Institutions, (1970). See also History of thermodynamics Citations References Further reading External links Eric Weisstein's World of Science. "Rumford, Benjamin Thompson". (1753–1814) Dr. Hugh C. 
Rowlinson "The Contribution of Count Rumford to Domestic Life in Jane Austen’s Time" An article not only detailing the Rumford fireplace, but also Rumford's life and other achievements. A Biography of Benjamin Thompson, Jr. Written in 1868 Escutcheons of Science Count Rumford's Birth Place and Museum Count Rumford Fireplaces website 1753 births 1814 deaths Loyalists in the American Revolution from New Hampshire American physicists British physicists Rumford Fellows of the American Academy of Arts and Sciences Fellows of the Royal Society Members of the Royal Swedish Academy of Sciences Harvard University people People from the Duchy of Bavaria People from colonial Massachusetts People from colonial New Hampshire People from Woburn, Massachusetts Recipients of the Copley Medal 18th-century American scientists 19th-century American people 18th-century British people 19th-century British people Recipients of the Order of the White Eagle (Poland) Thermodynamicists Knights Bachelor 18th-century English LGBTQ people English LGBTQ politicians
Benjamin Thompson
[ "Physics", "Chemistry" ]
2,990
[ "Thermodynamics", "Thermodynamicists" ]
174,396
https://en.wikipedia.org/wiki/Bohr%20radius
The Bohr radius (a0) is a physical constant, approximately equal to the most probable distance between the nucleus and the electron in a hydrogen atom in its ground state. It is named after Niels Bohr, due to its role in the Bohr model of an atom. Its value is approximately 5.29177 × 10⁻¹¹ m (about 0.529 Å). Definition and value The Bohr radius is defined as a0 = 4πε0ħ² / (me e²) = ħ / (me c α), where ε0 is the permittivity of free space, ħ is the reduced Planck constant, me is the mass of an electron, e is the elementary charge, c is the speed of light in vacuum, and α is the fine-structure constant. The CODATA value of the Bohr radius (in SI units) is a0 = 5.29177210903(80) × 10⁻¹¹ m. History In the Bohr model for atomic structure, put forward by Niels Bohr in 1913, electrons orbit a central nucleus under electrostatic attraction. The original derivation posited that electrons have orbital angular momentum in integer multiples of the reduced Planck constant, which successfully matched the observation of discrete energy levels in emission spectra, along with predicting a fixed radius for each of these levels. In the simplest atom, hydrogen, a single electron orbits the nucleus, and its smallest possible orbit, with the lowest energy, has an orbital radius almost equal to the Bohr radius. (It is not exactly the Bohr radius due to the reduced mass effect. They differ by about 0.05%.) The Bohr model of the atom was superseded by an electron probability cloud adhering to the Schrödinger equation as published in 1926. This is further complicated by spin and quantum vacuum effects to produce fine structure and hyperfine structure. Nevertheless, the Bohr radius formula remains central in atomic physics calculations, due to its simple relationship with fundamental constants (this is why it is defined using the true electron mass rather than the reduced mass, as mentioned above). As such, it became the unit of length in atomic units. In Schrödinger's quantum-mechanical theory of the hydrogen atom, the Bohr radius is the value of the radial coordinate for which the radial probability density of the electron position is highest. The expected value of the radial distance of the electron, by contrast, is 3a0/2. Related constants The Bohr radius is one of a trio of related units of length, the other two being the Compton wavelength of the electron (λe) and the classical electron radius (re). Any one of these constants can be written in terms of any of the others using the fine-structure constant α: re = α² a0 = α λe / (2π). Hydrogen atom and similar systems The Bohr radius including the effect of reduced mass in the hydrogen atom is given by a0* = (me/μ) a0, where μ = me mp / (me + mp) is the reduced mass of the electron–proton system (with mp being the mass of the proton). The use of reduced mass is a generalization of the classical two-body problem beyond the case in which the mass of the orbiting body is assumed to be negligible compared to the mass of the body being orbited. Since the reduced mass of the electron–proton system is a little bit smaller than the electron mass, the "reduced" Bohr radius is slightly larger than the Bohr radius (approximately 5.2946 × 10⁻¹¹ meters). This result can be generalized to other systems, such as positronium (an electron orbiting a positron) and muonium (an electron orbiting an anti-muon) by using the reduced mass of the system and considering the possible change in charge. Typically, Bohr model relations (radius, energy, etc.) can be easily modified for these exotic systems (up to lowest order) by simply replacing the electron mass with the reduced mass for the system (as well as adjusting the charge when appropriate).
For example, the radius of positronium is approximately 2a0, since the reduced mass of the positronium system is half the electron mass (μ = me/2). A hydrogen-like atom will have a Bohr radius which primarily scales as 1/Z, with Z the number of protons in the nucleus. Meanwhile, the reduced mass (μ) only becomes better approximated by me in the limit of increasing nuclear mass. These results are summarized in the equation a = (me/μ) · a0/Z. As approximate relationships: for hydrogen a ≈ a0, for muonium a ≈ a0, for positronium a ≈ 2a0, and for a hydrogen-like ion of atomic number Z, a ≈ a0/Z. See also Bohr magneton Rydberg energy References External links Length Scales in Physics: the Bohr Radius Atomic physics Physical constants Units of length Niels Bohr Atomic radius
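As a quick numerical check of the formulas above, the Python snippet below recomputes the Bohr radius from fundamental constants and applies the reduced-mass and 1/Z corrections for hydrogen, positronium and a hydrogen-like ion. This is a minimal sketch with hard-coded CODATA-style constant values; a library such as scipy.constants could supply them equally well.

import math

# Fundamental constants (SI units)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J*s
m_e = 9.1093837015e-31     # electron mass, kg
m_p = 1.67262192369e-27    # proton mass, kg
q = 1.602176634e-19        # elementary charge, C

# Bohr radius from its definition: a0 = 4*pi*eps0*hbar^2 / (m_e * q^2)
a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * q**2)
print(f"a0 ~ {a0:.6e} m")

# Reduced-mass correction for real hydrogen: a = (m_e / mu) * a0
mu_H = m_e * m_p / (m_e + m_p)
print(f"hydrogen (reduced mass) ~ {a0 * m_e / mu_H:.6e} m")   # about 0.05% larger

# Positronium: the reduced mass is m_e / 2, so the radius is about 2 * a0
print(f"positronium ~ {a0 * 2:.6e} m")

# Hydrogen-like ion with Z protons, ignoring the small reduced-mass correction
Z = 2   # e.g. He+
print(f"He+ (Z = 2) ~ {a0 / Z:.6e} m")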
Bohr radius
[ "Physics", "Chemistry", "Mathematics" ]
856
[ "Units of measurement", "Physical quantities", "Units of length", "Quantity", "Quantum mechanics", "Atomic radius", "Physical constants", "Atomic physics", " molecular", "Atomic", "Atoms", "Matter", " and optical physics" ]
174,412
https://en.wikipedia.org/wiki/Birefringence
Birefringence means double refraction. It is the optical property of a material having a refractive index that depends on the polarization and propagation direction of light. These optically anisotropic materials are described as birefringent or birefractive. The birefringence is often quantified as the maximum difference between the refractive indices exhibited by the material. Crystals with non-cubic crystal structures are often birefringent, as are plastics under mechanical stress. Birefringence is responsible for the phenomenon of double refraction whereby a ray of light, when incident upon a birefringent material, is split by polarization into two rays taking slightly different paths. This effect was first described by Danish scientist Rasmus Bartholin in 1669, who observed it in Iceland spar (calcite) crystals, which have one of the strongest birefringences. In the 19th century Augustin-Jean Fresnel described the phenomenon in terms of polarization, understanding light as a wave with field components in transverse polarization (perpendicular to the direction of the wave vector). Explanation A mathematical description of wave propagation in a birefringent medium is presented below. Following is a qualitative explanation of the phenomenon. Uniaxial materials The simplest type of birefringence is described as uniaxial, meaning that there is a single direction governing the optical anisotropy whereby all directions perpendicular to it (or at a given angle to it) are optically equivalent. Thus rotating the material around this axis does not change its optical behaviour. This special direction is known as the optic axis of the material. Light propagating parallel to the optic axis (whose polarization is always perpendicular to the optic axis) is governed by a refractive index nₒ (for "ordinary") regardless of its specific polarization. For rays with any other propagation direction, there is one linear polarization that is perpendicular to the optic axis, and a ray with that polarization is called an ordinary ray and is governed by the same refractive index value nₒ. For a ray propagating in the same direction but with a polarization perpendicular to that of the ordinary ray, the polarization direction will be partly in the direction of (parallel to) the optic axis, and this extraordinary ray will be governed by a different, direction-dependent refractive index. Because the index of refraction depends on the polarization, when unpolarized light enters a uniaxial birefringent material it is split into two beams travelling in different directions, one having the polarization of the ordinary ray and the other the polarization of the extraordinary ray. The ordinary ray will always experience a refractive index of nₒ, whereas the refractive index of the extraordinary ray will be in between nₒ and nₑ, depending on the ray direction as described by the index ellipsoid. The magnitude of the difference is quantified by the birefringence Δn = nₑ − nₒ. The propagation (as well as the reflection coefficient) of the ordinary ray is simply described by nₒ, as if there were no birefringence involved. The extraordinary ray, as its name suggests, propagates unlike any wave in an isotropic optical material. Its refraction (and reflection) at a surface can be understood using the effective refractive index (a value in between nₒ and nₑ). Its power flow (given by the Poynting vector) is not exactly in the direction of the wave vector.
This causes an additional shift in that beam, even when launched at normal incidence, as is popularly observed using a crystal of calcite as photographed above. Rotating the calcite crystal will cause one of the two images, that of the extraordinary ray, to rotate slightly around that of the ordinary ray, which remains fixed. When the light propagates either along or orthogonal to the optic axis, such a lateral shift does not occur. In the first case, both polarizations are perpendicular to the optic axis and see the same effective refractive index, so there is no extraordinary ray. In the second case the extraordinary ray propagates at a different phase velocity (corresponding to ) but still has the power flow in the direction of the wave vector. A crystal with its optic axis in this orientation, parallel to the optical surface, may be used to create a waveplate, in which there is no distortion of the image but an intentional modification of the state of polarization of the incident wave. For instance, a quarter-wave plate is commonly used to create circular polarization from a linearly polarized source. Biaxial materials The case of so-called biaxial crystals is substantially more complex. These are characterized by three refractive indices corresponding to three principal axes of the crystal. For most ray directions, both polarizations would be classified as extraordinary rays but with different effective refractive indices. Being extraordinary waves, the direction of power flow is not identical to the direction of the wave vector in either case. The two refractive indices can be determined using the index ellipsoids for given directions of the polarization. Note that for biaxial crystals the index ellipsoid will not be an ellipsoid of revolution ("spheroid") but is described by three unequal principle refractive indices , and . Thus there is no axis around which a rotation leaves the optical properties invariant (as there is with uniaxial crystals whose index ellipsoid is a spheroid). Although there is no axis of symmetry, there are two optical axes or binormals which are defined as directions along which light may propagate without birefringence, i.e., directions along which the wavelength is independent of polarization. For this reason, birefringent materials with three distinct refractive indices are called biaxial. Additionally, there are two distinct axes known as optical ray axes or biradials along which the group velocity of the light is independent of polarization. Double refraction When an arbitrary beam of light strikes the surface of a birefringent material at non-normal incidence, the polarization component normal to the optic axis (ordinary ray) and the other linear polarization (extraordinary ray) will be refracted toward somewhat different paths. Natural light, so-called unpolarized light, consists of equal amounts of energy in any two orthogonal polarizations. Even linearly polarized light has some energy in both polarizations, unless aligned along one of the two axes of birefringence. According to Snell's law of refraction, the two angles of refraction are governed by the effective refractive index of each of these two polarizations. This is clearly seen, for instance, in the Wollaston prism which separates incoming light into two linear polarizations using prisms composed of a birefringent material such as calcite. 
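A minimal numerical sketch of the double refraction just described, applying Snell's law separately to the two polarizations. The indices are commonly quoted values for calcite near 590 nm and are assumptions for illustration; treating the extraordinary ray with a single effective index is a simplification, since that index actually varies with propagation direction inside the crystal.

```python
import math

# Snell's law applied separately to the two polarizations of an unpolarized beam.
# Illustrative values: calcite near 590 nm (n_o ~ 1.658, n_e ~ 1.486).
n_air = 1.000
n_o, n_e = 1.658, 1.486

theta_i = math.radians(45.0)  # angle of incidence in air (arbitrary choice)

theta_ord = math.asin(n_air * math.sin(theta_i) / n_o)  # ordinary ray
theta_ext = math.asin(n_air * math.sin(theta_i) / n_e)  # extraordinary ray (fixed effective index)

print(f"ordinary ray refracted at      {math.degrees(theta_ord):.2f} deg")
print(f"extraordinary ray refracted at {math.degrees(theta_ext):.2f} deg")
# The two refraction angles differ, so the incoming beam splits in two.
```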
The different angles of refraction for the two polarization components are shown in the figure at the top of this page, with the optic axis along the surface (and perpendicular to the plane of incidence), so that the angle of refraction is different for the polarization (the "ordinary ray" in this case, having its electric vector perpendicular to the optic axis) and the polarization (the "extraordinary ray" in this case, whose electric field polarization includes a component in the direction of the optic axis). In addition, a distinct form of double refraction occurs, even with normal incidence, in cases where the optic axis is not along the refracting surface (nor exactly normal to it); in this case, the dielectric polarization of the birefringent material is not exactly in the direction of the wave's electric field for the extraordinary ray. The direction of power flow (given by the Poynting vector) for this inhomogenous wave is at a finite angle from the direction of the wave vector resulting in an additional separation between these beams. So even in the case of normal incidence, where one would compute the angle of refraction as zero (according to Snell's law, regardless of the effective index of refraction), the energy of the extraordinary ray is propagated at an angle. If exiting the crystal through a face parallel to the incoming face, the direction of both rays will be restored, but leaving a shift between the two beams. This is commonly observed using a piece of calcite cut along its natural cleavage, placed above a paper with writing, as in the above photographs. On the contrary, waveplates specifically have their optic axis along the surface of the plate, so that with (approximately) normal incidence there will be no shift in the image from light of either polarization, simply a relative phase shift between the two light waves. Terminology Much of the work involving polarization preceded the understanding of light as a transverse electromagnetic wave, and this has affected some terminology in use. Isotropic materials have symmetry in all directions and the refractive index is the same for any polarization direction. An anisotropic material is called "birefringent" because it will generally refract a single incoming ray in two directions, which we now understand correspond to the two different polarizations. This is true of either a uniaxial or biaxial material. In a uniaxial material, one ray behaves according to the normal law of refraction (corresponding to the ordinary refractive index), so an incoming ray at normal incidence remains normal to the refracting surface. As explained above, the other polarization can deviate from normal incidence, which cannot be described using the law of refraction. This thus became known as the extraordinary ray. The terms "ordinary" and "extraordinary" are still applied to the polarization components perpendicular to and not perpendicular to the optic axis respectively, even in cases where no double refraction is involved. A material is termed uniaxial when it has a single direction of symmetry in its optical behavior, which we term the optic axis. It also happens to be the axis of symmetry of the index ellipsoid (a spheroid in this case). The index ellipsoid could still be described according to the refractive indices, , and , along three coordinate axes; in this case two are equal. So if corresponding to the and axes, then the extraordinary index is corresponding to the axis, which is also called the optic axis in this case. 
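The relative phase shift mentioned above accumulates at a rate set by the birefringence Δn = n_e − n_o. A minimal sketch (using commonly quoted index values near 590 nm as assumptions) computes Δn for two uniaxial crystals and the thickness of a zero-order quarter-wave plate, d = λ/(4|Δn|).

```python
# Birefringence magnitude and zero-order quarter-wave plate thickness,
# d = wavelength / (4 |Δn|).  Index values are typical published figures
# near 590 nm, used here purely for illustration.
materials = {
    "calcite": (1.658, 1.486),  # (n_o, n_e): n_e < n_o
    "quartz":  (1.544, 1.553),  # n_e > n_o
}

wavelength = 590e-9  # metres

for name, (n_o, n_e) in materials.items():
    dn = n_e - n_o
    d_quarter = wavelength / (4 * abs(dn))
    print(f"{name:8s} Δn = {dn:+.3f}   zero-order quarter-wave plate ≈ {d_quarter*1e6:.1f} µm")
```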
Materials in which all three refractive indices are different are termed biaxial and the origin of this term is more complicated and frequently misunderstood. In a uniaxial crystal, different polarization components of a beam will travel at different phase velocities, except for rays in the direction of what we call the optic axis. Thus the optic axis has the particular property that rays in that direction do not exhibit birefringence, with all polarizations in such a beam experiencing the same index of refraction. It is very different when the three principal refractive indices are all different; then an incoming ray in any of those principal directions will still encounter two different refractive indices. But it turns out that there are two special directions (at an angle to all of the 3 axes) where the refractive indices for different polarizations are again equal. For this reason, these crystals were designated as biaxial, with the two "axes" in this case referring to ray directions in which propagation does not experience birefringence. Fast and slow rays In a birefringent material, a wave consists of two polarization components which generally are governed by different effective refractive indices. The so-called slow ray is the component for which the material has the higher effective refractive index (slower phase velocity), while the fast ray is the one with a lower effective refractive index. When a beam is incident on such a material from air (or any material with a lower refractive index), the slow ray is thus refracted more towards the normal than the fast ray. In the example figure at the top of this page, it can be seen that the refracted ray with s polarization (with its electric vibration along the direction of the optic axis, thus called the extraordinary ray) is the slow ray in the given scenario. Using a thin slab of that material at normal incidence, one would implement a waveplate. In this case, there is essentially no spatial separation between the polarizations, but the phase of the wave in the parallel polarization (the slow ray) will be retarded with respect to the perpendicular polarization. These directions are thus known as the slow axis and fast axis of the waveplate. Positive or negative Uniaxial birefringence is classified as positive when the extraordinary index of refraction is greater than the ordinary index n_o. Negative birefringence means that Δn = n_e − n_o is less than zero. In other words, the polarization of the fast (or slow) wave is perpendicular to the optic axis when the birefringence of the crystal is positive (or negative, respectively). In the case of biaxial crystals, all three of the principal axes have different refractive indices, so this designation does not apply. But for any defined ray direction one can just as well designate the fast and slow ray polarizations. Sources of optical birefringence While the best known source of birefringence is the entrance of light into an anisotropic crystal, it can also arise in otherwise optically isotropic materials in a few ways: Stress birefringence results when a normally isotropic solid is stressed and deformed (i.e., stretched or bent) causing a loss of physical isotropy and consequently a loss of isotropy in the material's permittivity tensor; Form birefringence, whereby structure elements such as rods, having one refractive index, are suspended in a medium with a different refractive index. 
When the lattice spacing is much smaller than a wavelength, such a structure is described as a metamaterial; By the Pockels or Kerr effect, whereby an applied electric field induces birefringence due to nonlinear optics; By the self or forced alignment into thin films of amphiphilic molecules such as lipids, some surfactants or liquid crystals; Circular birefringence takes place generally not in materials which are anisotropic but rather ones which are chiral. This can include liquids where there is an enantiomeric excess of a chiral molecule, that is, one that has stereo isomers; By the Faraday effect, where a longitudinal magnetic field causes some materials to become circularly birefringent (having slightly different indices of refraction for left- and right-handed circular polarizations), similar to optical activity while the field is applied. Common birefringent materials The best characterized birefringent materials are crystals. Due to their specific crystal structures their refractive indices are well defined. Depending on the symmetry of a crystal structure (as determined by one of the 32 possible crystallographic point groups), crystals in that group may be forced to be isotropic (not birefringent), to have uniaxial symmetry, or neither in which case it is a biaxial crystal. The crystal structures permitting uniaxial and biaxial birefringence are noted in the two tables, below, listing the two or three principal refractive indices (at wavelength 590 nm) of some better-known crystals. In addition to induced birefringence while under stress, many plastics obtain permanent birefringence during manufacture due to stresses which are "frozen in" due to mechanical forces present when the plastic is molded or extruded. For example, ordinary cellophane is birefringent. Polarizers are routinely used to detect stress, either applied or frozen-in, in plastics such as polystyrene and polycarbonate. Cotton fiber is birefringent because of high levels of cellulosic material in the fibre's secondary cell wall which is directionally aligned with the cotton fibers. Polarized light microscopy is commonly used in biological tissue, as many biological materials are linearly or circularly birefringent. Collagen, found in cartilage, tendon, bone, corneas, and several other areas in the body, is birefringent and commonly studied with polarized light microscopy. Some proteins are also birefringent, exhibiting form birefringence. Inevitable manufacturing imperfections in optical fiber leads to birefringence, which is one cause of pulse broadening in fiber-optic communications. Such imperfections can be geometrical (lack of circular symmetry), or due to unequal lateral stress applied to the optical fibre. Birefringence is intentionally introduced (for instance, by making the cross-section elliptical) in order to produce polarization-maintaining optical fibers. Birefringence can be induced (or corrected) in optical fibers through bending them which causes anisotropy in form and stress given the axis around which it is bent and radius of curvature. In addition to anisotropy in the electric polarizability that we have been discussing, anisotropy in the magnetic permeability could be a source of birefringence. At optical frequencies, there is no measurable magnetic polarizability () of natural materials, so this is not an actual source of birefringence. 
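For the optical-fiber birefringence mentioned above, a convenient figure of merit is the polarization beat length L_B = λ/Δn, the distance over which the two polarization modes accumulate a full 2π of relative phase. A minimal sketch follows; the modal birefringence value is a typical order of magnitude assumed only for illustration.

```python
# Polarization beat length of a birefringent fiber: L_B = wavelength / Δn.
# The modal birefringence below is an assumed, order-of-magnitude value for a
# polarization-maintaining fibre, not a figure taken from the article.
wavelength = 1550e-9   # metres
delta_n = 5e-4         # modal birefringence (assumed)

beat_length = wavelength / delta_n
print(f"beat length ≈ {beat_length*1e3:.1f} mm")   # ≈ 3.1 mm
```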
Measurement Birefringence and other polarization-based optical effects (such as optical rotation and linear or circular dichroism) can be observed by measuring any change in the polarization of light passing through the material. These measurements are known as polarimetry. Polarized light microscopes, which contain two polarizers that are at 90° to each other on either side of the sample, are used to visualize birefringence, since light that has not been affected by birefringence remains in a polarization that is totally rejected by the second polarizer ("analyzer"). The addition of quarter-wave plates permits examination using circularly polarized light. Determination of the change in polarization state using such an apparatus is the basis of ellipsometry, by which the optical properties of specular surfaces can be gauged through reflection. Birefringence measurements have been made with phase-modulated systems for examining the transient flow behaviour of fluids. Birefringence of lipid bilayers can be measured using dual-polarization interferometry. This provides a measure of the degree of order within these fluid layers and how this order is disrupted when the layer interacts with other biomolecules. For the 3D measurement of birefringence, a technique based on holographic tomography can be used. Applications Optical devices Birefringence is used in many optical devices. Liquid-crystal displays, the most common sort of flat-panel display, cause their pixels to become lighter or darker through rotation of the polarization (circular birefringence) of linearly polarized light as viewed through a sheet polarizer at the screen's surface. Similarly, light modulators modulate the intensity of light through electrically induced birefringence of polarized light followed by a polarizer. The Lyot filter is a specialized narrowband spectral filter employing the wavelength dependence of birefringence. Waveplates are thin birefringent sheets widely used in certain optical equipment for modifying the polarization state of light passing through it. To manufacture polarizers with high transmittance, birefringent crystals are used in devices such as the Glan–Thompson prism, Glan–Taylor prism and other variants. Layered birefringent polymer sheets can also be used for this purpose. Birefringence also plays an important role in second-harmonic generation and other nonlinear optical processes. The crystals used for these purposes are almost always birefringent. By adjusting the angle of incidence, the effective refractive index of the extraordinary ray can be tuned in order to achieve phase matching, which is required for the efficient operation of these devices. Medicine Birefringence is utilized in medical diagnostics. One powerful accessory used with optical microscopes is a pair of crossed polarizing filters. Light from the source is polarized in the direction after passing through the first polarizer, but above the specimen is a polarizer (a so-called analyzer) oriented in the direction. Therefore, no light from the source will be accepted by the analyzer, and the field will appear dark. Areas of the sample possessing birefringence will generally couple some of the -polarized light into the polarization; these areas will then appear bright against the dark background. Modifications to this basic principle can differentiate between positive and negative birefringence. For instance, needle aspiration of fluid from a gouty joint will reveal negatively birefringent monosodium urate crystals. 
Calcium pyrophosphate crystals, in contrast, show weak positive birefringence. Urate crystals appear yellow, and calcium pyrophosphate crystals appear blue when their long axes are aligned parallel to that of a red compensator filter, or a crystal of known birefringence is added to the sample for comparison. The birefringence of tissue inside a living human thigh was measured using polarization-sensitive optical coherence tomography at 1310 nm and a single mode fiber in a needle. Skeletal muscle birefringence was Δn = 1.79 × 10−3 ± 0.18×10−3, adipose Δn = 0.07 × 10−3 ± 0.50 × 10−3, superficial aponeurosis Δn = 5.08 × 10−3 ± 0.73 × 10−3 and interstitial tissue Δn = 0.65 × 10−3 ±0.39 × 10−3. These measurements may be important for the development of a less invasive method to diagnose Duchenne muscular dystrophy. Birefringence can be observed in amyloid plaques such as are found in the brains of Alzheimer's patients when stained with a dye such as Congo Red. Modified proteins such as immunoglobulin light chains abnormally accumulate between cells, forming fibrils. Multiple folds of these fibers line up and take on a beta-pleated sheet conformation. Congo red dye intercalates between the folds and, when observed under polarized light, causes birefringence. In ophthalmology, binocular retinal birefringence screening of the Henle fibers (photoreceptor axons that go radially outward from the fovea) provides a reliable detection of strabismus and possibly also of anisometropic amblyopia. In healthy subjects, the maximum retardation induced by the Henle fiber layer is approximately 22 degrees at 840 nm. Furthermore, scanning laser polarimetry uses the birefringence of the optic nerve fiber layer to indirectly quantify its thickness, which is of use in the assessment and monitoring of glaucoma. Polarization-sensitive optical coherence tomography measurements obtained from healthy human subjects have demonstrated a change in birefringence of the retinal nerve fiber layer as a function of location around the optic nerve head. The same technology was recently applied in the living human retina to quantify the polarization properties of vessel walls near the optic nerve. While retinal vessel walls become thicker and less birefringent in patients who suffer from hypertension, hinting at a decrease in vessel wall condition, the vessel walls of diabetic patients do not experience a change in thickness, but do see an increase in birefringence, presumably due to fibrosis or inflammation. Birefringence characteristics in sperm heads allow the selection of spermatozoa for intracytoplasmic sperm injection. Likewise, zona imaging uses birefringence on oocytes to select the ones with highest chances of successful pregnancy. Birefringence of particles biopsied from pulmonary nodules indicates silicosis. Dermatologists use dermatoscopes to view skin lesions. Dermoscopes use polarized light, allowing the user to view crystalline structures corresponding to dermal collagen in the skin. These structures may appear as shiny white lines or rosette shapes and are only visible under polarized dermoscopy. Stress-induced birefringence Isotropic solids do not exhibit birefringence. When they are under mechanical stress, birefringence results. The stress can be applied externally or is "frozen in" after a birefringent plastic ware is cooled after it is manufactured using injection molding. 
When such a sample is placed between two crossed polarizers, colour patterns can be observed, because polarization of a light ray is rotated after passing through a birefringent material and the amount of rotation is dependent on wavelength. The experimental method called photoelasticity used for analyzing stress distribution in solids is based on the same principle. There has been recent research on using stress-induced birefringence in a glass plate to generate an optical vortex and full Poincare beams (optical beams that have every possible polarization state across a cross-section). Other cases of birefringence Birefringence is observed in anisotropic elastic materials. In these materials, the two polarizations split according to their effective refractive indices, which are also sensitive to stress. The study of birefringence in shear waves traveling through the solid Earth (the Earth's liquid core does not support shear waves) is widely used in seismology. Birefringence is widely used in mineralogy to identify rocks, minerals, and gemstones. Theory In an isotropic medium (including free space) the so-called electric displacement () is just proportional to the electric field () according to where the material's permittivity is just a scalar (and equal to where is the index of refraction). In an anisotropic material exhibiting birefringence, the relationship between and must now be described using a tensor equation: where is now a 3 × 3 permittivity tensor. We assume linearity and no magnetic permeability in the medium: . The electric field of a plane wave of angular frequency can be written in the general form: where is the position vector, is time, and is a vector describing the electric field at , . Then we shall find the possible wave vectors . By combining Maxwell's equations for and , we can eliminate to obtain: With no free charges, Maxwell's equation for the divergence of vanishes: We can apply the vector identity to the left hand side of , and use the spatial dependence in which each differentiation in (for instance) results in multiplication by to find: The right hand side of can be expressed in terms of through application of the permittivity tensor and noting that differentiation in time results in multiplication by , then becomes: Applying the differentiation rule to we find: indicates that is orthogonal to the direction of the wavevector , even though that is no longer generally true for as would be the case in an isotropic medium. will not be needed for the further steps in the following derivation. Finding the allowed values of for a given is easiest done by using Cartesian coordinates with the , and axes chosen in the directions of the symmetry axes of the crystal (or simply choosing in the direction of the optic axis of a uniaxial crystal), resulting in a diagonal matrix for the permittivity tensor : where the diagonal values are squares of the refractive indices for polarizations along the three principal axes , and . With in this form, and substituting in the speed of light using , the component of the vector equation becomes where , , are the components of (at any given position in space and time) and , , are the components of . Rearranging, we can write (and similarly for the and components of ) This is a set of linear equations in , , , so it can have a nontrivial solution (that is, one other than ) as long as the following determinant is zero: Evaluating the determinant of , and rearranging the terms according to the powers of , the constant terms cancel. 
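The displayed equations in the derivation above did not survive formatting. As a reconstruction (these are the standard crystal-optics relations; the article's original notation may differ), the key steps are sketched below; the last line is the factored uniaxial form taken up in the next paragraph.

```latex
% Reconstruction of the derivation's key relations (standard crystal optics);
% here \varepsilon is the relative permittivity tensor, diag(n_x^2, n_y^2, n_z^2)
% in the principal axes.
\[
  \mathbf{D} = \varepsilon_0\,\varepsilon\,\mathbf{E}, \qquad
  \mathbf{E}(\mathbf{r},t) = \mathbf{E}_0\, e^{\,i(\mathbf{k}\cdot\mathbf{r}-\omega t)} .
\]
% Eliminating H from Maxwell's equations gives the plane-wave equation
\[
  \mathbf{k}\times(\mathbf{k}\times\mathbf{E})
  + \frac{\omega^{2}}{c^{2}}\,\varepsilon\,\mathbf{E} = 0 ,
\]
% which has a nontrivial solution only if the determinant vanishes:
\[
  \det
  \begin{pmatrix}
    \frac{\omega^2}{c^2}n_x^2 - k_y^2 - k_z^2 & k_x k_y & k_x k_z \\
    k_x k_y & \frac{\omega^2}{c^2}n_y^2 - k_x^2 - k_z^2 & k_y k_z \\
    k_x k_z & k_y k_z & \frac{\omega^2}{c^2}n_z^2 - k_x^2 - k_y^2
  \end{pmatrix} = 0 .
\]
% For a uniaxial crystal (n_x = n_y = n_o, n_z = n_e) the dispersion relation
% factors into a sphere (ordinary wave) and a spheroid (extraordinary wave):
\[
  \left(\frac{k_x^2 + k_y^2 + k_z^2}{n_o^2} - \frac{\omega^2}{c^2}\right)
  \left(\frac{k_x^2 + k_y^2}{n_e^2} + \frac{k_z^2}{n_o^2} - \frac{\omega^2}{c^2}\right) = 0 .
\]
```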
After eliminating the common factor from the remaining terms, we obtain In the case of a uniaxial material, choosing the optic axis to be in the direction so that and , this expression can be factored into Setting either of the factors in to zero will define an ellipsoidal surface in the space of wavevectors that are allowed for a given . The first factor being zero defines a sphere; this is the solution for so-called ordinary rays, in which the effective refractive index is exactly regardless of the direction of . The second defines a spheroid symmetric about the axis. This solution corresponds to the so-called extraordinary rays in which the effective refractive index is in between and , depending on the direction of . Therefore, for any arbitrary direction of propagation (other than in the direction of the optic axis), two distinct wavevectors are allowed corresponding to the polarizations of the ordinary and extraordinary rays. For a biaxial material a similar but more complicated condition on the two waves can be described; the locus of allowed vectors (the wavevector surface) is a 4th-degree two-sheeted surface, so that in a given direction there are generally two permitted vectors (and their opposites). By inspection one can see that is generally satisfied for two positive values of . Or, for a specified optical frequency and direction normal to the wavefronts , it is satisfied for two wavenumbers (or propagation constants) (and thus effective refractive indices) corresponding to the propagation of two linear polarizations in that direction. When those two propagation constants are equal then the effective refractive index is independent of polarization, and there is consequently no birefringence encountered by a wave traveling in that particular direction. For a uniaxial crystal, this is the optic axis, the ±z direction according to the above construction. But when all three refractive indices (or permittivities), , and are distinct, it can be shown that there are exactly two such directions, where the two sheets of the wave-vector surface touch; these directions are not at all obvious and do not lie along any of the three principal axes (, , according to the above convention). Historically that accounts for the use of the term "biaxial" for such crystals, as the existence of exactly two such special directions (considered "axes") was discovered well before polarization and birefringence were understood physically. These two special directions are generally not of particular interest; biaxial crystals are rather specified by their three refractive indices corresponding to the three axes of symmetry. A general state of polarization launched into the medium can always be decomposed into two waves, one in each of those two polarizations, which will then propagate with different wavenumbers . Applying the different phase of propagation to those two waves over a specified propagation distance will result in a generally different net polarization state at that point; this is the principle of the waveplate for instance. With a waveplate, there is no spatial displacement between the two rays as their vectors are still in the same direction. That is true when each of the two polarizations is either normal to the optic axis (the ordinary ray) or parallel to it (the extraordinary ray). In the more general case, there is a difference not only in the magnitude but the direction of the two rays. 
For instance, the photograph through a calcite crystal (top of page) shows a shifted image in the two polarizations; this is due to the optic axis being neither parallel nor normal to the crystal surface. And even when the optic axis is parallel to the surface, this will occur for waves launched at non-normal incidence (as depicted in the explanatory figure). In these cases the two vectors can be found by solving constrained by the boundary condition which requires that the components of the two transmitted waves' vectors, and the vector of the incident wave, as projected onto the surface of the interface, must all be identical. For a uniaxial crystal it will be found that there is not a spatial shift for the ordinary ray (hence its name) which will refract as if the material were non-birefringent with an index the same as the two axes which are not the optic axis. For a biaxial crystal neither ray is deemed "ordinary" nor would generally be refracted according to a refractive index equal to one of the principal axes. See also Cotton–Mouton effect Crystal optics Dichroism Iceland spar Index ellipsoid John Kerr Optical rotation Periodic poling Pleochroism Huygens principle of double refraction Notes References Bibliography M. Born and E. Wolf, 2002, Principles of Optics, 7th Ed., Cambridge University Press, 1999 (reprinted with corrections, 2002). A. Fresnel, 1827, "Mémoire sur la double réfraction", Mémoires de l'Académie Royale des Sciences de l'Institut de France, vol. (for 1824, printed 1827), pp.45–176; reprinted as "Second mémoire..." in Fresnel, 1866–70, vol. 2, pp.479–596; translated by A.W. Hobson as "Memoir on double refraction", in R.Taylor (ed.), Scientific Memoirs, vol. (London: Taylor & Francis, 1852), pp.238–333. (Cited page numbers are from the translation.) A. Fresnel (ed. H. de Sénarmont, E. Verdet, and L. Fresnel), 1866–70, Oeuvres complètes d'Augustin Fresnel (3 volumes), Paris: Imprimerie Impériale; vol. 1 (1866), vol. 2 (1868), vol. 3 (1870). External links Stress Analysis Apparatus (based on Birefringence theory) Video of stress birefringence in Polymethylmethacrylate (PMMA or Plexiglas). Artist Austine Wood Comarow employs birefringence to create kinetic figurative images. The Birefringence of Thin Ice (Tom Wagner, photographer) Polarization (waves) Optical mineralogy Asymmetry
Birefringence
[ "Physics" ]
7,191
[ "Astrophysics", "Polarization (waves)", "Symmetry", "Asymmetry" ]
174,431
https://en.wikipedia.org/wiki/Fiberglass
Fiberglass (American English) or fibreglass (Commonwealth English) is a common type of fiber-reinforced plastic using glass fiber. The fibers may be randomly arranged, flattened into a sheet called a chopped strand mat, or woven into glass cloth. The plastic matrix may be a thermoset polymer matrix—most often based on thermosetting polymers such as epoxy, polyester resin, or vinyl ester resin—or a thermoplastic. Cheaper and more flexible than carbon fiber, it is stronger than many metals by weight, non-magnetic, non-conductive, transparent to electromagnetic radiation, can be molded into complex shapes, and is chemically inert under many circumstances. Applications include aircraft, boats, automobiles, bath tubs and enclosures, swimming pools, hot tubs, septic tanks, water tanks, roofing, pipes, cladding, orthopedic casts, surfboards, and external door skins. Other common names for fiberglass are glass-reinforced plastic (GRP), glass-fiber reinforced plastic (GFRP) or GFK (from ). Because glass fiber itself is sometimes referred to as "fiberglass", the composite is also called fiberglass-reinforced plastic (FRP). This article uses "fiberglass" to refer to the complete fiber-reinforced composite material, rather than only to the glass fiber within it. History Glass fibers have been produced for centuries, but the earliest patent was awarded to the Prussian inventor Hermann Hammesfahr (1845–1914) in the U.S. in 1880. Mass production of glass strands was accidentally discovered in 1932 when Games Slayter, a researcher at Owens-Illinois, directed a jet of compressed air at a stream of molten glass and produced fibers. A patent for this method of producing glass wool was first applied for in 1933. Owens joined with the Corning company in 1935 and the method was adapted by Owens Corning to produce its patented "Fiberglas" (spelled with one "s") in 1936. Originally, Fiberglas was a glass wool with fibers entrapping a great deal of gas, making it useful as an insulator, especially at high temperatures. A suitable resin for combining the fiberglass with a plastic to produce a composite material was developed in 1936 by DuPont. The first ancestor of modern polyester resins is Cyanamid's resin of 1942. Peroxide curing systems were used by then. With the combination of fiberglass and resin the gas content of the material was replaced by plastic. This reduced the insulation properties to values typical of the plastic, but now for the first time, the composite showed great strength and promise as a structural and building material. Many glass fiber composites continued to be called "fiberglass" (as a generic name) and the name was also used for the low-density glass wool product containing gas instead of plastic. Ray Greene of Owens Corning is credited with producing the first composite boat in 1937 but did not proceed further at the time because of the brittle nature of the plastic used. In 1939 Russia was reported to have constructed a passenger boat of plastic materials, and the United States a fuselage and wings of an aircraft. The first car to have a fiberglass body was a 1946 prototype of the Stout Scarab, but the model did not enter production. Fiber Unlike glass fibers used for insulation, for the final structure to be strong, the fiber's surfaces must be almost entirely free of defects, as this permits the fibers to reach gigapascal tensile strengths. 
If a bulk piece of glass were defect-free, it would be as strong as glass fibers; however, it is generally impractical to produce and maintain bulk material in a defect-free state outside of laboratory conditions. Production Pultrusion is one of several processes used to manufacture finished fiberglass products (see Construction methods below). The manufacturing process for glass fibers suitable for reinforcement uses large furnaces to gradually melt the silica sand, limestone, kaolin clay, fluorspar, colemanite, dolomite and other minerals until a liquid forms. It is then extruded through bushings (spinnerets), which are bundles of very small orifices (typically 5–25 micrometres in diameter for E-Glass, 9 micrometres for S-Glass). These filaments are then sized (coated) with a chemical solution. The individual filaments are now bundled in large numbers to provide a roving. The diameter of the filaments, and the number of filaments in the roving, determine its weight, typically expressed in one of two measurement systems: yield, or yards per pound (the number of yards of fiber in one pound of material; thus a smaller number means a heavier roving). Examples of standard yields are 225yield, 450yield, 675yield. tex, or grams per km (how many grams 1 km of roving weighs, inverted from yield; thus a smaller number means a lighter roving). Examples of standard tex are 750tex, 1100tex, 2200tex. A conversion between the two measures is sketched below. These rovings are then either used directly in a composite application such as pultrusion, filament winding (pipe), gun roving (where an automated gun chops the glass into short lengths and drops it into a jet of resin, projected onto the surface of a mold), or in an intermediary step, to manufacture fabrics such as chopped strand mat (CSM) (made of randomly oriented small cut lengths of fiber all bonded together), woven fabrics, knit fabrics or unidirectional fabrics. Chopped strand mat Chopped strand mat (CSM) is a form of reinforcement used in fiberglass. It consists of glass fibers laid randomly across each other and held together by a binder. It is typically processed using the hand lay-up technique, where sheets of material are placed on a mold and brushed with resin. Because the binder dissolves in resin, the material easily conforms to different shapes when wetted out. After the resin cures, the hardened product can be taken from the mold and finished. Using chopped strand mat gives the fiberglass isotropic in-plane material properties. Sizing A coating or primer is applied to the roving to help protect the glass filaments for processing and manipulation and to ensure proper bonding to the resin matrix, thus allowing for the transfer of shear loads from the glass fibers to the thermoset plastic. Without this bonding, the fibers can 'slip' in the matrix causing localized failure. Properties An individual structural glass fiber is both stiff and strong in tension and compression—that is, along its axis. Although it might be assumed that the fiber is weak in compression, it is actually only the long aspect ratio of the fiber which makes it seem so; i.e., because a typical fiber is long and narrow, it buckles easily. On the other hand, the glass fiber is weak in shear—that is, across its axis. Therefore, if a collection of fibers can be arranged permanently in a preferred direction within a material, and if they can be prevented from buckling in compression, the material will be preferentially strong in that direction. 
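Returning to the roving measures defined under Production above, yield (yards per pound) and tex (grams per kilometre) are reciprocal linear-density units. A minimal conversion sketch follows; the unit factors are exact, and the standard commercial grades quoted above emerge after rounding.

```python
# Conversion between the two roving linear-density measures defined under
# Production: yield (yards per pound) and tex (grams per kilometre).
YARD_M = 0.9144          # metres per yard (exact)
POUND_G = 453.59237      # grams per pound (exact)

def yield_to_tex(yield_ypp: float) -> float:
    """Convert yards-per-pound 'yield' to tex (g/km)."""
    grams_per_metre = POUND_G / (yield_ypp * YARD_M)
    return grams_per_metre * 1000.0

def tex_to_yield(tex: float) -> float:
    """Convert tex (g/km) back to yards-per-pound 'yield'."""
    return POUND_G * 1000.0 / (tex * YARD_M)

for y in (225, 450, 675):
    print(f"{y} yield ≈ {yield_to_tex(y):.0f} tex")
# 225 yield ≈ 2205 tex, 450 ≈ 1102 tex, 675 ≈ 735 tex — matching the 2200/1100/750
# tex grades quoted above after commercial rounding.
```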
Furthermore, by laying multiple layers of fiber on top of one another, with each layer oriented in various preferred directions, the material's overall stiffness and strength can be efficiently controlled. In fiberglass, it is the plastic matrix which permanently constrains the structural glass fibers to directions chosen by the designer. With chopped strand mat, this directionality is essentially an entire two-dimensional plane; with woven fabrics or unidirectional layers, directionality of stiffness and strength can be more precisely controlled within the plane. A fiberglass component is typically of a thin "shell" construction, sometimes filled on the inside with structural foam, as in the case of surfboards. The component may be of nearly arbitrary shape, limited only by the complexity and tolerances of the mold used for manufacturing the shell. The mechanical functionality of materials is heavily reliant on the combined performances of both the resin (AKA matrix) and fibers. For example, in severe temperature conditions (over 180 °C), the resin component of the composite may lose its functionality, partially due to bond deterioration of resin and fiber. However, GFRPs can still show significant residual strength after experiencing high temperatures (200 °C). One notable feature of fiberglass is that the resins used are subject to contraction during the curing process. For polyester this contraction is often 5–6%; for epoxy, about 2%. Because the fibers do not contract, this differential can create changes in the shape of the part during curing. Distortions can appear hours, days, or weeks after the resin has set. While this distortion can be minimized by symmetric use of the fibers in the design, a certain amount of internal stress is created; and if it becomes too great, cracks form. Types The most common types of glass fiber used in fiberglass is E-glass, which is alumino-borosilicate glass with less than 1% w/w alkali oxides, mainly used for glass-reinforced plastics. Other types of glass used are A-glass (Alkali-lime glass with little or no boron oxide), E-CR-glass (Electrical/Chemical Resistance; alumino-lime silicate with less than 1% w/w alkali oxides, with high acid resistance), C-glass (alkali-lime glass with high boron oxide content, used for glass staple fibers and insulation), D-glass (borosilicate glass, named for its low Dielectric constant), R-glass (alumino silicate glass without MgO and CaO with high mechanical requirements as Reinforcement), and S-glass (alumino silicate glass without CaO but with high MgO content with high tensile strength). Pure silica (silicon dioxide), when cooled as fused quartz into a glass with no true melting point, can be used as a glass fiber for fiberglass but has the drawback that it must be worked at very high temperatures. In order to lower the necessary work temperature, other materials are introduced as "fluxing agents" (i.e., components to lower the melting point). Ordinary A-glass ("A" for "alkali-lime") or soda lime glass, crushed and ready to be remelted, as so-called cullet glass, was the first type of glass used for fiberglass. E-glass ("E" because of initial Electrical application), is alkali-free and was the first glass formulation used for continuous filament formation. It now makes up most of the fiberglass production in the world, and also is the single largest consumer of boron minerals globally. It is susceptible to chloride ion attack and is a poor choice for marine applications. 
S-glass ("S" for "stiff") is used when tensile strength (high modulus) is important and is thus an important building and aircraft epoxy composite (it is called R-glass, "R" for "reinforcement" in Europe). C-glass ("C" for "chemical resistance") and T-glass ("T" is for "thermal insulator"—a North American variant of C-glass) are resistant to chemical attack; both are often found in insulation-grades of blown fiberglass. Table of some common fiberglass types Applications Fiberglass is versatile because it is lightweight, strong, weather-resistant, and can have a variety of surface textures. During World War II, fiberglass was developed as a replacement for the molded plywood used in aircraft radomes (fiberglass being transparent to microwaves). Its first main civilian application was for the building of boats and sports car bodies, where it gained acceptance in the 1950s. Its use has broadened to the automotive and sport equipment sectors. In the production of some products, such as aircraft, carbon fiber is now used instead of fiberglass, which is stronger by volume and weight. Advanced manufacturing techniques such as pre-pregs and fiber rovings extend fiberglass's applications and the tensile strength possible with fiber-reinforced plastics. Fiberglass is also used in the telecommunications industry for shrouding antennas, due to its RF permeability and low signal attenuation properties. It may also be used to conceal other equipment where no signal permeability is required, such as equipment cabinets and steel support structures, due to the ease with which it can be molded and painted to blend with existing structures and surfaces. Other uses include sheet-form electrical insulators and structural components commonly found in power-industry products. Because of fiberglass's lightweight and durability, it is often used in protective equipment such as helmets. Many sports use fiberglass protective gear, such as goaltenders' and catchers' masks. Storage tanks Storage tanks can be made of fiberglass with capacities up to about 300 tonnes. Smaller tanks can be made with chopped strand mat cast over a thermoplastic inner tank which acts as a preform during construction. Much more reliable tanks are made using woven mat or filament wound fiber, with the fiber orientation at right angles to the hoop stress imposed in the sidewall by the contents. Such tanks tend to be used for chemical storage because the plastic liner (often polypropylene) is resistant to a wide range of corrosive chemicals. Fiberglass is also used for septic tanks. House building Glass-reinforced plastics are also used to produce house building components such as roofing laminate, door surrounds, over-door canopies, window canopies and dormers, chimneys, coping systems, and heads with keystones and sills. The material's reduced weight and easier handling, compared to wood or metal, allows faster installation. Mass-produced fiberglass brick-effect panels can be used in the construction of composite housing, and can include insulation to reduce heat loss. Oil and gas artificial lift systems In rod pumping applications, fiberglass rods are often used for their high tensile strength to weight ratio. Fiberglass rods provide an advantage over steel rods because they stretch more elastically (lower Young's modulus) than steel for a given weight, meaning more oil can be lifted from the hydrocarbon reservoir to the surface with each stroke, all while reducing the load on the pumping unit. 
Fiberglass rods must be kept in tension, however, as they frequently part if placed in even a small amount of compression. The buoyancy of the rods within a fluid amplifies this tendency. Piping GRP and GRE pipe can be used in a variety of above- and below-ground systems, including those for desalination, water treatment, water distribution networks, chemical process plants, water used for firefighting, hot and cold drinking water, wastewater/sewage, municipal waste and liquified petroleum gas. Boating Fiberglass composite boats have been made since the early 1940s, and many sailing vessels made after 1950 were built using the fiberglass lay-up process. As of 2022, boats continue to be made with fiberglass, though more advanced techniques such as vacuum bag moulding are used in the construction process. Armour Though most bullet-resistant armours are made using different textiles, fiberglass composites have been shown to be effective as ballistic armor. Construction methods Filament winding Filament winding is a fabrication technique mainly used for manufacturing open (cylinders) or closed-end structures (pressure vessels or tanks). The process involves winding filaments under tension over a male mandrel. The mandrel rotates while a wind eye on a carriage moves horizontally, laying down fibers in the desired pattern. The most common filaments are carbon or glass fiber and are coated with synthetic resin as they are wound. Once the mandrel is completely covered to the desired thickness, the resin is cured; often the mandrel is placed in an oven to achieve this, though sometimes radiant heaters are used with the mandrel still turning in the machine. Once the resin has cured, the mandrel is removed, leaving the hollow final product. For some products such as gas bottles, the 'mandrel' is a permanent part of the finished product forming a liner to prevent gas leakage or as a barrier to protect the composite from the fluid to be stored. Filament winding is well suited to automation, and there are many applications, such as pipe and small pressure vessels that are wound and cured without any human intervention. The controlled variables for winding are fiber type, resin content, wind angle, tow or bandwidth and thickness of the fiber bundle. The angle at which the fiber has an effect on the properties of the final product. A high angle "hoop" will provide circumferential or "burst" strength, while lower angle patterns (polar or helical) will provide greater longitudinal tensile strength. Products currently being produced using this technique range from pipes, golf clubs, Reverse Osmosis Membrane Housings, oars, bicycle forks, bicycle rims, power and transmission poles, pressure vessels to missile casings, aircraft fuselages and lamp posts and yacht masts. Fiberglass hand lay-up operation A release agent, usually in either wax or liquid form, is applied to the chosen mold to allow the finished product to be cleanly removed from the mold. Resin—typically a 2-part thermoset polyester, vinyl, or epoxy—is mixed with its hardener and applied to the surface. Sheets of fiberglass matting are laid into the mold, then more resin mixture is added using a brush or roller. The material must conform to the mold, and air must not be trapped between the fiberglass and the mold. Additional resin is applied and possibly additional sheets of fiberglass. Hand pressure, vacuum or rollers are used to be sure the resin saturates and fully wets all layers, and that any air pockets are removed. 
The work must be done quickly before the resin starts to cure unless high-temperature resins are used which will not cure until the part is warmed in an oven. In some cases, the work is covered with plastic sheets and vacuum is drawn on the work to remove air bubbles and press the fiberglass to the shape of the mold. Fiberglass spray lay-up operation The fiberglass spray lay-up process is similar to the hand lay-up process but differs in the application of the fiber and resin to the mold. Spray-up is an open-molding composites fabrication process where resin and reinforcements are sprayed onto a mold. The resin and glass may be applied separately or simultaneously "chopped" in a combined stream from a chopper gun. Workers roll out the spray-up to compact the laminate. Wood, foam or other core material may then be added, and a secondary spray-up layer imbeds the core between the laminates. The part is then cured, cooled, and removed from the reusable mold. Pultrusion operation Pultrusion is a manufacturing method used to make strong, lightweight composite materials. In pultrusion, material is pulled through forming machinery using either a hand-over-hand method or a continuous-roller method (as opposed to extrusion, where the material is pushed through dies). In fiberglass pultrusion, fibers (the glass material) are pulled from spools through a device that coats them with a resin. They are then typically heat-treated and cut to length. Fiberglass produced this way can be made in a variety of shapes and cross-sections, such as W or S cross-sections. Health hazards Exposure People can be exposed to fiberglass in the workplace during its fabrication, installation or removal, by breathing it in, by skin contact, or by eye contact. Furthermore, in the manufacturing process of fiberglass, styrene vapors are released while the resins are cured. These are also irritating to mucous membranes and respiratory tract. The general population can get exposed to fibreglass from insulation and building materials or from fibers in the air near manufacturing facilities or when they are near building fires or implosions. The American Lung Association advises that fiberglass insulation should never be left exposed in an occupied area. Since work practices are not always followed, and fiberglass is often left exposed in basements that later become occupied, people can get exposed. No readily usable biological or clinical indices of exposure exist. Symptoms and signs, health effects Fiberglass will irritate the eyes, skin, and the respiratory system. Hence, symptoms can include itchy eyes, skin, nose, sore throat, hoarseness, dyspnea (breathing difficulty) and cough. Peak alveolar deposition was observed in rodents and humans for fibers with diameters of 1 to 2 μm. In animal experiments, adverse lung effects such as lung inflammation and lung fibrosis have occurred, and increased incidences of mesothelioma, pleural sarcoma, and lung carcinoma had been found with intrapleural or intratracheal instillations in rats. As of 2001, in humans only the more biopersistent materials like ceramic fibres, which are used industrially as insulation in high-temperature environments such as blast furnaces, and certain special-purpose glass wools not used as insulating materials remain classified as possible carcinogens (IARC Group 2B). The more commonly used glass fibre wools including insulation glass wool, rock wool and slag wool are considered not classifiable as to carcinogenicity to humans (IARC Group 3). 
In October 2001, all fiberglass wools commonly used for thermal and acoustical insulation were reclassified by the International Agency for Research on Cancer (IARC) as "not classifiable as to carcinogenicity to humans" (IARC group 3). "Epidemiologic studies published during the 15 years since the previous IARC monographs review of these fibers in 1988 provide no evidence of increased risks of lung cancer or mesothelioma (cancer of the lining of the body cavities) from occupational exposures during the manufacture of these materials, and inadequate evidence overall of any cancer risk." In June 2011, the US National Toxicology Program (NTP) removed from its Report on Carcinogens all biosoluble glass wool used in home and building insulation and for non-insulation products. However, NTP still considers fibrous glass dust to be "reasonably anticipated [as] a human carcinogen (Certain Glass Wool Fibers (Inhalable))". Similarly, California's Office of Environmental Health Hazard Assessment (OEHHA) published a November, 2011 modification to its Proposition 65 listing to include only "Glass wool fibers (inhalable and biopersistent)." Therefore a cancer warning label for biosoluble fiber glass home and building insulation is no longer required under federal or California law. As of 2012, the North American Insulation Manufacturers Association stated that fiberglass is safe to manufacture, install and use when recommended work practices are followed to reduce temporary mechanical irritation. As of 2012, the European Union and Germany have classified synthetic glass fibers as possibly or probably carcinogenic, but fibers can be exempt from this classification if they pass specific tests. A 2012 health hazard review for the European Commission stated that inhalation of fiberglass at concentrations of 3, 16 and 30 mg/m3 "did not induce fibrosis nor tumours except transient lung inflammation that disappeared after a post-exposure recovery period." Historic reviews of the epidemiology studies had been conducted by Harvard's Medical and Public Health Schools in 1995, the National Academy of Sciences in 2000, the Agency for Toxic Substances and Disease Registry ("ATSDR") in 2004, and the National Toxicology Program in 2011. which reached the same conclusion as IARC that there is no evidence of increased risk from occupational exposure to glass wool fibers. Pathophysiology Genetic and toxic effects are exerted through production of reactive oxygen species, which can damage DNA, and cause chromosomal aberrations, nuclear abnormalities, mutations, gene amplification in proto-oncogenes, and cell transformation in mammalian cells. There is also indirect, inflammation-driven genotoxicity through reactive oxygen species by inflammatory cells. The longer and thinner as well as the more durable (biopersistent) fibers were, the more potent they were in damage. Regulation, exposure limits In the US, fine mineral fiber emissions have been regulated by the EPA, but respirable fibers (“particulates not otherwise regulated”) are regulated by Occupational Safety and Health Administration (OSHA); OSHA has set the legal limit (permissible exposure limit) for fiberglass exposure in the workplace as 15 mg/m3 total and 5 mg/m3 in respiratory exposure over an 8-hour workday. 
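The limits just quoted are applied as 8-hour time-weighted averages (TWAs). A minimal sketch of the TWA arithmetic follows; the sampled concentrations and durations are made-up values for illustration only, while the limit figures are those stated above.

```python
# 8-hour time-weighted average against the OSHA limits quoted above
# (15 mg/m3 total dust, 5 mg/m3 respirable).  Sample data are hypothetical.
PEL_TOTAL = 15.0       # mg/m3, total dust
PEL_RESPIRABLE = 5.0   # mg/m3, respirable fraction

samples = [  # (hours, total mg/m3, respirable mg/m3) -- hypothetical measurements
    (3.0, 12.0, 3.0),
    (2.0,  8.0, 2.5),
    (3.0,  4.0, 1.0),
]

assert sum(t for t, _, _ in samples) == 8.0   # samples should span the full shift

twa_total = sum(t * c for t, c, _ in samples) / 8.0
twa_resp  = sum(t * c for t, _, c in samples) / 8.0

print(f"TWA total      {twa_total:.2f} mg/m3  (limit {PEL_TOTAL})")
print(f"TWA respirable {twa_resp:.2f} mg/m3  (limit {PEL_RESPIRABLE})")
```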
The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 3 fibers/cm3 (less than 3.5 micrometers in diameter and greater than 10 micrometers in length) as a time-weighted average over an 8-hour workday, and a 5 mg/m3 total limit. As of 2001, the Hazardous Substances Ordinance in Germany dictates a maximum occupational exposure limit of 86 mg/m3. In certain concentrations, a potentially explosive mixture may occur. Further manufacture of GRP components (grinding, cutting, sawing) creates fine dust and chips containing glass filaments, as well as tacky dust, in quantities high enough to affect health and the functionality of machines and equipment. The installation of effective extraction and filtration equipment is required to ensure safety and efficiency. See also Bulk moulding compound Fiberglass sheet laminating G-10 (material) Glass fiber reinforced concrete Hobas Ignace Dubus-Bonnel Sheet moulding compound Carbon-fiber-reinforced polymers reinforcement with carbon fibers. References External links American inventions Composite materials Fibre-reinforced polymers Glass applications
Fiberglass
[ "Physics", "Chemistry", "Materials_science" ]
5,371
[ "Composite materials", "Fiberglass", "Materials", "Polymer chemistry", "Matter" ]
174,706
https://en.wikipedia.org/wiki/Laplace%20operator
In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols ∇·∇, ∇² (where ∇ is the nabla operator), or Δ. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f(p). The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics: the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that density distribution. Solutions of Laplace's equation are called harmonic functions and represent the possible gravitational potentials in regions of vacuum. The Laplacian occurs in many differential equations describing physical phenomena. Poisson's equation describes electric and gravitational potentials; the diffusion equation describes heat and fluid flow; the wave equation describes wave propagation; and the Schrödinger equation describes the wave function in quantum mechanics. In image processing and computer vision, the Laplacian operator has been used for various tasks, such as blob and edge detection. The Laplacian is the simplest elliptic operator and is at the core of Hodge theory as well as the results of de Rham cohomology. Definition The Laplace operator is a second-order differential operator in the n-dimensional Euclidean space, defined as the divergence (∇·) of the gradient (∇f). Thus if f is a twice-differentiable real-valued function, then the Laplacian of f is the real-valued function defined by Δf = ∇²f = ∇·∇f, where the latter notations derive from formally writing ∇ = (∂/∂x_1, …, ∂/∂x_n). Explicitly, the Laplacian of f is thus the sum of all the unmixed second partial derivatives in the Cartesian coordinates x_i: Δf = ∂²f/∂x_1² + ∂²f/∂x_2² + ⋯ + ∂²f/∂x_n². As a second-order differential operator, the Laplace operator maps C^k functions to C^(k−2) functions for k ≥ 2. It is a linear operator Δ : C^k(R^n) → C^(k−2)(R^n), or more generally, an operator Δ : C^k(Ω) → C^(k−2)(Ω) for any open set Ω ⊆ R^n. Alternatively, the Laplace operator can be defined as Δf(p) = lim_{R→0} (2n/R²)(f̄_S(p,R) − f(p)), where n is the dimension of the space and f̄_S(p,R) is the average value of f on the surface of an n-sphere of radius R centered at p, that is, the surface integral over that n-sphere divided by the hypervolume of its boundary. Motivation Diffusion In the physical theory of diffusion, the Laplace operator arises naturally in the mathematical description of equilibrium. Specifically, if u is the density at equilibrium of some quantity such as a chemical concentration, then the net flux of u through the boundary ∂V (also called S) of any smooth region V is zero, provided there is no source or sink within V: ∮_S ∇u · n dS = 0, where n is the outward unit normal to the boundary of V. By the divergence theorem, ∫_V div(∇u) dV = ∮_S ∇u · n dS = 0. Since this holds for all smooth regions V, one can show that it implies div(∇u) = Δu = 0. The left-hand side of this equation is the Laplace operator, and the entire equation Δu = 0 is known as Laplace's equation. Solutions of the Laplace equation, i.e. functions whose Laplacian is identically zero, thus represent possible equilibrium densities under diffusion. 
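A minimal numerical sketch of the Cartesian definition above: the 5-point finite-difference Laplacian on a grid, which is proportional to the difference between the centre value and the average of its neighbours, echoing the informal "average over small spheres" description. The grid spacing and test function are arbitrary choices for illustration.

```python
import numpy as np

# Discrete 2-D Laplacian (5-point stencil): the sum of second differences in x
# and y, equal to 4/h^2 times (average of the four neighbours - centre value).
h = 0.01
x, y = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h), indexing="ij")
f = x**2 + y**2                      # test function whose exact Laplacian is 4

lap = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
       - 4.0 * f[1:-1, 1:-1]) / h**2

print(lap.mean())   # ~4.0 at every interior point, as expected
```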
The Laplace operator itself has a physical interpretation for non-equilibrium diffusion as the extent to which a point represents a source or sink of chemical concentration, in a sense made precise by the diffusion equation. This interpretation of the Laplacian is also explained by the following fact about averages. Averages Given a twice continuously differentiable function and a point , the average value of over the ball with radius centered at is: Similarly, the average value of over the sphere (the boundary of a ball) with radius centered at is: Density associated with a potential If denotes the electrostatic potential associated to a charge distribution , then the charge distribution itself is given by the negative of the Laplacian of : where is the electric constant. This is a consequence of Gauss's law. Indeed, if is any smooth region with boundary , then by Gauss's law the flux of the electrostatic field across the boundary is proportional to the charge enclosed: where the first equality is due to the divergence theorem. Since the electrostatic field is the (negative) gradient of the potential, this gives: Since this holds for all regions , we must have The same approach implies that the negative of the Laplacian of the gravitational potential is the mass distribution. Often the charge (or mass) distribution are given, and the associated potential is unknown. Finding the potential function subject to suitable boundary conditions is equivalent to solving Poisson's equation. Energy minimization Another motivation for the Laplacian appearing in physics is that solutions to in a region are functions that make the Dirichlet energy functional stationary: To see this, suppose is a function, and is a function that vanishes on the boundary of . Then: where the last equality follows using Green's first identity. This calculation shows that if , then is stationary around . Conversely, if is stationary around , then by the fundamental lemma of calculus of variations. Coordinate expressions Two dimensions The Laplace operator in two dimensions is given by: In Cartesian coordinates, where and are the standard Cartesian coordinates of the -plane. In polar coordinates, where represents the radial distance and the angle. Three dimensions In three dimensions, it is common to work with the Laplacian in a variety of different coordinate systems. In Cartesian coordinates, In cylindrical coordinates, where represents the radial distance, the azimuth angle and the height. In spherical coordinates: or by expanding the first and second term, these expressions read where represents the azimuthal angle and the zenith angle or co-latitude. In particular, the above is equivalent to where is the Laplace-Beltrami operator on the unit sphere. In general curvilinear coordinates (): where summation over the repeated indices is implied, is the inverse metric tensor and are the Christoffel symbols for the selected coordinates. dimensions In arbitrary curvilinear coordinates in dimensions (), we can write the Laplacian in terms of the inverse metric tensor, : from the Voss-Weyl formula for the divergence. In spherical coordinates in dimensions, with the parametrization with representing a positive real radius and an element of the unit sphere , where is the Laplace–Beltrami operator on the -sphere, known as the spherical Laplacian. 
The two radial derivative terms can be equivalently rewritten as: As a consequence, the spherical Laplacian of a function defined on can be computed as the ordinary Laplacian of the function extended to so that it is constant along rays, i.e., homogeneous of degree zero. Euclidean invariance The Laplacian is invariant under all Euclidean transformations: rotations and translations. In two dimensions, for example, this means that: for all θ, a, and b. In arbitrary dimensions, whenever ρ is a rotation, and likewise: whenever τ is a translation. (More generally, this remains true when ρ is an orthogonal transformation such as a reflection.) In fact, the algebra of all scalar linear differential operators, with constant coefficients, that commute with all Euclidean transformations, is the polynomial algebra generated by the Laplace operator. Spectral theory The spectrum of the Laplace operator consists of all eigenvalues for which there is a corresponding eigenfunction with: This is known as the Helmholtz equation. If is a bounded domain in , then the eigenfunctions of the Laplacian are an orthonormal basis for the Hilbert space . This result essentially follows from the spectral theorem on compact self-adjoint operators, applied to the inverse of the Laplacian (which is compact, by the Poincaré inequality and the Rellich–Kondrachov theorem). It can also be shown that the eigenfunctions are infinitely differentiable functions. More generally, these results hold for the Laplace–Beltrami operator on any compact Riemannian manifold with boundary, or indeed for the Dirichlet eigenvalue problem of any elliptic operator with smooth coefficients on a bounded domain. When is the -sphere, the eigenfunctions of the Laplacian are the spherical harmonics. Vector Laplacian The vector Laplace operator, also denoted by , is a differential operator defined over a vector field. The vector Laplacian is similar to the scalar Laplacian; whereas the scalar Laplacian applies to a scalar field and returns a scalar quantity, the vector Laplacian applies to a vector field, returning a vector quantity. When computed in orthonormal Cartesian coordinates, the returned vector field is equal to the vector field of the scalar Laplacian applied to each vector component. The vector Laplacian of a vector field is defined as This definition can be seen as the Helmholtz decomposition of the vector Laplacian. In Cartesian coordinates, this reduces to the much simpler form as where , , and are the components of the vector field , and just on the left of each vector field component is the (scalar) Laplace operator. This can be seen to be a special case of Lagrange's formula; see Vector triple product. For expressions of the vector Laplacian in other coordinate systems see Del in cylindrical and spherical coordinates. Generalization The Laplacian of any tensor field ("tensor" includes scalar and vector) is defined as the divergence of the gradient of the tensor: For the special case where is a scalar (a tensor of degree zero), the Laplacian takes on the familiar form. If is a vector (a tensor of first degree), the gradient is a covariant derivative which results in a tensor of second degree, and the divergence of this is again a vector. 
The formula for the vector Laplacian above may be used to avoid tensor math and may be shown to be equivalent to the divergence of the Jacobian matrix shown below for the gradient of a vector: And, in the same manner, a dot product, which evaluates to a vector, of a vector by the gradient of another vector (a tensor of 2nd degree) can be seen as a product of matrices: This identity is a coordinate dependent result, and is not general. Use in physics An example of the usage of the vector Laplacian is the Navier-Stokes equations for a Newtonian incompressible flow: where the term with the vector Laplacian of the velocity field represents the viscous stresses in the fluid. Another example is the wave equation for the electric field that can be derived from Maxwell's equations in the absence of charges and currents: This equation can also be written as: where is the D'Alembertian, used in the Klein–Gordon equation. Some properties First of all, we say that a smooth function is superharmonic whenever . Let be a smooth function, and let be a connected compact set. If is superharmonic, then, for every , we have for some constant depending on and . Generalizations A version of the Laplacian can be defined wherever the Dirichlet energy functional makes sense, which is the theory of Dirichlet forms. For spaces with additional structure, one can give more explicit descriptions of the Laplacian, as follows. Laplace–Beltrami operator The Laplacian also can be generalized to an elliptic operator called the Laplace–Beltrami operator defined on a Riemannian manifold. The Laplace–Beltrami operator, when applied to a function, is the trace () of the function's Hessian: where the trace is taken with respect to the inverse of the metric tensor. The Laplace–Beltrami operator also can be generalized to an operator (also called the Laplace–Beltrami operator) which operates on tensor fields, by a similar formula. Another generalization of the Laplace operator that is available on pseudo-Riemannian manifolds uses the exterior derivative, in terms of which the "geometer's Laplacian" is expressed as Here is the codifferential, which can also be expressed in terms of the Hodge star and the exterior derivative. This operator differs in sign from the "analyst's Laplacian" defined above. More generally, the "Hodge" Laplacian is defined on differential forms by This is known as the Laplace–de Rham operator, which is related to the Laplace–Beltrami operator by the Weitzenböck identity. D'Alembertian The Laplacian can be generalized in certain ways to non-Euclidean spaces, where it may be elliptic, hyperbolic, or ultrahyperbolic. In Minkowski space the Laplace–Beltrami operator becomes the D'Alembert operator or D'Alembertian: It is the generalization of the Laplace operator in the sense that it is the differential operator which is invariant under the isometry group of the underlying space and it reduces to the Laplace operator if restricted to time-independent functions. The overall sign of the metric here is chosen such that the spatial parts of the operator admit a negative sign, which is the usual convention in high-energy particle physics. The D'Alembert operator is also known as the wave operator because it is the differential operator appearing in the wave equations, and it is also part of the Klein–Gordon equation, which reduces to the wave equation in the massless case. 
The additional factor of in the metric is needed in physics if space and time are measured in different units; a similar factor would be required if, for example, the direction were measured in meters while the direction were measured in centimeters. Indeed, theoretical physicists usually work in units such that in order to simplify the equation. The d'Alembert operator generalizes to a hyperbolic operator on pseudo-Riemannian manifolds. See also Laplace–Beltrami operator, generalization to submanifolds in Euclidean space and Riemannian and pseudo-Riemannian manifold. The Laplacian in differential geometry. The discrete Laplace operator is a finite-difference analog of the continuous Laplacian, defined on graphs and grids. The Laplacian is a common operator in image processing and computer vision (see the Laplacian of Gaussian, blob detector, and scale space). The list of formulas in Riemannian geometry contains expressions for the Laplacian in terms of Christoffel symbols. Weyl's lemma (Laplace equation). Earnshaw's theorem which shows that stable static gravitational, electrostatic or magnetic suspension is impossible. Del in cylindrical and spherical coordinates. Other situations in which a Laplacian is defined are: analysis on fractals, time scale calculus and discrete exterior calculus. Notes References The Feynman Lectures on Physics Vol. II Ch. 12: Electrostatic Analogs . . Further reading The Laplacian - Richard Fitzpatrick 2006 External links Laplacian in polar coordinates derivation Laplace equations on the fractal cubes and Casimir effect Differential operators Elliptic partial differential equations Fourier analysis Operator Harmonic functions Linear operators in calculus Multivariable calculus
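The statement in the Vector Laplacian section above — that in Cartesian coordinates the componentwise Laplacian of a vector field equals grad(div F) − curl(curl F) — can be verified symbolically. The sketch below uses SymPy with an arbitrary test field; the field and the small helper definitions are illustrative assumptions, not notation from the article.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
vars_ = (x, y, z)

# An arbitrary smooth vector field F = (Fx, Fy, Fz) used only as a test case.
F = sp.Matrix([x**2 * y, sp.sin(y) * z, sp.exp(x) * z**2])

lap = lambda f: sum(sp.diff(f, v, 2) for v in vars_)          # scalar Laplacian
grad = lambda f: sp.Matrix([sp.diff(f, v) for v in vars_])    # gradient
div = lambda V: sum(sp.diff(V[i], vars_[i]) for i in range(3))
curl = lambda V: sp.Matrix([
    sp.diff(V[2], y) - sp.diff(V[1], z),
    sp.diff(V[0], z) - sp.diff(V[2], x),
    sp.diff(V[1], x) - sp.diff(V[0], y),
])

componentwise = F.applyfunc(lap)                  # scalar Laplacian applied to each component
helmholtz = grad(div(F)) - curl(curl(F))          # grad(div F) - curl(curl F)
print((componentwise - helmholtz).applyfunc(sp.simplify))     # zero vector
```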
Laplace operator
[ "Mathematics" ]
3,160
[ "Multivariable calculus", "Mathematical analysis", "Differential operators", "Calculus" ]
174,782
https://en.wikipedia.org/wiki/Gravitational%20field
In physics, a gravitational field or gravitational acceleration field is a vector field used to explain the influences that a body extends into the space around itself. A gravitational field is used to explain gravitational phenomena, such as the gravitational force field exerted on another massive body. It has dimension of acceleration (L/T2) and it is measured in units of newtons per kilogram (N/kg) or, equivalently, in meters per second squared (m/s2). In its original concept, gravity was a force between point masses. Following Isaac Newton, Pierre-Simon Laplace attempted to model gravity as some kind of radiation field or fluid, and since the 19th century, explanations for gravity in classical mechanics have usually been taught in terms of a field model, rather than a point attraction. It results from the spatial gradient of the gravitational potential field. In general relativity, rather than two particles attracting each other, the particles distort spacetime via their mass, and this distortion is what is perceived and measured as a "force". In such a model one states that matter moves in certain ways in response to the curvature of spacetime, and that there is either no gravitational force, or that gravity is a fictitious force. Gravity is distinguished from other forces by its obedience to the equivalence principle. Classical mechanics In classical mechanics, a gravitational field is a physical quantity. A gravitational field can be defined using Newton's law of universal gravitation. Determined in this way, the gravitational field around a single particle of mass is a vector field consisting at every point of a vector pointing directly towards the particle. The magnitude of the field at every point is calculated by applying the universal law, and represents the force per unit mass on any object at that point in space. Because the force field is conservative, there is a scalar potential energy per unit mass, , at each point in space associated with the force fields; this is called gravitational potential. The gravitational field equation is where is the gravitational force, is the mass of the test particle, is the radial vector of the test particle relative to the mass (or for Newton's second law of motion which is a time dependent function, a set of positions of test particles each occupying a particular point in space for the start of testing), is time, is the gravitational constant, and is the del operator. This includes Newton's law of universal gravitation, and the relation between gravitational potential and field acceleration. and are both equal to the gravitational acceleration (equivalent to the inertial acceleration, so same mathematical form, but also defined as gravitational force per unit mass). The negative signs are inserted since the force acts antiparallel to the displacement. The equivalent field equation in terms of mass density of the attracting mass is: which contains Gauss's law for gravity, and Poisson's equation for gravity. Newton's law implies Gauss's law, but not vice versa; see Relation between Gauss's and Newton's laws. These classical equations are differential equations of motion for a test particle in the presence of a gravitational field, i.e. setting up and solving these equations allows the motion of a test mass to be determined and described. The field around multiple particles is simply the vector sum of the fields around each individual particle. 
A test particle in such a field will experience a force that equals the vector sum of the forces that it would experience in these individual fields. That is, the gravitational field acting on a given mass is the sum of the gravitational fields due to all the other masses mi, excluding that mass itself; each contribution is evaluated from the position vector of the gravitating particle and that of the test particle. General relativity In general relativity, the Christoffel symbols play the role of the gravitational force field and the metric tensor plays the role of the gravitational potential. In general relativity, the gravitational field is determined by solving the Einstein field equations Gμν = κTμν, where Tμν is the stress–energy tensor, Gμν is the Einstein tensor, and κ is the Einstein gravitational constant. The latter is defined as κ = 8πG/c^4, where G is the Newtonian constant of gravitation and c is the speed of light. These equations are dependent on the distribution of matter, stress and momentum in a region of space, unlike Newtonian gravity, which depends only on the distribution of matter. The fields themselves in general relativity represent the curvature of spacetime. General relativity states that being in a region of curved space is equivalent to accelerating up the gradient of the field. By Newton's second law, this will cause an object to experience a fictitious force if it is held still with respect to the field. This is why a person standing still on the Earth's surface feels pulled down by the force of gravity. In general, the gravitational fields predicted by general relativity differ in their effects only slightly from those predicted by classical mechanics, but there are a number of easily verifiable differences, one of the most well known being the deflection of light in such fields. Embedding diagram Embedding diagrams are three-dimensional graphs commonly used in teaching to illustrate gravitational potential by drawing gravitational potential fields as a gravitational topography, depicting the potentials as so-called gravitational wells or spheres of influence. See also Classical mechanics Entropic gravity Gravitation Gravitational energy Gravitational potential Gravitational wave Gravity map Newton's law of universal gravitation Newton's laws of motion Potential energy Specific force Speed of gravity Tests of general relativity References Theories of gravity Geodesy General relativity
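The classical field equation and the superposition rule described above translate directly into a short numerical routine. In the sketch below the masses, positions, and field point are illustrative values (roughly Earth- and Moon-sized), not quantities taken from the article; G is the approximate CODATA gravitational constant.

```python
import numpy as np

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2 (approximate CODATA value)

# Illustrative point masses and positions, roughly Earth- and Moon-sized (kg, m).
masses = np.array([5.97e24, 7.35e22])
positions = np.array([[0.0, 0.0, 0.0],
                      [3.84e8, 0.0, 0.0]])

def g_field(r):
    """Newtonian field g(r) = -G * sum_i m_i * (r - r_i) / |r - r_i|**3 (superposition)."""
    g = np.zeros(3)
    for m, r_i in zip(masses, positions):
        d = r - r_i
        g += -G * m * d / np.linalg.norm(d) ** 3
    return g

# Field at a point on the first body's surface: dominated by that body (~9.8 m/s^2),
# with only a tiny contribution from the second.
print(g_field(np.array([6.37e6, 0.0, 0.0])))
```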
Gravitational field
[ "Physics", "Mathematics" ]
1,114
[ "Applied mathematics", "Theoretical physics", "Theory of relativity", "General relativity", "Theories of gravity", "Geodesy" ]
174,914
https://en.wikipedia.org/wiki/Atomic%20units
The atomic units are a system of natural units of measurement that is especially convenient for calculations in atomic physics and related scientific fields, such as computational chemistry and atomic spectroscopy. They were originally suggested and named by the physicist Douglas Hartree. Atomic units are often abbreviated "a.u." or "au", not to be confused with similar abbreviations used for astronomical units, arbitrary units, and absorbance units in other contexts. Motivation In the context of atomic physics, using the atomic units system can be a convenient shortcut, eliminating symbols and numbers and reducing the order of magnitude of most numbers involved. For example, the Hamiltonian operator in the Schrödinger equation for the helium atom with standard quantities, such as when using SI units, is but adopting the convention associated with atomic units that transforms quantities into dimensionless equivalents, it becomes In this convention, the constants , , , and all correspond to the value (see below). The distances relevant to the physics expressed in SI units are naturally on the order of , while expressed in atomic units distances are on the order of (one Bohr radius, the atomic unit of length). An additional benefit of expressing quantities using atomic units is that their values calculated and reported in atomic units do not change when values of fundamental constants are revised, since the fundamental constants are built into the conversion factors between atomic units and SI. History Hartree defined units based on three physical constants: Here, the modern equivalent of is the Rydberg constant , of is the electron mass , of is the Bohr radius , and of is the reduced Planck constant . Hartree's expressions that contain differ from the modern form due to a change in the definition of , as explained below. In 1957, Bethe and Salpeter's book Quantum mechanics of one-and two-electron atoms built on Hartree's units, which they called atomic units abbreviated "a.u.". They chose to use , their unit of action and angular momentum in place of Hartree's length as the base units. They noted that the unit of length in this system is the radius of the first Bohr orbit and their velocity is the electron velocity in Bohr's model of the first orbit. In 1959, Shull and Hall advocated atomic units based on Hartree's model but again chose to use as the defining unit. They explicitly named the distance unit a "Bohr radius"; in addition, they wrote the unit of energy as and called it a Hartree. These terms came to be used widely in quantum chemistry. In 1973 McWeeny extended the system of Shull and Hall by adding permittivity in the form of as a defining or base unit. Simultaneously he adopted the SI definition of so that his expression for energy in atomic units is , matching the expression in the 8th SI brochure. Definition A set of base units in the atomic system as in one proposal are the electron rest mass, the magnitude of the electronic charge, the Planck constant, and the permittivity. In the atomic units system, each of these takes the value 1; the corresponding values in the International System of Units are given in the table. Table notes Units Three of the defining constants (reduced Planck constant, elementary charge, and electron rest mass) are atomic units themselves – of action, electric charge, and mass, respectively. Two named units are those of length (Bohr radius ) and energy (hartree ). 
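Because the four base quantities are fixed physical constants, the named derived units can be evaluated directly from their SI values. The short sketch below is illustrative only; the numbers are 2018 CODATA figures rather than values quoted in the article.

```python
import math

# 2018 CODATA SI values of the four defining constants.
hbar = 1.054_571_817e-34     # reduced Planck constant, J s
e    = 1.602_176_634e-19     # elementary charge, C
m_e  = 9.109_383_7015e-31    # electron rest mass, kg
eps0 = 8.854_187_8128e-12    # vacuum permittivity, F/m

k = 1 / (4 * math.pi * eps0)          # Coulomb constant

a_0 = hbar**2 / (k * m_e * e**2)      # atomic unit of length: the Bohr radius
E_h = k * e**2 / a_0                  # atomic unit of energy: the hartree

print(a_0)          # ~5.292e-11 m
print(E_h)          # ~4.360e-18 J
print(E_h / e)      # ~27.211 eV
```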
Conventions Different conventions are adopted in the use of atomic units, which vary in presentation, formality and convenience. Explicit units Many texts (e.g. Jerrard & McNiell, Shull & Hall) define the atomic units as quantities, without a transformation of the equations in use. As such, they do not suggest treating either quantities as dimensionless or changing the form of any equations. This is consistent with expressing quantities in terms of dimensional quantities, where the atomic unit is included explicitly as a symbol (e.g. , , or more ambiguously, ), and keeping equations unaltered with explicit constants. Provision for choosing more convenient closely related quantities that are more suited to the problem as units than universal fixed units are is also suggested, for example based on the reduced mass of an electron, albeit with careful definition thereof where used (for example, a unit , where for a specified mass ). A convention that eliminates units In atomic physics, it is common to simplify mathematical expressions by a transformation of all quantities: Hartree suggested that expression in terms of atomic units allows us "to eliminate various universal constants from the equations", which amounts to informally suggesting a transformation of quantities and equations such that all quantities are replaced by corresponding dimensionless quantities. He does not elaborate beyond examples. McWeeny suggests that "... their adoption permits all the fundamental equations to be written in a dimensionless form in which constants such as , and are absent and need not be considered at all during mathematical derivations or the processes of numerical solution; the units in which any calculated quantity must appear are implicit in its physical dimensions and may be supplied at the end." He also states that "An alternative convention is to interpret the symbols as the numerical measures of the quantities they represent, referred to some specified system of units: in this case the equations contain only pure numbers or dimensionless variables; ... the appropriate units are supplied at the end of a calculation, by reference to the physical dimensions of the quantity calculated. [This] convention has much to recommend it and is tacitly accepted in atomic and molecular physics whenever atomic units are introduced, for example for convenience in computation." An informal approach is often taken, in which "equations are expressed in terms of atomic units simply by setting ". This is a form of shorthand for the more formal process of transformation between quantities that is suggested by others, such as McWeeny. Physical constants Dimensionless physical constants retain their values in any system of units. Of note is the fine-structure constant , which appears in expressions as a consequence of the choice of units. For example, the numeric value of the speed of light, expressed in atomic units, is Bohr model in atomic units Atomic units are chosen to reflect the properties of electrons in atoms, which is particularly clear in the classical Bohr model of the hydrogen atom for the bound electron in its ground state: Mass = 1 a.u. of mass Charge = −1 a.u. of charge Orbital radius = 1 a.u. of length Orbital velocity = 1 a.u. of velocity Orbital period = 2π a.u. of time Orbital angular velocity = 1 radian per a.u. of time Orbital momentum = 1 a.u. of momentum Ionization energy = a.u. of energy Electric field (due to nucleus) = 1 a.u. of electric field Lorentz force (due to nucleus) = 1 a.u. 
of force References Systems of units Natural units Atomic physics
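As a numerical footnote to the "Physical constants" and "Bohr model in atomic units" passages above: the speed of light expressed in atomic units is the reciprocal of the fine-structure constant, and the atomic unit of velocity is the Bohr-orbit speed αc. The sketch below uses 2018 CODATA values and is purely illustrative.

```python
import math

# 2018 CODATA SI values.
hbar = 1.054_571_817e-34     # reduced Planck constant, J s
e    = 1.602_176_634e-19     # elementary charge, C
eps0 = 8.854_187_8128e-12    # vacuum permittivity, F/m
c    = 299_792_458.0         # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)   # fine-structure constant (dimensionless)
v_0 = alpha * c                                  # atomic unit of velocity (Bohr-orbit speed)

print(1 / alpha)     # ~137.036: the numerical value of c in atomic units
print(c / v_0)       # the same number, obtained as c divided by the unit of velocity
```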
Atomic units
[ "Physics", "Chemistry", "Mathematics" ]
1,426
[ "Systems of units", "Quantity", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", "Units of measurement", " and optical physics" ]
174,945
https://en.wikipedia.org/wiki/Elementary%20charge
The elementary charge, usually denoted by , is a fundamental physical constant, defined as the electric charge carried by a single proton (+1 e) or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 . In the SI system of units, the value of the elementary charge is exactly defined as or 160.2176634 zeptocoulombs (zC). Since the 2019 revision of the SI, the seven SI base units are defined in terms of seven fundamental physical constants, of which the elementary charge is one. In the centimetre–gram–second system of units (CGS), the corresponding quantity is . Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865. As a unit In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron. In other natural unit systems, the unit of charge is defined as with the result that where is the fine-structure constant, is the speed of light, is the electric constant, and is the reduced Planck constant. Quantization Charge quantization is the principle that the charge of any object is an integer multiple of the elementary charge. Thus, an object's charge can be exactly 0 e, or exactly 1 e, −1 e, 2 e, etc., but not  e, or −3.8 e, etc. (There may be exceptions to this statement, depending on how "object" is defined; see below.) This is the reason for the terminology "elementary charge": it is meant to imply that it is an indivisible unit of charge. Fractional elementary charge There are two known sorts of exceptions to the indivisibility of the elementary charge: quarks and quasiparticles. Quarks, first posited in the 1960s, have quantized charge, but the charge is quantized into multiples of . However, quarks cannot be isolated; they exist only in groupings, and stable groupings of quarks (such as a proton, which consists of three quarks) all have charges that are integer multiples of e. For this reason, either 1 e or can be justifiably considered to be "the quantum of charge", depending on the context. This charge commensurability, "charge quantization", has partially motivated grand unified theories. Quasiparticles are not particles as such, but rather an emergent entity in a complex material system that behaves like a particle. In 1982 Robert Laughlin explained the fractional quantum Hall effect by postulating the existence of fractionally charged quasiparticles. 
This theory is now widely accepted, but this is not considered to be a violation of the principle of charge quantization, since quasiparticles are not elementary particles. Quantum of charge All known elementary particles, including quarks, have charges that are integer multiples of  e. Therefore, the "quantum of charge" is  e. In this case, one says that the "elementary charge" is three times as large as the "quantum of charge". On the other hand, all isolatable particles have charges that are integer multiples of e. (Quarks cannot be isolated: they exist only in collective states like protons that have total charges that are integer multiples of e.) Therefore, the "quantum of charge" is e, with the proviso that quarks are not to be included. In this case, "elementary charge" would be synonymous with the "quantum of charge". In fact, both terminologies are used. For this reason, phrases like "the quantum of charge" or "the indivisible unit of charge" can be ambiguous unless further specification is given. On the other hand, the term "elementary charge" is unambiguous: it refers to a quantity of charge equal to that of a proton. Lack of fractional charges Paul Dirac argued in 1931 that if magnetic monopoles exist, then electric charge must be quantized; however, it is unknown whether magnetic monopoles actually exist. It is currently unknown why isolatable particles are restricted to integer charges; much of the string theory landscape appears to admit fractional charges. Experimental measurements of the elementary charge The elementary charge is exactly defined since 20 May 2019 by the International System of Units. Prior to this change, the elementary charge was a measured quantity whose magnitude was determined experimentally. This section summarizes these historical experimental measurements. In terms of the Avogadro constant and Faraday constant If the Avogadro constant NA and the Faraday constant F are independently known, the value of the elementary charge can be deduced using the formula (In other words, the charge of one mole of electrons, divided by the number of electrons in a mole, equals the charge of a single electron.) This method is not how the most accurate values are measured today. Nevertheless, it is a legitimate and still quite accurate method, and experimental methodologies are described below. The value of the Avogadro constant NA was first approximated by Johann Josef Loschmidt who, in 1865, estimated the average diameter of the molecules in air by a method that is equivalent to calculating the number of particles in a given volume of gas. Today the value of NA can be measured at very high accuracy by taking an extremely pure crystal (often silicon), measuring how far apart the atoms are spaced using X-ray diffraction or another method, and accurately measuring the density of the crystal. From this information, one can deduce the mass (m) of a single atom; and since the molar mass (M) is known, the number of atoms in a mole can be calculated: . The value of F can be measured directly using Faraday's laws of electrolysis. Faraday's laws of electrolysis are quantitative relationships based on the electrochemical researches published by Michael Faraday in 1834. In an electrolysis experiment, there is a one-to-one correspondence between the electrons passing through the anode-to-cathode wire and the ions that plate onto or off of the anode or cathode. 
Measuring the mass change of the anode or cathode, and the total charge passing through the wire (which can be measured as the time-integral of electric current), and also taking into account the molar mass of the ions, one can deduce F. The limit to the precision of the method is the measurement of F: the best experimental value has a relative uncertainty of 1.6 ppm, about thirty times higher than other modern methods of measuring or calculating the elementary charge. Oil-drop experiment A famous method for measuring e is Millikan's oil-drop experiment. A small drop of oil in an electric field would move at a rate that balanced the forces of gravity, viscosity (of traveling through the air), and electric force. The forces due to gravity and viscosity could be calculated based on the size and velocity of the oil drop, so electric force could be deduced. Since electric force, in turn, is the product of the electric charge and the known electric field, the electric charge of the oil drop could be accurately computed. By measuring the charges of many different oil drops, it can be seen that the charges are all integer multiples of a single small charge, namely e. The necessity of measuring the size of the oil droplets can be eliminated by using tiny plastic spheres of a uniform size. The force due to viscosity can be eliminated by adjusting the strength of the electric field so that the sphere hovers motionless. Shot noise Any electric current will be associated with noise from a variety of sources, one of which is shot noise. Shot noise exists because a current is not a smooth continual flow; instead, a current is made up of discrete electrons that pass by one at a time. By carefully analyzing the noise of a current, the charge of an electron can be calculated. This method, first proposed by Walter H. Schottky, can determine a value of e of which the accuracy is limited to a few percent. However, it was used in the first direct observation of Laughlin quasiparticles, implicated in the fractional quantum Hall effect. From the Josephson and von Klitzing constants Another accurate method for measuring the elementary charge is by inferring it from measurements of two effects in quantum mechanics: The Josephson effect, voltage oscillations that arise in certain superconducting structures; and the quantum Hall effect, a quantum effect of electrons at low temperatures, strong magnetic fields, and confinement into two dimensions. The Josephson constant is where h is the Planck constant. It can be measured directly using the Josephson effect. The von Klitzing constant is It can be measured directly using the quantum Hall effect. From these two constants, the elementary charge can be deduced: CODATA method The relation used by CODATA to determine elementary charge was: where h is the Planck constant, α is the fine-structure constant, μ0 is the magnetic constant, ε0 is the electric constant, and c is the speed of light. Presently this equation reflects a relation between ε0 and α, while all others are fixed values. Thus the relative standard uncertainties of both will be same. Tests of the universality of elementary charge See also Committee on Data of the International Science Council Notes References Further reading Fundamentals of Physics, 7th Ed., Halliday, Robert Resnick, and Jearl Walker. Wiley, 2005 Physical constants Units of electrical charge es:Carga eléctrica#Carga eléctrica elemental
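Each of the determinations described above reduces to a one-line computation once the relevant constants are known. The sketch below evaluates e three ways — from the fine-structure constant, from the Faraday and Avogadro constants, and from the Josephson and von Klitzing constants — using CODATA/SI values; the figures themselves are not quoted from the article.

```python
import math

# Route 1: e = sqrt(4*pi*eps0*hbar*c*alpha), i.e. the natural unit of charge times sqrt(alpha).
eps0, hbar, c, alpha = 8.854_187_8128e-12, 1.054_571_817e-34, 299_792_458.0, 7.297_352_5693e-3
e_from_alpha = math.sqrt(4 * math.pi * eps0 * hbar * c * alpha)

# Route 2: e = F / N_A, the charge of a mole of electrons divided by the electrons per mole.
F, N_A = 96_485.332_12, 6.022_140_76e23
e_from_faraday = F / N_A

# Route 3: e = 2 / (K_J * R_K), from the Josephson constant (2e/h) and
# the von Klitzing constant (h/e**2).
K_J, R_K = 483_597.848_4e9, 25_812.807_45
e_from_quantum_hall = 2 / (K_J * R_K)

for value in (e_from_alpha, e_from_faraday, e_from_quantum_hall):
    print(value)        # each ~1.602176634e-19 C
```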
Elementary charge
[ "Physics", "Mathematics" ]
2,207
[ "Physical quantities", "Electric charge", "Quantity", "Physical constants", "Units of electrical charge", "Units of measurement" ]
174,955
https://en.wikipedia.org/wiki/Bohr%20magneton
In atomic physics, the Bohr magneton (symbol ) is a physical constant and the natural unit for expressing the magnetic moment of an electron caused by its orbital or spin angular momentum. In SI units, the Bohr magneton is defined as and in the Gaussian CGS units as where is the elementary charge, is the reduced Planck constant, is the electron mass, is the speed of light. History The idea of elementary magnets is due to Walther Ritz (1907) and Pierre Weiss. Already before the Rutherford model of atomic structure, several theorists commented that the magneton should involve the Planck constant h. By postulating that the ratio of electron kinetic energy to orbital frequency should be equal to h, Richard Gans computed a value that was twice as large as the Bohr magneton in September 1911. At the First Solvay Conference in November that year, Paul Langevin obtained a value of . Langevin assumed that the attractive force was inversely proportional to distance to the power and specifically The Romanian physicist Ștefan Procopiu had obtained the expression for the magnetic moment of the electron in 1913. The value is sometimes referred to as the "Bohr–Procopiu magneton" in Romanian scientific literature. The Weiss magneton was experimentally derived in 1911 as a unit of magnetic moment equal to joules per tesla, which is about 20% of the Bohr magneton. In the summer of 1913, the values for the natural units of atomic angular momentum and magnetic moment were obtained by the Danish physicist Niels Bohr as a consequence of his atom model. In 1920, Wolfgang Pauli gave the Bohr magneton its name in an article where he contrasted it with the magneton of the experimentalists which he called the Weiss magneton. Theory A magnetic moment of an electron in an atom is composed of two components. First, the orbital motion of an electron around a nucleus generates a magnetic moment by Ampère's circuital law. Second, the inherent rotation, or spin, of the electron has a spin magnetic moment. In the Bohr model of the atom, for an electron that is in the orbit of lowest energy, its orbital angular momentum has magnitude equal to the reduced Planck constant, denoted ħ. The Bohr magneton is the magnitude of the magnetic dipole moment of an electron orbiting an atom with this angular momentum. The spin angular momentum of an electron is ħ, but the intrinsic electron magnetic moment caused by its spin is also approximately one Bohr magneton, which results in the electron spin g-factor, a factor relating spin angular momentum to corresponding magnetic moment of a particle, having a value of approximately 2. See also Anomalous magnetic moment Electron magnetic moment Bohr radius Nuclear magneton Parson magneton Physical constant Zeeman effect References Atomic physics Niels Bohr Physical constants Quantum magnetism Magnetic moment
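The SI expression for the Bohr magneton is eħ/(2mₑ) (the Gaussian CGS form carries an extra factor of c in the denominator). Evaluating it with 2018 CODATA values, as in the illustrative sketch below, recovers the familiar magnitude.

```python
# mu_B = e * hbar / (2 * m_e), evaluated with 2018 CODATA SI values.
e    = 1.602_176_634e-19     # elementary charge, C
hbar = 1.054_571_817e-34     # reduced Planck constant, J s
m_e  = 9.109_383_7015e-31    # electron rest mass, kg

mu_B = e * hbar / (2 * m_e)
print(mu_B)                  # ~9.274e-24 J/T
```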
Bohr magneton
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
581
[ "Physical quantities", "Quantity", "Quantum mechanics", "Magnetic moment", "Quantum magnetism", "Physical constants", " molecular", " and optical physics", "Atomic physics", "Atomic", "Condensed matter physics", "Moment (physics)" ]
175,039
https://en.wikipedia.org/wiki/Czochralski%20method
The Czochralski method, also Czochralski technique or Czochralski process, is a method of crystal growth used to obtain single crystals of semiconductors (e.g. silicon, germanium and gallium arsenide), metals (e.g. palladium, platinum, silver, gold), salts and synthetic gemstones. The method is named after Polish scientist Jan Czochralski, who invented the method in 1915 while investigating the crystallization rates of metals. He made this discovery by accident: instead of dipping his pen into his inkwell, he dipped it in molten tin, and drew a tin filament, which later proved to be a single crystal. The method is still used in over 90 percent of all electronics in the world that use semiconductors. The most important application may be the growth of large cylindrical ingots, or boules, of single crystal silicon used in the electronics industry to make semiconductor devices like integrated circuits. Other semiconductors, such as gallium arsenide, can also be grown by this method, although lower defect densities in this case can be obtained using variants of the Bridgman–Stockbarger method. The method is not limited to production of metal or metalloid crystals. For example, it is used to manufacture very high-purity crystals of salts, including material with controlled isotopic composition, for use in particle physics experiments, with tight controls (part per billion measurements) on confounding metal ions and water absorbed during manufacture. Application Monocrystalline silicon (mono-Si) grown by the Czochralski method is often referred to as monocrystalline Czochralski silicon (Cz-Si). It is the basic material in the production of integrated circuits used in computers, TVs, mobile phones and all types of electronic equipment and semiconductor devices. Monocrystalline silicon is also used in large quantities by the photovoltaic industry for the production of conventional mono-Si solar cells. The almost perfect crystal structure yields the highest light-to-electricity conversion efficiency for silicon. Production of Czochralski silicon High-purity, semiconductor-grade silicon (only a few parts per million of impurities) is melted in a crucible at , usually made of quartz. Dopant impurity atoms such as boron or phosphorus can be added to the molten silicon in precise amounts to dope the silicon, thus changing it into p-type or n-type silicon, with different electronic properties. A precisely oriented rod-mounted seed crystal is dipped into the molten silicon. The seed crystal's rod is slowly pulled upwards and rotated simultaneously. By precisely controlling the temperature gradients, rate of pulling and speed of rotation, it is possible to extract a large, single-crystal, cylindrical ingot from the melt. Occurrence of unwanted instabilities in the melt can be avoided by investigating and visualizing the temperature and velocity fields during the crystal growth process. This process is normally performed in an inert atmosphere, such as argon, in an inert chamber, such as quartz. Crystal sizes Due to efficiencies of scale, the semiconductor industry often uses wafers with standardized dimensions, or common wafer specifications. Early on, boules were small, a few centimeters wide. With advanced technology, high-end device manufacturers use 200 mm and 300 mm diameter wafers. Width is controlled by precise control of temperature, speeds of rotation, and the speed at which the seed holder is withdrawn. 
The crystal ingots from which wafers are sliced can be up to 2 metres in length, weighing several hundred kilograms. Larger wafers allow improvements in manufacturing efficiency, as more chips can be fabricated on each wafer, with lower relative loss, so there has been a steady drive to increase silicon wafer sizes. The next step up, 450 mm, was scheduled for introduction in 2018. Silicon wafers are typically about 0.2–0.75 mm thick, and can be polished to great flatness for making integrated circuits or textured for making solar cells. Incorporating impurities When silicon is grown by the Czochralski method, the melt is contained in a silica (quartz) crucible. During growth, the walls of the crucible dissolve into the melt and Czochralski silicon therefore contains oxygen at a typical concentration of about 10^18 atoms/cm^3. Oxygen impurities can have beneficial or detrimental effects. Carefully chosen annealing conditions can give rise to the formation of oxygen precipitates. These have the effect of trapping unwanted transition metal impurities in a process known as gettering, improving the purity of the surrounding silicon. However, formation of oxygen precipitates at unintended locations can also destroy electrical structures. Additionally, oxygen impurities can improve the mechanical strength of silicon wafers by immobilising any dislocations which may be introduced during device processing. It was experimentally shown in the 1990s that the high oxygen concentration is also beneficial for the radiation hardness of silicon particle detectors used in harsh radiation environments (such as CERN's LHC/HL-LHC projects). Therefore, radiation detectors made of Czochralski and magnetic-Czochralski silicon are considered to be promising candidates for many future high-energy physics experiments. It has also been shown that the presence of oxygen in silicon increases impurity trapping during post-implantation annealing processes. However, oxygen impurities can react with boron in an illuminated environment, such as that experienced by solar cells. This results in the formation of an electrically active boron–oxygen complex that detracts from cell performance. Module output drops by approximately 3% during the first few hours of light exposure. Mathematical form Impurity concentration in the final solid follows a power law relating the initial and final concentrations, the initial and final melt volumes, and the segregation coefficient k associated with the impurity at the melting phase transition. This follows from the fact that impurities are removed from the melt when an infinitesimal volume freezes. See also Float-zone silicon References External links Czochralski doping process Industrial processes Semiconductor growth Crystals Science and technology in Poland Polish inventions Methods of crystal growth
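The "Mathematical form" passage above describes a power-law dependence on how much of the melt has frozen. A standard way to write it is the normal-freezing (Scheil) relation, C_s(f) = k·C_0·(1 − f)^(k−1), where f is the solidified fraction; the notation and the numeric values in the sketch below are illustrative assumptions rather than quantities taken from the article (k ≈ 0.8 is an approximate literature value for boron in silicon).

```python
# Normal-freezing (Scheil) relation for the impurity concentration frozen into the solid
# when a fraction f of the melt has solidified: C_s(f) = k * C_0 * (1 - f)**(k - 1).
C_0 = 1.0e16        # initial impurity concentration in the melt, atoms/cm^3 (illustrative)
k_seg = 0.8         # segregation coefficient (approximate value for boron in silicon)

def c_solid(f, k=k_seg, c0=C_0):
    """Impurity concentration in the solid freezing when fraction f has solidified."""
    return k * c0 * (1.0 - f) ** (k - 1.0)

for f in (0.0, 0.5, 0.9):
    print(f, c_solid(f))    # for k < 1 the concentration rises toward the tail of the ingot
```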
Czochralski method
[ "Chemistry", "Materials_science" ]
1,275
[ "Crystallography", "Crystals", "Methods of crystal growth" ]
175,075
https://en.wikipedia.org/wiki/Yuga
A yuga, in Hinduism, is generally used to indicate an age of time. In the Rigveda, a yuga refers to generations, a period of time (whether long or short), or a yoke (joining of two things). In the Mahabharata, the words yuga and kalpa (a day of Brahma) are used interchangeably to describe the cycle of creation and destruction. In post-Vedic texts, the words "yuga" and "age" commonly denote a (pronounced chatur yuga), a cycle of four world ages—for example, in the Surya Siddhanta and Bhagavad Gita (part of the Mahabharata)—unless expressly limited by the name of one of its minor ages: Krita (Satya) Yuga, Treta Yuga, Dvapara Yuga, or Kali Yuga. Etymology Yuga () means "a yoke" (joining of two things), "generations", or "a period of time" such as an age, where its archaic spelling is yug, with other forms of yugam, , and yuge, derived from yuj (), believed derived from (Proto-Indo-European: 'to join or unite'). Meanings The term "yuga" has multiple meanings, including representing the number 4 and various periods of time. In early Indian astronomy, it referred to a five-year cycle starting with the conjunction of the sun and moon in the autumnal equinox. More commonly, "yuga" is used in the context of kalpas, composed of four yugas. According to the Manusmriti, a kalpa starts with a Satya Yuga (4,000 years), followed by a Treta Yuga (3,000 years), a Dvapara Yuga (2,000 years), and ends with a Kali Yuga (1,000 years). According to Vishnu Purana, each Mahayuga comprises a Satya Yuga (1,728,000 human years), a Treta Yuga (1,296,000 years), a Dvapara Yuga (864,000 years), and a Kali Yuga (432,000 years). Virtues According to the Manusmriti, the virtue (dharma) of human beings varies across the four yugas (ages). The text states: In the Krita Yuga, the virtue is austerity (tapas); in the Treta Yuga, it is knowledge (jnana); in the Dvapara Yuga, it is sacrifice (yajna); and in the Kali Yuga, it is charity (dāna). See also Hindu units of time Kalpa (day of Brahma) Manvantara (age of Manu) Pralaya (period of dissolution) Yuga Cycle (four yuga ages): Satya (Krita), Treta, Dvapara, and Kali List of numbers in Hindu scriptures Explanatory notes References External links Vedic Time System: Yuga Four Yugas Hindu astronomy Hindu philosophical concepts Time in Hinduism Units of time
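The Vishnu Purana durations quoted above follow a 4:3:2:1 pattern and sum to 4,320,000 human years in one mahayuga, which the short arithmetic check below makes explicit (the figures are those given in the paragraph above).

```python
# Yuga lengths in human years, as quoted from the Vishnu Purana.
yugas = {"Satya (Krita)": 1_728_000, "Treta": 1_296_000, "Dvapara": 864_000, "Kali": 432_000}

print(sum(yugas.values()))                      # 4320000 human years in one mahayuga
print([n // 432_000 for n in yugas.values()])   # [4, 3, 2, 1]
```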
Yuga
[ "Physics", "Mathematics" ]
666
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
175,146
https://en.wikipedia.org/wiki/Rudolf%20Clausius
Rudolf Julius Emanuel Clausius (; 2 January 1822 – 24 August 1888) was a German physicist and mathematician and is considered one of the central founding fathers of the science of thermodynamics. By his restatement of Sadi Carnot's principle known as the Carnot cycle, he gave the theory of heat a truer and sounder basis. His most important paper, "On the Moving Force of Heat", published in 1850, first stated the basic ideas of the second law of thermodynamics. In 1865 he introduced the concept of entropy. In 1870 he introduced the virial theorem, which applied to heat. Life Clausius was born in Köslin (now Koszalin, Poland) in the Province of Pomerania in Prussia. His father was a Protestant pastor and school inspector, and Rudolf studied in the school of his father. In 1838, he went to the Gymnasium in Stettin. Clausius graduated from the University of Berlin in 1844 where he had studied mathematics and physics since 1840 with, among others, Gustav Magnus, Peter Gustav Lejeune Dirichlet, and Jakob Steiner. He also studied history with Leopold von Ranke. During 1848, he got his doctorate from the University of Halle on optical effects in Earth's atmosphere. In 1850 he became professor of physics at the Royal Artillery and Engineering School in Berlin and Privatdozent at the Berlin University. In 1855 he became professor at the ETH Zürich, the Swiss Federal Institute of Technology in Zürich, where he stayed until 1867. During that year, he moved to Würzburg and two years later, in 1869 to Bonn. In 1870 Clausius organized an ambulance corps in the Franco-Prussian War. He was wounded in battle, leaving him with a lasting disability. He was awarded the Iron Cross for his services. His wife, Adelheid Rimpau died in 1875, leaving him to raise their six children. In 1886, he married Sophie Sack, and then had another child. Two years later, on 24 August 1888, he died in Bonn, Germany. Work Clausius's PhD thesis concerning the refraction of light proposed that we see a blue sky during the day, and various shades of red at sunrise and sunset (among other phenomena) due to reflection and refraction of light. Later, Lord Rayleigh would show that it was in fact due to the scattering of light. His most famous paper, Ueber die bewegende Kraft der Wärme ("On the Moving Force of Heat and the Laws of Heat which may be Deduced Therefrom") was published in 1850, and dealt with the mechanical theory of heat. In this paper, he showed there was a contradiction between Carnot's principle and the concept of conservation of energy. Clausius restated the two laws of thermodynamics to overcome this contradiction. This paper made him famous among scientists. (The third law was developed by Walther Nernst, during the years 1906–1912). Clausius's most famous statement of the second law of thermodynamics was published in German in 1854, and in English in 1856. During 1857, Clausius contributed to the field of kinetic theory after refining August Krönig's very simple gas-kinetic model to include translational, rotational and vibrational molecular motions. In this same work he introduced the concept of 'Mean free path' of a particle. Clausius deduced the Clausius–Clapeyron relation from thermodynamics. This relation, which is a way of characterizing the phase transition between two states of matter such as solid and liquid, had originally been developed in 1834 by Émile Clapeyron. Entropy In 1865, Clausius gave the first mathematical version of the concept of entropy, and also gave it its name. 
Clausius chose the word because the meaning (from Greek ἐν en "in" and τροπή tropē "transformation") is "content transformative" or "transformation content" ("Verwandlungsinhalt"). He used the now abandoned unit 'Clausius' (symbol: Cl) for entropy. 1 Clausius (Cl) = 1 calorie/degree Celsius (cal/°C) = 4.1868 joules per kelvin (J/K) The landmark 1865 paper in which he introduced the concept of entropy ends with the following summary of the first and second laws of thermodynamics: Leon Cooper added that in this way he succeeded in coining a word that meant the same thing to everybody: nothing. Tributes Honorary Membership of the Institution of Engineers and Shipbuilders in Scotland in 1859. Iron Cross of 1870 Fellow of the Royal Society of London in 1868 and received its Copley Medal in 1879. Member of the Royal Swedish Academy of Sciences in 1878. Huygens Medal in 1870. Foreign Member of the Accademia Nazionale dei Lincei in Rome in 1880 Member of the German Academy of Sciences Leopoldina in 1880 Poncelet Prize in 1883. Honorary doctorate from the University of Würzburg in 1882. Foreign Member of the Royal Netherlands Academy of Arts and Sciences in 1886. Pour le Mérite for Arts and Sciences in 1888 The lunar crater Clausius is named in his honor. A memorial in his home town of Koszalin in 2009 Publications English translations of nine papers. See also Hans Peter Jørgen Julius Thomsen, one of the founders of thermochemistry. References External links Revival of Kinetic Theory by Clausius 1822 births 1888 deaths People from Koszalin Academic staff of ETH Zurich Thermodynamicists German military personnel of the Franco-Prussian War 19th-century German physicists German fluid dynamicists People from the Province of Pomerania Recipients of the Iron Cross (1870) Recipients of the Copley Medal Recipients of the Pour le Mérite (civil class) Humboldt University of Berlin alumni Academic staff of the University of Bonn Academic staff of the University of Würzburg Foreign members of the Royal Society Foreign associates of the National Academy of Sciences Members of the Royal Netherlands Academy of Arts and Sciences Members of the Royal Swedish Academy of Sciences German theoretical physicists Prussian Army personnel
Rudolf Clausius
[ "Physics", "Chemistry" ]
1,272
[ "Thermodynamics", "Thermodynamicists" ]
175,217
https://en.wikipedia.org/wiki/Rhizome
In botany and dendrology, a rhizome ( ) is a modified subterranean plant stem that sends out roots and shoots from its nodes. Rhizomes are also called creeping rootstalks or just rootstalks. Rhizomes develop from axillary buds and grow horizontally. The rhizome also retains the ability to allow new shoots to grow upwards. A rhizome is the main stem of the plant that runs typically underground and horizontally to the soil surface. Rhizomes have nodes and internodes and auxiliary buds. Roots do not have nodes and internodes and have a root cap terminating their ends. In general, rhizomes have short internodes, send out roots from the bottom of the nodes, and generate new upward-growing shoots from the top of the nodes. A stolon is similar to a rhizome, but stolon sprouts from an existing stem having long internodes and generating new shoots at the ends, they are often also called runners such as in the strawberry plant. A stem tuber is a thickened part of a rhizome or stolon that has been enlarged for use as a storage organ. In general, a tuber is high in starch, e.g. the potato, which is a modified stolon. The term "tuber" is often used imprecisely and is sometimes applied to plants with rhizomes. The plant uses the rhizome to store starches, proteins, and other nutrients. These nutrients become useful for the plant when new shoots must be formed or when the plant dies back for the winter. If a rhizome is separated, each piece may be able to give rise to a new plant. This is a process known as vegetative reproduction and is used by farmers and gardeners to propagate certain plants. This also allows for lateral spread of grasses like bamboo and bunch grasses. Examples of plants that are propagated this way include hops, asparagus, ginger, irises, lily of the valley, cannas, and sympodial orchids. Stored rhizomes are subject to bacterial and fungal infections, making them unsuitable for replanting and greatly diminishing stocks. However, rhizomes can also be produced artificially from tissue cultures. The ability to easily grow rhizomes from tissue cultures leads to better stocks for replanting and greater yields. The plant hormones ethylene and jasmonic acid have been found to help induce and regulate the growth of rhizomes, specifically in rhubarb. Ethylene that was applied externally was found to affect internal ethylene levels, allowing easy manipulations of ethylene concentrations. Knowledge of how to use these hormones to induce rhizome growth could help farmers and biologists to produce plants grown from rhizomes, and more easily cultivate and grow better plants. Some plants have rhizomes that grow above ground or that lie at the soil surface, including some Iris species as well as ferns, whose spreading stems are rhizomes. Plants with underground rhizomes include gingers, bamboo, snake plant, the Venus flytrap, Chinese lantern, western poison-oak, hops, and Alstroemeria, and some grasses, such as Johnson grass, Bermuda grass, and purple nut sedge. Rhizomes generally form a single layer, but in giant horsetails, can be multi-tiered. Many rhizomes have culinary value, and some, such as zhe'ergen, are commonly consumed raw. Some rhizomes that are used directly in cooking include ginger, turmeric, galangal, fingerroot, and lotus. See also Aspen Bulb Corm Mycorrhiza Tuber Explanatory notes References External links Plant anatomy Plant physiology Plant reproduction Plant roots Plant stem morphology
Rhizome
[ "Biology" ]
812
[ "Plant physiology", "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
175,622
https://en.wikipedia.org/wiki/Passivation%20%28chemistry%29
In physical chemistry and engineering, passivation is coating a material so that it becomes "passive", that is, less readily affected or corroded by the environment. Passivation involves creation of an outer layer of shield material that is applied as a microcoating, created by chemical reaction with the base material, or allowed to build by spontaneous oxidation in the air. As a technique, passivation is the use of a light coat of a protective material, such as metal oxide, to create a shield against corrosion. Passivation of silicon is used during fabrication of microelectronic devices. Undesired passivation of electrodes, called "fouling", increases the circuit resistance so it interferes with some electrochemical applications such as electrocoagulation for wastewater treatment, amperometric chemical sensing, and electrochemical synthesis. When exposed to air, many metals naturally form a hard, relatively inert surface layer, usually an oxide (termed the "native oxide layer") or a nitride, that serves as a passivation layer - i.e. these metals are "self-protecting". In the case of silver, the dark tarnish is a passivation layer of silver sulfide formed from reaction with environmental hydrogen sulfide. Aluminium similarly forms a stable protective oxide layer which is why it does not "rust". (In contrast, some base metals, notably iron, oxidize readily to form a rough, porous coating of rust that adheres loosely, is of higher volume than the original displaced metal, and sloughs off readily; all of which permit & promote further oxidation.) The passivation layer of oxide markedly slows further oxidation and corrosion in room-temperature air for aluminium, beryllium, chromium, zinc, titanium, and silicon (a metalloid). The inert surface layer formed by reaction with air has a thickness of about 1.5 nm for silicon, 1–10 nm for beryllium, and 1 nm initially for titanium, growing to 25 nm after several years. Similarly, for aluminium, it grows to about 5 nm after several years. In the context of the semiconductor device fabrication, such as silicon MOSFET transistors and solar cells, surface passivation refers not only to reducing the chemical reactivity of the surface but also to eliminating the dangling bonds and other defects that form electronic surface states, which impair performance of the devices. Surface passivation of silicon usually consists of high-temperature thermal oxidation. Mechanisms There has been much interest in determining the mechanisms that govern the increase of thickness of the oxide layer over time. Some of the important factors are the volume of oxide relative to the volume of the parent metal, the mechanism of oxygen diffusion through the metal oxide to the parent metal, and the relative chemical potential of the oxide. Boundaries between micro grains, if the oxide layer is crystalline, form an important pathway for oxygen to reach the unoxidized metal below. For this reason, vitreous oxide coatings – which lack grain boundaries – can retard oxidation. The conditions necessary, but not sufficient, for passivation are recorded in Pourbaix diagrams. Some corrosion inhibitors help the formation of a passivation layer on the surface of the metals to which they are applied. Some compounds, dissolved in solutions (chromates, molybdates) form non-reactive and low solubility films on metal surfaces. 
It has been shown using electrochemical scanning tunneling microscopy that during iron passivation, an n-type semiconductor Fe(III) oxide grows at the interface with the metal, leading to the buildup of an electronic barrier opposing electron flow and an electronic depletion region that prevents further oxidation reactions. These results indicate a mechanism of "electronic passivation". The electronic properties of this semiconducting oxide film also provide a mechanistic explanation of corrosion mediated by chloride, which creates surface states at the oxide surface that lead to electronic breakthrough, restoration of anodic currents, and disruption of the electronic passivation mechanism ("transpassivation"). History Discovery and etymology The fact that iron does not react with concentrated nitric acid was discovered by Mikhail Lomonosov in 1738 and rediscovered by James Keir in 1790, who also noted that iron pre-immersed in this way no longer reduces silver from silver nitrate. In the 1830s, Michael Faraday and Christian Friedrich Schönbein studied the issue systematically and demonstrated that when a piece of iron is placed in dilute nitric acid, it will dissolve and produce hydrogen, but if the iron is placed in concentrated nitric acid and then returned to the dilute nitric acid, little or no reaction will take place. In 1836, Schönbein named the first state the active condition and the second the passive condition, while Faraday proposed the modern explanation of the oxide film described above (Schönbein disagreed with it), which was only experimentally proven by Ulick Richardson Evans in 1927. Between 1955 and 1957, Carl Frosch and Lincoln Derrick discovered surface passivation of silicon wafers by silicon dioxide, using passivation to build the first silicon dioxide field effect transistors. Specific materials Aluminium Aluminium naturally forms a thin surface layer of aluminium oxide on contact with oxygen in the atmosphere through a process called oxidation, which creates a physical barrier to corrosion or further oxidation in many environments. Some aluminium alloys, however, do not form the oxide layer well, and thus are not protected against corrosion. There are methods to enhance the formation of the oxide layer for certain alloys. For example, prior to storing hydrogen peroxide in an aluminium container, the container can be passivated by rinsing it with a dilute solution of nitric acid and peroxide alternating with deionized water. The nitric acid and peroxide mixture oxidizes and dissolves any impurities on the inner surface of the container, and the deionized water rinses away the acid and oxidized impurities. Generally, there are two main ways to passivate aluminium alloys (not counting plating, painting, and other barrier coatings): chromate conversion coating and anodizing. Alclading, which metallurgically bonds thin layers of pure aluminium or alloy to a different base aluminium alloy, is not strictly passivation of the base alloy. However, the clad-on aluminium layer is designed to spontaneously develop the oxide layer and thus protect the base alloy. Chromate conversion coating converts the surface aluminium to an aluminium chromate coating in the range of in thickness. Aluminium chromate conversion coatings are amorphous in structure with a gel-like composition hydrated with water. Chromate conversion is a common way of passivating not only aluminium, but also zinc, cadmium, copper, silver, magnesium, and tin alloys.
Anodizing is an electrolytic process that forms a thicker oxide layer. The anodic coating consists of hydrated aluminium oxide and is considered resistant to corrosion and abrasion. This finish is more robust than the other processes and also provides electrical insulation, which the other two processes may not. Carbon In carbon quantum dot (CQD) technology, CQDs are small carbon nanoparticles (less than 10 nm in size) with some form of surface passivation. Ferrous materials Ferrous materials, including steel, may be somewhat protected by promoting oxidation ("rust") and then converting the oxide to a metal phosphate by using phosphoric acid, with further protection provided by surface coating. As the uncoated surface is water-soluble, a preferred method is to form manganese or zinc compounds by a process commonly known as parkerizing or phosphate conversion. Older, less effective but chemically similar electrochemical conversion coatings included black oxidizing, historically known as bluing or browning. Ordinary steel forms a passivating layer in alkali environments, as reinforcing bar does in concrete. Stainless steel Stainless steels are corrosion-resistant, but they are not completely impervious to rusting. One common mode of corrosion in corrosion-resistant steels is when small spots on the surface begin to rust because grain boundaries or embedded bits of foreign matter (such as grinding swarf) allow water molecules to oxidize some of the iron in those spots despite the alloying chromium. This is called rouging. Some grades of stainless steel are especially resistant to rouging; parts made from them may therefore forgo any passivation step, depending on engineering decisions. Common among all of the different specifications and types are the following steps: Prior to passivation, the object must be cleaned of any contaminants and generally must undergo a validating test to prove that the surface is 'clean.' The object is then placed in an acidic passivating bath that meets the temperature and chemical requirements of the method and type specified between customer and vendor. While nitric acid is commonly used as a passivating acid for stainless steel, citric acid is gaining in popularity as it is far less dangerous to handle, less toxic, and biodegradable, making disposal less of a challenge. Passivating temperatures can range from ambient to , while minimum passivation times are usually 20 to 30 minutes. After passivation, the parts are neutralized using a bath of aqueous sodium hydroxide, then rinsed with clean water and dried. The passive surface is validated using humidity, elevated temperature, a rusting agent (salt spray), or some combination of the three. The passivation process removes exogenous iron, creates/restores a passive oxide layer that prevents further oxidation (rust), and cleans the parts of dirt, scale, or other welding-generated compounds (e.g. oxides). Passivation processes are generally controlled by industry standards, the most prevalent among them today being ASTM A 967 and AMS 2700. These industry standards generally list several passivation processes that can be used, with the choice of specific method left to the customer and vendor. The "method" is either a nitric acid-based passivating bath or a citric acid-based bath; these acids remove surface iron and rust while sparing the chromium. The various 'types' listed under each method refer to differences in acid bath temperature and concentration.
Sodium dichromate is often required as an additive to oxidise the chromium in certain 'types' of nitric-based acid baths; however, this chemical is highly toxic. With citric acid, the surface is passivated simply by rinsing and drying the part and allowing the air to oxidise it, or in some cases by applying other chemicals. It is not uncommon for some aerospace manufacturers to have additional guidelines and regulations when passivating their products that exceed the national standard. Often, these requirements will be cascaded down using Nadcap or some other accreditation system. Various testing methods are available to determine the passivation (or passive state) of stainless steel. The most common method for validating the passivity of a part is some combination of high humidity and heat for a period of time, intended to induce rusting. Electro-chemical testers can also be used to commercially verify passivation. Titanium The surface of titanium and of titanium-rich alloys oxidizes immediately upon exposure to air to form a thin passivation layer of titanium oxide, mostly titanium dioxide. This layer makes it resistant to further corrosion, aside from gradual growth of the oxide layer, which thickens to ~25 nm after several years in air. This protective layer makes it suitable for use even in corrosive environments such as sea water. Titanium can be anodized to produce a thicker passivation layer. As with many other metals, this layer causes thin-film interference, which makes the metal surface appear colored, with the thickness of the passivation layer directly affecting the color produced. Nickel Nickel can be used for handling elemental fluorine, owing to the formation of a passivation layer of nickel fluoride. This fact is useful in water treatment and sewage treatment applications. Silicon In the area of microelectronics and photovoltaic solar cells, surface passivation is usually implemented by thermal oxidation at about 1000 °C to form a coating of silicon dioxide. Surface passivation is critical to solar cell efficiency; the effect of passivation on the efficiency of solar cells ranges from 3% to 7%. The surface resistivity is high, > 100 Ωcm. Perovskite The easiest and most widely studied method to improve perovskite solar cells is defect passivation. Dangling bonds on the surface of perovskite films usually give rise to deep energy level defects in solar cells. Usually, small molecules or polymers are doped to interact with the dangling bonds and thus reduce the defect states. The process is similar to Tetris, in which the aim is always a full layer: a passivating small molecule acts like a piece that can be inserted where there is an empty space so that a complete layer is obtained. These molecules will generally have lone electron pairs or pi-electrons, so they can bind to the defective states on the surface of the cell film and thus achieve passivation of the material. Therefore, molecules containing carbonyl, nitrogen, or sulfur groups are considered, and recently it has been shown that π electrons can also play a role. In addition, passivation not only improves the photoelectric conversion efficiency of perovskite cells, but also contributes to the improvement of device stability. For example, adding a passivation layer a few nanometers thick can effectively stop water vapor intrusion.
See also Cold welding Deal–Grove model Pilling–Bedworth ratio References Further reading Chromate conversion coating (chemical film) per MIL-DTL-5541F for aluminium and aluminium alloy parts A standard overview on black oxide coatings is provided in MIL-HDBK-205, Phosphate & Black Oxide Coating of Ferrous Metals. Many of the specifics of Black Oxide coatings may be found in MIL-DTL-13924 (formerly MIL-C-13924). This Mil-Spec document additionally identifies various classes of Black Oxide coatings, for use in a variety of purposes for protecting ferrous metals against rust. Passivisation : Debate over Paintability http://www.coilworld.com/5-6_12/rlw3.htm Corrosion prevention Surface finishing German inventions Integrated circuits MOSFETs Semiconductor device fabrication Swiss inventions
Passivation (chemistry)
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
2,992
[ "Corrosion prevention", "Computer engineering", "Microtechnology", "Corrosion", "Semiconductor device fabrication", "Integrated circuits" ]
175,835
https://en.wikipedia.org/wiki/Biot%20number
The Biot number (Bi) is a dimensionless quantity used in heat transfer calculations, named for the French physicist Jean-Baptiste Biot (1774–1862). The Biot number is the ratio of the thermal resistance for conduction inside a body to the resistance for convection at the surface of the body. This ratio indicates whether the temperature inside a body varies significantly in space when the body is heated or cooled over time by a heat flux at its surface. In general, problems involving small Biot numbers (much smaller than 1) are analytically simple, as a result of nearly uniform temperature fields inside the body. Biot numbers of order one or greater indicate more difficult problems with nonuniform temperature fields inside the body. The Biot number appears in a number of heat transfer problems, including transient heat conduction and fin heat transfer calculations. Definition The Biot number is defined as Bi = h L_C / k, where: k is the thermal conductivity of the body [W/(m·K)], h is a convective heat transfer coefficient [W/(m2·K)], and L_C is a characteristic length [m] of the geometry considered. (The Biot number should not be confused with the Nusselt number, which employs the thermal conductivity of the fluid rather than that of the body.) The characteristic length in most relevant problems becomes the heat characteristic length, i.e. the ratio between the body volume and the heated (or cooled) surface of the body: L_C = V_body / A_Q. Here, the subscript Q, for heat, is used to denote that the surface to be considered is only the portion of the total surface through which the heat passes. The physical significance of the Biot number can be understood by imagining the heat flow from a small hot metal sphere suddenly immersed in a pool to the surrounding fluid. The heat flow experiences two resistances: the first for conduction within the solid metal (which is influenced by both the size and composition of the sphere), and the second for convection at the surface of the sphere. If the thermal resistance of the fluid/sphere interface exceeds the thermal resistance offered by the interior of the metal sphere, the Biot number will be less than one. For systems where it is much less than one, the interior of the sphere may be presumed to be at a uniform temperature, although this temperature may be changing with time as heat passes into the sphere from the surface. The equation to describe this change in (relatively uniform) temperature inside the object is a simple exponential one described by Newton's law of cooling. In contrast, the metal sphere may be large, so that the characteristic length is large and the Biot number is greater than one. Now, thermal gradients within the sphere become important, even though the sphere material is a good conductor. Equivalently, if the sphere is made of a poorly conducting (thermally insulating) material, such as wood or styrofoam, the interior resistance to heat flow will exceed that of convection at the fluid/sphere boundary, even for a much smaller sphere. In this case, again, the Biot number will be greater than one. Applications The value of the Biot number can indicate the applicability (or inapplicability) of certain methods of solving transient heat transfer problems. For example, a Biot number smaller than about 0.1 implies that heat conduction inside the body offers much lower thermal resistance than the heat convection at the surface, so that temperature gradients are negligible inside of the body (such bodies are sometimes labeled "thermally thin").
In this situation, the simple lumped-capacitance model may be used to evaluate a body's transient temperature variation. The opposite is also true: a Biot number greater than about 0.1 indicates that thermal resistance within the body is not negligible, and more complex methods are needed to analyze heat transfer to or from the body (such bodies are sometimes called "thermally thick"). Heat conduction for finite Biot number When the Biot number is greater than 0.1 or so, the heat equation must be solved to determine the time-varying and spatially-nonuniform temperature field within the body. Analytic methods for handling these problems, which may exist for simple geometric shapes and uniform material thermal conductivity, are described in the article on the heat equation. Examples of verified analytic solutions along with precise numerical values are available. Often such problems are too difficult to be done except numerically, with the use of a computer model of heat transfer. Heat conduction for Bi ≪ 1 As noted, a Biot number smaller than about 0.1 shows that the conduction resistance inside a body is much smaller than heat convection at the surface, so that temperature gradients are negligible inside of the body. In this case, the lumped-capacitance model of transient heat transfer can be used. (A Biot number less than 0.1 generally indicates less than 3% error will be present when using the lumped-capacitance model.) The simplest type of lumped capacity solution, for a step change in fluid temperature, shows that a body's temperature decays exponentially in time ("Newtonian" cooling or heating) because the internal energy of the body is directly proportional to the temperature of the body, and the difference between the body temperature and the fluid temperature is linearly proportional to the rate of heat transfer into or out of the body. Combining these relationships with the First law of thermodynamics leads to a simple first-order linear differential equation. The corresponding lumped capacity solution can be written as (T − T_fluid)/(T_0 − T_fluid) = exp(−t/τ), in which τ = ρ c_p V / (h A_Q) is the thermal time constant of the body, ρ is the mass density (kg/m3), and c_p is the specific heat capacity (J/(kg·K)). The study of heat transfer in micro-encapsulated phase-change slurries is an application where the Biot number is useful. For the dispersed phase of the micro-encapsulated phase-change slurry, the micro-encapsulated phase-change material itself, the Biot number is calculated to be below 0.1 and so it can be assumed that thermal gradients within the dispersed phase are negligible. Mass transfer analogue An analogous version of the Biot number (usually called the "mass transfer Biot number", Bi_m) is also used in mass diffusion processes: Bi_m = h_m L_C / D, where: h_m is the convective mass transfer coefficient (analogous to the h of the heat transfer problem), D is the mass diffusivity (analogous to the k of the heat transfer problem), and L_C is the characteristic length. See also Convection Fourier number Heat conduction References Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Heat conduction
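The criterion above can be illustrated numerically. The following Python sketch computes Bi = h·L_C/k for a small metal sphere and, because Bi comes out far below 0.1, applies the lumped-capacitance (Newtonian cooling) solution described above. The function names and the sphere, fluid, and material values are illustrative assumptions, not taken from the article.

```python
import math

def biot_number(h, k, volume, area):
    """Bi = h * L_c / k, with characteristic length L_c = V / A_Q."""
    L_c = volume / area
    return h * L_c / k

def lumped_capacitance_temperature(t, T0, T_fluid, h, area, rho, c_p, volume):
    """Newtonian cooling/heating: T(t) = T_fluid + (T0 - T_fluid) * exp(-t/tau).
    Valid only when Bi << 1 (roughly Bi < 0.1)."""
    tau = rho * c_p * volume / (h * area)   # thermal time constant [s]
    return T_fluid + (T0 - T_fluid) * math.exp(-t / tau)

# Illustrative values (assumed): a 10 mm diameter copper sphere quenched in water.
radius = 0.005                       # m
V = 4.0 / 3.0 * math.pi * radius**3  # body volume
A = 4.0 * math.pi * radius**2        # heated/cooled surface A_Q
h = 500.0                            # convective coefficient, W/(m^2 K)
k_copper = 400.0                     # thermal conductivity of the body, W/(m K)

Bi = biot_number(h, k_copper, V, A)
print(f"Bi = {Bi:.4f}")              # about 0.002, far below 0.1: lumped model applies

rho, c_p = 8960.0, 385.0             # copper density [kg/m^3] and specific heat [J/(kg K)]
T_after_60s = lumped_capacitance_temperature(60.0, T0=90.0, T_fluid=20.0,
                                             h=h, area=A, rho=rho, c_p=c_p, volume=V)
print(f"T(60 s) = {T_after_60s:.1f} degC")
```

With these assumed numbers the time constant is on the order of ten seconds, so after a minute the sphere has essentially reached the fluid temperature, consistent with the "thermally thin" behaviour described above.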
Biot number
[ "Physics", "Chemistry" ]
1,373
[ "Thermodynamic properties", "Physical quantities", "Dimensionless numbers of thermodynamics", "Thermodynamics", "Heat conduction" ]
175,875
https://en.wikipedia.org/wiki/Critical%20mass
In nuclear engineering, a critical mass is the smallest amount of fissile material needed for a sustained nuclear chain reaction. The critical mass of a fissionable material depends upon its nuclear properties (specifically, its nuclear fission cross-section), density, shape, enrichment, purity, temperature, and surroundings. The concept is important in nuclear weapon design. Point of criticality When a nuclear chain reaction in a mass of fissile material is self-sustaining, the mass is said to be in a critical state in which there is no increase or decrease in power, temperature, or neutron population. A numerical measure of a critical mass depends on the effective neutron multiplication factor k, the average number of neutrons released per fission event that go on to cause another fission event rather than being absorbed or leaving the material. A subcritical mass is a mass that does not have the ability to sustain a fission chain reaction. A population of neutrons introduced to a subcritical assembly will exponentially decrease. In this case, known as subcriticality, k < 1. A critical mass is a mass of fissile material that self-sustains a fission chain reaction. In this case, known as criticality, k = 1. A steady rate of spontaneous fission causes a proportionally steady level of neutron activity. A supercritical mass is a mass which, once fission has started, will proceed at an increasing rate. In this case, known as supercriticality, k > 1. The constant of proportionality increases as k increases. The material may settle into equilibrium (i.e. become critical again) at an elevated temperature/power level or destroy itself. Due to spontaneous fission, a supercritical mass will undergo a chain reaction. For example, a spherical critical mass of pure uranium-235 (235U) with a mass of about would experience around 15 spontaneous fission events per second. The probability that one such event will cause a chain reaction depends on how much the mass exceeds the critical mass. If there is uranium-238 (238U) present, the rate of spontaneous fission will be much higher. Fission can also be initiated by neutrons produced by cosmic rays. Changing the point of criticality The mass where criticality occurs may be changed by modifying certain attributes such as fuel, shape, temperature, density and the installation of a neutron-reflective substance. These attributes have complex interactions and interdependencies. These examples only outline the simplest ideal cases: Varying the amount of fuel It is possible for a fuel assembly to be critical at near zero power. If the perfect quantity of fuel were added to a slightly subcritical mass to create an "exactly critical mass", fission would be self-sustaining for only one neutron generation (fuel consumption then makes the assembly subcritical again). Similarly, if the perfect quantity of fuel were added to a slightly subcritical mass, to create a barely supercritical mass, the temperature of the assembly would increase to an initial maximum (for example: 1 K above the ambient temperature) and then decrease back to the ambient temperature after a period of time, because fuel consumed during fission brings the assembly back to subcriticality once again. Changing the shape A mass may be exactly critical without being a perfect homogeneous sphere. More closely refining the shape toward a perfect sphere will make the mass supercritical. Conversely, changing the shape to a less perfect sphere will decrease its reactivity and make it subcritical.
Changing the temperature A mass may be exactly critical at a particular temperature. Fission and absorption cross-sections increase as the relative neutron velocity decreases. As fuel temperature increases, neutrons of a given energy appear faster and thus fission/absorption is less likely. This is not unrelated to Doppler broadening of the 238U resonances but is common to all fuels/absorbers/configurations. Neglecting the very important resonances, the total neutron cross-section of every material exhibits an inverse relationship with relative neutron velocity. Hot fuel is always less reactive than cold fuel (over/under moderation in LWR is a different topic). Thermal expansion associated with temperature increase also contributes a negative coefficient of reactivity since fuel atoms are moving farther apart. A mass that is exactly critical at room temperature would be sub-critical in an environment anywhere above room temperature due to thermal expansion alone. Varying the density of the mass The higher the density, the lower the critical mass. The density of a material at a constant temperature can be changed by varying the pressure or tension or by changing crystal structure (see allotropes of plutonium). An ideal mass will become subcritical if allowed to expand or conversely the same mass will become supercritical if compressed. Changing the temperature may also change the density; however, the effect on critical mass is then complicated by temperature effects (see "Changing the temperature") and by whether the material expands or contracts with increased temperature. Assuming the material expands with temperature (enriched uranium-235 at room temperature for example), at an exactly critical state, it will become subcritical if warmed to lower density or become supercritical if cooled to higher density. Such a material is said to have a negative temperature coefficient of reactivity to indicate that its reactivity decreases when its temperature increases. Using such a material as fuel means fission decreases as the fuel temperature increases. Use of a neutron reflector Surrounding a spherical critical mass with a neutron reflector further reduces the mass needed for criticality. A common material for a neutron reflector is beryllium metal. This reduces the number of neutrons which escape the fissile material, resulting in increased reactivity. Use of a tamper In a bomb, a dense shell of material surrounding the fissile core will contain, via inertia, the expanding fissioning material, which increases the efficiency. This is known as a tamper. A tamper also tends to act as a neutron reflector. Because a bomb relies on fast neutrons (not ones moderated by reflection with light elements, as in a reactor), the neutrons reflected by a tamper are slowed by their collisions with the tamper nuclei, and because it takes time for the reflected neutrons to return to the fissile core, they take rather longer to be absorbed by a fissile nucleus. But they do contribute to the reaction, and can decrease the critical mass by a factor of four. Also, if the tamper is (e.g. depleted) uranium, it can fission due to the high energy neutrons generated by the primary explosion. This can greatly increase yield, especially if even more neutrons are generated by fusing hydrogen isotopes, in a so-called boosted configuration. Critical size The critical size is the minimum size of a nuclear reactor core or nuclear weapon that can be made for a specific geometrical arrangement and material composition. 
The critical size must at least include enough fissionable material to reach critical mass. If the size of the reactor core is less than a certain minimum, too many fission neutrons escape through its surface and the chain reaction is not sustained. Critical mass of a bare sphere The shape with minimal critical mass and the smallest physical dimensions is a sphere. Bare-sphere critical masses at normal density of some actinides are listed in the following table. Most information on bare sphere masses is considered classified, since it is critical to nuclear weapons design, but some documents have been declassified. The critical mass for lower-grade uranium depends strongly on the grade: with 45% 235U, the bare-sphere critical mass is around ; with 19.75% 235U it is over ; and with 15% 235U, it is well over . In all of these cases, the use of a neutron reflector like beryllium can substantially drop this amount, however: with a reflector, the critical mass of 19.75%-enriched uranium drops to , and with a reflector it drops to , for example. The critical mass is inversely proportional to the square of the density. If the density is 1% more and the mass 2% less, then the volume is 3% less and the diameter 1% less. The probability for a neutron per cm travelled to hit a nucleus is proportional to the density. It follows that 1% greater density means that the distance travelled before leaving the system is 1% less. This is something that must be taken into consideration when attempting more precise estimates of critical masses of plutonium isotopes than the approximate values given above, because plutonium metal has a large number of different crystal phases which can have widely varying densities. Note that not all neutrons contribute to the chain reaction. Some escape and others undergo radiative capture. Let q denote the probability that a given neutron induces fission in a nucleus. Consider only prompt neutrons, and let ν denote the number of prompt neutrons generated in a nuclear fission. For example, ν ≈ 2.5 for uranium-235. Then, criticality occurs when ν·q = 1. The dependence of this upon geometry, mass, and density appears through the factor q. Given a total interaction cross section σ (typically measured in barns), the mean free path of a prompt neutron is ℓ = 1/(n σ), where n is the nuclear number density. Most interactions are scattering events, so that a given neutron obeys a random walk until it either escapes from the medium or causes a fission reaction. So long as other loss mechanisms are not significant, then, the radius of a spherical critical mass is rather roughly given by the product of the mean free path and the square root of one plus the number of scattering events per fission event (call this s), since the net distance travelled in a random walk is proportional to the square root of the number of steps: R_c ≈ ℓ √(s + 1). Note again, however, that this is only a rough estimate. In terms of the total mass M, the nuclear mass m, the density ρ, and a fudge factor f which takes into account geometrical and other effects, criticality corresponds to roughly f (σ/m) ρ^(2/3) (3M/4π)^(1/3) = √(s + 1), which clearly recovers the aforementioned result that critical mass depends inversely on the square of the density. Alternatively, one may restate this more succinctly in terms of the areal density of mass, Σ: criticality corresponds to Σ ≈ f′ √(s + 1) m/σ, where the factor f has been rewritten as f′ to account for the fact that the two values may differ depending upon geometrical effects and how one defines Σ.
For example, for a bare solid sphere of 239Pu criticality is at 320 kg/m2, regardless of density, and for 235U at 550 kg/m2. In any case, criticality then depends upon a typical neutron "seeing" an amount of nuclei around it such that the areal density of nuclei exceeds a certain threshold. This is applied in implosion-type nuclear weapons where a spherical mass of fissile material that is substantially less than a critical mass is made supercritical by very rapidly increasing ρ (and thus Σ as well) (see below). Indeed, sophisticated nuclear weapons programs can make a functional device from less material than more primitive weapons programs require. Aside from the math, there is a simple physical analog that helps explain this result. Consider diesel fumes belched from an exhaust pipe. Initially the fumes appear black, then gradually you are able to see through them without any trouble. This is not because the total scattering cross section of all the soot particles has changed, but because the soot has dispersed. If we consider a transparent cube of length L on a side, filled with soot, then the optical depth of this medium is inversely proportional to the square of L, and therefore proportional to the areal density of soot particles: we can make it easier to see through the imaginary cube just by making the cube larger. Several uncertainties contribute to the determination of a precise value for critical masses, including (1) detailed knowledge of fission cross sections, (2) calculation of geometric effects. This latter problem provided significant motivation for the development of the Monte Carlo method in computational physics by Nicholas Metropolis and Stanislaw Ulam. In fact, even for a homogeneous solid sphere, the exact calculation is by no means trivial. Finally, note that the calculation can also be performed by assuming a continuum approximation for the neutron transport. This reduces it to a diffusion problem. However, as the typical linear dimensions are not significantly larger than the mean free path, such an approximation is only marginally applicable. Finally, note that for some idealized geometries, the critical mass might formally be infinite, and other parameters are used to describe criticality. For example, consider an infinite sheet of fissionable material. For any finite thickness, this corresponds to an infinite mass. However, criticality is only achieved once the thickness of this slab exceeds a critical value. Criticality in nuclear weapon design Until detonation is desired, a nuclear weapon must be kept subcritical. In the case of a uranium gun-type bomb, this can be achieved by keeping the fuel in a number of separate pieces, each below the critical size either because they are too small or unfavorably shaped. To produce detonation, the pieces of uranium are brought together rapidly. In Little Boy, this was achieved by firing a piece of uranium (a 'doughnut') down a gun barrel onto another piece (a 'spike'). This design is referred to as a gun-type fission weapon''. A theoretical 100% pure 239Pu weapon could also be constructed as a gun-type weapon, like the Manhattan Project's proposed Thin Man design. In reality, this is impractical because even "weapons grade" 239Pu is contaminated with a small amount of 240Pu, which has a strong propensity toward spontaneous fission. Because of this, a reasonably sized gun-type weapon would suffer nuclear reaction (predetonation) before the masses of plutonium would be in a position for a full-fledged explosion to occur. 
Instead, the plutonium is present as a subcritical sphere (or other shape), which may or may not be hollow. Detonation is produced by exploding a shaped charge surrounding the sphere, increasing the density (and collapsing the cavity, if present) to produce a prompt critical configuration. This is known as an implosion-type weapon. Prompt criticality The event of fission must release, on the average, more than one free neutron of the desired energy level in order to sustain a chain reaction, and each must find other nuclei and cause them to fission. Most of the neutrons released from a fission event come immediately from that event, but a fraction of them come later, when the fission products decay, which may be on the average from microseconds to minutes later. This is fortunate for atomic power generation, for without this delay "going critical" would be an immediately catastrophic event, as it is in a nuclear bomb where upwards of 80 generations of chain reaction occur in less than a microsecond, far too fast for a human, or even a machine, to react. Physicists recognize two points in the gradual increase of neutron flux which are significant: critical, where the chain reaction becomes self-sustaining thanks to the contributions of both kinds of neutron generation, and prompt critical, where the immediate "prompt" neutrons alone will sustain the reaction without need for the decay neutrons. Nuclear power plants operate between these two points of reactivity, while above the prompt critical point is the domain of nuclear weapons, pulsed reactor designs such as TRIGA research reactors and the pulsed nuclear thermal rocket, and some nuclear power accidents, such as the 1961 US SL-1 accident and the 1986 Soviet Chernobyl disaster. See also Criticality (status) Criticality accident Nuclear criticality safety Geometric and material buckling References Mass Nuclear technology Radioactivity Nuclear weapon design Nuclear fission
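As a rough numerical illustration of the bare-sphere estimates above, the following Python sketch combines the critical areal densities quoted earlier (320 kg/m2 for 239Pu, 550 kg/m2 for 235U) with nominal metal densities. It assumes the areal density is defined as the total mass divided by the sphere's surface area, Σ = M/(4πR²); that interpretation, the helper names, and the density values are assumptions supplied for illustration, not statements from the article.

```python
import math

# Critical areal densities quoted in the article (kg/m^2); the interpretation
# Sigma = M / (4*pi*R^2), i.e. mass over the sphere's surface area, is an assumption.
SIGMA_CRIT = {"Pu-239": 320.0, "U-235": 550.0}

# Nominal metal densities (kg/m^3); illustrative values, not from the article.
DENSITY = {"Pu-239": 19_800.0, "U-235": 19_000.0}

def bare_sphere_critical_mass(sigma_crit, rho):
    """For a solid sphere, Sigma = M / (4*pi*R^2) = rho*R/3, so
    R_crit = 3*Sigma/rho and M_crit = (4/3)*pi*rho*R_crit^3 = 36*pi*Sigma^3 / rho^2.
    Note the inverse-square dependence on density discussed in the article."""
    r_crit = 3.0 * sigma_crit / rho
    m_crit = 36.0 * math.pi * sigma_crit**3 / rho**2
    return r_crit, m_crit

for nuclide, sigma in SIGMA_CRIT.items():
    rho = DENSITY[nuclide]
    r, m = bare_sphere_critical_mass(sigma, rho)
    print(f"{nuclide}: R_crit ~ {100 * r:.1f} cm, M_crit ~ {m:.0f} kg")

# Compressing the material lowers the critical mass as 1/rho^2:
r2, m2 = bare_sphere_critical_mass(SIGMA_CRIT["Pu-239"], 2.0 * DENSITY["Pu-239"])
print(f"Pu-239 at twice normal density: M_crit ~ {m2:.1f} kg")
```

Under these assumptions the estimate reproduces the familiar order of magnitude (roughly 10 kg for a bare 239Pu sphere and 50 kg for 235U) and makes the 1/ρ² dependence explicit: doubling the density quarters the critical mass.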
Critical mass
[ "Physics", "Chemistry", "Mathematics" ]
3,195
[ "Nuclear fission", "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "Size", "Nuclear technology", "Radioactivity", "Nuclear physics", "Wikipedia categories named after physical quantities", "Matter" ]
175,899
https://en.wikipedia.org/wiki/Robot%20%28camera%29
Robot was a German imaging company known originally for clockwork cameras, later producing surveillance (Traffipax) and bank security cameras. Originally created in 1934 as a brand of Otto Berning, it became part of the Jenoptik group of optical companies in 1999, and specializes in traffic surveillance today. The motorized amateur cameras powered by clockwork (spring) motors were first made in 1934, and ended with a special limited edition collector's model, "Star Classic", in 1996. The Robot film cameras used 35 mm film, mostly in square 24 × 24 mm image format, but many used 18 × 24 mm (half-frame) and 24 × 36 mm (standard Leica format), and non-standard formats such as 6 × 24 mm (Recorder 6), 12 × 24 mm (Recorder 12) and 16 × 16 mm (Robot SC). Camera Models Robot I Around 1930 Heinz Kilfitt, a trained watchmaker, designed a new 35 mm film compact camera using a 24×24 mm frame format (instead of the Leica 24×36 mm or cine 18×24 mm formats). The 24×24mm square frame provided many advantages, including allowing over 50 exposures per standard roll of Leica film instead of 36. Kodak and Agfa rejected the design, and it was sold to Hans Berning, who set up the Otto Berning firm. Otto Berning was granted its first Robot patent in 1934; a US patent was granted in 1936. The camera was originally intended to come in two versions: Robot I, without motor, and Robot II with a spring motor. Its release was delayed, and the first camera, the "Robot I", already included the hallmark spring motor. The first production cameras had a stainless steel body, a spring drive that could shoot 4 frames per second, and a rotary shutter with speeds from 1 to 1/500th second. The camera used proprietary "Type K" cartridges, not the now-standard 35 mm cartridges introduced in the same year by Kodak's Dr. August Nagel Kamerawerk for the Retina. The camera does not have a rangefinder, as it was designed for use mostly with short focal length lenses (e.g. 40 mm) with great depth of field. The Robot I was quite small, the body measuring 108 mm (4¼ inches) long, 63 mm (2½ inches) high, and 32 mm (1¼ inches) deep. A very sharp zone-focusing f/2.8, 3.25 cm Zeiss Tessar lens added 12.5 mm (1/2 inch) to the camera depth. It was about the size of the much later Olympus Stylus, although it weighed about 567 grams (20 ounces), approximately the weight of a modern SLR. The die-cast zinc and stamped stainless steel body was crammed with clockwork. A spring motor on the top plate provided the driving force for a rotary behind-the-lens shutter and a sprocket film drive. The film was loaded into cassettes in a darkroom or changing bag. The cassettes appear to be based on the Agfa Memo cassette design, the now-standard Kodak 35 mm cassette not yet being popular in Germany. In place of the velvet light trap on modern cassettes, the Robot cassette used spring pressure and felt pads to close the film passage. When the camera back was shut, the compression opened the passage and the film could travel freely from one cassette to another. The rotary shutter and the film drive are like those used in cine cameras. When the shutter release is pressed, a light-blocking shield lifts and the shutter disc rotates a full turn, exposing the film through its open sector; when the pressure is released, the light-blocking shield returns to its position behind the lens, and the spring motor advances the film and recocks the shutter. This is almost instantaneous. With practice a photographer could take 4 or 5 pictures a second.
Each winding of the spring motor was good for about 25 pictures, half a roll of film. Shutter speed was determined by spring tension and mechanical delay since the exposure sector was fixed. The Robot I had an exposure range of 1 to 1/500, and provision for time exposures. The camera had other features not specifically related to action photography. The small optical viewfinder could be rotated 90 degrees to permit pictures to be taken in one direction while the photographer was facing in another. When the viewfinder was rotated, the scene was viewed through a deep purple filter similar to those used by cinematographers to judge the black and white contrast of an image. The camera had a built-in deep yellow filter which could be positioned behind the lens. Robot II In 1938 Berning introduced the Robot II, a slightly larger camera with some significant improvements but still using the basic mechanism. Among the standard lenses were a 3 cm Zeiss Tessar and a 3.75 cm Zeiss Tessar in f/2.8 and 3.5 variations, an f/2.0, 40 mm Zeiss Biotar and an f/4, 7.5 cm Zeiss Sonnar. The film cassette system was redesigned, and the 1951 IIa accepted a standard 35 mm cassette. The special Robot cassettes type-N continued to be used for take-up. A small Bakelite box was sold to allow colour film to be rewound into the original cassettes as required by film processing companies. The camera was synchronized for flash. The swinging viewfinder was retained, but was now operated by a lever rather than by moving the entire housing. Both the deep purple and yellow filters were eliminated in the redesign. Some versions were available with a double-wind motor which could expose 50 frames on one winding. Civilian versions of the Robot were discontinued at the outset of the Second World War, but it was used as a bomb damage assessment camera by the Luftwaffe, mounted in the tail of Ju 87 (Stuka) dive bombers. This was an electrically driven camera using large cassettes holding possibly as many as 300 24 x 24 mm images. Unlike the central Leica 250GG camera in the Ju 87, which was switched on automatically when the dive brakes were applied, the Robot camera had to be switched on manually. In the stress of the automatic pull out, when it was not uncommon for the pilot to black out from the g levels, switching on the bomb damage assessment camera was frequently forgotten. Robot Star and Junior In the 1950s Robot introduced the Robot Star. Film could now be rewound into the feed cassette in the camera as in other 35 mm cameras. Robot then introduced the "Junior", an economy model with the quality and almost all the features of the "Star" but without the angle finder or the rewind mechanism. In the late 1950s the company, now called Robot-Berning, redesigned the Robot Star and created the Vollautomat Star II. The length stayed the same but the height increased by 12.5 mm (half an inch). The new higher top housing disposed of the right angle finder and instead included an Albada finder with frames for the factory-fitted 38/40 mm and 75 mm lenses. The drive and shutter were also improved. By 1960 the hallmark stamped steel body was replaced by heavier die castings. The camera became, with slight changes, the Robot Star 25 and Star 50. The Robot Star 25 could expose 25 frames on a single winding, and the double-motor Robot Star 50 could expose 50 frames. Since most Robot cameras by then were sold for industrial use where the camera was fixed in position, Robot also introduced versions without a finder, and even without rewind.
Although most production dates from the 1950s–1960s era, essentially the same camera continued to be manufactured into the late 1990s. Robot Royal Robot-Berning also produced enlarged versions of the Robot, the Robot Royal 18, 24 and 36, with built-in rangefinder and with an autoburst mode of operation capable of shooting 6 frames per second. The camera was about the size of a Leica M3 and weighed 907g (almost 2 pounds). It was equipped with a Schneider Xenar 45 mm f/2.8 lens. The Robot Royal 36 took a standard 35 mm still picture but was identical to the Royal 24 in all other regards. They retained the behind-the-lens rotary shutter with speeds from 1/2 to 1/500 s. The Robot Royal II is a larger viewfinder camera; it has no rangefinder and no burst mode, and is essentially a stripped-down Robot Royal III. The Robot Royal III has a main spring which, when tightened, allows the camera to take 4 to 5 pictures in succession. It has a built-in rangefinder and eight interchangeable bayonet-mount lenses. There are two versions: the Robot Royal 36, which produces 36 24×36 mm images on a roll of 135 film, and the Robot Royal 24, which makes 50 24×24 mm images on 135 film. A version for instrumentation (and traffic) was also created on the basis of the Royal design: the Recorder. These cameras were like the Royal but without viewfinder or rangefinder. They had interfaces to motors and detachable backs to support bulk film cassettes. A special parallel series of the Royal was also available that included these features. While the Royal had only limited market success, the Recorder was well accepted. It became the centerpiece of the company's portable document capture, traffic control and security solutions, and continues to be the standard Robot camera for instrumentation applications. Military and government models During the Second World War, specially adapted models were made for the German Luftwaffe. During the Cold War, Robots had a large following in the espionage business. The small camera could be concealed in a briefcase or a handbag, the lens poking through a decorative hole, and activated repeatedly by a cable release concealed in the handle. The company was well aware of this market and produced a variety of accessories which made the camera even more suitable for covert photography. Sequence photography While the Robots were capable of sequence photography, the shutter that made this possible placed some constraints upon taking lenses and shutter speeds. To reach speeds as high as 1/500 second the inertia of the thin vulcanite shutter disc had to be kept at a minimum, requiring a small-diameter disc with a minimal sector opening. The screw-in lens mount was 26 mm in diameter. The clear lens opening was only 20 mm. In contrast, Leica's mount was almost twice as large at 39 mm. Further, to permit lens interchangeability, the shutter was mounted behind the lens so the disc interrupted the expanding light cone. This placed some limits on lens design. While the 75mm Sonnar could be used with the aperture set to f/22, the Tele-Xenar suffered from some shutter disc vignetting unless opened more. The maximum focal length lens for general photographic use that could be fitted with acceptable vignetting was 75 mm, although telephotos up to 600 mm were offered. A 150 mm Tele-Xenar was available for long-distance action photography, but it produced a vignetted circular image on the 24 × 24 mm frame. The lack of a rangefinder on the Robot and Robot Star required zone focusing of these long lenses: every shot had to be estimated or pre-measured.
All of the mechanical movement made for a noisy camera, although not as noisy as some modern motor drives. For an extra fee, Robot-Berning supplied silenced versions with nylon gears. Within their limits the Robots did an excellent job of sequence photography. The standard 38 mm f/2.8 Xenar lenses were extremely sharp, even by today's standards, and zone focusing worked well on rapid action with short focal length lenses. The reliable motor drive was as fast as, if not faster than, later electrical drives, and there were no batteries to run down. Flash could be used at any speed. The square frame was big enough, with modern films, for A4 (210 x 297 mm, or 8.25"× 11.75") or greater enlargements, and 50 pictures could be taken on a standard 36-exposure roll. The cameras, especially the later ones built to industrial standards, will take much abuse and still keep functioning. References Robot 1 External links Heinz Kilfitt - Biographical Notes - German fan site Cameras
Robot (camera)
[ "Technology" ]
2,511
[ "Recording devices", "Cameras" ]
176,159
https://en.wikipedia.org/wiki/Polymer%20physics
Polymer physics is the field of physics that studies polymers, their fluctuations, and mechanical properties, as well as the kinetics of reactions involving degradation of polymers and polymerisation of monomers. While it focuses on the perspective of condensed matter physics, polymer physics was originally a branch of statistical physics. Polymer physics and polymer chemistry are also related to the field of polymer science, which is considered to be the applied side of the field. Polymers are large molecules and are thus very complicated to treat by deterministic methods. Yet statistical approaches can yield results and are often pertinent, since large polymers (i.e., polymers with many monomers) are describable efficiently in the thermodynamic limit of infinitely many monomers (although the actual size is clearly finite). Thermal fluctuations continuously affect the shape of polymers in liquid solutions, and modeling their effect requires the use of principles from statistical mechanics and dynamics. As a corollary, temperature strongly affects the physical behavior of polymers in solution, causing phase transitions, melts, and so on. The statistical approach to polymer physics is based on an analogy between polymer behavior and either Brownian motion or another type of random walk, the self-avoiding walk. The simplest possible polymer model is presented by the ideal chain, corresponding to a simple random walk. Experimental approaches for characterizing polymers are also common, using polymer characterization methods, such as size exclusion chromatography, viscometry, dynamic light scattering, and Automatic Continuous Online Monitoring of Polymerization Reactions (ACOMP) for determining the chemical, physical, and material properties of polymers. These experimental methods help the mathematical modeling of polymers and give a better understanding of the properties of polymers. Paul Flory is considered the first scientist to establish the field of polymer physics. French scientists have contributed since the 1970s (e.g. Pierre-Gilles de Gennes, J. des Cloizeaux). Doi and Edwards wrote a famous book on polymer physics. The Soviet/Russian school of physics (I. M. Lifshitz, A. Yu. Grosberg, A. R. Khokhlov, V. N. Pokrovskii) has been very active in the development of polymer physics. Models Models of polymer chains are split into two types: "ideal" models, and "real" models. Ideal chain models assume that there are no interactions between chain monomers. This assumption is valid for certain polymeric systems, where the positive and negative interactions between the monomers effectively cancel out. Ideal chain models provide a good starting point for the investigation of more complex systems and are better suited for equations with more parameters. Ideal chains The freely-jointed chain is the simplest model of a polymer. In this model, fixed-length polymer segments are linearly connected, and all bond and torsion angles are equiprobable. The polymer can therefore be described by a simple random walk and ideal chain. The model can be extended to include extensible segments in order to represent bond stretching. The freely-rotating chain improves the freely-jointed chain model by taking into account that polymer segments make a fixed bond angle to neighbouring units because of specific chemical bonding. Under this fixed angle, the segments are still free to rotate and all torsion angles are equally likely. The hindered rotation model assumes that the torsion angle is hindered by a potential energy.
This makes the probability of each torsion angle θ proportional to a Boltzmann factor: P(θ) ∝ exp(−U(θ)/k_B T), where U(θ) is the potential determining the probability of each value of θ. In the rotational isomeric state model, the allowed torsion angles are determined by the positions of the minima in the rotational potential energy. Bond lengths and bond angles are constant. The worm-like chain is a more complex model. It takes the persistence length into account. Polymers are not completely flexible; bending them requires energy. At length scales below the persistence length, the polymer behaves more or less like a rigid rod. The finite extensible nonlinear elastic model takes into account non-linearity for finite chains. It is used for computational simulations. Real chains Interactions between chain monomers can be modelled as excluded volume. This causes a reduction in the conformational possibilities of the chain, and leads to a self-avoiding random walk. Self-avoiding random walks have different statistics to simple random walks. Solvent and temperature effect The statistics of a single polymer chain depend upon the solubility of the polymer in the solvent. For a solvent in which the polymer is very soluble (a "good" solvent), the chain is more expanded, while for a solvent in which the polymer is insoluble or barely soluble (a "bad" solvent), the chain segments stay close to each other. In the limit of a very bad solvent the polymer chain merely collapses to form a hard sphere, while in a good solvent the chain swells in order to maximize the number of polymer-fluid contacts. For this case the radius of gyration is approximated using Flory's mean field approach, which yields a scaling for the radius of gyration of R_g ~ b N^ν, where R_g is the radius of gyration of the polymer, N is the number of bond segments (equal to the degree of polymerization) of the chain, and ν is the Flory exponent. For a good solvent, ν ≈ 3/5; for a poor solvent, ν = 1/3. Therefore, a polymer in good solvent has a larger size and behaves like a fractal object. In bad solvent it behaves like a solid sphere. In the so-called theta solvent, ν = 1/2, which is the result of a simple random walk. The chain behaves as if it were an ideal chain. The quality of a solvent also depends on temperature. For a flexible polymer, low temperatures may correspond to poor solvent quality, while high temperatures can make the same solvent good. At a particular temperature, called the theta (θ) temperature, the solvent is a theta solvent and the chain behaves as an ideal chain. Excluded volume interaction The ideal chain model assumes that polymer segments can overlap with each other as if the chain were a phantom chain. In reality, two segments cannot occupy the same space at the same time. This interaction between segments is called the excluded volume interaction. The simplest formulation of excluded volume is the self-avoiding random walk, a random walk that cannot repeat its previous path. A path of this walk of N steps in three dimensions represents a conformation of a polymer with excluded volume interaction. Because of the self-avoiding nature of this model, the number of possible conformations is significantly reduced. The radius of gyration is generally larger than that of the ideal chain. Flexibility and reptation Whether a polymer is flexible or not depends on the scale of interest. For example, the persistence length of double-stranded DNA is about 50 nm. At length scales smaller than 50 nm, it behaves more or less like a rigid rod. At length scales much larger than 50 nm, it behaves like a flexible chain.
Reptation is the thermal motion of very long, linear, entangled macromolecules in polymer melts or concentrated polymer solutions. Derived from the word reptile, reptation suggests the movement of entangled polymer chains as being analogous to snakes slithering through one another. Pierre-Gilles de Gennes introduced (and named) the concept of reptation into polymer physics in 1971 to explain the dependence of the mobility of a macromolecule on its length. Reptation is used as a mechanism to explain viscous flow in an amorphous polymer. Sir Sam Edwards and Masao Doi later refined reptation theory. A consistent theory of the thermal motion of polymers was given by Vladimir Pokrovskii. Similar phenomena also occur in proteins. Example model (simple random-walk, freely jointed) The study of long chain polymers has been a source of problems within the realms of statistical mechanics since about the 1950s. One of the reasons, however, that scientists were interested in their study is that the equations governing the behavior of a polymer chain are independent of the chain chemistry. What is more, the governing equation turns out to be a random walk, or diffusive walk, in space. Indeed, the Schrödinger equation is itself a diffusion equation in imaginary time, t' = it. Random walks in time The first example of a random walk is one in space, whereby a particle undergoes a random motion due to external forces in its surrounding medium. A typical example would be a pollen grain in a beaker of water. If one could somehow "dye" the path the pollen grain has taken, the path observed is defined as a random walk. Consider a toy problem of a train moving along a 1D track in the x-direction. Suppose that the train moves either a distance of +b or −b (b is the same for each step), depending on whether a coin lands heads or tails when flipped. Let's start by considering the statistics of the steps the toy train takes (where S_i is the i-th step taken): ⟨S_i⟩ = 0, due to a priori equal probabilities, and ⟨S_i S_j⟩ = b² δ_ij. The second quantity is known as the correlation function. The delta is the Kronecker delta, which tells us that if the indices i and j are different, then the result is 0, but if i = j then the Kronecker delta is 1, so the correlation function returns a value of b². This makes sense, because if i = j then we are considering the same step. Rather trivially, then, it can be shown that the average displacement of the train on the x-axis is 0: ⟨x⟩ = ⟨Σ_i S_i⟩ = Σ_i ⟨S_i⟩. As stated, ⟨S_i⟩ = 0, so the sum is still 0. It can also be shown, using the same method demonstrated above, that the root mean square displacement of the problem is x_rms = √⟨x²⟩ = b√N. From the diffusion equation it can be shown that the distance a diffusing particle moves in a medium is proportional to the root of the time the system has been diffusing for, where the proportionality constant is the root of the diffusion constant. The above relation, although cosmetically different, reveals similar physics, where N is simply the number of steps moved (which is loosely connected with time) and b is the characteristic step length. As a consequence we can consider diffusion as a random walk process. Random walks in space Random walks in space can be thought of as snapshots of the path taken by a random walker in time. One such example is the spatial configuration of long chain polymers. 
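As a rough illustration of the coin-flip argument above, the following sketch (assuming Python with NumPy; the step length, step count, and number of walkers are arbitrary illustrative values) simulates many independent 1D walks and compares the mean and root-mean-square displacements with 0 and b√N.

```python
import numpy as np

rng = np.random.default_rng(1)

b = 1.0            # step length (+b or -b per coin flip)
N = 1000           # number of steps per walk
walkers = 10_000   # number of independent walks to average over

steps = rng.choice([-b, b], size=(walkers, N))   # S_i = +/- b with equal probability
x = steps.sum(axis=1)                            # final displacement of each walk

print("mean displacement   <x>       =", x.mean())              # expected to be close to 0
print("rms displacement    sqrt<x^2> =", np.sqrt((x**2).mean()))
print("prediction          b*sqrt(N) =", b * np.sqrt(N))
```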
There are two types of random walk in space: self-avoiding random walks, where the links of the polymer chain interact and do not overlap in space, and pure random walks, where the links of the polymer chain are non-interacting and links are free to lie on top of one another. The former type is most applicable to physical systems, but their solutions are harder to obtain from first principles. By considering a freely jointed, non-interacting polymer chain, the end-to-end vector is R = Σ_i r_i, where r_i is the vector position of the i-th link in the chain. As a result of the central limit theorem, if N ≫ 1 then we expect a Gaussian distribution for the end-to-end vector. We can also make statements about the statistics of the links themselves: ⟨r_i⟩ = 0, by the isotropy of space, and ⟨r_i · r_j⟩ = b² δ_ij, since all the links in the chain are uncorrelated with one another. Using the statistics of the individual links, it is easily shown that ⟨R⟩ = 0 and ⟨R²⟩ = Nb². Notice this last result is the same as that found for random walks in time. Assuming, as stated, that the distribution of end-to-end vectors for a very large number of identical polymer chains is Gaussian, the probability distribution has the following form: P(R) = (3/(2πNb²))^(3/2) exp(−3R²/(2Nb²)). What use is this to us? Recall that according to the principle of equally likely a priori probabilities, the number of microstates, Ω, at some physical value is directly proportional to the probability distribution at that physical value, viz. Ω(R) = c P(R), where c is an arbitrary proportionality constant. Given our distribution function, there is a maximum corresponding to R = 0. Physically this amounts to there being more microstates which have an end-to-end vector of 0 than any other microstate. Now by considering the entropy S(R) = k_B ln Ω(R) and the Helmholtz free energy F = −TS(R), it can be shown that F(R) = (3k_B T / (2Nb²)) R² + constant, which has the same form as the potential energy of a spring, obeying Hooke's law. This result is known as the entropic spring result and amounts to saying that upon stretching a polymer chain you are doing work on the system to drag it away from its (preferred) equilibrium state. An example of this is a common elastic band, composed of long chain (rubber) polymers. By stretching the elastic band you are doing work on the system and the band behaves like a conventional spring, except that unlike the case with a metal spring, all of the work done appears immediately as thermal energy, much as in the thermodynamically similar case of compressing an ideal gas in a piston. It might at first be astonishing that the work done in stretching the polymer chain can be related entirely to the change in entropy of the system as a result of the stretching. However, this is typical of systems that do not store any energy as potential energy, such as ideal gases. That such systems are entirely driven by entropy changes at a given temperature can be seen whenever they are allowed to do work on the surroundings (such as when an elastic band does work on the environment by contracting, or an ideal gas does work on the environment by expanding). Because the free energy change in such cases derives entirely from entropy change rather than internal (potential) energy conversion, in both cases the work done can be drawn entirely from thermal energy in the polymer, with 100% efficiency of conversion of thermal energy to work. In both the ideal gas and the polymer, this is made possible by a material entropy increase from contraction that makes up for the loss of entropy from absorption of the thermal energy, and cooling of the material. See also File dynamics Important publications in polymer physics. 
Polymer characterization Protein dynamics Reptation Soft matter Flory–Huggins solution theory Time–temperature superposition References External links Plastic & polymer formulations Statistical mechanics
Polymer physics
[ "Physics", "Chemistry", "Materials_science" ]
2,871
[ "Polymer physics", "Statistical mechanics", "Polymer chemistry" ]
176,399
https://en.wikipedia.org/wiki/Zeeman%20effect
The Zeeman effect ( , ) is the splitting of a spectral line into several components in the presence of a static magnetic field. It is caused by interaction of the magnetic field with the magnetic moment of the atomic electron associated to its orbital motion and spin; this interaction shifts some orbital energies more than others, resulting in the split spectrum. The effect is named after the Dutch physicist Pieter Zeeman, who discovered it in 1896 and received a Nobel Prize in Physics for this discovery. It is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric field. Also similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden (in the dipole approximation), as governed by the selection rules. Since the distance between the Zeeman sub-levels is a function of magnetic field strength, this effect can be used to measure magnetic field strength, e.g. that of the Sun and other stars or in laboratory plasmas. Discovery In 1896 Zeeman learned that his laboratory had one of Henry Augustus Rowland's highest resolving diffraction gratings. Zeeman had read James Clerk Maxwell's article in Encyclopædia Britannica describing Michael Faraday's failed attempts to influence light with magnetism. Zeeman wondered if the new spectrographic techniques could succeed where early efforts had not. When illuminated by a slit shaped source, the grating produces a long array of slit images corresponding to different wavelengths. Zeeman placed a piece of asbestos soaked in salt water into a Bunsen burner flame at the source of the grating: he could easily see two lines for sodium light emission. Energizing a 10 kilogauss magnet around the flame he observed a slight broadening of the sodium images. When Zeeman switched to cadmium as the source he observed the images split when the magnet was energized. These splittings could be analyzed with Hendrik Lorentz's then-new electron theory. In retrospect we now know that the magnetic effects on sodium require quantum mechanical treatment. Zeeman and Lorentz were awarded the 1902 Nobel prize; in his acceptance speech Zeeman explained his apparatus and showed slides of the spectrographic images. Nomenclature Historically, one distinguishes between the normal and an anomalous Zeeman effect (discovered by Thomas Preston in Dublin, Ireland). The anomalous effect appears on transitions where the net spin of the electrons is non-zero. It was called "anomalous" because the electron spin had not yet been discovered, and so there was no good explanation for it at the time that Zeeman observed the effect. Wolfgang Pauli recalled that when asked by a colleague as to why he looked unhappy, he replied, "How can one look happy when he is thinking about the anomalous Zeeman effect?" At higher magnetic field strength the effect ceases to be linear. At even higher field strengths, comparable to the strength of the atom's internal field, the electron coupling is disturbed and the spectral lines rearrange. This is called the Paschen–Back effect. In the modern scientific literature, these terms are rarely used, with a tendency to use just the "Zeeman effect". Another rarely used obscure term is inverse Zeeman effect, referring to the Zeeman effect in an absorption spectral line. A similar effect, splitting of the nuclear energy levels in the presence of a magnetic field, is referred to as the nuclear Zeeman effect. 
Theoretical presentation The total Hamiltonian of an atom in a magnetic field is where is the unperturbed Hamiltonian of the atom, and is the perturbation due to the magnetic field: where is the magnetic moment of the atom. The magnetic moment consists of the electronic and nuclear parts; however, the latter is many orders of magnitude smaller and will be neglected here. Therefore, where is the Bohr magneton, is the total electronic angular momentum, and is the Landé g-factor. A more accurate approach is to take into account that the operator of the magnetic moment of an electron is a sum of the contributions of the orbital angular momentum and the spin angular momentum , with each multiplied by the appropriate gyromagnetic ratio: where and (the latter is called the anomalous gyromagnetic ratio; the deviation of the value from 2 is due to the effects of quantum electrodynamics). In the case of the LS coupling, one can sum over all electrons in the atom: where and are the total spin momentum and spin of the atom, and averaging is done over a state with a given value of the total angular momentum. If the interaction term is small (less than the fine structure), it can be treated as a perturbation; this is the Zeeman effect proper. In the Paschen–Back effect, described below, exceeds the LS coupling significantly (but is still small compared to ). In ultra-strong magnetic fields, the magnetic-field interaction may exceed , in which case the atom can no longer exist in its normal meaning, and one talks about Landau levels instead. There are intermediate cases which are more complex than these limit cases. Weak field (Zeeman effect) If the spin–orbit interaction dominates over the effect of the external magnetic field, and are not separately conserved, only the total angular momentum is. The spin and orbital angular momentum vectors can be thought of as precessing about the (fixed) total angular momentum vector . The (time-)"averaged" spin vector is then the projection of the spin onto the direction of : and for the (time-)"averaged" orbital vector: Thus, Using and squaring both sides, we get and: using and squaring both sides, we get Combining everything and taking , we obtain the magnetic potential energy of the atom in the applied external magnetic field, where the quantity in square brackets is the Landé g-factor gJ of the atom ( and ) and is the z-component of the total angular momentum. For a single electron above filled shells and , the Landé g-factor can be simplified into: Taking to be the perturbation, the Zeeman correction to the energy is Example: Lyman-alpha transition in hydrogen The Lyman-alpha transition in hydrogen in the presence of the spin–orbit interaction involves the transitions and In the presence of an external magnetic field, the weak-field Zeeman effect splits the 1S1/2 and 2P1/2 levels into 2 states each () and the 2P3/2 level into 4 states (). The Landé g-factors for the three levels are: for (j=1/2, l=0) for (j=1/2, l=1) for (j=3/2, l=1). Note in particular that the size of the energy splitting is different for the different orbitals, because the gJ values are different. On the left, fine structure splitting is depicted. This splitting occurs even in the absence of a magnetic field, as it is due to spin–orbit coupling. Depicted on the right is the additional Zeeman splitting, which occurs in the presence of magnetic fields. Strong field (Paschen–Back effect) The Paschen–Back effect is the splitting of atomic energy levels in the presence of a strong magnetic field. 
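Before turning to the strong-field case, the weak-field expressions above are easy to evaluate numerically. The sketch below (assuming Python with NumPy; the 1 T field strength is an illustrative value) computes the Landé g-factor and the corresponding weak-field Zeeman corrections gJ μB B mJ for the three hydrogen levels of the Lyman-alpha example.

```python
import numpy as np

MU_B = 5.788381806e-5   # Bohr magneton in eV/T

def lande_g(j, l, s=0.5):
    """Lande g-factor: g_J = 1 + [j(j+1) + s(s+1) - l(l+1)] / (2 j(j+1))."""
    return 1.0 + (j * (j + 1) + s * (s + 1) - l * (l + 1)) / (2.0 * j * (j + 1))

def weak_field_shifts(j, l, B):
    """Weak-field Zeeman corrections dE = g_J * mu_B * B * m_j for all m_j, in eV."""
    g = lande_g(j, l)
    m_j = np.arange(-j, j + 1, 1.0)
    return g, {m: g * MU_B * B * m for m in m_j}

B = 1.0  # tesla, illustrative field strength
for name, j, l in [("1S1/2", 0.5, 0), ("2P1/2", 0.5, 1), ("2P3/2", 1.5, 1)]:
    g, shifts = weak_field_shifts(j, l, B)
    print(f"{name}: g_J = {g:.4f}",
          {m: f"{dE:+.2e} eV" for m, dE in shifts.items()})
```

The printed g-factors (2, 2/3 and 4/3) reproduce the values quoted for the Lyman-alpha levels, and the unequal splittings of the three levels follow directly from them.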
This occurs when an external magnetic field is sufficiently strong to disrupt the coupling between orbital () and spin () angular momenta. This effect is the strong-field limit of the Zeeman effect. When , the two effects are equivalent. The effect was named after the German physicists Friedrich Paschen and Ernst E. A. Back. When the magnetic-field perturbation significantly exceeds the spin–orbit interaction, one can safely assume . This allows the expectation values of and to be easily evaluated for a state . The energies are simply The above may be read as implying that the LS-coupling is completely broken by the external field. However and are still "good" quantum numbers. Together with the selection rules for an electric dipole transition, i.e., this allows to ignore the spin degree of freedom altogether. As a result, only three spectral lines will be visible, corresponding to the selection rule. The splitting is independent of the unperturbed energies and electronic configurations of the levels being considered. More precisely, if , each of these three components is actually a group of several transitions due to the residual spin–orbit coupling and relativistic corrections (which are of the same order, known as 'fine structure'). The first-order perturbation theory with these corrections yields the following formula for the hydrogen atom in the Paschen–Back limit: Example: Lyman-alpha transition in hydrogen In this example, the fine-structure corrections are ignored. Intermediate field for j = 1/2 In the magnetic dipole approximation, the Hamiltonian which includes both the hyperfine and Zeeman interactions is where is the hyperfine splitting (in Hz) at zero applied magnetic field, and are the Bohr magneton and nuclear magneton respectively, and are the electron and nuclear angular momentum operators and is the Landé g-factor: In the case of weak magnetic fields, the Zeeman interaction can be treated as a perturbation to the basis. In the high field regime, the magnetic field becomes so strong that the Zeeman effect will dominate, and one must use a more complete basis of or just since and will be constant within a given level. To get the complete picture, including intermediate field strengths, we must consider eigenstates which are superpositions of the and basis states. For , the Hamiltonian can be solved analytically, resulting in the Breit–Rabi formula (named after Gregory Breit and Isidor Isaac Rabi). Notably, the electric quadrupole interaction is zero for (), so this formula is fairly accurate. We now utilize quantum mechanical ladder operators, which are defined for a general angular momentum operator as These ladder operators have the property as long as lies in the range (otherwise, they return zero). Using ladder operators and We can rewrite the Hamiltonian as We can now see that at all times, the total angular momentum projection will be conserved. This is because both and leave states with definite and unchanged, while and either increase and decrease or vice versa, so the sum is always unaffected. Furthermore, since there are only two possible values of which are . Therefore, for every value of there are only two possible states, and we can define them as the basis: This pair of states is a two-level quantum mechanical system. 
Now we can determine the matrix elements of the Hamiltonian: Solving for the eigenvalues of this matrix – as can be done by hand (see two-level quantum mechanical system), or more easily, with a computer algebra system – we arrive at the energy shifts: where is the splitting (in units of Hz) between two hyperfine sublevels in the absence of magnetic field , is referred to as the 'field strength parameter' (Note: for the expression under the square root is an exact square, and so the last term should be replaced by ). This equation is known as the Breit–Rabi formula and is useful for systems with one valence electron in an () level. Note that index in should be considered not as total angular momentum of the atom but as asymptotic total angular momentum. It is equal to total angular momentum only if otherwise eigenvectors corresponding different eigenvalues of the Hamiltonian are the superpositions of states with different but equal (the only exceptions are ). Applications Astrophysics George Ellery Hale was the first to notice the Zeeman effect in the solar spectra, indicating the existence of strong magnetic fields in sunspots. Such fields can be quite high, on the order of 0.1 tesla or higher. Today, the Zeeman effect is used to produce magnetograms showing the variation of magnetic field on the Sun, and to analyse the magnetic field geometries in other stars. Laser cooling The Zeeman effect is utilized in many laser cooling applications such as a magneto-optical trap and the Zeeman slower. Spintronics Zeeman-energy mediated coupling of spin and orbital motions is used in spintronics for controlling electron spins in quantum dots through electric dipole spin resonance. Metrology Old high-precision frequency standards, i.e. hyperfine structure transition-based atomic clocks, may require periodic fine-tuning due to exposure to magnetic fields. This is carried out by measuring the Zeeman effect on specific hyperfine structure transition levels of the source element (cesium) and applying a uniformly precise, low-strength magnetic field to said source, in a process known as degaussing. The Zeeman effect may also be utilized to improve accuracy in atomic absorption spectroscopy. Biology A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect. Nuclear spectroscopy The nuclear Zeeman effect is important in such applications as nuclear magnetic resonance spectroscopy, magnetic resonance imaging (MRI), and Mössbauer spectroscopy. Other The electron spin resonance spectroscopy is based on the Zeeman effect. Demonstrations The Zeeman effect can be demonstrated by placing a sodium vapor source in a powerful electromagnet and viewing a sodium vapor lamp through the magnet opening (see diagram). With magnet off, the sodium vapor source will block the lamp light; when the magnet is turned on the lamp light will be visible through the vapor. The sodium vapor can be created by sealing sodium metal in an evacuated glass tube and heating it while the tube is in the magnet. Alternatively, salt (sodium chloride) on a ceramic stick can be placed in the flame of Bunsen burner as the sodium vapor source. When the magnetic field is energized, the lamp image will be brighter. However, the magnetic field also affects the flame, making the observation depend upon more than just the Zeeman effect. These issues also plagued Zeeman's original work; he devoted considerable effort to ensure his observations were truly an effect of magnetism on light emission. 
When salt is added to the Bunsen burner, it dissociates to give sodium and chloride ions. The sodium atoms are excited by photons from the sodium vapour lamp, with electrons excited from the 3s to the 3p states, absorbing light in the process. The sodium vapour lamp emits light at 589 nm, which has precisely the energy needed to excite an electron of a sodium atom. If it were an atom of another element, such as chlorine, no shadow would be formed. When a magnetic field is applied, the Zeeman effect splits the spectral line of sodium into several components. This means the energy difference between the 3s and 3p atomic orbitals changes. Since the sodium vapour lamp no longer delivers precisely the right frequency, the light is not absorbed and passes through, causing the shadow to dim. As the magnetic field strength is increased, the shift in the spectral lines increases and lamp light is transmitted. See also Magneto-optic Kerr effect Voigt effect Faraday effect Cotton–Mouton effect Polarization spectroscopy Zeeman energy Stark effect Lamb shift References Historical (Chapter 16 provides a comprehensive treatment, as of 1935.) Modern External links Zeeman effect-Control light with magnetic fields Spectroscopy Quantum magnetism Foundational quantum physics Articles containing video clips Magneto-optic effects
Zeeman effect
[ "Physics", "Chemistry", "Materials_science" ]
3,179
[ "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Foundational quantum physics", "Quantum mechanics", "Electric and magnetic fields in matter", "Optical phenomena", "Quantum magnetism", "Condensed matter physics", "Magneto-optic effects", "Spe...
176,550
https://en.wikipedia.org/wiki/Standard%20molar%20entropy
In chemistry, the standard molar entropy is the entropy content of one mole of pure substance at a standard state of pressure and any temperature of interest. These are often (but not necessarily) chosen to be the standard temperature and pressure. The standard molar entropy at pressure P = P° is usually given the symbol S°, and has units of joules per mole per kelvin (J⋅mol−1⋅K−1). Unlike standard enthalpies of formation, the value of S° is absolute. That is, an element in its standard state has a definite, nonzero value of S° at room temperature. The entropy of a pure crystalline structure can be 0 J⋅mol−1⋅K−1 only at 0 K, according to the third law of thermodynamics. However, this assumes that the material forms a 'perfect crystal' without any residual entropy. This can be due to crystallographic defects, dislocations, and/or incomplete rotational quenching within the solid, as originally pointed out by Linus Pauling. These contributions to the entropy are always present, because crystals always grow at a finite rate and at finite temperature. However, the residual entropy is often quite negligible and can be accounted for when it occurs using statistical mechanics. Thermodynamics If a mole of a solid substance is a perfectly ordered solid at 0 K and the solid is then warmed by its surroundings to 298.15 K without melting, its absolute molar entropy is the sum of a series of stepwise and reversible entropy changes. The limit of this sum as the temperature steps become infinitesimally small is an integral: S° = Σ_k ∫ (C_p,k / T) dT. In this example, the integration runs from 0 K to 298.15 K, and C_p,k is the molar heat capacity at constant pressure of the substance in the reversible process k. The molar heat capacity is not constant during the experiment because it changes depending on the (increasing) temperature of the substance. Therefore, a table of values for C_p is required to find the total molar entropy. The quantity C_p dT/T represents the ratio of a very small exchange of heat energy to the temperature T. The total molar entropy is the sum of many small changes in molar entropy, where each small change can be considered a reversible process. Chemistry The standard molar entropy of a gas at STP includes contributions from: The heat capacity of one mole of the solid from 0 K to the melting point (including heat absorbed in any changes between different crystal structures). The latent heat of fusion of the solid. The heat capacity of the liquid from the melting point to the boiling point. The latent heat of vaporization of the liquid. The heat capacity of the gas from the boiling point to room temperature. Changes in entropy are associated with phase transitions and chemical reactions. Chemical equations make use of the standard molar entropy of reactants and products to find the standard entropy of reaction: ΔS°rxn = Σ S°(products) − Σ S°(reactants). The standard entropy of reaction helps determine whether the reaction will take place spontaneously. According to the second law of thermodynamics, a spontaneous reaction always results in an increase in total entropy of the system and its surroundings: ΔS_total = ΔS_system + ΔS_surroundings > 0. Molar entropy is not the same for all gases. Under identical conditions, it is greater for a heavier gas. See also Entropy Heat Gibbs free energy Helmholtz free energy Standard state Third law of thermodynamics References External links Table of Standard Thermodynamic Properties for Selected Substances Chemical properties Thermodynamic entropy Molar quantities
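As a rough illustration of the stepwise procedure described in the Thermodynamics section above, the following sketch (assuming Python with NumPy; every number in it is a placeholder, not measured data for any real substance) integrates tabulated Cp/T over each phase and adds the ΔH/T contribution of a phase transition.

```python
import numpy as np

def entropy_from_cp(T, Cp):
    """Trapezoidal integral of Cp/T dT over a tabulated heat-capacity curve.
    T in K, Cp in J/(mol*K); returns the entropy change in J/(mol*K)."""
    T, Cp = np.asarray(T, float), np.asarray(Cp, float)
    f = Cp / T
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(T)))

# Placeholder heat-capacity tables for a hypothetical substance (illustrative only);
# the 0-10 K contribution would normally come from a low-temperature extrapolation.
T_solid,  Cp_solid  = [10, 50, 100, 150, 200], [1.0, 12.0, 22.0, 28.0, 32.0]  # solid up to melting
T_liquid, Cp_liquid = [200, 250, 298.15],      [60.0, 62.0, 64.0]             # liquid up to 298.15 K
dH_fus, T_fus = 6000.0, 200.0   # latent heat of fusion (J/mol) at the melting point

S_standard = (entropy_from_cp(T_solid, Cp_solid)       # heating the solid
              + dH_fus / T_fus                         # phase transition: dS = dH / T
              + entropy_from_cp(T_liquid, Cp_liquid))  # heating the liquid
print(f"S(298.15 K) ~= {S_standard:.1f} J/(mol K)")
```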
Standard molar entropy
[ "Physics", "Chemistry" ]
684
[ "Physical quantities", "Intensive quantities", "Thermodynamic entropy", "Entropy", "nan", "Statistical mechanics", "Molar quantities" ]
11,034
https://en.wikipedia.org/wiki/Fluid%20dynamics
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids – liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of water and other liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time. Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases. Equations The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the first law of thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored. For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations—which is a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state: p = ρRT/M, where p is pressure, ρ is density, and T is the absolute temperature, while R is the universal gas constant and M is the molar mass for a particular gas. A constitutive relation may also be useful. Conservation laws Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. 
The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow. Classifications Compressible versus incompressible flow All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used. Mathematically, incompressibility is expressed by saying that the density of a fluid parcel does not change as it moves in the flow field, that is, where is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density. For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate. Newtonian versus non-Newtonian fluids All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions . Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate. Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and sticky liquids such as latex, honey and lubricants. Inviscid versus viscous versus Stokes flow The dynamic of fluid parcels is described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects. The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. A low Reynolds number () indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow. 
In contrast, high Reynolds numbers () indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression. This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as the d'Alembert's paradox. A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body. The two solutions can then be matched with each other, using the method of matched asymptotic expansions. Steady versus unsteady flow A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady, can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady. Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow. Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field. Laminar versus turbulent flow Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component. It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations. 
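As a rough illustration of the classification criteria above, the following sketch (assuming Python; the laminar/turbulent Reynolds-number thresholds and the fluid properties in the example are illustrative, not universal values) computes the Reynolds and Mach numbers for a flow and reports which modelling assumptions they suggest.

```python
def reynolds(rho, V, L, mu):
    """Reynolds number Re = rho * V * L / mu (ratio of inertial to viscous effects)."""
    return rho * V * L / mu

def mach(V, c):
    """Mach number Ma = V / c."""
    return V / c

def describe_flow(rho, V, L, mu, c):
    Re, Ma = reynolds(rho, V, L, mu), mach(V, c)
    compressibility = ("compressible effects likely negligible (Ma < 0.3)"
                       if Ma < 0.3 else "treat as compressible flow")
    if Re < 1:
        regime = "Stokes (creeping) flow: viscous forces dominate"
    elif Re < 2e3:
        regime = "likely laminar"      # illustrative threshold, geometry-dependent
    else:
        regime = "likely turbulent"    # illustrative threshold, geometry-dependent
    return Re, Ma, compressibility, regime

# Example: air at sea level flowing past a 1 m chord at 50 m/s (approximate properties)
print(describe_flow(rho=1.225, V=50.0, L=1.0, mu=1.81e-5, c=340.0))
```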
Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows. Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human ( > 3 m), moving faster than is well beyond the limit of DNS simulation ( = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such a modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES) — a combination of LES and RANS turbulence modelling. Other approximations There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below. The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small. Lubrication theory and Hele–Shaw flow exploits the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected. Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid. The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small. Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths. In rotating systems, the quasi-geostrophic equations assume an almost perfect balance between pressure gradients and the Coriolis force. It is useful in the study of atmospheric dynamics. Multidisciplinary types Flows according to Mach regimes While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately. Reactive versus non-reactive flows Reactive flows are flows that are chemically reactive, which finds its applications in many areas, including combustion (IC engine), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics. 
In addition to conservation of mass, momentum and energy, conservation of individual species (for example, mass fraction of methane in methane combustion) need to be derived, where the production/depletion rate of any species are obtained by simultaneously solving the equations of chemical kinetics. Magnetohydrodynamics Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism. Relativistic fluid dynamics Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light. This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime. Fluctuating hydrodynamics This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux. Terminology The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods. Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics. Characteristic numbers Terminology in incompressible fluid dynamics The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field. A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field. Terminology in compressible fluid dynamics In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion. 
To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference. Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy". See also List of publications in fluid dynamics List of fluid dynamicists References Further reading Originally published in 1879, the 6th extended edition appeared first in 1932. Originally published in 1938. Encyclopedia: Fluid dynamics Scholarpedia External links National Committee for Fluid Mechanics Films (NCFMF), containing films on several subjects in fluid dynamics (in RealMedia format) Gallery of fluid motion, "a visual record of the aesthetic and science of contemporary fluid mechanics," from the American Physical Society List of Fluid Dynamics books Piping Aerodynamics Continuum mechanics
Fluid dynamics
[ "Physics", "Chemistry", "Engineering" ]
3,710
[ "Continuum mechanics", "Building engineering", "Chemical engineering", "Classical mechanics", "Aerodynamics", "Mechanical engineering", "Aerospace engineering", "Piping", "Fluid dynamics" ]
11,149
https://en.wikipedia.org/wiki/Fresnel%20equations
The Fresnel equations (or Fresnel coefficients) describe the reflection and transmission of light (or electromagnetic radiation in general) when incident on an interface between different optical media. They were deduced by French engineer and physicist Augustin-Jean Fresnel () who was the first to understand that light is a transverse wave, when no one realized that the waves were electric and magnetic fields. For the first time, polarization could be understood quantitatively, as Fresnel's equations correctly predicted the differing behaviour of waves of the s and p polarizations incident upon a material interface. Overview When light strikes the interface between a medium with refractive index and a second medium with refractive index , both reflection and refraction of the light may occur. The Fresnel equations give the ratio of the reflected wave's electric field to the incident wave's electric field, and the ratio of the transmitted wave's electric field to the incident wave's electric field, for each of two components of polarization. (The magnetic fields can also be related using similar coefficients.) These ratios are generally complex, describing not only the relative amplitudes but also the phase shifts at the interface. The equations assume the interface between the media is flat and that the media are homogeneous and isotropic. The incident light is assumed to be a plane wave, which is sufficient to solve any problem since any incident light field can be decomposed into plane waves and polarizations. S and P polarizations There are two sets of Fresnel coefficients for two different linear polarization components of the incident wave. Since any polarization state can be resolved into a combination of two orthogonal linear polarizations, this is sufficient for any problem. Likewise, unpolarized (or "randomly polarized") light has an equal amount of power in each of two linear polarizations. The s polarization refers to polarization of a wave's electric field normal to the plane of incidence (the direction in the derivation below); then the magnetic field is in the plane of incidence. The p polarization refers to polarization of the electric field in the plane of incidence (the plane in the derivation below); then the magnetic field is normal to the plane of incidence. The names "s" and "p" for the polarization components refer to German "senkrecht" (perpendicular or normal) and "parallel" (parallel to the plane of incidence). Although the reflection and transmission are dependent on polarization, at normal incidence () there is no distinction between them so all polarization states are governed by a single set of Fresnel coefficients (and another special case is mentioned below in which that is true). Configuration In the diagram on the right, an incident plane wave in the direction of the ray strikes the interface between two media of refractive indices and at point . Part of the wave is reflected in the direction , and part refracted in the direction . The angles that the incident, reflected and refracted rays make to the normal of the interface are given as , and , respectively. The relationship between these angles is given by the law of reflection: and Snell's law: The behavior of light striking the interface is explained by considering the electric and magnetic fields that constitute an electromagnetic wave, and the laws of electromagnetism, as shown below. 
The ratio of waves' electric field (or magnetic field) amplitudes are obtained, but in practice one is more often interested in formulae which determine power coefficients, since power (or irradiance) is what can be directly measured at optical frequencies. The power of a wave is generally proportional to the square of the electric (or magnetic) field amplitude. Power (intensity) reflection and transmission coefficients We call the fraction of the incident power that is reflected from the interface the reflectance (or reflectivity, or power reflection coefficient) , and the fraction that is refracted into the second medium is called the transmittance (or transmissivity, or power transmission coefficient) . Note that these are what would be measured right at each side of an interface and do not account for attenuation of a wave in an absorbing medium following transmission or reflection. The reflectance for s-polarized light is while the reflectance for p-polarized light is where and are the wave impedances of media 1 and 2, respectively. We assume that the media are non-magnetic (i.e., ), which is typically a good approximation at optical frequencies (and for transparent media at other frequencies). Then the wave impedances are determined solely by the refractive indices and : where is the impedance of free space and . Making this substitution, we obtain equations using the refractive indices: The second form of each equation is derived from the first by eliminating using Snell's law and trigonometric identities. As a consequence of conservation of energy, one can find the transmitted power (or more correctly, irradiance: power per unit area) simply as the portion of the incident power that isn't reflected: and Note that all such intensities are measured in terms of a wave's irradiance in the direction normal to the interface; this is also what is measured in typical experiments. That number could be obtained from irradiances in the direction of an incident or reflected wave (given by the magnitude of a wave's Poynting vector) multiplied by for a wave at an angle to the normal direction (or equivalently, taking the dot product of the Poynting vector with the unit vector normal to the interface). This complication can be ignored in the case of the reflection coefficient, since , so that the ratio of reflected to incident irradiance in the wave's direction is the same as in the direction normal to the interface. Although these relationships describe the basic physics, in many practical applications one is concerned with "natural light" that can be described as unpolarized. That means that there is an equal amount of power in the s and p polarizations, so that the effective reflectivity of the material is just the average of the two reflectivities: For low-precision applications involving unpolarized light, such as computer graphics, rather than rigorously computing the effective reflection coefficient for each angle, Schlick's approximation is often used. Special cases Normal incidence For the case of normal incidence, , and there is no distinction between s and p polarization. Thus, the reflectance simplifies to For common glass () surrounded by air (), the power reflectance at normal incidence can be seen to be about 4%, or 8% accounting for both sides of a glass pane. Brewster's angle At a dielectric interface from to , there is a particular angle of incidence at which goes to zero and a p-polarised incident wave is purely refracted, thus all reflected light is s-polarised. 
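The power-reflectance relations above can be evaluated directly. The sketch below (assuming Python with NumPy; the air/glass indices match the illustrative values used in this article) computes Rs and Rp as functions of the angle of incidence, checks the roughly 4% normal-incidence reflectance for glass, locates the angle at which Rp vanishes, and compares the unpolarized average with Schlick's approximation.

```python
import numpy as np

def fresnel_reflectance(n1, n2, theta_i):
    """Power reflectances (R_s, R_p) for non-magnetic media; theta_i in radians."""
    sin_t = n1 * np.sin(theta_i) / n2          # Snell's law
    if abs(sin_t) > 1:                          # beyond the critical angle
        return 1.0, 1.0                         # total internal reflection
    cos_i, cos_t = np.cos(theta_i), np.sqrt(1 - sin_t**2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)
    return r_s**2, r_p**2

n_air, n_glass = 1.0, 1.5                       # illustrative indices

# Normal incidence: expect about 4% for a single air/glass surface
print("R at 0 deg:", fresnel_reflectance(n_air, n_glass, 0.0))

# Angle at which R_p vanishes (Brewster's angle), near arctan(n2/n1)
theta_B = np.arctan(n_glass / n_air)
print("R_p-null angle (deg):", np.degrees(theta_B),
      " R_p there:", fresnel_reflectance(n_air, n_glass, theta_B)[1])

def schlick(n1, n2, theta_i):
    """Schlick's approximation for unpolarized reflectance."""
    R0 = ((n1 - n2) / (n1 + n2))**2
    return R0 + (1 - R0) * (1 - np.cos(theta_i))**5

for deg in (0, 30, 60):
    th = np.radians(deg)
    Rs, Rp = fresnel_reflectance(n_air, n_glass, th)
    print(f"{deg:2d} deg: exact unpolarized R = {(Rs + Rp) / 2:.4f}, "
          f"Schlick = {schlick(n_air, n_glass, th):.4f}")
```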
The angle at which R_p vanishes is known as Brewster's angle, and is around 56° for n1 = 1 and n2 = 1.5 (typical glass). Total internal reflection When light travelling in a denser medium strikes the surface of a less dense medium (i.e., n1 > n2), beyond a particular incidence angle known as the critical angle, all light is reflected and R_s = R_p = 1. This phenomenon, known as total internal reflection, occurs at incidence angles for which Snell's law predicts that the sine of the angle of refraction would exceed unity (whereas in fact sin θ ≤ 1 for all real θ). For glass with n = 1.5 surrounded by air, the critical angle is approximately 42°. 45° incidence Reflection at 45° incidence is very commonly used for making 90° turns. For the case of light traversing from a less dense medium into a denser one at 45° incidence (θi = 45°), it follows algebraically from the above equations that R_p equals the square of R_s: R_p = R_s². This can be used to either verify the consistency of the measurements of R_s and R_p, or to derive one of them when the other is known. This relationship is only valid for the simple case of a single plane interface between two homogeneous materials, not for films on substrates, where a more complex analysis is required. Measurements of R_s and R_p at 45° can be used to estimate the reflectivity at normal incidence. The "average of averages" obtained by calculating first the arithmetic as well as the geometric average of R_s and R_p, and then averaging these two averages again arithmetically, gives a value for the normal-incidence reflectance with an error of less than about 3% for most common optical materials. This is useful because measurements at normal incidence can be difficult to achieve in an experimental setup since the incoming beam and the detector will obstruct each other. However, since the dependence of R_s and R_p on the angle of incidence for angles below 10° is very small, a measurement at about 5° will usually be a good approximation for normal incidence, while allowing for a separation of the incoming and reflected beam. Complex amplitude reflection and transmission coefficients The above equations relating powers (which could be measured with a photometer for instance) are derived from the Fresnel equations which solve the physical problem in terms of electromagnetic field complex amplitudes, i.e., considering phase shifts in addition to their amplitudes. Those underlying equations supply generally complex-valued ratios of those EM fields and may take several different forms, depending on the formalism used. The complex amplitude coefficients for reflection and transmission are usually represented by lower case r and t (whereas the power coefficients are capitalized). As before, we are assuming the magnetic permeability, µ, of both media to be equal to the permeability of free space µ0, as is essentially true of all dielectrics at optical frequencies. In the following equations and graphs, we adopt the following conventions. For s polarization, the reflection coefficient r is defined as the ratio of the reflected wave's complex electric field amplitude to that of the incident wave, whereas for p polarization r is the ratio of the waves' complex magnetic field amplitudes (or equivalently, the negative of the ratio of their electric field amplitudes). The transmission coefficient t is the ratio of the transmitted wave's complex electric field amplitude to that of the incident wave, for either polarization. The coefficients r and t are generally different between the s and p polarizations, and even at normal incidence (where the designations s and p do not even apply!) 
the sign of is reversed depending on whether the wave is considered to be s or p polarized, an artifact of the adopted sign convention (see graph for an air-glass interface at 0° incidence). The equations consider a plane wave incident on a plane interface at angle of incidence , a wave reflected at angle , and a wave transmitted at angle . In the case of an interface into an absorbing material (where is complex) or total internal reflection, the angle of transmission does not generally evaluate to a real number. In that case, however, meaningful results can be obtained using formulations of these relationships in which trigonometric functions and geometric angles are avoided; the inhomogeneous waves launched into the second medium cannot be described using a single propagation angle. Using this convention, One can see that and . One can write very similar equations applying to the ratio of the waves' magnetic fields, but comparison of the electric fields is more conventional. Because the reflected and incident waves propagate in the same medium and make the same angle with the normal to the surface, the power reflection coefficient is just the squared magnitude of : On the other hand, calculation of the power transmission coefficient is less straightforward, since the light travels in different directions in the two media. What's more, the wave impedances in the two media differ; power (irradiance) is given by the square of the electric field amplitude divided by the characteristic impedance of the medium (or by the square of the magnetic field multiplied by the characteristic impedance). This results in: using the above definition of . The introduced factor of is the reciprocal of the ratio of the media's wave impedances. The factors adjust the waves' powers so they are reckoned in the direction normal to the interface, for both the incident and transmitted waves, so that full power transmission corresponds to . In the case of total internal reflection where the power transmission is zero, nevertheless describes the electric field (including its phase) just beyond the interface. This is an evanescent field which does not propagate as a wave (thus ) but has nonzero values very close to the interface. The phase shift of the reflected wave on total internal reflection can similarly be obtained from the phase angles of and (whose magnitudes are unity in this case). These phase shifts are different for s and p waves, which is the well-known principle by which total internal reflection is used to effect polarization transformations. Alternative forms In the above formula for , if we put (Snell's law) and multiply the numerator and denominator by , we obtain If we do likewise with the formula for , the result is easily shown to be equivalent to These formulas are known respectively as Fresnel's sine law and Fresnel's tangent law. Although at normal incidence these expressions reduce to 0/0, one can see that they yield the correct results in the limit as . Multiple surfaces When light makes multiple reflections between two or more parallel surfaces, the multiple beams of light generally interfere with one another, resulting in net transmission and reflection amplitudes that depend on the light's wavelength. The interference, however, is seen only when the surfaces are at distances comparable to or smaller than the light's coherence length, which for ordinary white light is few micrometers; it can be much larger for light from a laser. 
An example of interference between reflections is the iridescent colours seen in a soap bubble or in thin oil films on water. Applications include Fabry–Pérot interferometers, antireflection coatings, and optical filters. A quantitative analysis of these effects is based on the Fresnel equations, but with additional calculations to account for interference. The transfer-matrix method, or the recursive Rouard method can be used to solve multiple-surface problems. History In 1808, Étienne-Louis Malus discovered that when a ray of light was reflected off a non-metallic surface at the appropriate angle, it behaved like one of the two rays emerging from a doubly-refractive calcite crystal. He later coined the term polarization to describe this behavior.  In 1815, the dependence of the polarizing angle on the refractive index was determined experimentally by David Brewster. But the reason for that dependence was such a deep mystery that in late 1817, Thomas Young was moved to write: In 1821, however, Augustin-Jean Fresnel derived results equivalent to his sine and tangent laws (above), by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Fresnel promptly confirmed by experiment that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water; in particular, the equations gave the correct polarization at Brewster's angle. The experimental confirmation was reported in a "postscript" to the work in which Fresnel first revealed his theory that light waves, including "unpolarized" waves, were purely transverse. Details of Fresnel's derivation, including the modern forms of the sine law and tangent law, were given later, in a memoir read to the French Academy of Sciences in January 1823. That derivation combined conservation of energy with continuity of the tangential vibration at the interface, but failed to allow for any condition on the normal component of vibration. The first derivation from electromagnetic principles was given by Hendrik Lorentz in 1875. In the same memoir of January 1823, Fresnel found that for angles of incidence greater than the critical angle, his formulas for the reflection coefficients ( and ) gave complex values with unit magnitudes. Noting that the magnitude, as usual, represented the ratio of peak amplitudes, he guessed that the argument represented the phase shift, and verified the hypothesis experimentally. The verification involved calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions), subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and checking that the final polarization was circular. Thus he finally had a quantitative theory for what we now call the Fresnel rhomb — a device that he had been using in experiments, in one form or another, since 1817 (see Fresnel rhomb §History). The success of the complex reflection coefficient inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index. 
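The use of a complex refractive index in the Fresnel equations, as in the MacCullagh and Cauchy treatment of metals just mentioned, can be illustrated with a very short sketch; the complex index used below is an arbitrary illustrative value, not a measured constant for any particular metal.

```python
import numpy as np

def normal_incidence_reflectance(n1, n2):
    """|r|^2 at normal incidence; n2 may be complex (n + i*k) for an absorbing medium."""
    r = (n1 - n2) / (n1 + n2)
    return abs(r) ** 2

print(normal_incidence_reflectance(1.0, 1.5))         # glass in air: about 0.04
print(normal_incidence_reflectance(1.0, 0.3 + 3.0j))  # metal-like index: about 0.9
```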
Four weeks before he presented his completed theory of total internal reflection and the rhomb, Fresnel submitted a memoir in which he introduced the needed terms linear polarization, circular polarization, and elliptical polarization, and in which he explained optical rotation as a species of birefringence: linearly-polarized light can be resolved into two circularly-polarized components rotating in opposite directions, and if these propagate at different speeds, the phase difference between them — hence the orientation of their linearly-polarized resultant — will vary continuously with distance. Thus Fresnel's interpretation of the complex values of his reflection coefficients marked the confluence of several streams of his research and, arguably, the essential completion of his reconstruction of physical optics on the transverse-wave hypothesis (see Augustin-Jean Fresnel). Derivation Here we systematically derive the above relations from electromagnetic premises. Material parameters In order to compute meaningful Fresnel coefficients, we must assume that the medium is (approximately) linear and homogeneous. If the medium is also isotropic, the four field vectors are related by where and are scalars, known respectively as the (electric) permittivity and the (magnetic) permeability of the medium. For vacuum, these have the values and , respectively. Hence we define the relative permittivity (or dielectric constant) , and the relative permeability . In optics it is common to assume that the medium is non-magnetic, so that . For ferromagnetic materials at radio/microwave frequencies, larger values of must be taken into account. But, for optically transparent media, and for all other materials at optical frequencies (except possible metamaterials), is indeed very close to 1; that is, . In optics, one usually knows the refractive index of the medium, which is the ratio of the speed of light in vacuum () to the speed of light in the medium. In the analysis of partial reflection and transmission, one is also interested in the electromagnetic wave impedance , which is the ratio of the amplitude of to the amplitude of . It is therefore desirable to express and in terms of and , and thence to relate to . The last-mentioned relation, however, will make it convenient to derive the reflection coefficients in terms of the wave admittance , which is the reciprocal of the wave impedance . In the case of uniform plane sinusoidal waves, the wave impedance or admittance is known as the intrinsic impedance or admittance of the medium. This case is the one for which the Fresnel coefficients are to be derived. Electromagnetic plane waves In a uniform plane sinusoidal electromagnetic wave, the electric field has the form where is the (constant) complex amplitude vector, is the imaginary unit, is the wave vector (whose magnitude is the angular wavenumber), is the position vector, is the angular frequency, is time, and it is understood that the real part of the expression is the physical field.  The value of the expression is unchanged if the position varies in a direction normal to ; hence is normal to the wavefronts. To advance the phase by the angle ϕ, we replace by (that is, we replace by ), with the result that the (complex) field is multiplied by . So a phase advance is equivalent to multiplication by a complex constant with a negative argument. This becomes more obvious when the field () is factored as , where the last factor contains the time-dependence. 
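As a brief numerical aside before continuing with the plane-wave algebra, the quantities introduced under "Material parameters" can be tied to the refractive index and wave impedance. This is a hedged sketch using the standard lossless-medium relations; the constants are rounded and the helper name is illustrative.

```python
import math

MU_0 = 4e-7 * math.pi          # vacuum permeability, H/m
EPSILON_0 = 8.854187e-12       # vacuum permittivity, F/m

def medium_parameters(eps_r, mu_r=1.0):
    """Refractive index, intrinsic impedance (ohms) and admittance of a lossless medium."""
    n = math.sqrt(mu_r * eps_r)
    Z = math.sqrt((mu_r * MU_0) / (eps_r * EPSILON_0))
    return n, Z, 1.0 / Z

print(medium_parameters(1.0))    # vacuum: n = 1, Z ~ 376.7 ohms (impedance of free space)
print(medium_parameters(2.25))   # non-magnetic, glass-like: n = 1.5, Z = Z0 / 1.5
```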
That factor also implies that differentiation w.r.t. time corresponds to multiplication by . If ℓ is the component of in the direction of , the field () can be written .  If the argument of is to be constant,  ℓ must increase at the velocity known as the phase velocity . This in turn is equal to Solving for gives As usual, we drop the time-dependent factor , which is understood to multiply every complex field quantity. The electric field for a uniform plane sine wave will then be represented by the location-dependent phasor For fields of that form, Faraday's law and the Maxwell-Ampère law respectively reduce to Putting and , as above, we can eliminate and to obtain equations in only and : If the material parameters and are real (as in a lossless dielectric), these equations show that form a right-handed orthogonal triad, so that the same equations apply to the magnitudes of the respective vectors. Taking the magnitude equations and substituting from (), we obtain where and are the magnitudes of and . Multiplying the last two equations gives Dividing (or cross-multiplying) the same two equations gives , where This is the intrinsic admittance. From () we obtain the phase velocity For vacuum this reduces to Dividing the second result by the first gives For a non-magnetic medium (the usual case), this becomes . Taking the reciprocal of (), we find that the intrinsic impedance is In vacuum this takes the value known as the impedance of free space. By division, For a non-magnetic medium, this becomes Wave vectors In Cartesian coordinates , let the region have refractive index , intrinsic admittance , etc., and let the region have refractive index , intrinsic admittance , etc. Then the plane is the interface, and the axis is normal to the interface (see diagram). Let and (in bold roman type) be the unit vectors in the and directions, respectively. Let the plane of incidence be the plane (the plane of the page), with the angle of incidence measured from towards . Let the angle of refraction, measured in the same sense, be , where the subscript stands for transmitted (reserving for reflected). In the absence of Doppler shifts, ω does not change on reflection or refraction. Hence, by (), the magnitude of the wave vector is proportional to the refractive index. So, for a given , if we redefine as the magnitude of the wave vector in the reference medium (for which ), then the wave vector has magnitude in the first medium (region in the diagram) and magnitude in the second medium. From the magnitudes and the geometry, we find that the wave vectors are where the last step uses Snell's law. The corresponding dot products in the phasor form () are Hence: s components For the s polarization, the field is parallel to the axis and may therefore be described by its component in the  direction. Let the reflection and transmission coefficients be and , respectively. Then, if the incident field is taken to have unit amplitude, the phasor form () of its -component is and the reflected and transmitted fields, in the same form, are Under the sign convention used in this article, a positive reflection or transmission coefficient is one that preserves the direction of the transverse field, meaning (in this context) the field normal to the plane of incidence. For the s polarization, that means the field. 
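Before completing the s-polarization algebra below, the wave-vector geometry just described can be checked numerically. In this sketch the x axis lies along the interface and z along the normal, following the text; the function name, wavelength and angles are illustrative.

```python
import numpy as np

def wave_vectors(n1, n2, theta_i, wavelength_vac):
    """Incident and transmitted wave-vector components (k_x along the interface)."""
    k0 = 2 * np.pi / wavelength_vac                    # magnitude in the reference (vacuum) medium
    kx = n1 * k0 * np.sin(theta_i)                     # tangential component, conserved
    kz1 = np.sqrt((n1 * k0) ** 2 - kx ** 2)            # normal component in medium 1
    kz2 = np.sqrt(complex((n2 * k0) ** 2 - kx ** 2))   # complex beyond the critical angle
    return (kx, kz1), (kx, kz2)

(kx, kz1), (_, kz2) = wave_vectors(1.0, 1.5, np.deg2rad(30.0), 500e-9)
theta_t = np.arctan2(kx, kz2.real)
print(np.rad2deg(theta_t))   # ~19.5 degrees, the angle Snell's law gives for n2/n1 = 1.5
```

Conservation of the tangential component k_x across the interface is exactly the content of Snell's law in this picture.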
If the incident, reflected, and transmitted fields (in the above equations) are in the -direction ("out of the page"), then the respective fields are in the directions of the red arrows, since form a right-handed orthogonal triad. The fields may therefore be described by their components in the directions of those arrows, denoted by . Then, since , At the interface, by the usual interface conditions for electromagnetic fields, the tangential components of the and fields must be continuous; that is, When we substitute from equations () to () and then from (), the exponential factors cancel out, so that the interface conditions reduce to the simultaneous equations which are easily solved for and , yielding and At normal incidence , indicated by an additional subscript 0, these results become and At grazing incidence , we have , hence and . p components For the p polarization, the incident, reflected, and transmitted fields are parallel to the red arrows and may therefore be described by their components in the directions of those arrows. Let those components be (redefining the symbols for the new context). Let the reflection and transmission coefficients be and . Then, if the incident field is taken to have unit amplitude, we have If the fields are in the directions of the red arrows, then, in order for to form a right-handed orthogonal triad, the respective fields must be in the -direction ("into the page") and may therefore be described by their components in that direction. This is consistent with the adopted sign convention, namely that a positive reflection or transmission coefficient is one that preserves the direction of the transverse field the field in the case of the p polarization. The agreement of the other field with the red arrows reveals an alternative definition of the sign convention: that a positive reflection or transmission coefficient is one for which the field vector in the plane of incidence points towards the same medium before and after reflection or transmission. So, for the incident, reflected, and transmitted fields, let the respective components in the -direction be . Then, since , At the interface, the tangential components of the and fields must be continuous; that is, When we substitute from equations () and () and then from (), the exponential factors again cancel out, so that the interface conditions reduce to Solving for and , we find and At normal incidence indicated by an additional subscript 0, these results become and At , we again have , hence and . Comparing () and () with () and (), we see that at normal incidence, under the adopted sign convention, the transmission coefficients for the two polarizations are equal, whereas the reflection coefficients have equal magnitudes but opposite signs. While this clash of signs is a disadvantage of the convention, the attendant advantage is that the signs agree at grazing incidence. Power ratios (reflectivity and transmissivity) The Poynting vector for a wave is a vector whose component in any direction is the irradiance (power per unit area) of that wave on a surface perpendicular to that direction. For a plane sinusoidal wave the Poynting vector is , where and are due only to the wave in question, and the asterisk denotes complex conjugation. 
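Before continuing with the Poynting-vector treatment of power, the s-polarization solution just derived can be sanity-checked numerically. This sketch writes r_s and t_s in terms of the intrinsic admittances, as in the derivation; the helper name and sample values are illustrative, and for non-magnetic media the admittances are simply proportional to the refractive indices.

```python
import numpy as np

def s_coefficients(Y1, Y2, theta_i, theta_t):
    """r_s and t_s from the interface conditions, in terms of intrinsic admittances."""
    a = Y1 * np.cos(theta_i)
    b = Y2 * np.cos(theta_t)
    return (a - b) / (a + b), 2 * a / (a + b)

# Non-magnetic air/glass example, with admittances taken proportional to n.
n1, n2 = 1.0, 1.5
theta_i = np.deg2rad(40.0)
theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)
r_s, t_s = s_coefficients(n1, n2, theta_i, theta_t)
print(np.isclose(1 + r_s, t_s))   # True: continuity of the tangential E field at the interface
```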
Inside a lossless dielectric (the usual case), and are in phase, and at right angles to each other and to the wave vector ; so, for s polarization, using the and components of and respectively (or for p polarization, using the and components of and ), the irradiance in the direction of is given simply by , which is in a medium of intrinsic impedance . To compute the irradiance in the direction normal to the interface, as we shall require in the definition of the power transmission coefficient, we could use only the component (rather than the full component) of or or, equivalently, simply multiply by the proper geometric factor, obtaining . From equations () and (), taking squared magnitudes, we find that the reflectivity (ratio of reflected power to incident power) is for the s polarization, and for the p polarization. Note that when comparing the powers of two such waves in the same medium and with the same cosθ, the impedance and geometric factors mentioned above are identical and cancel out. But in computing the power transmission (below), these factors must be taken into account. The simplest way to obtain the power transmission coefficient (transmissivity, the ratio of transmitted power to incident power in the direction normal to the interface, i.e. the direction) is to use (conservation of energy). In this way we find for the s polarization, and for the p polarization. In the case of an interface between two lossless media (for which ϵ and μ are real and positive), one can obtain these results directly using the squared magnitudes of the amplitude transmission coefficients that we found earlier in equations () and (). But, for given amplitude (as noted above), the component of the Poynting vector in the direction is proportional to the geometric factor and inversely proportional to the wave impedance . Applying these corrections to each wave, we obtain two ratios multiplying the square of the amplitude transmission coefficient: for the s polarization, and for the p polarization. The last two equations apply only to lossless dielectrics, and only at incidence angles smaller than the critical angle (beyond which, of course, ). For unpolarized light: where . Equal refractive indices From equations () and (), we see that two dissimilar media will have the same refractive index, but different admittances, if the ratio of their permeabilities is the inverse of the ratio of their permittivities. In that unusual situation we have (that is, the transmitted ray is undeviated), so that the cosines in equations (), (), (), (), and () to () cancel out, and all the reflection and transmission ratios become independent of the angle of incidence; in other words, the ratios for normal incidence become applicable to all angles of incidence. When extended to spherical reflection or scattering, this results in the Kerker effect for Mie scattering. Non-magnetic media Since the Fresnel equations were developed for optics, they are usually given for non-magnetic materials. Dividing () by ()) yields For non-magnetic media we can substitute the vacuum permeability for , so that that is, the admittances are simply proportional to the corresponding refractive indices. When we make these substitutions in equations () to () and equations () to (), the factor cμ0 cancels out. For the amplitude coefficients we obtain: For the case of normal incidence these reduce to: The power reflection coefficients become: The power transmissions can then be found from . 
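A short numerical check of these power relations, for a pair of lossless, non-magnetic media at an angle below the critical angle, is sketched below; the refractive-index forms of the coefficients are used, and the names and sample values are illustrative.

```python
import numpy as np

def power_coefficients(n1, n2, theta_i):
    """Reflectivity and transmissivity (s and p) for lossless, non-magnetic media."""
    theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)
    ci, ct = np.cos(theta_i), np.cos(theta_t)
    r_s = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
    r_p = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
    t_s = 2 * n1 * ci / (n1 * ci + n2 * ct)
    t_p = 2 * n1 * ci / (n2 * ci + n1 * ct)
    geom = (n2 * ct) / (n1 * ci)   # impedance and obliquity factor for the transmitted power
    return r_s ** 2, geom * t_s ** 2, r_p ** 2, geom * t_p ** 2

R_s, T_s, R_p, T_p = power_coefficients(1.0, 1.5, np.deg2rad(30.0))
print(np.isclose(R_s + T_s, 1.0), np.isclose(R_p + T_p, 1.0))   # energy conservation
```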
Brewster's angle For equal permeabilities (e.g., non-magnetic media), if and are complementary, we can substitute for , and for , so that the numerator in equation () becomes , which is zero (by Snell's law). Hence and only the s-polarized component is reflected. This is what happens at the Brewster angle. Substituting for in Snell's law, we readily obtain for Brewster's angle. Equal permittivities Although it is not encountered in practice, the equations can also apply to the case of two media with a common permittivity but different refractive indices due to different permeabilities. From equations () and (), if is fixed instead of , then becomes inversely proportional to , with the result that the subscripts 1 and 2 in equations () to () are interchanged (due to the additional step of multiplying the numerator and denominator by ). Hence, in () and (), the expressions for and in terms of refractive indices will be interchanged, so that Brewster's angle () will give instead of , and any beam reflected at that angle will be p-polarized instead of s-polarized. Similarly, Fresnel's sine law will apply to the p polarization instead of the s polarization, and his tangent law to the s polarization instead of the p polarization. This switch of polarizations has an analog in the old mechanical theory of light waves (see §History, above). One could predict reflection coefficients that agreed with observation by supposing (like Fresnel) that different refractive indices were due to different densities and that the vibrations were normal to what was then called the plane of polarization, or by supposing (like MacCullagh and Neumann) that different refractive indices were due to different elasticities and that the vibrations were parallel to that plane. Thus the condition of equal permittivities and unequal permeabilities, although not realistic, is of some historical interest. See also Jones calculus Polarization mixing Index-matching material Field and power quantities Fresnel rhomb, Fresnel's apparatus to produce circularly polarised light Reflection loss Specular reflection Schlick's approximation Snell's window X-ray reflectivity Plane of incidence Reflections of signals on conducting lines Notes References Sources M. Born and E. Wolf, 1970, Principles of Optics, 4th Ed., Oxford: Pergamon Press. J.Z. Buchwald, 1989, The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century, University of Chicago Press, . R.E. Collin, 1966, Foundations for Microwave Engineering, Tokyo: McGraw-Hill. O. Darrigol, 2012, A History of Optics: From Greek Antiquity to the Nineteenth Century, Oxford, . A. Fresnel, 1866 (ed. H. de Senarmont, E. Verdet, and L. Fresnel), Oeuvres complètes d'Augustin Fresnel, Paris: Imprimerie Impériale (3 vols., 1866–70), vol.1 (1866). E. Hecht, 1987, Optics, 2nd Ed., Addison Wesley, . E. Hecht, 2002, Optics, 4th Ed., Addison Wesley, . F.A. Jenkins and H.E. White, 1976, Fundamentals of Optics, 4th Ed., New York: McGraw-Hill, . H. Lloyd, 1834, "Report on the progress and present state of physical optics", Report of the Fourth Meeting of the British Association for the Advancement of Science (held at Edinburgh in 1834), London: J. Murray, 1835, pp.295–413. W. Whewell, 1857, History of the Inductive Sciences: From the Earliest to the Present Time, 3rd Ed., London: J.W. Parker & Son, vol.2. E. T. 
Whittaker, 1910, A History of the Theories of Aether and Electricity: From the Age of Descartes to the Close of the Nineteenth Century, London: Longmans, Green, & Co. External links Fresnel Equations – Wolfram. Fresnel equations calculator FreeSnell – Free software computes the optical properties of multilayer materials. Thinfilm – Web interface for calculating optical properties of thin films and multilayer materials (reflection & transmission coefficients, ellipsometric parameters Psi & Delta). Simple web interface for calculating single-interface reflection and refraction angles and strengths. Reflection and transmittance for two dielectrics – Mathematica interactive webpage that shows the relations between index of refraction and reflection. A self-contained first-principles derivation of the transmission and reflection probabilities from a multilayer with complex indices of refraction. Eponymous equations of physics Light Geometrical optics Physical optics Polarization (waves) History of physics
Fresnel equations
[ "Physics" ]
7,360
[ "Physical phenomena", "Equations of physics", "Spectrum (physical sciences)", "Eponymous equations of physics", "Electromagnetic spectrum", "Astrophysics", "Waves", "Light", "Polarization (waves)" ]
11,180
https://en.wikipedia.org/wiki/Functional%20analysis
Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (for example, inner product, norm, or topology) and the linear functions defined on these spaces and suitably respecting these structures. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining, for example, continuous or unitary operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations. The usage of the word functional as a noun goes back to the calculus of variations, implying a function whose argument is a function. The term was first used in Hadamard's 1910 book on that subject. However, the general concept of a functional had previously been introduced in 1887 by the Italian mathematician and physicist Vito Volterra. The theory of nonlinear functionals was continued by students of Hadamard, in particular Fréchet and Lévy. Hadamard also founded the modern school of linear functional analysis further developed by Riesz and the group of Polish mathematicians around Stefan Banach. In modern introductory texts on functional analysis, the subject is seen as the study of vector spaces endowed with a topology, in particular infinite-dimensional spaces. In contrast, linear algebra deals mostly with finite-dimensional spaces, and does not use topology. An important part of functional analysis is the extension of the theories of measure, integration, and probability to infinite-dimensional spaces, also known as infinite dimensional analysis. Normed vector spaces The basic and historically first class of spaces studied in functional analysis are complete normed vector spaces over the real or complex numbers. Such spaces are called Banach spaces. An important example is a Hilbert space, where the norm arises from an inner product. These spaces are of fundamental importance in many areas, including the mathematical formulation of quantum mechanics, machine learning, partial differential equations, and Fourier analysis. More generally, functional analysis includes the study of Fréchet spaces and other topological vector spaces not endowed with a norm. An important object of study in functional analysis are the continuous linear operators defined on Banach and Hilbert spaces. These lead naturally to the definition of C*-algebras and other operator algebras. Hilbert spaces Hilbert spaces can be completely classified: there is a unique Hilbert space up to isomorphism for every cardinality of the orthonormal basis. Finite-dimensional Hilbert spaces are fully understood in linear algebra, and infinite-dimensional separable Hilbert spaces are isomorphic to . Separability being important for applications, functional analysis of Hilbert spaces consequently mostly deals with this space. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace. Many special cases of this invariant subspace problem have already been proven. Banach spaces General Banach spaces are more complicated than Hilbert spaces, and cannot be classified in such a simple manner as those. In particular, many Banach spaces lack a notion analogous to an orthonormal basis. 
Examples of Banach spaces are -spaces for any real number Given also a measure on set then sometimes also denoted or has as its vectors equivalence classes of measurable functions whose absolute value's -th power has finite integral; that is, functions for which one has If is the counting measure, then the integral may be replaced by a sum. That is, we require Then it is not necessary to deal with equivalence classes, and the space is denoted written more simply in the case when is the set of non-negative integers. In Banach spaces, a large part of the study involves the dual space: the space of all continuous linear maps from the space into its underlying field, so-called functionals. A Banach space can be canonically identified with a subspace of its bidual, which is the dual of its dual space. The corresponding map is an isometry but in general not onto. A general Banach space and its bidual need not even be isometrically isomorphic in any way, contrary to the finite-dimensional situation. This is explained in the dual space article. Also, the notion of derivative can be extended to arbitrary functions between Banach spaces. See, for instance, the Fréchet derivative article. Linear functional analysis Major and foundational results There are four major theorems which are sometimes called the four pillars of functional analysis: the Hahn–Banach theorem the open mapping theorem the closed graph theorem the uniform boundedness principle, also known as the Banach–Steinhaus theorem. Important results of functional analysis include: Uniform boundedness principle The uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm. The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus but it was also proven independently by Hans Hahn. Spectral theorem There are many theorems known as the spectral theorem, but one in particular has many applications in functional analysis. This is the beginning of the vast research area of functional analysis called operator theory; see also the spectral measure. There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now may be complex-valued. Hahn–Banach theorem The Hahn–Banach theorem is a central tool in functional analysis. It allows the extension of bounded linear functionals defined on a subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting". Open mapping theorem The open mapping theorem, also known as the Banach–Schauder theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result which states that if a continuous linear operator between Banach spaces is surjective then it is an open map. More precisely, The proof uses the Baire category theorem, and completeness of both and is essential to the theorem. The statement of the theorem is no longer true if either space is just assumed to be a normed space, but is true if and are taken to be Fréchet spaces. 
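The ℓp spaces described at the start of this section can be made concrete with a small numerical sketch. A finite vector stands in for a sequence with finitely many nonzero terms; the function name is illustrative, and the observation that the norm does not increase with p is a standard elementary fact.

```python
import numpy as np

def lp_norm(x, p):
    """The l^p norm of a finite vector (counting measure, finitely many nonzero terms)."""
    x = np.abs(np.asarray(x, dtype=float))
    if np.isinf(p):
        return x.max()                     # the sup norm, the p -> infinity limit
    return (x ** p).sum() ** (1.0 / p)

x = [3.0, -4.0, 1.0, 0.5]
for p in (1, 2, 4, np.inf):
    print(p, lp_norm(x, p))
# The printed norms decrease as p grows, approaching the sup norm, here 4.
```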
Closed graph theorem Other topics Foundations of mathematics considerations Most spaces considered in functional analysis have infinite dimension. To show the existence of a vector space basis for such spaces may require Zorn's lemma. However, a somewhat different concept, the Schauder basis, is usually more relevant in functional analysis. Many theorems require the Hahn–Banach theorem, usually proved using the axiom of choice, although the strictly weaker Boolean prime ideal theorem suffices. The Baire category theorem, needed to prove many important theorems, also requires a form of axiom of choice. Points of view Functional analysis includes the following tendencies: Abstract analysis. An approach to analysis based on topological groups, topological rings, and topological vector spaces. Geometry of Banach spaces contains many topics. One is combinatorial approach connected with Jean Bourgain; another is a characterization of Banach spaces in which various forms of the law of large numbers hold. Noncommutative geometry. Developed by Alain Connes, partly building on earlier notions, such as George Mackey's approach to ergodic theory. Connection with quantum mechanics. Either narrowly defined as in mathematical physics, or broadly interpreted by, for example, Israel Gelfand, to include most types of representation theory. See also List of functional analysis topics Spectral theory References Further reading Aliprantis, C.D., Border, K.C.: Infinite Dimensional Analysis: A Hitchhiker's Guide, 3rd ed., Springer 2007, . Online (by subscription) Bachman, G., Narici, L.: Functional analysis, Academic Press, 1966. (reprint Dover Publications) Banach S. Theory of Linear Operations . Volume 38, North-Holland Mathematical Library, 1987, Brezis, H.: Analyse Fonctionnelle, Dunod or Conway, J. B.: A Course in Functional Analysis, 2nd edition, Springer-Verlag, 1994, Dunford, N. and Schwartz, J.T.: Linear Operators, General Theory, John Wiley & Sons, and other 3 volumes, includes visualization charts Edwards, R. E.: Functional Analysis, Theory and Applications, Hold, Rinehart and Winston, 1965. Eidelman, Yuli, Vitali Milman, and Antonis Tsolomitis: Functional Analysis: An Introduction, American Mathematical Society, 2004. Friedman, A.: Foundations of Modern Analysis, Dover Publications, Paperback Edition, July 21, 2010 Giles, J.R.: Introduction to the Analysis of Normed Linear Spaces, Cambridge University Press, 2000 Hirsch F., Lacombe G. - "Elements of Functional Analysis", Springer 1999. Hutson, V., Pym, J.S., Cloud M.J.: Applications of Functional Analysis and Operator Theory, 2nd edition, Elsevier Science, 2005, Kantorovitz, S.,Introduction to Modern Analysis, Oxford University Press, 2003,2nd ed.2006. Kolmogorov, A.N and Fomin, S.V.: Elements of the Theory of Functions and Functional Analysis, Dover Publications, 1999 Kreyszig, E.: Introductory Functional Analysis with Applications, Wiley, 1989. Lax, P.: Functional Analysis, Wiley-Interscience, 2002, Lebedev, L.P. and Vorovich, I.I.: Functional Analysis in Mechanics, Springer-Verlag, 2002 Michel, Anthony N. and Charles J. Herget: Applied Algebra and Functional Analysis, Dover, 1993. Pietsch, Albrecht: History of Banach spaces and linear operators, Birkhäuser Boston Inc., 2007, Reed, M., Simon, B.: "Functional Analysis", Academic Press 1980. Riesz, F. 
and Sz.-Nagy, B.: Functional Analysis, Dover Publications, 1990 Rudin, W.: Functional Analysis, McGraw-Hill Science, 1991 Saxe, Karen: Beginning Functional Analysis, Springer, 2001 Schechter, M.: Principles of Functional Analysis, AMS, 2nd edition, 2001 Shilov, Georgi E.: Elementary Functional Analysis, Dover, 1996. Sobolev, S.L.: Applications of Functional Analysis in Mathematical Physics, AMS, 1963 Vogt, D., Meise, R.: Introduction to Functional Analysis, Oxford University Press, 1997. Yosida, K.: Functional Analysis, Springer-Verlag, 6th edition, 1980 External links Topics in Real and Functional Analysis by Gerald Teschl, University of Vienna. Lecture Notes on Functional Analysis by Yevgeny Vilensky, New York University. Lecture videos on functional analysis by Greg Morrow from University of Colorado Colorado Springs
Functional analysis
[ "Mathematics" ]
2,301
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
11,274
https://en.wikipedia.org/wiki/Elementary%20particle
In particle physics, an elementary particle or fundamental particle is a subatomic particle that is not composed of other particles. The Standard Model presently recognizes seventeen distinct particles—twelve fermions and five bosons. As a consequence of flavor and color combinations and antimatter, the fermions and bosons are known to have 48 and 13 variations, respectively. Among the 61 elementary particles embraced by the Standard Model number: electrons and other leptons, quarks, and the fundamental bosons. Subatomic particles such as protons or neutrons, which contain two or more elementary particles, are known as composite particles. Ordinary matter is composed of atoms, themselves once thought to be indivisible elementary particles. The name atom comes from the Ancient Greek word ἄτομος (atomos) which means indivisible or uncuttable. Despite the theories about atoms that had existed for thousands of years, the factual existence of atoms remained controversial until 1905. In that year, Albert Einstein published his paper on Brownian motion, putting to rest theories that had regarded molecules as mathematical illusions. Einstein subsequently identified matter as ultimately composed of various concentrations of energy. Subatomic constituents of the atom were first identified toward the end of the 19th century, beginning with the electron, followed by the proton in 1919, the photon in the 1920s, and the neutron in 1932. By that time, the advent of quantum mechanics had radically altered the definition of a "particle" by putting forward an understanding in which they carried out a simultaneous existence as matter waves. Many theoretical elaborations upon, and beyond, the Standard Model have been made since its codification in the 1970s. These include notions of supersymmetry, which double the number of elementary particles by hypothesizing that each known particle associates with a "shadow" partner far more massive. However, like an additional elementary boson mediating gravitation, such superpartners remain undiscovered as of 2013. Overview All elementary particles are either bosons or fermions. These classes are distinguished by their quantum statistics: fermions obey Fermi–Dirac statistics and bosons obey Bose–Einstein statistics. Their spin is differentiated via the spin–statistics theorem: it is half-integer for fermions, and integer for bosons. In the Standard Model, elementary particles are represented for predictive utility as point particles. Though extremely successful, the Standard Model is limited by its omission of gravitation and has some parameters arbitrarily added but unexplained. Cosmic abundance of elementary particles According to the current models of Big Bang nucleosynthesis, the primordial composition of visible matter of the universe should be about 75% hydrogen and 25% helium-4 (in mass). Neutrons are made up of one up and two down quarks, while protons are made of two up and one down quark. Since the other common elementary particles (such as electrons, neutrinos, or weak bosons) are so light or so rare when compared to atomic nuclei, we can neglect their mass contribution to the observable universe's total mass. Therefore, one can conclude that most of the visible mass of the universe consists of protons and neutrons, which, like all baryons, in turn consist of up quarks and down quarks. Some estimates imply that there are roughly baryons (almost entirely protons and neutrons) in the observable universe. 
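Before continuing with these abundance estimates, note that the quark content of protons and neutrons just mentioned fixes their electric charges by simple arithmetic, using the standard assignments of +2/3 e for up-type and −1/3 e for down-type quarks (discussed further under "Quarks" below). A tiny sketch:

```python
from fractions import Fraction

# Electric charge in units of e; antiquarks carry the opposite sign.
QUARK_CHARGE = {"u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
                "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3)}

def hadron_charge(quarks, antiquarks=""):
    return (sum(QUARK_CHARGE[q] for q in quarks)
            - sum(QUARK_CHARGE[q] for q in antiquarks))

print(hadron_charge("uud"))                 # proton: +1
print(hadron_charge("udd"))                 # neutron: 0
print(hadron_charge("u", antiquarks="d"))   # positive pion: +1
```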
The number of protons in the observable universe is called the Eddington number. In terms of number of particles, some estimates imply that nearly all the matter, excluding dark matter, occurs in neutrinos, which constitute the majority of the roughly elementary particles of matter that exist in the visible universe. Other estimates imply that roughly elementary particles exist in the visible universe (not including dark matter), mostly photons and other massless force carriers. Standard Model The Standard Model of particle physics contains 12 flavors of elementary fermions, plus their corresponding antiparticles, as well as elementary bosons that mediate the forces and the Higgs boson, which was reported on July 4, 2012, as having been likely detected by the two main experiments at the Large Hadron Collider (ATLAS and CMS). The Standard Model is widely considered to be a provisional theory rather than a truly fundamental one, however, since it is not known if it is compatible with Einstein's general relativity. There may be hypothetical elementary particles not described by the Standard Model, such as the graviton, the particle that would carry the gravitational force, and sparticles, supersymmetric partners of the ordinary particles. Fundamental fermions The 12 fundamental fermions are divided into 3 generations of 4 particles each. Half of the fermions are leptons, three of which have an electric charge of −1 e, called the electron (), the muon (), and the tau (); the other three leptons are neutrinos (, , ), which are the only elementary fermions with neither electric nor color charge. The remaining six particles are quarks (discussed below). Generations Mass The following table lists current measured masses and mass estimates for all the fermions, using the same scale of measure: millions of electron-volts relative to square of light speed (MeV/c2). For example, the most accurately known quark mass is of the top quark () at , estimated using the on-shell scheme. Estimates of the values of quark masses depend on the version of quantum chromodynamics used to describe quark interactions. Quarks are always confined in an envelope of gluons that confer vastly greater mass to the mesons and baryons where quarks occur, so values for quark masses cannot be measured directly. Since their masses are so small compared to the effective mass of the surrounding gluons, slight differences in the calculation make large differences in the masses. Antiparticles There are also 12 fundamental fermionic antiparticles that correspond to these 12 particles. For example, the antielectron (positron) is the electron's antiparticle and has an electric charge of +1 e. Quarks Isolated quarks and antiquarks have never been detected, a fact explained by confinement. Every quark carries one of three color charges of the strong interaction; antiquarks similarly carry anticolor. Color-charged particles interact via gluon exchange in the same way that charged particles interact via photon exchange. Gluons are themselves color-charged, however, resulting in an amplification of the strong force as color-charged particles are separated. Unlike the electromagnetic force, which diminishes as charged particles separate, color-charged particles feel increasing force. Nonetheless, color-charged particles may combine to form color neutral composite particles called hadrons. A quark may pair up with an antiquark: the quark has a color and the antiquark has the corresponding anticolor. 
The color and anticolor cancel out, forming a color neutral meson. Alternatively, three quarks can exist together, one quark being "red", another "blue", another "green". These three colored quarks together form a color-neutral baryon. Symmetrically, three antiquarks with the colors "antired", "antiblue" and "antigreen" can form a color-neutral antibaryon. Quarks also carry fractional electric charges, but, since they are confined within hadrons whose charges are all integral, fractional charges have never been isolated. Note that quarks have electric charges of either  e or  e, whereas antiquarks have corresponding electric charges of either  e or  e. Evidence for the existence of quarks comes from deep inelastic scattering: firing electrons at nuclei to determine the distribution of charge within nucleons (which are baryons). If the charge is uniform, the electric field around the proton should be uniform and the electron should scatter elastically. Low-energy electrons do scatter in this way, but, above a particular energy, the protons deflect some electrons through large angles. The recoiling electron has much less energy and a jet of particles is emitted. This inelastic scattering suggests that the charge in the proton is not uniform but split among smaller charged particles: quarks. Fundamental bosons In the Standard Model, vector (spin-1) bosons (gluons, photons, and the W and Z bosons) mediate forces, whereas the Higgs boson (spin-0) is responsible for the intrinsic mass of particles. Bosons differ from fermions in the fact that multiple bosons can occupy the same quantum state (Pauli exclusion principle). Also, bosons can be either elementary, like photons, or a combination, like mesons. The spin of bosons are integers instead of half integers. Gluons Gluons mediate the strong interaction, which join quarks and thereby form hadrons, which are either baryons (three quarks) or mesons (one quark and one antiquark). Protons and neutrons are baryons, joined by gluons to form the atomic nucleus. Like quarks, gluons exhibit color and anticolor – unrelated to the concept of visual color and rather the particles' strong interactions – sometimes in combinations, altogether eight variations of gluons. Electroweak bosons There are three weak gauge bosons: W+, W−, and Z0; these mediate the weak interaction. The W bosons are known for their mediation in nuclear decay: The W− converts a neutron into a proton then decays into an electron and electron-antineutrino pair. The Z0 does not convert particle flavor or charges, but rather changes momentum; it is the only mechanism for elastically scattering neutrinos. The weak gauge bosons were discovered due to momentum change in electrons from neutrino-Z exchange. The massless photon mediates the electromagnetic interaction. These four gauge bosons form the electroweak interaction among elementary particles. Higgs boson Although the weak and electromagnetic forces appear quite different to us at everyday energies, the two forces are theorized to unify as a single electroweak force at high energies. This prediction was clearly confirmed by measurements of cross-sections for high-energy electron-proton scattering at the HERA collider at DESY. The differences at low energies is a consequence of the high masses of the W and Z bosons, which in turn are a consequence of the Higgs mechanism. 
Through the process of spontaneous symmetry breaking, the Higgs selects a special direction in electroweak space that causes three electroweak particles to become very heavy (the weak bosons) and one to remain with an undefined rest mass as it is always in motion (the photon). On 4 July 2012, after many years of experimentally searching for evidence of its existence, the Higgs boson was announced to have been observed at CERN's Large Hadron Collider. Peter Higgs who first posited the existence of the Higgs boson was present at the announcement. The Higgs boson is believed to have a mass of approximately . The statistical significance of this discovery was reported as 5 sigma, which implies a certainty of roughly 99.99994%. In particle physics, this is the level of significance required to officially label experimental observations as a discovery. Research into the properties of the newly discovered particle continues. Graviton The graviton is a hypothetical elementary spin-2 particle proposed to mediate gravitation. While it remains undiscovered due to the difficulty inherent in its detection, it is sometimes included in tables of elementary particles. The conventional graviton is massless, although some models containing massive Kaluza–Klein gravitons exist. Beyond the Standard Model Although experimental evidence overwhelmingly confirms the predictions derived from the Standard Model, some of its parameters were added arbitrarily, not determined by a particular explanation, which remain mysterious, for instance the hierarchy problem. Theories beyond the Standard Model attempt to resolve these shortcomings. Grand unification One extension of the Standard Model attempts to combine the electroweak interaction with the strong interaction into a single 'grand unified theory' (GUT). Such a force would be spontaneously broken into the three forces by a Higgs-like mechanism. This breakdown is theorized to occur at high energies, making it difficult to observe unification in a laboratory. The most dramatic prediction of grand unification is the existence of X and Y bosons, which cause proton decay. The non-observation of proton decay at the Super-Kamiokande neutrino observatory rules out the simplest GUTs, however, including SU(5) and SO(10). Supersymmetry Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos, and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders would not be powerful enough to produce them. Some physicists believe that sparticles will be detected by the Large Hadron Collider at CERN. String theory String theory is a model of physics whereby all "particles" that make up matter are composed of strings (measuring at the Planck length) that exist in an 11-dimensional (according to M-theory, the leading version) or 12-dimensional (according to F-theory) universe. These strings vibrate at different frequencies that determine mass, electric charge, color charge, and spin. A "string" can be open (a line) or closed in a loop (a one-dimensional sphere, that is, a circle). 
As a string moves through space it sweeps out something called a world sheet. String theory predicts 1- to 10-branes (a 1-brane being a string and a 10-brane being a 10-dimensional object) that prevent tears in the "fabric" of space using the uncertainty principle (e.g., the electron orbiting a hydrogen atom has the probability, albeit small, that it could be anywhere else in the universe at any given moment). String theory proposes that our universe is merely a 4-brane, inside which exist the three space dimensions and the one time dimension that we observe. The remaining 7 theoretical dimensions either are very tiny and curled up (and too small to be macroscopically accessible) or simply do not/cannot exist in our universe (because they exist in a grander scheme called the "multiverse" outside our known universe). Some predictions of the string theory include existence of extremely massive counterparts of ordinary particles due to vibrational excitations of the fundamental string and existence of a massless spin-2 particle behaving like the graviton. Technicolor Technicolor theories try to modify the Standard Model in a minimal way by introducing a new QCD-like interaction. This means one adds a new theory of so-called Techniquarks, interacting via so called Technigluons. The main idea is that the Higgs boson is not an elementary particle but a bound state of these objects. Preon theory According to preon theory there are one or more orders of particles more fundamental than those (or most of those) found in the Standard Model. The most fundamental of these are normally called preons, which is derived from "pre-quarks". In essence, preon theory tries to do for the Standard Model what the Standard Model did for the particle zoo that came before it. Most models assume that almost everything in the Standard Model can be explained in terms of three to six more fundamental particles and the rules that govern their interactions. Interest in preons has waned since the simplest models were experimentally ruled out in the 1980s. Acceleron theory Accelerons are the hypothetical subatomic particles that integrally link the newfound mass of the neutrino to the dark energy conjectured to be accelerating the expansion of the universe. In this theory, neutrinos are influenced by a new force resulting from their interactions with accelerons, leading to dark energy. Dark energy results as the universe tries to pull neutrinos apart. Accelerons are thought to interact with matter more infrequently than they do with neutrinos. See also Asymptotic freedom List of particles Physical ontology Quantum field theory Quantum gravity Quantum triviality UV fixed point Notes Further reading General readers Textbooks An undergraduate text for those not majoring in physics. External links The most important address about the current experimental and theoretical knowledge about elementary particle physics is the Particle Data Group, where different international institutions collect all experimental data and give short reviews over the contemporary theoretical understanding. other pages are: particleadventure.org, a well-made introduction also for non physicists CERNCourier: Season of Higgs and melodrama Interactions.org, particle physics news Symmetry Magazine, a joint Fermilab/SLAC publication Elementary Particles made thinkable, an interactive visualisation allowing physical properties to be compared Quantum mechanics Quantum field theory Subatomic particles
Elementary particle
[ "Physics" ]
3,663
[ "Quantum field theory", "Matter", "Elementary particles", "Theoretical physics", "Quantum mechanics", "Particle physics", "Nuclear physics", "Atoms", "Subatomic particles" ]
11,344
https://en.wikipedia.org/wiki/First-order%20predicate
In mathematical logic, a first-order predicate is a predicate that takes only individual constants or variables as its arguments. Compare second-order predicate and higher-order predicate. This is not to be confused with a one-place predicate or monad, which is a predicate that takes only one argument. For example, the expression "is a planet" is a one-place predicate, while the expression "is father of" is a two-place predicate. See also First-order predicate calculus Monadic predicate calculus References Predicate logic Concepts in logic
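As an informal illustration, a first-order predicate can be modelled as a function of individuals, while a second-order predicate takes a predicate as its argument. The sketch below uses a toy domain; all names and membership facts are made up for the example.

```python
DOMAIN = {"Earth", "Mars", "Sun", "Alice", "Bob"}

# One-place (monadic) first-order predicate: one individual argument.
def is_a_planet(x):
    return x in {"Earth", "Mars"}

# Two-place first-order predicate: two individual arguments.
def is_father_of(x, y):
    return (x, y) == ("Bob", "Alice")

# A second-order predicate, by contrast, takes a predicate as its argument.
def holds_of_some_individual(predicate):
    return any(predicate(x) for x in DOMAIN)

print(is_a_planet("Mars"))                    # True
print(is_father_of("Bob", "Alice"))           # True
print(holds_of_some_individual(is_a_planet))  # True
```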
First-order predicate
[ "Mathematics" ]
128
[ "Mathematical logic", "Predicate logic", "Basic concepts in set theory" ]
11,350
https://en.wikipedia.org/wiki/Firewall%20%28construction%29
A firewall is a fire-resistant barrier used to prevent the spread of fire. Firewalls are built between or through buildings, structures, or electrical substation transformers, or within an aircraft or vehicle. Applications Firewalls can be used to subdivide a building into separate fire areas and are constructed in accordance with the locally applicable building codes. Firewalls are a portion of a building's passive fire protection systems. Firewalls can be used to separate high-value transformers at an electrical substation in the event of a mineral oil tank rupture and ignition. The firewall serves as a fire containment wall between one oil-filled transformer and other neighboring transformers, building structures, and site equipment. Types There are three main classifications of fire-rated walls: fire walls, fire barriers, and fire partitions. A firewall is an assembly of materials used to delay the spread of fire: a wall assembly with a prescribed fire-resistance duration and independent structural stability. This allows a building to be subdivided into smaller sections. If a section becomes structurally unstable due to fire or other causes, that section can break or fall away from the other sections in the building. A fire barrier wall, or a fire partition, is a fire-rated wall assembly that is not structurally self-sufficient. Fire barrier walls are typically continuous from an exterior wall to an exterior wall, or from a floor below to a floor or roof above, or from one fire barrier wall to another fire barrier wall, having a fire resistance rating equal to or greater than the required rating for the application. Fire barriers are continuous through concealed spaces (e.g., above a ceiling) to the floor deck or roof deck above the barrier. Fire partitions are not required to extend through concealed spaces if the construction assembly forming the bottom of the concealed space, such as the ceiling, has a fire resistance rating at least equal to or greater than the fire partition. A high challenge fire wall is a wall used to subdivide a building with high fire challenge occupancies, having enhanced fire resistance ratings and enhanced appurtenance protection to prevent the spread of fire, and having structural stability. Portions of structures that are subdivided by fire walls are permitted to be considered separate buildings, in that fire walls have sufficient structural stability to maintain the integrity of the wall in the event of the collapse of the building construction on either side of the wall. Characteristics Fire rating - Fire walls are constructed in such a way as to achieve a code-determined fire-resistance rating, thus forming part of a fire compartment's passive fire protection. Germany includes repeated impact force testing upon new fire wall systems; other codes require impact resistance on a performance basis. Design loads – Fire walls must withstand a minimum code-specified load, and additional seismic loads. Substation transformer firewalls are typically free-standing modular walls custom designed and engineered to meet application needs. Building fire walls typically extend through the roof and terminate at a code-determined height above it. They are usually finished off on the top with flashing (sheet metal cap) for protection against the elements. Materials Building and structural fire walls in North America are usually made of concrete, concrete blocks, or reinforced concrete. Older fire walls, built prior to World War II, used brick materials.
Fire barrier walls are typically constructed of drywall or gypsum board partitions with wood or metal framed studs. Penetrations – Penetrations through fire walls, such as for pipes and cables, must be protected with a listed firestop assembly designed to prevent the spread of fire through wall penetrations. Penetrations (holes) must not defeat the structural integrity of the wall, such that the wall cannot withstand the prescribed fire duration without threat of collapse. Openings – Other openings in fire walls, such as doors and windows, must also be fire-rated fire door assemblies and fire window assemblies. Performance based design Firewalls are used in varied applications that require specific design and performance specifications. Knowing the potential conditions that may exist during a fire is critical to selecting and installing an effective firewall. For example, a firewall designed to meet National Fire Protection Association (NFPA) 221-09, section A.5.7, which indicates a specific average temperature, is not designed to withstand higher temperatures such as would be present in higher challenge fires, and as a result would fail to function for the expected duration of the listed wall rating. Performance based design takes into account the potential conditions during a fire. Understanding thermal limitations of materials is essential to using the correct material for the application. Laboratory testing is used to simulate fire scenarios and wall loading conditions. The testing results in an assigned listing number for the fire-rated assembly that defines the expected fire resistance duration and wall structural integrity under the tested conditions. Designers may elect to specify a listed fire wall assembly or design a wall system that would require performance testing to certify the expected protections before use of the designed fire-rated wall system. High-voltage transformer fire barriers Fire barriers are used around large electrical transformers as firewalls. These barriers are used to isolate one transformer in case of fire or explosions, preventing fire propagation to neighboring transformers. See also Firebreak (forestry) Fireproofing Firestop (construction) Firewall (engine) Listing and approval use and compliance High-voltage transformer fire barriers Notes External links FAA Regulation about firewalls in aircraft Firefighting Passive fire protection Types of wall
Firewall (construction)
[ "Engineering" ]
1,099
[ "Structural engineering", "Types of wall" ]
11,490
https://en.wikipedia.org/wiki/Fundamental%20frequency
The fundamental frequency, often referred to simply as the fundamental (abbreviated as f0 or f1), is defined as the lowest frequency of a periodic waveform. In music, the fundamental is the musical pitch of a note that is perceived as the lowest partial present. In terms of a superposition of sinusoids, the fundamental frequency is the lowest-frequency sinusoid in the sum of harmonically related frequencies, or the frequency of the difference between adjacent frequencies. In some contexts, the fundamental is usually abbreviated as f0, indicating the lowest frequency counting from zero. In other contexts, it is more common to abbreviate it as f1, the first harmonic. (The second harmonic is then f2 = 2⋅f1, etc. In this context, the zeroth harmonic would be 0 Hz.) According to Benward and Saker's Music: In Theory and Practice: Explanation All sinusoidal and many non-sinusoidal waveforms repeat exactly over time – they are periodic. The period T of a waveform is the smallest positive value for which the following is true: x(t + T) = x(t) for all t, where x(t) is the value of the waveform at time t. This means that the waveform's values over any interval of length T are all that is required to describe the waveform completely (for example, by the associated Fourier series). Since any multiple of the period T also satisfies this definition, the fundamental period is defined as the smallest period over which the function may be described completely. The fundamental frequency is defined as its reciprocal: f0 = 1/T. When the units of time are seconds, the frequency is in units of reciprocal seconds (1/s), also known as hertz (Hz). Fundamental frequency of a pipe For a pipe of length L with one end closed and the other end open, the wavelength of the fundamental harmonic is 4L. Hence, λ = 4L. Therefore, using the relation v = fλ, where v is the speed of the wave, the fundamental frequency can be found in terms of the speed of the wave and the length of the pipe: f0 = v/(4L). If the ends of the same pipe are now both closed or both opened, the wavelength of the fundamental harmonic becomes 2L. By the same method as above, the fundamental frequency is found to be f0 = v/(2L). In music In music, the fundamental is the musical pitch of a note that is perceived as the lowest partial present. The fundamental may be created by vibration over the full length of a string or air column, or a higher harmonic chosen by the player. The fundamental is one of the harmonics. A harmonic is any member of the harmonic series, an ideal set of frequencies that are positive integer multiples of a common fundamental frequency. The reason a fundamental is also considered a harmonic is because it is 1 times itself. The fundamental is the frequency at which the entire wave vibrates. Overtones are other sinusoidal components present at frequencies above the fundamental. All of the frequency components that make up the total waveform, including the fundamental and the overtones, are called partials. Together they form the harmonic series. Overtones which are perfect integer multiples of the fundamental are called harmonics. When an overtone is near to being harmonic, but not exact, it is sometimes called a harmonic partial, although they are often referred to simply as harmonics. Sometimes overtones are created that are not anywhere near a harmonic, and are just called partials or inharmonic overtones. The fundamental frequency is considered the first harmonic and the first partial. The numbering of the partials and harmonics is then usually the same; the second partial is the second harmonic, etc.
But if there are inharmonic partials, the numbering no longer coincides. Overtones are numbered as they appear above the fundamental. So strictly speaking, the first overtone is the second partial (and usually the second harmonic). As this can result in confusion, only harmonics are usually referred to by their numbers, and overtones and partials are described by their relationships to those harmonics. Mechanical systems Consider a spring, fixed at one end and having a mass attached to the other; this would be a single degree of freedom (SDoF) oscillator. Once set into motion, it will oscillate at its natural frequency. For a single degree of freedom oscillator, a system in which the motion can be described by a single coordinate, the natural frequency depends on two system properties: mass and stiffness (providing the system is undamped). The natural frequency, or fundamental frequency, ω0, can be found using the following equation: ω0 = √(k/m), where: k = stiffness of the spring, m = mass, ω0 = natural frequency in radians per second. To determine the natural frequency in Hz, the omega value is divided by 2π: f0 = (1/(2π))·√(k/m), where: f0 = natural frequency (SI unit: hertz), k = stiffness of the spring (SI unit: newtons/metre or N/m), m = mass (SI unit: kg). While doing a modal analysis, the frequency of the 1st mode is the fundamental frequency. For an ideal stretched string, the fundamental frequency is expressed as: f0 = (1/(2L))·√(T/μ), where: f0 = natural frequency (SI unit: hertz), L = length of the string (SI unit: metre), μ = mass per unit length of the string (SI unit: kg/m), T = tension on the string (SI unit: newton). See also Greatest common divisor Hertz Missing fundamental Natural frequency Oscillation Harmonic series (music)#Terminology Pitch detection algorithm Scale of harmonics References Musical tuning Acoustics Fourier analysis Spectrum (physical sciences)
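As a quick numerical illustration of the formulas above, the following sketch evaluates the pipe, spring-mass, and string expressions. The numerical inputs (pipe length, stiffness, tension, and a speed of sound of roughly 343 m/s in air at room temperature) are arbitrary example values chosen only for illustration.

```python
import math

def pipe_fundamental(speed, length, closed_one_end=True):
    """Fundamental frequency of a pipe: f0 = v / (4L) with one end closed
    and the other open, f0 = v / (2L) if both ends are alike."""
    return speed / (4 * length) if closed_one_end else speed / (2 * length)

def spring_mass_fundamental(stiffness, mass):
    """Natural frequency of an undamped spring-mass oscillator in hertz:
    f0 = (1 / (2*pi)) * sqrt(k / m)."""
    return math.sqrt(stiffness / mass) / (2 * math.pi)

def string_fundamental(length, tension, mass_per_length):
    """Fundamental of an ideal stretched string: f0 = (1 / (2L)) * sqrt(T / mu)."""
    return math.sqrt(tension / mass_per_length) / (2 * length)

# Illustrative values only.
print(pipe_fundamental(343.0, 0.5, closed_one_end=True))    # ~171.5 Hz
print(pipe_fundamental(343.0, 0.5, closed_one_end=False))   # ~343 Hz
print(spring_mass_fundamental(stiffness=200.0, mass=0.5))   # ~3.18 Hz
print(string_fundamental(0.65, 60.0, 0.0006))               # ~243 Hz
```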
Fundamental frequency
[ "Physics" ]
1,110
[ "Physical phenomena", "Spectrum (physical sciences)", "Classical mechanics", "Acoustics", "Waves" ]
11,529
https://en.wikipedia.org/wiki/Fermion
In particle physics, a fermion is a subatomic particle that follows Fermi–Dirac statistics. Fermions have a half-odd-integer spin (spin 1/2, spin 3/2, etc.) and obey the Pauli exclusion principle. These particles include all quarks and leptons and all composite particles made of an odd number of these, such as all baryons and many atoms and nuclei. Fermions differ from bosons, which obey Bose–Einstein statistics. Some fermions are elementary particles (such as electrons), and some are composite particles (such as protons). According to the spin-statistics theorem in relativistic quantum field theory, particles with integer spin are bosons, while particles with half-integer spin are fermions. In addition to the spin characteristic, fermions have another specific property: they possess conserved baryon or lepton quantum numbers. Therefore, what is usually referred to as the spin-statistics relation is, in fact, a spin-statistics-quantum-number relation. As a consequence of the Pauli exclusion principle, only one fermion can occupy a particular quantum state at a given time. If multiple fermions have the same spatial probability distribution, then at least one property of each fermion, such as its spin, must be different. Fermions are usually associated with matter, whereas bosons are generally force carrier particles. However, in the current state of particle physics, the distinction between the two concepts is unclear. Weakly interacting fermions can also display bosonic behavior under extreme conditions. For example, at low temperatures, fermions show superfluidity for uncharged particles and superconductivity for charged particles. Composite fermions, such as protons and neutrons, are the key building blocks of everyday matter. English theoretical physicist Paul Dirac coined the name fermion from the surname of Italian physicist Enrico Fermi. Elementary fermions The Standard Model recognizes two types of elementary fermions: quarks and leptons. In all, the model distinguishes 24 different fermions. There are six quarks (up, down, strange, charm, bottom and top), and six leptons (electron, electron neutrino, muon, muon neutrino, tauon and tauon neutrino), along with the corresponding antiparticle of each of these. Mathematically, there are many varieties of fermions, with the three most common types being: Weyl fermions (massless), Dirac fermions (massive), and Majorana fermions (each its own antiparticle). Most Standard Model fermions are believed to be Dirac fermions, although it is unknown at this time whether the neutrinos are Dirac or Majorana fermions (or both). Dirac fermions can be treated as a combination of two Weyl fermions. In July 2015, Weyl fermions were experimentally realized in Weyl semimetals. Composite fermions Composite particles (such as hadrons, nuclei, and atoms) can be bosons or fermions depending on their constituents. More precisely, because of the relation between spin and statistics, a particle containing an odd number of fermions is itself a fermion. It will have half-integer spin. Examples include the following: A baryon, such as the proton or neutron, contains three fermionic quarks. The nucleus of a carbon-13 atom contains six protons and seven neutrons. The atom helium-3 (3He) consists of two protons, one neutron, and two electrons. The deuterium atom consists of one proton, one neutron, and one electron.
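The odd-versus-even counting rule behind these examples is simple enough to make explicit. The sketch below only counts constituent fermions (protons, neutrons, electrons) for the examples listed above; the helium-4 entry is an added illustration for contrast, not taken from the text.

```python
def is_fermion(n_constituent_fermions):
    """A composite particle is a fermion exactly when it contains an odd
    number of fermionic constituents (quarks, protons, neutrons, electrons, ...)."""
    return n_constituent_fermions % 2 == 1

# Counting protons + neutrons + electrons for each example:
print(is_fermion(6 + 7))        # carbon-13 nucleus: 13 nucleons      -> True  (fermion)
print(is_fermion(2 + 1 + 2))    # helium-3 atom: 5 fermions           -> True  (fermion)
print(is_fermion(1 + 1 + 1))    # deuterium atom: 3 fermions          -> True  (fermion)
print(is_fermion(2 + 2 + 2))    # helium-4 atom (for contrast): 6     -> False (boson)
```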
The number of bosons within a composite particle made up of simple particles bound with a potential has no effect on whether it is a boson or a fermion. Fermionic or bosonic behavior of a composite particle (or system) is only seen at large (compared to size of the system) distances. At proximity, where spatial structure begins to be important, a composite particle (or system) behaves according to its constituent makeup. Fermions can exhibit bosonic behavior when they become loosely bound in pairs. This is the origin of superconductivity and the superfluidity of helium-3: in superconducting materials, electrons interact through the exchange of phonons, forming Cooper pairs, while in helium-3, Cooper pairs are formed via spin fluctuations. The quasiparticles of the fractional quantum Hall effect are also known as composite fermions; they consist of electrons with an even number of quantized vortices attached to them. See also Anyon, 2D quasiparticles Chirality (physics), left-handed and right-handed Fermionic condensate Weyl semimetal Fermionic field Identical particles Kogut–Susskind fermion, a type of lattice fermion Majorana fermion, each its own antiparticle Parastatistics Skyrmion, a hypothetical particle Notes External links Quantum field theory Enrico Fermi
Fermion
[ "Physics", "Materials_science" ]
1,083
[ "Quantum field theory", "Fermions", "Quantum mechanics", "Subatomic particles", "Condensed matter physics", "Matter" ]
11,617
https://en.wikipedia.org/wiki/Feynman%20diagram
In theoretical physics, a Feynman diagram is a pictorial representation of the mathematical expressions describing the behavior and interaction of subatomic particles. The scheme is named after American physicist Richard Feynman, who introduced the diagrams in 1948. The calculation of probability amplitudes in theoretical particle physics requires the use of large, complicated integrals over a large number of variables. Feynman diagrams instead represent these integrals graphically. Feynman diagrams give a simple visualization of what would otherwise be an arcane and abstract formula. According to David Kaiser, "Since the middle of the 20th century, theoretical physicists have increasingly turned to this tool to help them undertake critical calculations. Feynman diagrams have revolutionized nearly every aspect of theoretical physics." While the diagrams apply primarily to quantum field theory, they can be used in other areas of physics, such as solid-state theory. Frank Wilczek wrote that the calculations that won him the 2004 Nobel Prize in Physics "would have been literally unthinkable without Feynman diagrams, as would [Wilczek's] calculations that established a route to production and observation of the Higgs particle." A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude or correlation function of a quantum mechanical or statistical field theory. Within the canonical formulation of quantum field theory, a Feynman diagram represents a term in the Wick's expansion of the perturbative -matrix. Alternatively, the path integral formulation of quantum field theory represents the transition amplitude as a weighted sum of all possible histories of the system from the initial to the final state, in terms of either particles or fields. The transition amplitude is then given as the matrix element of the -matrix between the initial and final states of the quantum system. Feynman used Ernst Stueckelberg's interpretation of the positron as if it were an electron moving backward in time. Thus, antiparticles are represented as moving backward along the time axis in Feynman diagrams. Motivation and history When calculating scattering cross-sections in particle physics, the interaction between particles can be described by starting from a free field that describes the incoming and outgoing particles, and including an interaction Hamiltonian to describe how the particles deflect one another. The amplitude for scattering is the sum of each possible interaction history over all possible intermediate particle states. The number of times the interaction Hamiltonian acts is the order of the perturbation expansion, and the time-dependent perturbation theory for fields is known as the Dyson series. When the intermediate states at intermediate times are energy eigenstates (collections of particles with a definite momentum) the series is called old-fashioned perturbation theory (or time-dependent/time-ordered perturbation theory). The Dyson series can be alternatively rewritten as a sum over Feynman diagrams, where at each vertex both the energy and momentum are conserved, but where the length of the energy-momentum four-vector is not necessarily equal to the mass, i.e. the intermediate particles are so-called off-shell. The Feynman diagrams are much easier to keep track of than "old-fashioned" terms, because the old-fashioned way treats the particle and antiparticle contributions as separate. 
Each Feynman diagram is the sum of exponentially many old-fashioned terms, because each internal line can separately represent either a particle or an antiparticle. In a non-relativistic theory, there are no antiparticles and there is no doubling, so each Feynman diagram includes only one term. Feynman gave a prescription for calculating the amplitude (the Feynman rules, below) for any given diagram from a field theory Lagrangian. Each internal line corresponds to a factor of the virtual particle's propagator; each vertex where lines meet gives a factor derived from an interaction term in the Lagrangian, and incoming and outgoing lines carry an energy, momentum, and spin. In addition to their value as a mathematical tool, Feynman diagrams provide deep physical insight into the nature of particle interactions. Particles interact in every way available; in fact, intermediate virtual particles are allowed to propagate faster than light. The probability of each final state is then obtained by summing over all such possibilities. This is closely tied to the functional integral formulation of quantum mechanics, also invented by Feynman—see path integral formulation. The naïve application of such calculations often produces diagrams whose amplitudes are infinite, because the short-distance particle interactions require a careful limiting procedure, to include particle self-interactions. The technique of renormalization, suggested by Ernst Stueckelberg and Hans Bethe and implemented by Dyson, Feynman, Schwinger, and Tomonaga compensates for this effect and eliminates the troublesome infinities. After renormalization, calculations using Feynman diagrams match experimental results with very high accuracy. Feynman diagram and path integral methods are also used in statistical mechanics and can even be applied to classical mechanics. Alternate names Murray Gell-Mann always referred to Feynman diagrams as Stueckelberg diagrams, after Swiss physicist Ernst Stueckelberg, who devised a similar notation many years earlier. Stueckelberg was motivated by the need for a manifestly covariant formalism for quantum field theory, but did not provide as automated a way to handle symmetry factors and loops, although he was first to find the correct physical interpretation in terms of forward and backward in time particle paths, all without the path-integral. Historically, as a book-keeping device of covariant perturbation theory, the graphs were called Feynman–Dyson diagrams or Dyson graphs, because the path integral was unfamiliar when they were introduced, and Freeman Dyson's derivation from old-fashioned perturbation theory borrowed from the perturbative expansions in statistical mechanics was easier to follow for physicists trained in earlier methods. Feynman had to lobby hard for the diagrams, which confused physicists trained in equations and graphs. Representation of physical reality In their presentations of fundamental interactions, written from the particle physics perspective, Gerard 't Hooft and Martinus Veltman gave good arguments for taking the original, non-regularized Feynman diagrams as the most succinct representation of the physics of quantum scattering of fundamental particles. Their motivations are consistent with the convictions of James Daniel Bjorken and Sidney Drell: The Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand. 
Although the statement of the theory in terms of graphs may imply perturbation theory, use of graphical methods in the many-body problem shows that this formalism is flexible enough to deal with phenomena of nonperturbative characters ... Some modification of the Feynman rules of calculation may well outlive the elaborate mathematical structure of local canonical quantum field theory ... In quantum field theories, Feynman diagrams are obtained from a Lagrangian by Feynman rules. Dimensional regularization is a method for regularizing integrals in the evaluation of Feynman diagrams; it assigns values to them that are meromorphic functions of an auxiliary complex parameter , called the dimension. Dimensional regularization writes a Feynman integral as an integral depending on the spacetime dimension and spacetime points. Particle-path interpretation A Feynman diagram is a representation of quantum field theory processes in terms of particle interactions. The particles are represented by the diagram lines. The lines can be squiggly or straight, with an arrow or without, depending on the type of particle. A point where lines connect to other lines is a vertex, and this is where the particles meet and interact. The interactions are: emit/absorb particles, deflect particles, or change particle type. The three different types of lines are: internal lines, connecting vertices, incoming lines, extending from "the past" to a vertex, representing an initial state, and outgoing lines, extending from a vertex to "the future", representing the end state (the latter two are also known as external lines). Traditionally, the bottom of the diagram is the past and the top the future; alternatively, the past is to the left and the future to the right. When calculating correlation functions instead of scattering amplitudes, past and future are not relevant and all lines are internal. The particles then begin and end on small x's, which represent the positions of the operators whose correlation is calculated. Feynman diagrams are a pictorial representation of a contribution to the total amplitude for a process that can happen in different ways. When a group of incoming particles scatter off each other, the process can be thought of as one where the particles travel over all possible paths, including paths that go backward in time. Feynman diagrams are graphs that represent the interaction of particles rather than the physical position of the particle during a scattering process. They are not the same as spacetime diagrams and bubble chamber images even though they all describe particle scattering. Unlike a bubble chamber picture, only the sum of all relevant Feynman diagrams represents any given particle interaction; particles do not choose a particular diagram each time they interact. The law of summation is in accord with the principle of superposition—every diagram contributes to the total process's amplitude. Description A Feynman diagram represents a perturbative contribution to the amplitude of a quantum transition from some initial quantum state to some final quantum state. For example, in the process of electron-positron annihilation the initial state is one electron and one positron, while the final state is two photons. Conventionally, the initial state is at the left of the diagram and the final state at the right (although other layouts are also used). The particles in the initial state are depicted by lines pointing in the direction of the initial state (e.g., to the left).
The particles in the final state are represented by lines pointing in the direction of the final state (e.g., to the right). QED involves two types of particles: matter particles such as electrons or positrons (called fermions) and exchange particles (called gauge bosons). They are represented in Feynman diagrams as follows: Electron in the initial state is represented by a solid line, with an arrow indicating the spin of the particle e.g. pointing toward the vertex (→•). Electron in the final state is represented by a line, with an arrow indicating the spin of the particle e.g. pointing away from the vertex: (•→). Positron in the initial state is represented by a solid line, with an arrow indicating the spin of the particle e.g. pointing away from the vertex: (←•). Positron in the final state is represented by a line, with an arrow indicating the spin of the particle e.g. pointing toward the vertex: (•←). Virtual Photon in the initial and the final states is represented by a wavy line (~• and •~). In QED each vertex has three lines attached to it: one bosonic line, one fermionic line with arrow toward the vertex, and one fermionic line with arrow away from the vertex. Vertices can be connected by a bosonic or fermionic propagator. A bosonic propagator is represented by a wavy line connecting two vertices (•~•). A fermionic propagator is represented by a solid line with an arrow connecting two vertices, (•←•). The number of vertices gives the order of the term in the perturbation series expansion of the transition amplitude. Electron–positron annihilation example The electron–positron annihilation interaction: e+ + e− → 2γ has a contribution from the second order Feynman diagram: In the initial state (at the bottom; early time) there is one electron (e−) and one positron (e+) and in the final state (at the top; late time) there are two photons (γ). Canonical quantization formulation The probability amplitude for a transition of a quantum system (between asymptotically free states) from the initial state to the final state is given by the matrix element where is the -matrix. In terms of the time-evolution operator , it is simply In the interaction picture, this expands to where is the interaction Hamiltonian and signifies the time-ordered product of operators. Dyson's formula expands the time-ordered matrix exponential into a perturbation series in the powers of the interaction Hamiltonian density, Equivalently, with the interaction Lagrangian , it is A Feynman diagram is a graphical representation of a single summand in the Wick's expansion of the time-ordered product in the th-order term of the Dyson series of the -matrix, where signifies the normal-ordered product of the operators and (±) takes care of the possible sign change when commuting the fermionic operators to bring them together for a contraction (a propagator) and represents all possible contractions. Feynman rules The diagrams are drawn according to the Feynman rules, which depend upon the interaction Lagrangian. 
For the QED interaction Lagrangian describing the interaction of a fermionic field with a bosonic gauge field , the Feynman rules can be formulated in coordinate space as follows: Each integration coordinate is represented by a point (sometimes called a vertex); A bosonic propagator is represented by a wiggly line connecting two points; A fermionic propagator is represented by a solid line connecting two points; A bosonic field is represented by a wiggly line attached to the point ; A fermionic field is represented by a solid line attached to the point with an arrow toward the point; An anti-fermionic field is represented by a solid line attached to the point with an arrow away from the point; Example: second order processes in QED The second order perturbation term in the -matrix is Scattering of fermions The Wick's expansion of the integrand gives (among others) the following term where is the electromagnetic contraction (propagator) in the Feynman gauge. This term is represented by the Feynman diagram at the right. This diagram gives contributions to the following processes: e− e− scattering (initial state at the right, final state at the left of the diagram); e+ e+ scattering (initial state at the left, final state at the right of the diagram); e− e+ scattering (initial state at the bottom/top, final state at the top/bottom of the diagram). Compton scattering and annihilation/generation of e− e+ pairs Another interesting term in the expansion is where is the fermionic contraction (propagator). Path integral formulation In a path integral, the field Lagrangian, integrated over all possible field histories, defines the probability amplitude to go from one field configuration to another. In order to make sense, the field theory must have a well-defined ground state, and the integral must be performed a little bit rotated into imaginary time, i.e. a Wick rotation. The path integral formalism is completely equivalent to the canonical operator formalism above. Scalar field Lagrangian A simple example is the free relativistic scalar field in dimensions, whose action integral is: The probability amplitude for a process is: where and are space-like hypersurfaces that define the boundary conditions. The collection of all the on the starting hypersurface give the field's initial value, analogous to the starting position for a point particle, and the field values at each point of the final hypersurface defines the final field value, which is allowed to vary, giving a different amplitude to end up at different values. This is the field-to-field transition amplitude. The path integral gives the expectation value of operators between the initial and final state: and in the limit that A and B recede to the infinite past and the infinite future, the only contribution that matters is from the ground state (this is only rigorously true if the path-integral is defined slightly rotated into imaginary time). The path integral can be thought of as analogous to a probability distribution, and it is convenient to define it so that multiplying by a constant does not change anything: The field's partition function is the normalization factor on the bottom, which coincides with the statistical mechanical partition function at zero temperature when rotated into imaginary time. The initial-to-final amplitudes are ill-defined if one thinks of the continuum limit right from the beginning, because the fluctuations in the field can become unbounded. 
So the path-integral can be thought of as on a discrete square lattice, with lattice spacing and the limit should be taken carefully. If the final results do not depend on the shape of the lattice or the value of , then the continuum limit exists. On a lattice On a lattice, (i), the field can be expanded in Fourier modes: Here the integration domain is over restricted to a cube of side length , so that large values of are not allowed. It is important to note that the -measure contains the factors of 2 from Fourier transforms, this is the best standard convention for -integrals in QFT. The lattice means that fluctuations at large are not allowed to contribute right away, they only start to contribute in the limit . Sometimes, instead of a lattice, the field modes are just cut off at high values of instead. It is also convenient from time to time to consider the space-time volume to be finite, so that the modes are also a lattice. This is not strictly as necessary as the space-lattice limit, because interactions in are not localized, but it is convenient for keeping track of the factors in front of the -integrals and the momentum-conserving delta functions that will arise. On a lattice, (ii), the action needs to be discretized: where is a pair of nearest lattice neighbors and . The discretization should be thought of as defining what the derivative means. In terms of the lattice Fourier modes, the action can be written: For near zero this is: Now we have the continuum Fourier transform of the original action. In finite volume, the quantity is not infinitesimal, but becomes the volume of a box made by neighboring Fourier modes, or . The field is real-valued, so the Fourier transform obeys: In terms of real and imaginary parts, the real part of is an even function of , while the imaginary part is odd. The Fourier transform avoids double-counting, so that it can be written: over an integration domain that integrates over each pair exactly once. For a complex scalar field with action the Fourier transform is unconstrained: and the integral is over all . Integrating over all different values of is equivalent to integrating over all Fourier modes, because taking a Fourier transform is a unitary linear transformation of field coordinates. When you change coordinates in a multidimensional integral by a linear transformation, the value of the new integral is given by the determinant of the transformation matrix. If then If is a rotation, then so that , and the sign depends on whether the rotation includes a reflection or not. The matrix that changes coordinates from to can be read off from the definition of a Fourier transform. and the Fourier inversion theorem tells you the inverse: which is the complex conjugate-transpose, up to factors of 2. On a finite volume lattice, the determinant is nonzero and independent of the field values. and the path integral is a separate factor at each value of . The factor is the infinitesimal volume of a discrete cell in -space, in a square lattice box where is the side-length of the box. Each separate factor is an oscillatory Gaussian, and the width of the Gaussian diverges as the volume goes to infinity. In imaginary time, the Euclidean action becomes positive definite, and can be interpreted as a probability distribution. 
The probability of a field having values is The expectation value of the field is the statistical expectation value of the field when chosen according to the probability distribution: Since the probability of is a product, the value of at each separate value of is independently Gaussian distributed. The variance of the Gaussian is , which is formally infinite, but that just means that the fluctuations are unbounded in infinite volume. In any finite volume, the integral is replaced by a discrete sum, and the variance of the integral is . Monte Carlo The path integral defines a probabilistic algorithm to generate a Euclidean scalar field configuration. Randomly pick the real and imaginary parts of each Fourier mode at wavenumber to be a Gaussian random variable with variance . This generates a configuration at random, and the Fourier transform gives . For real scalar fields, the algorithm must generate only one of each pair , and make the second the complex conjugate of the first. To find any correlation function, generate a field again and again by this procedure, and find the statistical average: where is the number of configurations, and the sum is of the product of the field values on each configuration. The Euclidean correlation function is just the same as the correlation function in statistics or statistical mechanics. The quantum mechanical correlation functions are an analytic continuation of the Euclidean correlation functions. For free fields with a quadratic action, the probability distribution is a high-dimensional Gaussian, and the statistical average is given by an explicit formula. But the Monte Carlo method also works well for bosonic interacting field theories where there is no closed form for the correlation functions. Scalar propagator Each mode is independently Gaussian distributed. The expectation of field modes is easy to calculate: for , since then the two Gaussian random variables are independent and both have zero mean. in finite volume , when the two -values coincide, since this is the variance of the Gaussian. In the infinite volume limit, Strictly speaking, this is an approximation: the lattice propagator is: But near , for field fluctuations long compared to the lattice spacing, the two forms coincide. The delta functions contain factors of 2, so that they cancel out the 2 factors in the measure for integrals. where is the ordinary one-dimensional Dirac delta function. This convention for delta-functions is not universal—some authors keep the factors of 2 in the delta functions (and in the -integration) explicit. Equation of motion The form of the propagator can be more easily found by using the equation of motion for the field. From the Lagrangian, the equation of motion is: and in an expectation value, this says: Where the derivatives act on , and the identity is true everywhere except when and coincide, and the operator order matters. The form of the singularity can be understood from the canonical commutation relations to be a delta-function. Defining the (Euclidean) Feynman propagator as the Fourier transform of the time-ordered two-point function (the one that comes from the path-integral): So that: If the equations of motion are linear, the propagator will always be the reciprocal of the quadratic-form matrix that defines the free Lagrangian, since this gives the equations of motion. This is also easy to see directly from the path integral. The factor of disappears in the Euclidean theory. 
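The Monte Carlo sampling procedure described above can be sketched in a few lines. This is a schematic illustration only: lattice and 2π normalization factors are suppressed, the mass value and the set of modes are arbitrary, and the reality constraint relating φ(k) to φ(−k) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 1.0                         # mass parameter (arbitrary illustrative value)
ks = np.linspace(0.5, 5.0, 10)  # a few Fourier modes (normalization suppressed)
n_config = 20000                # number of sampled field configurations

# Free Euclidean propagator: each mode is an independent Gaussian with
# variance 1 / (k^2 + m^2).
target = 1.0 / (ks**2 + m**2)

# Draw real and imaginary parts of phi(k) independently for each configuration.
re = rng.normal(0.0, np.sqrt(target / 2), size=(n_config, ks.size))
im = rng.normal(0.0, np.sqrt(target / 2), size=(n_config, ks.size))
phi = re + 1j * im

# The statistical average of |phi(k)|^2 over configurations reproduces the propagator.
estimate = np.mean(np.abs(phi) ** 2, axis=0)
for k, est, exact in zip(ks, estimate, target):
    print(f"k = {k:4.2f}   <|phi|^2> = {est:.4f}   1/(k^2 + m^2) = {exact:.4f}")
```

For a free quadratic action this check is exact in the limit of many configurations; for an interacting theory the same sampling idea applies, but the configurations must be drawn from the full (non-Gaussian) Euclidean weight instead.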
Wick theorem Because each field mode is an independent Gaussian, the expectation values for the product of many field modes obeys Wick's theorem: is zero unless the field modes coincide in pairs. This means that it is zero for an odd number of , and for an even number of , it is equal to a contribution from each pair separately, with a delta function. where the sum is over each partition of the field modes into pairs, and the product is over the pairs. For example, An interpretation of Wick's theorem is that each field insertion can be thought of as a dangling line, and the expectation value is calculated by linking up the lines in pairs, putting a delta function factor that ensures that the momentum of each partner in the pair is equal, and dividing by the propagator. Higher Gaussian moments — completing Wick's theorem There is a subtle point left before Wick's theorem is proved—what if more than two of the s have the same momentum? If it's an odd number, the integral is zero; negative values cancel with the positive values. But if the number is even, the integral is positive. The previous demonstration assumed that the s would only match up in pairs. But the theorem is correct even when arbitrarily many of the are equal, and this is a notable property of Gaussian integration: Dividing by , If Wick's theorem were correct, the higher moments would be given by all possible pairings of a list of different : where the are all the same variable, the index is just to keep track of the number of ways to pair them. The first can be paired with others, leaving . The next unpaired can be paired with different leaving , and so on. This means that Wick's theorem, uncorrected, says that the expectation value of should be: and this is in fact the correct answer. So Wick's theorem holds no matter how many of the momenta of the internal variables coincide. Interaction Interactions are represented by higher order contributions, since quadratic contributions are always Gaussian. The simplest interaction is the quartic self-interaction, with an action: The reason for the combinatorial factor 4! will be clear soon. Writing the action in terms of the lattice (or continuum) Fourier modes: Where is the free action, whose correlation functions are given by Wick's theorem. The exponential of in the path integral can be expanded in powers of , giving a series of corrections to the free action. The path integral for the interacting action is then a power series of corrections to the free action. The term represented by should be thought of as four half-lines, one for each factor of . The half-lines meet at a vertex, which contributes a delta-function that ensures that the sum of the momenta are all equal. To compute a correlation function in the interacting theory, there is a contribution from the terms now. For example, the path-integral for the four-field correlator: which in the free field was only nonzero when the momenta were equal in pairs, is now nonzero for all values of . The momenta of the insertions can now match up with the momenta of the s in the expansion. The insertions should also be thought of as half-lines, four in this case, which carry a momentum , but one that is not integrated. The lowest-order contribution comes from the first nontrivial term in the Taylor expansion of the action. Wick's theorem requires that the momenta in the half-lines, the factors in , should match up with the momenta of the external half-lines in pairs. The new contribution is equal to: The 4! 
inside is canceled because there are exactly 4! ways to match the half-lines in to the external half-lines. Each of these different ways of matching the half-lines together in pairs contributes exactly once, regardless of the values of , by Wick's theorem. Feynman diagrams The expansion of the action in powers of gives a series of terms with progressively higher number of s. The contribution from the term with exactly s is called th order. The th-order term has: internal half-lines, which are the factors of from the s. These all end on a vertex, and are integrated over all possible . external half-lines, which come from the insertions in the integral. By Wick's theorem, each pair of half-lines must be paired together to make a line, and this line gives a factor of which multiplies the contribution. This means that the two half-lines that make a line are forced to have equal and opposite momentum. The line itself should be labelled by an arrow, drawn parallel to the line, and labeled by the momentum in the line . The half-line at the tail end of the arrow carries momentum , while the half-line at the head-end carries momentum . If one of the two half-lines is external, this kills the integral over the internal , since it forces the internal to be equal to the external . If both are internal, the integral over remains. The diagrams that are formed by linking the half-lines in the s with the external half-lines, representing insertions, are the Feynman diagrams of this theory. Each line carries a factor of , the propagator, and either goes from vertex to vertex, or ends at an insertion. If it is internal, it is integrated over. At each vertex, the total incoming is equal to the total outgoing . The number of ways of making a diagram by joining half-lines into lines almost completely cancels the factorial factors coming from the Taylor series of the exponential and the 4! at each vertex. Loop order A forest diagram is one where all the internal lines have momentum that is completely determined by the external lines and the condition that the incoming and outgoing momentum are equal at each vertex. The contribution of these diagrams is a product of propagators, without any integration. A tree diagram is a connected forest diagram. An example of a tree diagram is the one where each of four external lines ends on an . Another is when three external lines end on an , and the remaining half-line joins up with another , and the remaining half-lines of this run off to external lines. These are all also forest diagrams (as every tree is a forest); an example of a forest that is not a tree is when eight external lines end on two s. It is easy to verify that in all these cases, the momenta on all the internal lines are determined by the external momenta and the condition of momentum conservation in each vertex. A diagram that is not a forest diagram is called a loop diagram, and an example is one where two lines of an are joined to external lines, while the remaining two lines are joined to each other. The two lines joined to each other can have any momentum at all, since they both enter and leave the same vertex. A more complicated example is one where two s are joined to each other by matching the legs one to the other. This diagram has no external lines at all.
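For bookkeeping, the number of independent loops of a diagram can be counted with the standard graph-theoretic relation L = I − V + C (internal lines minus vertices plus connected components), which is consistent with the homology counting described next. A small illustrative sketch, with the quartic-theory diagrams from the examples above labelled informally:

```python
def loop_count(internal_lines, vertices, components=1):
    """Number of independent loops (first Betti number) of a diagram:
    L = I - V + C, counting only internal lines and vertices."""
    return internal_lines - vertices + components

# Examples matching the diagrams discussed in the text:
print(loop_count(internal_lines=0, vertices=1))  # one vertex, four external legs: 0 (tree)
print(loop_count(internal_lines=1, vertices=2))  # two vertices joined by one line: 0 (tree)
print(loop_count(internal_lines=1, vertices=1))  # two legs of one vertex joined:   1 loop
print(loop_count(internal_lines=4, vertices=2))  # vacuum bubble, all legs paired:  3 loops
```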
The reason loop diagrams are called loop diagrams is because the number of -integrals that are left undetermined by momentum conservation is equal to the number of independent closed loops in the diagram, where independent loops are counted as in homology theory. The homology is real-valued (actually valued), the value associated with each line is the momentum. The boundary operator takes each line to the sum of the end-vertices with a positive sign at the head and a negative sign at the tail. The condition that the momentum is conserved is exactly the condition that the boundary of the -valued weighted graph is zero. A set of valid -values can be arbitrarily redefined whenever there is a closed loop. A closed loop is a cyclical path of adjacent vertices that never revisits the same vertex. Such a cycle can be thought of as the boundary of a hypothetical 2-cell. The -labellings of a graph that conserve momentum (i.e. which has zero boundary) up to redefinitions of (i.e. up to boundaries of 2-cells) define the first homology of a graph. The number of independent momenta that are not determined is then equal to the number of independent homology loops. For many graphs, this is equal to the number of loops as counted in the most intuitive way. Symmetry factors The number of ways to form a given Feynman diagram by joining half-lines is large, and by Wick's theorem, each way of pairing up the half-lines contributes equally. Often, this completely cancels the factorials in the denominator of each term, but the cancellation is sometimes incomplete. The uncancelled denominator is called the symmetry factor of the diagram. The contribution of each diagram to the correlation function must be divided by its symmetry factor. For example, consider the Feynman diagram formed from two external lines joined to one , and the remaining two half-lines in the joined to each other. There are 4 × 3 ways to join the external half-lines to the , and then there is only one way to join the two remaining lines to each other. The comes divided by , but the number of ways to link up the half lines to make the diagram is only 4 × 3, so the contribution of this diagram is divided by two. For another example, consider the diagram formed by joining all the half-lines of one to all the half-lines of another . This diagram is called a vacuum bubble, because it does not link up to any external lines. There are 4! ways to form this diagram, but the denominator includes a 2! (from the expansion of the exponential, there are two s) and two factors of 4!. The contribution is multiplied by = . Another example is the Feynman diagram formed from two s where each links up to two external lines, and the remaining two half-lines of each are joined to each other. The number of ways to link an to two external lines is 4 × 3, and either could link up to either pair, giving an additional factor of 2. The remaining two half-lines in the two s can be linked to each other in two ways, so that the total number of ways to form the diagram is , while the denominator is . The total symmetry factor is 2, and the contribution of this diagram is divided by 2. The symmetry factor theorem gives the symmetry factor for a general diagram: the contribution of each Feynman diagram must be divided by the order of its group of automorphisms, the number of symmetries that it has. 
An automorphism of a Feynman graph is a permutation of the lines and a permutation of the vertices with the following properties: If a line goes from vertex to vertex , then goes from to . If the line is undirected, as it is for a real scalar field, then can go from to too. If a line ends on an external line, ends on the same external line. If there are different types of lines, should preserve the type. This theorem has an interpretation in terms of particle-paths: when identical particles are present, the integral over all intermediate particles must not double-count states that differ only by interchanging identical particles. Proof: To prove this theorem, label all the internal and external lines of a diagram with a unique name. Then form the diagram by linking a half-line to a name and then to the other half line. Now count the number of ways to form the named diagram. Each permutation of the s gives a different pattern of linking names to half-lines, and this is a factor of . Each permutation of the half-lines in a single gives a factor of 4!. So a named diagram can be formed in exactly as many ways as the denominator of the Feynman expansion. But the number of unnamed diagrams is smaller than the number of named diagram by the order of the automorphism group of the graph. Connected diagrams: linked-cluster theorem Roughly speaking, a Feynman diagram is called connected if all vertices and propagator lines are linked by a sequence of vertices and propagators of the diagram itself. If one views it as an undirected graph it is connected. The remarkable relevance of such diagrams in QFTs is due to the fact that they are sufficient to determine the quantum partition function . More precisely, connected Feynman diagrams determine To see this, one should recall that with constructed from some (arbitrary) Feynman diagram that can be thought to consist of several connected components . If one encounters (identical) copies of a component within the Feynman diagram one has to include a symmetry factor . However, in the end each contribution of a Feynman diagram to the partition function has the generic form where labels the (infinitely) many connected Feynman diagrams possible. A scheme to successively create such contributions from the to is obtained by and therefore yields To establish the normalization one simply calculates all connected vacuum diagrams, i.e., the diagrams without any sources (sometimes referred to as external legs of a Feynman diagram). The linked-cluster theorem was first proved to order four by Keith Brueckner in 1955, and for infinite orders by Jeffrey Goldstone in 1957. Vacuum bubbles An immediate consequence of the linked-cluster theorem is that all vacuum bubbles, diagrams without external lines, cancel when calculating correlation functions. A correlation function is given by a ratio of path-integrals: The top is the sum over all Feynman diagrams, including disconnected diagrams that do not link up to external lines at all. In terms of the connected diagrams, the numerator includes the same contributions of vacuum bubbles as the denominator: Where the sum over diagrams includes only those diagrams each of whose connected components end on at least one external line. The vacuum bubbles are the same whatever the external lines, and give an overall multiplicative factor. The denominator is the sum over all vacuum bubbles, and dividing gets rid of the second factor. 
The vacuum bubbles then are only useful for determining itself, which from the definition of the path integral is equal to: where is the energy density in the vacuum. Each vacuum bubble contains a factor of zeroing the total at each vertex, and when there are no external lines, this contains a factor of , because the momentum conservation is over-enforced. In finite volume, this factor can be identified as the total volume of space time. Dividing by the volume, the remaining integral for the vacuum bubble has an interpretation: it is a contribution to the energy density of the vacuum. Sources Correlation functions are the sum of the connected Feynman diagrams, but the formalism treats the connected and disconnected diagrams differently. Internal lines end on vertices, while external lines go off to insertions. Introducing sources unifies the formalism, by making new vertices where one line can end. Sources are external fields, fields that contribute to the action, but are not dynamical variables. A scalar field source is another scalar field that contributes a term to the (Lorentz) Lagrangian: In the Feynman expansion, this contributes H terms with one half-line ending on a vertex. Lines in a Feynman diagram can now end either on an vertex, or on an vertex, and only one line enters an vertex. The Feynman rule for an vertex is that a line from an with momentum gets a factor of . The sum of the connected diagrams in the presence of sources includes a term for each connected diagram in the absence of sources, except now the diagrams can end on the source. Traditionally, a source is represented by a little "×" with one line extending out, exactly as an insertion. where is the connected diagram with external lines carrying momentum as indicated. The sum is over all connected diagrams, as before. The field is not dynamical, which means that there is no path integral over : is just a parameter in the Lagrangian, which varies from point to point. The path integral for the field is: and it is a function of the values of at every point. One way to interpret this expression is that it is taking the Fourier transform in field space. If there is a probability density on , the Fourier transform of the probability density is: The Fourier transform is the expectation of an oscillatory exponential. The path integral in the presence of a source is: which, on a lattice, is the product of an oscillatory exponential for each field value: The Fourier transform of a delta-function is a constant, which gives a formal expression for a delta function: This tells you what a field delta function looks like in a path-integral. For two scalar fields and , which integrates over the Fourier transform coordinate, over . This expression is useful for formally changing field coordinates in the path integral, much as a delta function is used to change coordinates in an ordinary multi-dimensional integral. The partition function is now a function of the field , and the physical partition function is the value when is the zero function: The correlation functions are derivatives of the path integral with respect to the source: In Euclidean space, source contributions to the action can still appear with a factor of , so that they still do a Fourier transform. Spin ; "photons" and "ghosts" Spin : Grassmann integrals The field path integral can be extended to the Fermi case, but only if the notion of integration is expanded. 
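The relation between the source-dependent path integral and the correlation functions described here is conventionally summarized by the generating-functional formulas below; this is a schematic restatement, with normalization conventions and factors of i glossed over.

```latex
Z[J] \;=\; \int \mathcal{D}\phi \;
  \exp\!\Big(i\!\int d^4x \,\big[\mathcal{L}(\phi) + J(x)\,\phi(x)\big]\Big),
\qquad
\langle \phi(x_1)\cdots\phi(x_n)\rangle
  \;=\; \frac{1}{Z[0]}
  \left.\frac{\delta^n Z[J]}{\,i\,\delta J(x_1)\cdots i\,\delta J(x_n)\,}\right|_{J=0}.
```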
A Grassmann integral of a free Fermi field is a high-dimensional determinant or Pfaffian, which defines the new type of Gaussian integration appropriate for Fermi fields. The two fundamental formulas of Grassmann integration are: where is an arbitrary matrix and are independent Grassmann variables for each index , and where is an antisymmetric matrix, is a collection of Grassmann variables, and the is to prevent double-counting (since ). In matrix notation, where and are Grassmann-valued row vectors, and are Grassmann-valued column vectors, and is a real-valued matrix: where the last equality is a consequence of the translation invariance of the Grassmann integral. The Grassmann variables are external sources for , and differentiating with respect to pulls down factors of . again, in a schematic matrix notation. The meaning of the formula above is that the derivative with respect to the appropriate component of and gives the matrix element of . This is exactly analogous to the bosonic path integration formula for a Gaussian integral of a complex bosonic field: So that the propagator is the inverse of the matrix in the quadratic part of the action in both the Bose and Fermi case. For real Grassmann fields, for Majorana fermions, the path integral is a Pfaffian times a source quadratic form, and the formulas give the square root of the determinant, just as they do for real Bosonic fields. The propagator is still the inverse of the quadratic part. The free Dirac Lagrangian: formally gives the equations of motion and the anticommutation relations of the Dirac field, just as the Klein Gordon Lagrangian in an ordinary path integral gives the equations of motion and commutation relations of the scalar field. By using the spatial Fourier transform of the Dirac field as a new basis for the Grassmann algebra, the quadratic part of the Dirac action becomes simple to invert: The propagator is the inverse of the matrix linking and , since different values of do not mix together. The analog of Wick's theorem matches and in pairs: where S is the sign of the permutation that reorders the sequence of and to put the ones that are paired up to make the delta-functions next to each other, with the coming right before the . Since a pair is a commuting element of the Grassmann algebra, it does not matter what order the pairs are in. If more than one pair have the same , the integral is zero, and it is easy to check that the sum over pairings gives zero in this case (there are always an even number of them). This is the Grassmann analog of the higher Gaussian moments that completed the Bosonic Wick's theorem earlier. The rules for spin- Dirac particles are as follows: The propagator is the inverse of the Dirac operator, the lines have arrows just as for a complex scalar field, and the diagram acquires an overall factor of −1 for each closed Fermi loop. If there are an odd number of Fermi loops, the diagram changes sign. Historically, the −1 rule was very difficult for Feynman to discover. He discovered it after a long process of trial and error, since he lacked a proper theory of Grassmann integration. The rule follows from the observation that the number of Fermi lines at a vertex is always even. Each term in the Lagrangian must always be Bosonic. A Fermi loop is counted by following Fermionic lines until one comes back to the starting point, then removing those lines from the diagram. 
Repeating this process eventually erases all the Fermionic lines: this is the Euler algorithm to 2-color a graph, which works whenever each vertex has even degree. The number of steps in the Euler algorithm is only equal to the number of independent Fermionic homology cycles in the common special case that all terms in the Lagrangian are exactly quadratic in the Fermi fields, so that each vertex has exactly two Fermionic lines. When there are four-Fermi interactions (like in the Fermi effective theory of the weak nuclear interactions) there are more -integrals than Fermi loops. In this case, the counting rule should apply the Euler algorithm by pairing up the Fermi lines at each vertex into pairs that together form a bosonic factor of the term in the Lagrangian, and when entering a vertex by one line, the algorithm should always leave with the partner line. To clarify and prove the rule, consider a Feynman diagram formed from vertices, terms in the Lagrangian, with Fermion fields. The full term is Bosonic, it is a commuting element of the Grassmann algebra, so the order in which the vertices appear is not important. The Fermi lines are linked into loops, and when traversing the loop, one can reorder the vertex terms one after the other as one goes around without any sign cost. The exception is when you return to the starting point, and the final half-line must be joined with the unlinked first half-line. This requires one permutation to move the last to go in front of the first , and this gives the sign. This rule is the only visible effect of the exclusion principle in internal lines. When there are external lines, the amplitudes are antisymmetric when two Fermi insertions for identical particles are interchanged. This is automatic in the source formalism, because the sources for Fermi fields are themselves Grassmann valued. Spin 1: photons The naive propagator for photons is infinite, since the Lagrangian for the A-field is: The quadratic form defining the propagator is non-invertible. The reason is the gauge invariance of the field; adding a gradient to does not change the physics. To fix this problem, one needs to fix a gauge. The most convenient way is to demand that the divergence of is some function , whose value is random from point to point. It does no harm to integrate over the values of , since it only determines the choice of gauge. This procedure inserts the following factor into the path integral for : The first factor, the delta function, fixes the gauge. The second factor sums over different values of that are inequivalent gauge fixings. This is simply The additional contribution from gauge-fixing cancels the second half of the free Lagrangian, giving the Feynman Lagrangian: which is just like four independent free scalar fields, one for each component of . The Feynman propagator is: The one difference is that the sign of one propagator is wrong in the Lorentz case: the timelike component has an opposite sign propagator. This means that these particle states have negative norm—they are not physical states. In the case of photons, it is easy to show by diagram methods that these states are not physical—their contribution cancels with longitudinal photons to only leave two physical photon polarization contributions for any value of . If the averaging over is done with a coefficient different from , the two terms do not cancel completely. 
This gives a covariant Lagrangian with a coefficient , which does not affect anything: and the covariant propagator for QED is: Spin 1: non-Abelian ghosts To find the Feynman rules for non-Abelian gauge fields, the procedure that performs the gauge fixing must be carefully corrected to account for a change of variables in the path-integral. The gauge fixing factor has an extra determinant from popping the delta function: To find the form of the determinant, consider first a simple two-dimensional integral of a function that depends only on , not on the angle . Inserting an integral over : The derivative-factor ensures that popping the delta function in removes the integral. Exchanging the order of integration, but now the delta-function can be popped in , The integral over just gives an overall factor of 2, while the rate of change of with a change in is just , so this exercise reproduces the standard formula for polar integration of a radial function: In the path-integral for a nonabelian gauge field, the analogous manipulation is: The factor in front is the volume of the gauge group, and it contributes a constant, which can be discarded. The remaining integral is over the gauge fixed action. To get a covariant gauge, the gauge fixing condition is the same as in the Abelian case: Whose variation under an infinitesimal gauge transformation is given by: where is the adjoint valued element of the Lie algebra at every point that performs the infinitesimal gauge transformation. This adds the Faddeev Popov determinant to the action: which can be rewritten as a Grassmann integral by introducing ghost fields: The determinant is independent of , so the path-integral over can give the Feynman propagator (or a covariant propagator) by choosing the measure for as in the abelian case. The full gauge fixed action is then the Yang Mills action in Feynman gauge with an additional ghost action: The diagrams are derived from this action. The propagator for the spin-1 fields has the usual Feynman form. There are vertices of degree 3 with momentum factors whose couplings are the structure constants, and vertices of degree 4 whose couplings are products of structure constants. There are additional ghost loops, which cancel out timelike and longitudinal states in loops. In the Abelian case, the determinant for covariant gauges does not depend on , so the ghosts do not contribute to the connected diagrams. Particle-path representation Feynman diagrams were originally discovered by Feynman, by trial and error, as a way to represent the contribution to the S-matrix from different classes of particle trajectories. Schwinger representation The Euclidean scalar propagator has a suggestive representation: The meaning of this identity (which is an elementary integration) is made clearer by Fourier transforming to real space. The contribution at any one value of to the propagator is a Gaussian of width . The total propagation function from 0 to is a weighted sum over all proper times of a normalized Gaussian, the probability of ending up at after a random walk of time . The path-integral representation for the propagator is then: which is a path-integral rewrite of the Schwinger representation. The Schwinger representation is both useful for making manifest the particle aspect of the propagator, and for symmetrizing denominators of loop diagrams. Combining denominators The Schwinger representation has an immediate practical application to loop diagrams. 
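The "suggestive representation" of the Euclidean scalar propagator mentioned above is the Schwinger proper-time integral, which in a common convention reads:

$$\frac{1}{p^2 + m^2} \;=\; \int_0^\infty d\tau\; e^{-\tau\,(p^2 + m^2)},$$

so that Fourier transforming the Gaussian $e^{-\tau p^2}$ at fixed $\tau$ gives a position-space Gaussian of width proportional to $\sqrt{\tau}$ (the precise factor depends on conventions), which is the random-walk picture described in the text.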
For example, for the diagram in the theory formed by joining two s together in two half-lines, and making the remaining lines external, the integral over the internal propagators in the loop is: Here one line carries momentum and the other . The asymmetry can be fixed by putting everything in the Schwinger representation. Now the exponent mostly depends on , except for the asymmetrical little bit. Defining the variable and , the variable goes from 0 to , while goes from 0 to 1. The variable is the total proper time for the loop, while parametrizes the fraction of the proper time on the top of the loop versus the bottom. The Jacobian for this transformation of variables is easy to work out from the identities: and "wedging" gives . This allows the integral to be evaluated explicitly: leaving only the -integral. This method, invented by Schwinger but usually attributed to Feynman, is called combining denominator. Abstractly, it is the elementary identity: But this form does not provide the physical motivation for introducing ; is the proportion of proper time on one of the legs of the loop. Once the denominators are combined, a shift in to symmetrizes everything: This form shows that the moment that is more negative than four times the mass of the particle in the loop, which happens in a physical region of Lorentz space, the integral has a cut. This is exactly when the external momentum can create physical particles. When the loop has more vertices, there are more denominators to combine: The general rule follows from the Schwinger prescription for denominators: The integral over the Schwinger parameters can be split up as before into an integral over the total proper time and an integral over the fraction of the proper time in all but the first segment of the loop for . The are positive and add up to less than 1, so that the integral is over an -dimensional simplex. The Jacobian for the coordinate transformation can be worked out as before: Wedging all these equations together, one obtains This gives the integral: where the simplex is the region defined by the conditions as well as Performing the integral gives the general prescription for combining denominators: Since the numerator of the integrand is not involved, the same prescription works for any loop, no matter what the spins are carried by the legs. The interpretation of the parameters is that they are the fraction of the total proper time spent on each leg. Scattering The correlation functions of a quantum field theory describe the scattering of particles. The definition of "particle" in relativistic field theory is not self-evident, because if you try to determine the position so that the uncertainty is less than the compton wavelength, the uncertainty in energy is large enough to produce more particles and antiparticles of the same type from the vacuum. This means that the notion of a single-particle state is to some extent incompatible with the notion of an object localized in space. In the 1930s, Wigner gave a mathematical definition for single-particle states: they are a collection of states that form an irreducible representation of the Poincaré group. Single particle states describe an object with a finite mass, a well defined momentum, and a spin. 
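For reference, the denominator-combining identities described in this passage take the standard textbook form (the text above obtains them from the Schwinger parametrization):

$$\frac{1}{AB} \;=\; \int_0^1 \frac{dx}{\left[\,xA + (1-x)B\,\right]^{2}},$$

$$\frac{1}{A_1 A_2 \cdots A_n} \;=\; (n-1)!\int_0^1 dx_1\cdots dx_n\; \frac{\delta\!\left(1-\sum_i x_i\right)}{\left[\,\sum_i x_i A_i\,\right]^{n}},$$

where the Feynman parameters $x_i$ are interpreted, as in the text, as the fraction of the total proper time spent on each leg of the loop.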
This definition is fine for protons and neutrons, electrons and photons, but it excludes quarks, which are permanently confined, so the modern point of view is more accommodating: a particle is anything whose interaction can be described in terms of Feynman diagrams, which have an interpretation as a sum over particle trajectories. A field operator can act to produce a one-particle state from the vacuum, which means that the field operator produces a superposition of Wigner particle states. In the free field theory, the field produces one particle states only. But when there are interactions, the field operator can also produce 3-particle, 5-particle (if there is no +/− symmetry also 2, 4, 6 particle) states too. To compute the scattering amplitude for single particle states only requires a careful limit, sending the fields to infinity and integrating over space to get rid of the higher-order corrections. The relation between scattering and correlation functions is the LSZ-theorem: The scattering amplitude for particles to go to particles in a scattering event is the given by the sum of the Feynman diagrams that go into the correlation function for field insertions, leaving out the propagators for the external legs. For example, for the interaction of the previous section, the order contribution to the (Lorentz) correlation function is: Stripping off the external propagators, that is, removing the factors of , gives the invariant scattering amplitude : which is a constant, independent of the incoming and outgoing momentum. The interpretation of the scattering amplitude is that the sum of over all possible final states is the probability for the scattering event. The normalization of the single-particle states must be chosen carefully, however, to ensure that is a relativistic invariant. Non-relativistic single particle states are labeled by the momentum , and they are chosen to have the same norm at every value of . This is because the nonrelativistic unit operator on single particle states is: In relativity, the integral over the -states for a particle of mass m integrates over a hyperbola in space defined by the energy–momentum relation: If the integral weighs each point equally, the measure is not Lorentz-invariant. The invariant measure integrates over all values of and , restricting to the hyperbola with a Lorentz-invariant delta function: So the normalized -states are different from the relativistically normalized -states by a factor of The invariant amplitude is then the probability amplitude for relativistically normalized incoming states to become relativistically normalized outgoing states. For nonrelativistic values of , the relativistic normalization is the same as the nonrelativistic normalization (up to a constant factor ). In this limit, the invariant scattering amplitude is still constant. The particles created by the field scatter in all directions with equal amplitude. The nonrelativistic potential, which scatters in all directions with an equal amplitude (in the Born approximation), is one whose Fourier transform is constant—a delta-function potential. The lowest order scattering of the theory reveals the non-relativistic interpretation of this theory—it describes a collection of particles with a delta-function repulsion. Two such particles have an aversion to occupying the same point at the same time. 
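The relativistically invariant single-particle measure discussed above is commonly written (factors of $2\pi$ and the metric signature depend on convention) as:

$$\int \frac{d^4 p}{(2\pi)^4}\, 2\pi\,\delta(p^2 - m^2)\,\theta(p^0) \;=\; \int \frac{d^3 p}{(2\pi)^3\, 2E_p}, \qquad E_p = \sqrt{\vec p^{\,2} + m^2},$$

so that relativistically normalized states differ from the nonrelativistic ones by a factor of $\sqrt{2E_p}$, which reduces to the constant $\sqrt{2m}$ in the nonrelativistic limit mentioned in the text.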
Nonperturbative effects Thinking of Feynman diagrams as a perturbation series, nonperturbative effects like tunneling do not show up, because any effect that goes to zero faster than any polynomial does not affect the Taylor series. Even bound states are absent, since at any finite order particles are only exchanged a finite number of times, and to make a bound state, the binding force must last forever. But this point of view is misleading, because the diagrams not only describe scattering, but they also are a representation of the short-distance field theory correlations. They encode not only asymptotic processes like particle scattering, they also describe the multiplication rules for fields, the operator product expansion. Nonperturbative tunneling processes involve field configurations that on average get big when the coupling constant gets small, but each configuration is a coherent superposition of particles whose local interactions are described by Feynman diagrams. When the coupling is small, these become collective processes that involve large numbers of particles, but where the interactions between each of the particles is simple. (The perturbation series of any interacting quantum field theory has zero radius of convergence, complicating the limit of the infinite series of diagrams needed (in the limit of vanishing coupling) to describe such field configurations.) This means that nonperturbative effects show up asymptotically in resummations of infinite classes of diagrams, and these diagrams can be locally simple. The graphs determine the local equations of motion, while the allowed large-scale configurations describe non-perturbative physics. But because Feynman propagators are nonlocal in time, translating a field process to a coherent particle language is not completely intuitive, and has only been explicitly worked out in certain special cases. In the case of nonrelativistic bound states, the Bethe–Salpeter equation describes the class of diagrams to include to describe a relativistic atom. For quantum chromodynamics, the Shifman–Vainshtein–Zakharov sum rules describe non-perturbatively excited long-wavelength field modes in particle language, but only in a phenomenological way. The number of Feynman diagrams at high orders of perturbation theory is very large, because there are as many diagrams as there are graphs with a given number of nodes. Nonperturbative effects leave a signature on the way in which the number of diagrams and resummations diverge at high order. It is only because non-perturbative effects appear in hidden form in diagrams that it was possible to analyze nonperturbative effects in string theory, where in many cases a Feynman description is the only one available. In popular culture The use of the above diagram of the virtual particle producing a quark–antiquark pair was featured in the television sit-com The Big Bang Theory, in the episode "The Bat Jar Conjecture". PhD Comics of January 11, 2012, shows Feynman diagrams that visualize and describe quantum academic interactions, i.e. the paths followed by Ph.D. students when interacting with their advisors. Vacuum Diagrams, a science fiction story by Stephen Baxter, features the titular vacuum diagram, a specific type of Feynman diagram. Feynman and his wife, Gweneth Howarth, bought a Dodge Tradesman Maxivan in 1975, and had it painted with Feynman diagrams. The van is currently owned by video game designer and physicist Seamus Blackley. Qantum was the license plate ID. 
See also One-loop Feynman diagram Julian Schwinger#Schwinger and Feynman Stueckelberg–Feynman interpretation Penguin diagram Path integral formulation Propagator List of Feynman diagrams Angular momentum diagrams (quantum mechanics) Notes References Sources (expanded, updated version of 't Hooft & Veltman, 1973, cited above) External links AMS article: "What's New in Mathematics: Finite-dimensional Feynman Diagrams" Draw Feynman diagrams explained by Flip Tanedo at Quantumdiaries.com Drawing Feynman diagrams with FeynDiagram C++ library that produces PostScript output. Online Diagram Tool A graphical application for creating publication ready diagrams. JaxoDraw A Java program for drawing Feynman diagrams. Concepts in physics Scattering theory Quantum field theory Diagrams Richard Feynman 1948 introductions Eponymous theorems of physics
Feynman diagram
[ "Physics", "Chemistry" ]
13,017
[ "Quantum field theory", "Scattering theory", "Equations of physics", "Quantum mechanics", "Eponymous theorems of physics", "Scattering", "nan", "Physics theorems" ]
11,671
https://en.wikipedia.org/wiki/Fick%27s%20laws%20of%20diffusion
Fick's laws of diffusion describe diffusion and were first posited by Adolf Fick in 1855 on the basis of largely experimental results. They can be used to solve for the diffusion coefficient, . Fick's first law can be used to derive his second law which in turn is identical to the diffusion equation. Fick's first law: Movement of particles from high to low concentration (diffusive flux) is directly proportional to the particle's concentration gradient. Fick's second law: Prediction of change in concentration gradient with time due to diffusion. A diffusion process that obeys Fick's laws is called normal or Fickian diffusion; otherwise, it is called anomalous diffusion or non-Fickian diffusion. History In 1855, physiologist Adolf Fick first reported his now well-known laws governing the transport of mass through diffusive means. Fick's work was inspired by the earlier experiments of Thomas Graham, which fell short of proposing the fundamental laws for which Fick would become famous. Fick's law is analogous to the relationships discovered at the same epoch by other eminent scientists: Darcy's law (hydraulic flow), Ohm's law (charge transport), and Fourier's law (heat transport). Fick's experiments (modeled on Graham's) dealt with measuring the concentrations and fluxes of salt, diffusing between two reservoirs through tubes of water. It is notable that Fick's work primarily concerned diffusion in fluids, because at the time, diffusion in solids was not considered generally possible. Today, Fick's laws form the core of our understanding of diffusion in solids, liquids, and gases (in the absence of bulk fluid motion in the latter two cases). When a diffusion process does not follow Fick's laws (which happens in cases of diffusion through porous media and diffusion of swelling penetrants, among others), it is referred to as non-Fickian. Fick's first law Fick's first law relates the diffusive flux to the gradient of the concentration. It postulates that the flux goes from regions of high concentration to regions of low concentration, with a magnitude that is proportional to the concentration gradient (spatial derivative), or in simplistic terms the concept that a solute will move from a region of high concentration to a region of low concentration across a concentration gradient. In one (spatial) dimension, the law can be written in various forms, where the most common form (see) is in a molar basis: where is the diffusion flux, of which the dimension is the amount of substance per unit area per unit time. measures the amount of substance that will flow through a unit area during a unit time interval, is the diffusion coefficient or diffusivity. Its dimension is area per unit time, is the concentration gradient, (for ideal mixtures) is the concentration, with a dimension of amount of substance per unit volume, is position, the dimension of which is length. is proportional to the squared velocity of the diffusing particles, which depends on the temperature, viscosity of the fluid and the size of the particles according to the Stokes–Einstein relation. In dilute aqueous solutions the diffusion coefficients of most ions are similar and have values that at room temperature are in the range of . For biological molecules the diffusion coefficients normally range from 10−10 to 10−11 m2/s. In two or more dimensions we must use , the del or gradient operator, which generalises the first derivative, obtaining where denotes the diffusion flux vector. 
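For reference, the molar form of Fick's first law described above, in one spatial dimension and in vector form, is:

$$J = -D\,\frac{\partial \varphi}{\partial x}, \qquad \mathbf{J} = -D\,\nabla\varphi,$$

where $J$ is the diffusion flux (amount of substance per unit area per unit time), $D$ the diffusion coefficient (area per unit time), $\varphi$ the concentration, and $x$ the position.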
The driving force for the one-dimensional diffusion is the quantity , which for ideal mixtures is the concentration gradient. Variations of the first law Another form for the first law is to write it with the primary variable as mass fraction (, given for example in kg/kg), then the equation changes to: where the index denotes the th species, is the diffusion flux vector of the th species (for example in mol/m2-s), is the molar mass of the th species, is the mixture density (for example in kg/m3). The is outside the gradient operator. This is because: where is the partial density of the th species. Beyond this, in chemical systems other than ideal solutions or mixtures, the driving force for the diffusion of each species is the gradient of chemical potential of this species. Then Fick's first law (one-dimensional case) can be written where the index denotes the th species, is the concentration (mol/m3), is the universal gas constant (J/K/mol), is the absolute temperature (K), is the chemical potential (J/mol). The driving force of Fick's law can be expressed as a fugacity difference: where is the fugacity in Pa. is a partial pressure of component in a vapor or liquid phase. At vapor liquid equilibrium the evaporation flux is zero because . Derivation of Fick's first law for gases Four versions of Fick's law for binary gas mixtures are given below. These assume: thermal diffusion is negligible; the body force per unit mass is the same on both species; and either pressure is constant or both species have the same molar mass. Under these conditions, Ref. shows in detail how the diffusion equation from the kinetic theory of gases reduces to this version of Fick's law: where is the diffusion velocity of species . In terms of species flux this is If, additionally, , this reduces to the most common form of Fick's law, If (instead of or in addition to ) both species have the same molar mass, Fick's law becomes where is the mole fraction of species . Fick's second law Fick's second law predicts how diffusion causes the concentration to change with respect to time. It is a partial differential equation which in one dimension reads: where is the concentration in dimensions of , example mol/m3; is a function that depends on location and time , is time, example s, is the diffusion coefficient in dimensions of , example m2/s, is the position, example m. In two or more dimensions we must use the Laplacian , which generalises the second derivative, obtaining the equation Fick's second law has the same mathematical form as the Heat equation and its fundamental solution is the same as the Heat kernel, except switching thermal conductivity with diffusion coefficient : Derivation of Fick's second law Fick's second law can be derived from Fick's first law and the mass conservation in absence of any chemical reactions: Assuming the diffusion coefficient to be a constant, one can exchange the orders of the differentiation and multiply by the constant: and, thus, receive the form of the Fick's equations as was stated above. For the case of diffusion in two or more dimensions Fick's second law becomes which is analogous to the heat equation. If the diffusion coefficient is not a constant, but depends upon the coordinate or concentration, Fick's second law yields An important example is the case where is at a steady state, i.e. the concentration does not change by time, so that the left part of the above equation is identically zero. 
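As a numerical illustration of Fick's second law discussed above, the following sketch integrates the one-dimensional equation with a constant diffusion coefficient using an explicit finite-difference scheme; all numerical values are illustrative choices, not taken from the article:

```python
import numpy as np

# Explicit finite-difference sketch of Fick's second law in one dimension,
#   dphi/dt = D * d^2(phi)/dx^2,
# with a constant diffusion coefficient D, a fixed concentration at x = 0
# and the far end simply left at zero (a semi-infinite approximation).
D = 1e-9               # diffusion coefficient, m^2/s (typical small-ion value)
L = 1e-4               # domain length, m
nx = 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D   # respects the explicit-scheme stability limit D*dt/dx^2 <= 1/2

phi = np.zeros(nx)
phi[0] = 1.0           # constant-concentration source at x = 0

for _ in range(5000):
    lap = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    phi[1:-1] += D * dt * lap
    phi[0] = 1.0       # re-impose the boundary concentration

print(phi[:5])         # concentration profile near the source
```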
In one dimension with constant , the solution for the concentration will be a linear change of concentrations along . In two or more dimensions we obtain which is Laplace's equation, the solutions to which are referred to by mathematicians as harmonic functions. Example solutions and generalization Fick's second law is a special case of the convection–diffusion equation in which there is no advective flux and no net volumetric source. It can be derived from the continuity equation: where is the total flux and is a net volumetric source for . The only source of flux in this situation is assumed to be diffusive flux: Plugging the definition of diffusive flux to the continuity equation and assuming there is no source (), we arrive at Fick's second law: If flux were the result of both diffusive flux and advective flux, the convection–diffusion equation is the result. Example solution 1: constant concentration source and diffusion length A simple case of diffusion with time in one dimension (taken as the -axis) from a boundary located at position , where the concentration is maintained at a value is where is the complementary error function. This is the case when corrosive gases diffuse through the oxidative layer towards the metal surface (if we assume that concentration of gases in the environment is constant and the diffusion space – that is, the corrosion product layer – is semi-infinite, starting at 0 at the surface and spreading infinitely deep in the material). If, in its turn, the diffusion space is infinite (lasting both through the layer with , and that with , ), then the solution is amended only with coefficient in front of (as the diffusion now occurs in both directions). This case is valid when some solution with concentration is put in contact with a layer of pure solvent. (Bokstein, 2005) The length is called the diffusion length and provides a measure of how far the concentration has propagated in the -direction by diffusion in time (Bird, 1976). As a quick approximation of the error function, the first two terms of the Taylor series can be used: If is time-dependent, the diffusion length becomes This idea is useful for estimating a diffusion length over a heating and cooling cycle, where varies with temperature. Example solution 2: Brownian particle and mean squared displacement Another simple case of diffusion is the Brownian motion of one particle. The particle's Mean squared displacement from its original position is: where is the dimension of the particle's Brownian motion. For example, the diffusion of a molecule across a cell membrane 8 nm thick is 1-D diffusion because of the spherical symmetry; However, the diffusion of a molecule from the membrane to the center of a eukaryotic cell is a 3-D diffusion. For a cylindrical cactus, the diffusion from photosynthetic cells on its surface to its center (the axis of its cylindrical symmetry) is a 2-D diffusion. The square root of MSD, , is often used as a characterization of how far the particle has moved after time has elapsed. The MSD is symmetrically distributed over the 1D, 2D, and 3D space. Thus, the probability distribution of the magnitude of MSD in 1D is Gaussian and 3D is a Maxwell-Boltzmann distribution. Generalizations In non-homogeneous media, the diffusion coefficient varies in space, . This dependence does not affect Fick's first law but the second law changes: In anisotropic media, the diffusion coefficient depends on the direction. It is a symmetric tensor . 
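A direct evaluation of the constant-concentration-source solution and the diffusion length described above (illustrative numbers only):

```python
import numpy as np
from scipy.special import erfc

# Constant-concentration source at x = 0 in a semi-infinite medium:
#   c(x, t) = c0 * erfc(x / (2 * sqrt(D * t)))
c0 = 1.0               # boundary concentration (arbitrary units)
D = 1e-9               # m^2/s
t = 3600.0             # s
x = np.linspace(0.0, 5e-4, 6)

c = c0 * erfc(x / (2.0 * np.sqrt(D * t)))
print(c)
print("diffusion length 2*sqrt(D*t) =", 2.0 * np.sqrt(D * t), "m")
```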
Fick's first law changes to it is the product of a tensor and a vector: For the diffusion equation this formula gives The symmetric matrix of diffusion coefficients should be positive definite. It is needed to make the right-hand side operator elliptic. For inhomogeneous anisotropic media these two forms of the diffusion equation should be combined in The approach based on Einstein's mobility and Teorell formula gives the following generalization of Fick's equation for the multicomponent diffusion of the perfect components: where are concentrations of the components and is the matrix of coefficients. Here, indices and are related to the various components and not to the space coordinates. The Chapman–Enskog formulae for diffusion in gases include exactly the same terms. These physical models of diffusion are different from the test models which are valid for very small deviations from the uniform equilibrium. Earlier, such terms were introduced in the Maxwell–Stefan diffusion equation. For anisotropic multicomponent diffusion coefficients one needs a rank-four tensor, for example , where refer to the components and correspond to the space coordinates. Applications Equations based on Fick's law have been commonly used to model transport processes in foods, neurons, biopolymers, pharmaceuticals, porous soils, population dynamics, nuclear materials, plasma physics, and semiconductor doping processes. The theory of voltammetric methods is based on solutions of Fick's equation. On the other hand, in some cases a "Fickian (another common approximation of the transport equation is that of the diffusion theory)" description is inadequate. For example, in polymer science and food science a more general approach is required to describe transport of components in materials undergoing a glass transition. One more general framework is the Maxwell–Stefan diffusion equations of multi-component mass transfer, from which Fick's law can be obtained as a limiting case, when the mixture is extremely dilute and every chemical species is interacting only with the bulk mixture and not with other species. To account for the presence of multiple species in a non-dilute mixture, several variations of the Maxwell–Stefan equations are used. See also non-diagonal coupled transport processes (Onsager relationship). Fick's flow in liquids When two miscible liquids are brought into contact, and diffusion takes place, the macroscopic (or average) concentration evolves following Fick's law. On a mesoscopic scale, that is, between the macroscopic scale described by Fick's law and molecular scale, where molecular random walks take place, fluctuations cannot be neglected. Such situations can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale. In particular, fluctuating hydrodynamic equations include a Fick's flow term, with a given diffusion coefficient, along with hydrodynamics equations and stochastic terms describing fluctuations. When calculating the fluctuations with a perturbative approach, the zero order approximation is Fick's law. The first order gives the fluctuations, and it comes out that fluctuations contribute to diffusion. This represents somehow a tautology, since the phenomena described by a lower order approximation is the result of a higher approximation: this problem is solved only by renormalizing the fluctuating hydrodynamics equations. 
Sorption rate and collision frequency of diluted solute Adsorption, absorption, and collision of molecules, particles, and surfaces are important problems in many fields. These fundamental processes regulate chemical, biological, and environmental reactions. Their rate can be calculated using the diffusion constant and Fick's laws of diffusion especially when these interactions happen in diluted solutions. Typically, the diffusion constant of molecules and particles defined by Fick's equation can be calculated using the Stokes–Einstein equation. In the ultrashort time limit, in the order of the diffusion time a2/D, where a is the particle radius, the diffusion is described by the Langevin equation. At a longer time, the Langevin equation merges into the Stokes–Einstein equation. The latter is appropriate for the condition of the diluted solution, where long-range diffusion is considered. According to the fluctuation-dissipation theorem based on the Langevin equation in the long-time limit and when the particle is significantly denser than the surrounding fluid, the time-dependent diffusion constant is: where (all in SI units) kB is the Boltzmann constant, T is the absolute temperature, μ is the mobility of the particle in the fluid or gas, which can be calculated using the Einstein relation (kinetic theory), m is the mass of the particle, t is time. For a single molecule such as organic molecules or biomolecules (e.g. proteins) in water, the exponential term is negligible due to the small product of mμ in the ultrafast picosecond region, thus irrelevant to the relatively slower adsorption of diluted solute. The adsorption or absorption rate of a dilute solute to a surface or interface in a (gas or liquid) solution can be calculated using Fick's laws of diffusion. The accumulated number of molecules adsorbed on the surface is expressed by the Langmuir-Schaefer equation by integrating the diffusion flux equation over time as shown in the simulated molecular diffusion in the first section of this page: is the surface area (m2). is the number concentration of the adsorber molecules (solute) in the bulk solution (#/m3). is diffusion coefficient of the adsorber (m2/s). is elapsed time (s). is the accumulated number of molecules in unit # molecules adsorbed during the time . The equation is named after American chemists Irving Langmuir and Vincent Schaefer. Briefly as explained in, the concentration gradient profile near a newly created (from ) absorptive surface (placed at ) in a once uniform bulk solution is solved in the above sections from Fick's equation, where is the number concentration of adsorber molecules at (#/m3). The concentration gradient at the subsurface at is simplified to the pre-exponential factor of the distribution And the rate of diffusion (flux) across area of the plane is Integrating over time, The Langmuir–Schaefer equation can be extended to the Ward–Tordai Equation to account for the "back-diffusion" of rejected molecules from the surface: where is the bulk concentration, is the sub-surface concentration (which is a function of time depending on the reaction model of the adsorption), and is a dummy variable. Monte Carlo simulations show that these two equations work to predict the adsorption rate of systems that form predictable concentration gradients near the surface but have troubles for systems without or with unpredictable concentration gradients, such as typical biosensing systems or when flow and convection are significant. 
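A rough numerical sketch of two quantities discussed in this section: the Stokes–Einstein estimate of the diffusion coefficient, and a Langmuir–Schaefer-type estimate of the number of molecules adsorbed by a perfectly absorbing patch. The prefactor 2·sqrt(Dt/π) is the commonly quoted long-time form of that equation, and all numbers below are illustrative assumptions:

```python
import numpy as np

kB = 1.380649e-23      # J/K
T = 298.0              # K
eta = 1.0e-3           # Pa*s, water near room temperature
r = 2.0e-9             # m, hydrodynamic radius (illustrative)

# Stokes-Einstein estimate of the diffusion coefficient
D = kB * T / (6.0 * np.pi * eta * r)

# Langmuir-Schaefer-type estimate of molecules adsorbed after time t on area A,
# assuming a perfectly absorbing surface in contact with bulk concentration C:
#   N(t) = 2 * A * C * sqrt(D * t / pi)
A = 1.0e-12            # m^2 (a 1 um x 1 um patch)
C = 6.0e20             # molecules / m^3 (roughly 1 uM)
t = 10.0               # s
N = 2.0 * A * C * np.sqrt(D * t / np.pi)

print("D =", D, "m^2/s")
print("adsorbed molecules after", t, "s:", N)
```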
A brief history of diffusive adsorption is shown in the accompanying figure. A noticeable challenge in understanding diffusive adsorption at the single-molecule level is the fractal nature of diffusion. Most computer simulations pick a time step for diffusion which ignores the fact that there are self-similar finer diffusion events (fractal) within each step. Simulating fractal diffusion shows that a factor-of-two correction should be introduced for the result of a fixed-time-step adsorption simulation, bringing it into consistency with the above two equations. A more problematic feature of the above equations is that they predict only the lower limit of adsorption under ideal situations; it is very difficult to predict the actual adsorption rates. The equations are derived in the long-time-limit condition, when a stable concentration gradient has formed near the surface. But real adsorption often proceeds much faster than this infinite-time limit, i.e. the concentration gradient (the decay of concentration at the sub-surface) is only partially formed before the surface has been saturated or flow is applied to maintain a certain gradient, so the measured adsorption rate is almost always faster than the equations predict for low- or zero-energy-barrier adsorption (unless there is a significant adsorption energy barrier that slows the adsorption significantly); for example, thousands to millions of times faster in the self-assembly of monolayers at the water-air or water-substrate interfaces. As such, for practical applications it is necessary to calculate the evolution of the concentration gradient near the surface and to find a proper time at which to stop the imagined infinite evolution. While it is hard to predict when to stop, it is reasonably easy to calculate the shortest time that matters: the critical time at which the first nearest neighbor from the substrate surface feels the building-up of the concentration gradient. This yields the upper limit of the adsorption rate under an ideal situation in which no factors other than diffusion affect the adsorber dynamics: where: is the adsorption rate assuming an adsorption-energy-barrier-free situation, in unit #/s, is the area of the surface of interest on an "infinite and flat" substrate (m2), is the concentration of the adsorber molecule in the bulk solution (#/m3), is the diffusion constant of the adsorber (solute) in the solution (m2/s) defined with Fick's law. This equation can be used to predict the initial adsorption rate of any system; it can be used to predict the steady-state adsorption rate of a typical biosensing system, where the binding site is just a very small fraction of the substrate surface and a near-surface concentration gradient is never formed; and it can also be used to predict the adsorption rate of molecules on a surface when a significant flow keeps the concentration gradient very shallow in the sub-surface. This critical time is significantly different from the first-passenger arrival time or the mean free-path time. Using the average first-passenger time together with Fick's law of diffusion to estimate the average binding rate will significantly over-estimate the concentration gradient, because the first passenger usually comes from many layers of neighbors away from the target, so its arrival time is significantly longer than the nearest-neighbor diffusion time. 
Using the mean free path time plus the Langmuir equation will cause an artificial concentration gradient between the initial location of the first passenger and the target surface because the other neighbor layers have no change yet, thus significantly lower estimate the actual binding time, i.e., the actual first passenger arriving time itself, the inverse of the above rate, is difficult to calculate. If the system can be simplified to 1D diffusion, then the average first passenger time can be calculated using the same nearest neighbor critical diffusion time for the first neighbor distance to be the MSD, where: (unit m) is the average nearest neighbor distance approximated as cubic packing, where is the solute concentration in the bulk solution (unit # molecule / m3), is the diffusion coefficient defined by Fick's equation (unit m2/s), is the critical time (unit s). In this critical time, it is unlikely the first passenger has arrived and adsorbed. But it sets the speed of the layers of neighbors to arrive. At this speed with a concentration gradient that stops around the first neighbor layer, the gradient does not project virtually in the longer time when the actual first passenger arrives. Thus, the average first passenger coming rate (unit # molecule/s) for this 3D diffusion simplified in 1D problem, where is a factor of converting the 3D diffusive adsorption problem into a 1D diffusion problem whose value depends on the system, e.g., a fraction of adsorption area over solute nearest neighbor sphere surface area assuming cubic packing each unit has 8 neighbors shared with other units. This example fraction converges the result to the 3D diffusive adsorption solution shown above with a slight difference in pre-factor due to different packing assumptions and ignoring other neighbors. When the area of interest is the size of a molecule (specifically, a long cylindrical molecule such as DNA), the adsorption rate equation represents the collision frequency of two molecules in a diluted solution, with one molecule a specific side and the other no steric dependence, i.e., a molecule (random orientation) hit one side of the other. The diffusion constant need to be updated to the relative diffusion constant between two diffusing molecules. This estimation is especially useful in studying the interaction between a small molecule and a larger molecule such as a protein. The effective diffusion constant is dominated by the smaller one whose diffusion constant can be used instead. The above hitting rate equation is also useful to predict the kinetics of molecular self-assembly on a surface. Molecules are randomly oriented in the bulk solution. Assuming 1/6 of the molecules has the right orientation to the surface binding sites, i.e. 1/2 of the z-direction in x, y, z three dimensions, thus the concentration of interest is just 1/6 of the bulk concentration. Put this value into the equation one should be able to calculate the theoretical adsorption kinetic curve using the Langmuir adsorption model. In a more rigid picture, 1/6 can be replaced by the steric factor of the binding geometry. The bimolecular collision frequency related to many reactions including protein coagulation/aggregation is initially described by Smoluchowski coagulation equation proposed by Marian Smoluchowski in a seminal 1916 publication, derived from Brownian motion and Fick's laws of diffusion. 
Under an idealized reaction condition for A + B → product in a diluted solution, Smoluchovski suggested that the molecular flux at the infinite time limit can be calculated from Fick's laws of diffusion yielding a fixed/stable concentration gradient from the target molecule, e.g. B is the target molecule holding fixed relatively, and A is the moving molecule that creates a concentration gradient near the target molecule B due to the coagulation reaction between A and B. Smoluchowski calculated the collision frequency between A and B in the solution with unit #/s/m3: where: is the radius of the collision, is the relative diffusion constant between A and B (m2/s), and are number concentrations of A and B respectively (#/m3). The reaction order of this bimolecular reaction is 2 which is the analogy to the result from collision theory by replacing the moving speed of the molecule with diffusive flux. In the collision theory, the traveling time between A and B is proportional to the distance which is a similar relationship for the diffusion case if the flux is fixed. However, under a practical condition, the concentration gradient near the target molecule is evolving over time with the molecular flux evolving as well, and on average the flux is much bigger than the infinite time limit flux Smoluchowski has proposed. Before the first passenger arrival time, Fick's equation predicts a concentration gradient over time which does not build up yet in reality. Thus, this Smoluchowski frequency represents the lower limit of the real collision frequency. In 2022, Chen calculates the upper limit of the collision frequency between A and B in a solution assuming the bulk concentration of the moving molecule is fixed after the first nearest neighbor of the target molecule. Thus the concentration gradient evolution stops at the first nearest neighbor layer given a stop-time to calculate the actual flux. He named this the critical time and derived the diffusive collision frequency in unit #/s/m3: where: is the area of the cross-section of the collision (m2), is the relative diffusion constant between A and B (m2/s), and are number concentrations of A and B respectively (#/m3), represents 1/<d>, where d is the average distance between two molecules. This equation assumes the upper limit of a diffusive collision frequency between A and B is when the first neighbor layer starts to feel the evolution of the concentration gradient, whose reaction order is instead of 2. Both the Smoluchowski equation and the JChen equation satisfy dimensional checks with SI units. But the former is dependent on the radius and the latter is on the area of the collision sphere. From dimensional analysis, there will be an equation dependent on the volume of the collision sphere but eventually, all equations should converge to the same numerical rate of the collision that can be measured experimentally. The actual reaction order for a bimolecular unit reaction could be between 2 and , which makes sense because the diffusive collision time is squarely dependent on the distance between the two molecules. These new equations also avoid the singularity on the adsorption rate at time zero for the Langmuir-Schaefer equation. The infinity rate is justifiable under ideal conditions because when you introduce target molecules magically in a solution of probe molecule or vice versa, there always be a probability of them overlapping at time zero, thus the rate of that two molecules association is infinity. 
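A minimal numerical sketch of the Smoluchowski diffusion-limited collision frequency described above; the radii, diffusion constants and concentrations are illustrative assumptions, not values from the article:

```python
import numpy as np

# Smoluchowski estimate of the diffusion-limited collision frequency between
# species A and B in dilute solution (collisions per second per cubic metre):
#   Z = 4 * pi * R * D_rel * C_A * C_B
# with R the collision radius (sum of the two radii) and D_rel = D_A + D_B.
r_A, r_B = 1.0e-9, 2.0e-9          # m
D_A, D_B = 2.0e-10, 1.0e-10        # m^2/s
C_A, C_B = 6.0e20, 6.0e20          # molecules / m^3

R = r_A + r_B
D_rel = D_A + D_B
Z = 4.0 * np.pi * R * D_rel * C_A * C_B
print(Z, "collisions per second per m^3")
```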
In this idealized picture it does not matter that millions of other molecules have to wait for their first partner to diffuse and arrive; the average rate is then formally infinite. Statistically, however, this argument is meaningless. The maximum rate for any one molecule over a period of time larger than zero is 1 (it either meets a partner or it does not), so the infinite rate at time zero for that molecule pair should really just be counted as one, making the average rate on the order of one in millions and statistically negligible. This also ignores the fact that, in reality, no two molecules can magically meet at time zero. Biological perspective The first law gives rise to the following formula: where is the permeability, an experimentally determined membrane "conductance" for a given gas at a given temperature, is the difference in concentration of the gas across the membrane for the direction of flow (from to ). Fick's first law is also important in radiation transfer equations. However, in this context it becomes inaccurate when the diffusion constant is low and the radiation becomes limited by the speed of light rather than by the resistance of the material through which the radiation is flowing. In this situation, one can use a flux limiter. The exchange rate of a gas across a fluid membrane can be determined by using this law together with Graham's law. Under the condition of a dilute solution, when diffusion takes control, the membrane permeability mentioned in the section above can be theoretically calculated for the solute using the equation mentioned in the last section (use it with particular care, because the equation is derived for dense solutes, while biological molecules are not denser than water; the equation also assumes that an ideal concentration gradient forms near the membrane and evolves): where: is the total area of the pores on the membrane (unit m2), is the transmembrane efficiency (unitless), which can be calculated from the stochastic theory of chromatography, D is the diffusion constant of the solute (unit m2⋅s−1), t is time (unit s), c2, c1 are concentrations, which should use the unit mol m−3, so that the flux unit becomes mol s−1. The flux decays with the square root of time because a concentration gradient builds up near the membrane over time under ideal conditions. When there is flow and convection, the flux can differ significantly from what the equation predicts and show an effective time t with a fixed value, which makes the flux stable instead of decaying over time. A critical time has been estimated under idealized flow conditions in which no gradient is formed. This strategy is adopted in biology, for example in blood circulation. Semiconductor fabrication applications The semiconductor is a collective term for a series of devices, mainly falling into three categories: two-terminal devices, three-terminal devices, and four-terminal devices. A combination of such devices is called an integrated circuit. The relationship between Fick's law and semiconductors is that the operation of a semiconductor device depends on transferring chemicals or dopants from one layer to another. Fick's law can be used to control and predict this diffusion by quantifying mathematically how much the concentration of the dopants or chemicals moves per meter and per second, so that different types and levels of semiconductors can be fabricated. Integrated circuit fabrication technologies and model processes like CVD, thermal oxidation, wet oxidation, doping, etc. use diffusion equations obtained from Fick's law. 
CVD method of fabricate semiconductor The wafer is a kind of semiconductor whose silicon substrate is coated with a layer of CVD-created polymer chain and films. This film contains n-type and p-type dopants and takes responsibility for dopant conductions. The principle of CVD relies on the gas phase and gas-solid chemical reaction to create thin films. The viscous flow regime of CVD is driven by a pressure gradient. CVD also includes a diffusion component distinct from the surface diffusion of adatoms. In CVD, reactants and products must also diffuse through a boundary layer of stagnant gas that exists next to the substrate. The total number of steps required for CVD film growth are gas phase diffusion of reactants through the boundary layer, adsorption and surface diffusion of adatoms, reactions on the substrate, and gas phase diffusion of products away through the boundary layer. The velocity profile for gas flow is: where: is the thickness, is the Reynolds number, is the length of the substrate, at any surface, is viscosity, is density. Integrated the from to , it gives the average thickness: To keep the reaction balanced, reactants must diffuse through the stagnant boundary layer to reach the substrate. So a thin boundary layer is desirable. According to the equations, increasing vo would result in more wasted reactants. The reactants will not reach the substrate uniformly if the flow becomes turbulent. Another option is to switch to a new carrier gas with lower viscosity or density. The Fick's first law describes diffusion through the boundary layer. As a function of pressure (p) and temperature (T) in a gas, diffusion is determined. where: is the standard pressure, is the standard temperature, is the standard diffusitivity. The equation tells that increasing the temperature or decreasing the pressure can increase the diffusivity. Fick's first law predicts the flux of the reactants to the substrate and product away from the substrate: where: is the thickness , is the first reactant's concentration. In ideal gas law , the concentration of the gas is expressed by partial pressure. where is the gas constant, is the partial pressure gradient. As a result, Fick's first law tells us we can use a partial pressure gradient to control the diffusivity and control the growth of thin films of semiconductors. In many realistic situations, the simple Fick's law is not an adequate formulation for the semiconductor problem. It only applies to certain conditions, for example, given the semiconductor boundary conditions: constant source concentration diffusion, limited source concentration, or moving boundary diffusion (where junction depth keeps moving into the substrate). Invalidity of Fickian diffusion Even though Fickian diffusion has been used to model diffusion processes in semiconductor manufacturing (including CVD reactors) in early days, it often fails to validate the diffusion in advanced semiconductor nodes (< 90 nm). This mostly stems from the inability of Fickian diffusion to model diffusion processes accurately at molecular level and smaller. In advanced semiconductor manufacturing, it is important to understand the movement at atomic scales, which is failed by continuum diffusion. Today, most semiconductor manufacturers use random walk to study and model diffusion processes. This allows us to study the effects of diffusion in a discrete manner to understand the movement of individual atoms, molecules, plasma etc. 
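A toy version of the random-walk picture introduced here (and continued below): an unbiased one-dimensional lattice walk whose mean squared displacement grows linearly with the number of steps, the discrete analogue of Fickian spreading (MSD = 2Dt); the walker counts and step numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each walker hops +1 or -1 per step; the mean squared displacement after
# n_steps is approximately n_steps, the discrete analogue of MSD = 2*D*t.
n_walkers, n_steps = 10_000, 1_000
steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
positions = steps.cumsum(axis=1)

msd = (positions[:, -1] ** 2).mean()
print(msd, "is approximately", n_steps)
```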
In such a process, the movements of diffusing species (atoms, molecules, plasma etc.) are treated as a discrete entity, following a random walk through the CVD reactor, boundary layer, material structures etc. Sometimes, the movements might follow a biased-random walk depending on the processing conditions. Statistical analysis is done to understand variation/stochasticity arising from the random walk of the species, which in-turn affects the overall process and electrical variations. Food production and cooking The formulation of Fick's first law can explain a variety of complex phenomena in the context of food and cooking: Diffusion of molecules such as ethylene promotes plant growth and ripening, salt and sugar molecules promotes meat brining and marinating, and water molecules promote dehydration. Fick's first law can also be used to predict the changing moisture profiles across a spaghetti noodle as it hydrates during cooking. These phenomena are all about the spontaneous movement of particles of solutes driven by the concentration gradient. In different situations, there is different diffusivity which is a constant. By controlling the concentration gradient, the cooking time, shape of the food, and salting can be controlled. See also Advection Churchill–Bernstein equation Diffusion False diffusion Gas exchange Mass flux Maxwell–Stefan diffusion Nernst–Planck equation Osmosis Citations Further reading – reprinted in External links Fick's equations, Boltzmann's transformation, etc. (with figures and animations) Fick's Second Law on OpenStax Diffusion Eponymous laws of physics Mathematics in medicine Physical chemistry Statistical mechanics de:Diffusion#Erstes Fick'sches Gesetz
Fick's laws of diffusion
[ "Physics", "Chemistry", "Mathematics" ]
7,467
[ "Transport phenomena", "Physical phenomena", "Applied and interdisciplinary physics", "Diffusion", "Applied mathematics", "nan", "Statistical mechanics", "Mathematics in medicine", "Physical chemistry" ]
11,712
https://en.wikipedia.org/wiki/Facilitated%20diffusion
Facilitated diffusion (also known as facilitated transport or passive-mediated transport) is the process of spontaneous passive transport (as opposed to active transport) of molecules or ions across a biological membrane via specific transmembrane integral proteins. Being passive, facilitated transport does not directly require chemical energy from ATP hydrolysis in the transport step itself; rather, molecules and ions move down their concentration gradient according to the principles of diffusion. Facilitated diffusion differs from simple diffusion in several ways: The transport relies on molecular binding between the cargo and the membrane-embedded channel or carrier protein. The rate of facilitated diffusion is saturable with respect to the concentration difference between the two phases; unlike free diffusion which is linear in the concentration difference. The temperature dependence of facilitated transport is substantially different due to the presence of an activated binding event, as compared to free diffusion where the dependence on temperature is mild. Polar molecules and large ions dissolved in water cannot diffuse freely across the plasma membrane due to the hydrophobic nature of the fatty acid tails of the phospholipids that comprise the lipid bilayer. Only small, non-polar molecules, such as oxygen and carbon dioxide, can diffuse easily across the membrane. Hence, small polar molecules are transported by proteins in the form of transmembrane channels. These channels are gated, meaning that they open and close, and thus deregulate the flow of ions or small polar molecules across membranes, sometimes against the osmotic gradient. Larger molecules are transported by transmembrane carrier proteins, such as permeases, that change their conformation as the molecules are carried across (e.g. glucose or amino acids). Non-polar molecules, such as retinol or lipids, are poorly soluble in water. They are transported through aqueous compartments of cells or through extracellular space by water-soluble carriers (e.g. retinol binding protein). The metabolites are not altered because no energy is required for facilitated diffusion. Only permease changes its shape in order to transport metabolites. The form of transport through a cell membrane in which a metabolite is modified is called group translocation transportation. Glucose, sodium ions, and chloride ions are just a few examples of molecules and ions that must efficiently cross the plasma membrane but to which the lipid bilayer of the membrane is virtually impermeable. Their transport must therefore be "facilitated" by proteins that span the membrane and provide an alternative route or bypass mechanism. Some examples of proteins that mediate this process are glucose transporters, organic cation transport proteins, urea transporter, monocarboxylate transporter 8 and monocarboxylate transporter 10. In vivo model of facilitated diffusion Many physical and biochemical processes are regulated by diffusion. Facilitated diffusion is one form of diffusion and it is important in several metabolic processes. Facilitated diffusion is the main mechanism behind the binding of Transcription Factors (TFs) to designated target sites on the DNA molecule. The in vitro model, which is a very well known method of facilitated diffusion, that takes place outside of a living cell, explains the 3-dimensional pattern of diffusion in the cytosol and the 1-dimensional diffusion along the DNA contour. 
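The contrast drawn earlier in this article between saturable carrier-mediated transport and simple diffusion (linear in the concentration difference) is often illustrated with a Michaelis–Menten-like rate law; the article does not specify this functional form, so the expression and constants below are illustrative assumptions only:

```python
import numpy as np

# Illustrative comparison of simple diffusion (linear in the concentration
# difference) with a saturable, carrier-mediated flux of Michaelis-Menten form.
# J_max, K_m and P are hypothetical constants, not values from the article.
dC = np.linspace(0.0, 20.0, 5)       # concentration difference (arbitrary units)

P = 0.5                              # permeability-like constant, simple diffusion
J_simple = P * dC                    # grows without bound

J_max, K_m = 5.0, 2.0                # saturable carrier parameters
J_carrier = J_max * dC / (K_m + dC)  # approaches J_max as dC grows

for d, js, jc in zip(dC, J_simple, J_carrier):
    print(f"dC={d:5.1f}  simple={js:5.2f}  carrier={jc:5.2f}")
```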
After carrying out extensive research on processes occurring out of the cell, this mechanism was generally accepted but there was a need to verify that this mechanism could take place in vivo or inside of living cells. Bauer & Metzler (2013) therefore carried out an experiment using a bacterial genome in which they investigated the average time for TF – DNA binding to occur. After analyzing the process for the time it takes for TF's to diffuse across the contour and cytoplasm of the bacteria's DNA, it was concluded that in vitro and in vivo are similar in that the association and dissociation rates of TF's to and from the DNA are similar in both. Also, on the DNA contour, the motion is slower and target sites are easy to localize while in the cytoplasm, the motion is faster but the TF's are not sensitive to their targets and so binding is restricted. Intracellular facilitated diffusion Single-molecule imaging is an imaging technique which provides an ideal resolution necessary for the study of the Transcription factor binding mechanism in living cells. In prokaryotic bacteria cells such as E. coli, facilitated diffusion is required in order for regulatory proteins to locate and bind to target sites on DNA base pairs. There are 2 main steps involved: the protein binds to a non-specific site on the DNA and then it diffuses along the DNA chain until it locates a target site, a process referred to as sliding. According to Brackley et al. (2013), during the process of protein sliding, the protein searches the entire length of the DNA chain using 3-D and 1-D diffusion patterns. During 3-D diffusion, the high incidence of Crowder proteins creates an osmotic pressure which brings searcher proteins (e.g. Lac Repressor) closer to the DNA to increase their attraction and enable them to bind, as well as steric effect which exclude the Crowder proteins from this region (Lac operator region). Blocker proteins participate in 1-D diffusion only i.e. bind to and diffuse along the DNA contour and not in the cytosol. Facilitated diffusion of proteins on chromatin The in vivo model mentioned above clearly explains 3-D and 1-D diffusion along the DNA strand and the binding of proteins to target sites on the chain. Just like prokaryotic cells, in eukaryotes, facilitated diffusion occurs in the nucleoplasm on chromatin filaments, accounted for by the switching dynamics of a protein when it is either bound to a chromatin thread or when freely diffusing in the nucleoplasm. In addition, given that the chromatin molecule is fragmented, its fractal properties need to be considered. After calculating the search time for a target protein, alternating between the 3-D and 1-D diffusion phases on the chromatin fractal structure, it was deduced that facilitated diffusion in eukaryotes precipitates the searching process and minimizes the searching time by increasing the DNA-protein affinity. For oxygen The oxygen affinity with hemoglobin on red blood cell surfaces enhances this bonding ability. In a system of facilitated diffusion of oxygen, there is a tight relationship between the ligand which is oxygen and the carrier which is either hemoglobin or myoglobin. This mechanism of facilitated diffusion of oxygen by hemoglobin or myoglobin was discovered and initiated by Wittenberg and Scholander. They carried out experiments to test for the steady-state of diffusion of oxygen at various pressures. Oxygen-facilitated diffusion occurs in a homogeneous environment where oxygen pressure can be relatively controlled. 
For oxygen diffusion to occur, the oxygen pressure must be higher (closer to saturation) on one side of the membrane and lower (reduced) on the other side; that is, one side of the membrane must have a higher concentration. During facilitated diffusion, hemoglobin increases the rate of the constant diffusion of oxygen, and facilitated diffusion occurs when an oxyhemoglobin molecule is randomly displaced. For carbon monoxide Facilitated diffusion of carbon monoxide is similar to that of oxygen. Carbon monoxide also combines with hemoglobin and myoglobin, but carbon monoxide has a dissociation velocity that is 100 times less than that of oxygen. Compared to oxygen, its affinity is 40 times higher for myoglobin and 250 times higher for hemoglobin. For glucose Since glucose is a large molecule, its diffusion across a membrane is difficult. Hence, it diffuses across membranes through facilitated diffusion, down the concentration gradient. The carrier protein at the membrane binds to the glucose and alters its shape so that it can be transported more easily. Movement of glucose into the cell can be rapid or slow depending on the number of membrane-spanning proteins. Glucose can also be moved against its concentration gradient by a sodium-dependent glucose symporter, in which the sodium gradient provides the driving force for glucose uptake into the cell. Facilitated diffusion helps in the release of accumulated glucose into the extracellular space adjacent to the blood capillary. See also Major facilitator superfamily References External links Facilitated Diffusion – Description and Animation Facilitated Diffusion – Definition and Supplement Diffusion Transport proteins
Facilitated diffusion
[ "Physics", "Chemistry" ]
1,727
[ "Transport phenomena", "Physical phenomena", "Diffusion" ]
12,024
https://en.wikipedia.org/wiki/General%20relativity
General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of second-order partial differential equations. Newton's law of universal gravitation, which describes classical gravity, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, are beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far, all tests of general relativity have been shown to be in agreement with the theory. The time-dependent solutions of general relativity enable us to talk about the history of the universe and have provided the modern framework for cosmology, thus leading to the discovery of the Big Bang and cosmic microwave background radiation. Despite the introduction of a number of alternative theories, general relativity continues to be the simplest theory consistent with experimental data. Reconciliation of general relativity with the laws of quantum physics remains a problem, however, as there is a lack of a self-consistent theory of quantum gravity. It is not yet known how gravity can be unified with the three non-gravitational forces: strong, weak and electromagnetic. Einstein's theory has astrophysical implications, including the prediction of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to be powered by stellar black holes and supermassive black holes, respectively. It also predicts gravitational lensing, where the bending of light results in multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the basis of cosmological models of an expanding universe. Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories. History Henri Poincaré's 1905 theory of the dynamics of the electron was a relativistic theory which he applied to all forces, including gravity. While others thought that gravity was instantaneous or of electromagnetic origin, he suggested that relativity was "something due to our methods of measurement". In his theory, he showed that gravitational waves propagate at the speed of light. Soon afterwards, Einstein started thinking about how to incorporate gravity into his relativistic framework.
In 1907, beginning with a simple thought experiment involving an observer in free fall (FFO), he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations, which form the core of Einstein's general theory of relativity. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913. The Einstein field equations are nonlinear and considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is now associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that the universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which the universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life. During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"), and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919, instantly making Einstein famous. Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology also became amenable to direct observational tests. General relativity has acquired a reputation as a theory of extraordinary beauty. 
Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency. In the preface to Relativity: The Special and the General Theory, Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form, and on the whole, in the sequence and connection in which they actually originated." From classical mechanics to general relativity General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity. Geometry of Newtonian gravity At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime. Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. 
According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration. Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass. Relativistic generalization As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena. With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent.
In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry. Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry. A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity. The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish). Einstein's equations Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear. 
Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations: $G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \kappa\,T_{\mu\nu}$. On the left-hand side is the Einstein tensor, $G_{\mu\nu}$, which is symmetric and a specific divergence-free combination of the Ricci tensor $R_{\mu\nu}$ and the metric. In particular, $R = g^{\mu\nu}R_{\mu\nu}$ is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as $R_{\mu\nu} = R^{\alpha}{}_{\mu\alpha\nu}$. On the right-hand side, $\kappa$ is a constant and $T_{\mu\nu}$ is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant is found to be $\kappa = \frac{8\pi G}{c^4}$, where $G$ is the Newtonian constant of gravitation and $c$ the speed of light in vacuum. When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations, $R_{\mu\nu} = 0$. In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic. The geodesic equation is: $\frac{d^{2}x^{\mu}}{ds^{2}} + \Gamma^{\mu}{}_{\alpha\beta}\frac{dx^{\alpha}}{ds}\frac{dx^{\beta}}{ds} = 0$, where $s$ is a scalar parameter of motion (e.g. the proper time), and $\Gamma^{\mu}{}_{\alpha\beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), which are symmetric in the two lower indices. Greek indices may take the values: 0, 1, 2, 3 and the summation convention is used for repeated indices $\alpha$ and $\beta$. The quantity on the left-hand side of this equation is the acceleration of a particle, and so this equation is analogous to Newton's laws of motion which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four spacetime coordinates, and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation. Total force in general relativity In general relativity, the effective gravitational potential energy of an object of mass m revolving around a massive central body M is given by $U_f(r) = -\frac{GMm}{r} + \frac{L^2}{2mr^2} - \frac{GML^2}{mc^2 r^3}$. A conservative total force can then be obtained as its negative gradient, $F_f(r) = -\frac{GMm}{r^2} + \frac{L^2}{mr^3} - \frac{3GML^2}{mc^2 r^4}$, where L is the angular momentum. The first term represents the force of Newtonian gravity, which is described by the inverse-square law. The second term represents the centrifugal force in the circular motion. The third term represents the relativistic effect.
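To give a sense of scale for the three force terms above, the short sketch below evaluates them for a Mercury-like circular orbit around the Sun, and also prints the constant κ = 8πG/c⁴. This is an illustrative estimate, not a calculation from the article; the orbital parameters are rounded assumptions.

```python
# Rough numerical sketch (illustrative): evaluate the three terms of the effective
# radial force quoted above for a Mercury-like circular orbit around the Sun.
# Orbital parameters and constants are approximate assumptions.
import math

G = 6.674e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M = 1.989e30           # solar mass, kg
m = 3.301e23           # Mercury's mass, kg (approximate)
r = 5.79e10            # Mercury's semi-major axis, m (approximate)

L = m * math.sqrt(G * M * r)            # angular momentum of a circular orbit

newtonian    = -G * M * m / r**2                         # inverse-square attraction
centrifugal  =  L**2 / (m * r**3)                        # centrifugal term
relativistic = -3 * G * M * L**2 / (m * c**2 * r**4)     # GR correction

print(f"Newtonian term    : {newtonian: .3e} N")
print(f"Centrifugal term  : {centrifugal: .3e} N")
print(f"Relativistic term : {relativistic: .3e} N")
print(f"kappa = 8*pi*G/c^4 = {8 * math.pi * G / c**4:.3e} s^2 m^-1 kg^-1")
```

The relativistic term comes out roughly eight orders of magnitude smaller than the Newtonian term, which is why its cumulative effect shows up only in delicate measurements such as perihelion precession.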
Alternatives to general relativity There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory. Definition and basic applications The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building. Definition and basic properties General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve. While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation. As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance. Model-building The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present. Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. 
The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture). Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories. Consequences of Einstein's theory General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication. Gravitational time dilation and frequency shift Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation. Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid. 
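The GPS example mentioned above can be estimated with a hedged back-of-the-envelope calculation: in the weak-field limit the fractional clock-rate offset is approximately Φ/c² from the gravitational potential, plus −v²/2c² from orbital motion. The orbital radius and constants below are approximate assumptions, not values taken from this article.

```python
# Hedged back-of-the-envelope sketch: weak-field estimate of how much faster a
# GPS satellite clock runs per day relative to a clock on the ground, combining
# gravitational time dilation with the special-relativistic velocity effect.
import math

GM_EARTH = 3.986e14     # m^3/s^2
R_EARTH  = 6.371e6      # m, mean Earth radius
R_GPS    = 2.657e7      # m, GPS orbital radius (approximate)
C        = 2.998e8      # m/s

# Gravitational potential difference (weak-field approximation d(tau)/dt ~ 1 + phi/c^2)
gravitational = (-GM_EARTH / R_GPS - (-GM_EARTH / R_EARTH)) / C**2

# Orbital speed of the satellite gives a special-relativistic slow-down
v_orbit = math.sqrt(GM_EARTH / R_GPS)
velocity = -v_orbit**2 / (2 * C**2)

seconds_per_day = 86400
print(f"gravitational blueshift : {gravitational * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"velocity time dilation  : {velocity * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"net offset              : {(gravitational + velocity) * seconds_per_day * 1e6:+.1f} microseconds/day")
# The net result, roughly +38 microseconds per day, is the size of the correction
# that satellite navigation systems must build in.
```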
Light deflection and gravitational time delay General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a massive object. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun. This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity. Closely related to light deflection is the Shapiro time delay, the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space. Gravitational waves In 1916, Albert Einstein predicted the existence of gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. They constitute one of several analogies between weak-field gravity and electromagnetism, being the gravitational counterpart of electromagnetic waves. On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of black holes merging. The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion (animated image to the right). Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by 10⁻²¹ or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed. Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models. Orbital effects and the relativity of direction General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction.
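As a quick illustration of the "half value" comparison made above, the following sketch evaluates the general-relativistic deflection of a light ray grazing the Sun, 4GM/(c²b), against the equivalence-principle-only estimate of half that amount. The constants are approximate, and taking the impact parameter equal to the solar radius is an assumption made for illustration.

```python
# Hedged illustrative sketch: deflection angle of a light ray grazing the Sun.
# General relativity gives 4GM/(c^2 b); a Newtonian / equivalence-principle-only
# calculation gives half that value.
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.998e8           # m/s
M_SUN = 1.989e30      # kg
R_SUN = 6.96e8        # m, impact parameter for a grazing ray (approximate)

deflection_gr = 4 * G * M_SUN / (c**2 * R_SUN)      # radians
deflection_newtonian = deflection_gr / 2            # the "half value" mentioned above

rad_to_arcsec = 180 / math.pi * 3600
print(f"GR prediction    : {deflection_gr * rad_to_arcsec:.2f} arcseconds")
print(f"Half (Newtonian) : {deflection_newtonian * rad_to_arcsec:.2f} arcseconds")
# ~1.75 arcseconds, the value confirmed by the 1919 eclipse expedition.
```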
Precession of apsides In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations. The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude. In general relativity the perihelion shift $\sigma$, expressed in radians per revolution, is approximately given by $\sigma = \frac{24\pi^{3}a^{2}}{T^{2}c^{2}\left(1 - e^{2}\right)}$, where: $a$ is the semi-major axis, $T$ is the orbital period, $c$ is the speed of light in vacuum, and $e$ is the orbital eccentricity. Orbital decay According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation. The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, where both stars are pulsars and which was last reported to also be in agreement with general relativity in 2021 after 16 years of observations. Geodetic precession and frame-dragging Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%. Near a rotating mass, there are gravitomagnetic or frame-dragging effects.
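Plugging Mercury's approximate orbital elements into the perihelion-shift formula above reproduces the famous anomalous advance. The orbital values in the sketch below are rounded, illustrative inputs rather than precise ephemeris data.

```python
# Hedged numerical check: Mercury's relativistic perihelion advance from the
# formula quoted above, using approximate orbital elements.
import math

a = 5.79e10            # semi-major axis, m (approximate)
T = 87.969 * 86400     # orbital period, s (approximate)
e = 0.2056             # orbital eccentricity (approximate)
c = 2.998e8            # speed of light, m/s

sigma = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))   # radians per revolution

revolutions_per_century = 36525 * 86400 / T
arcsec = sigma * revolutions_per_century * 180 / math.pi * 3600
print(f"perihelion shift per orbit : {sigma:.3e} rad")
print(f"per century                : {arcsec:.1f} arcseconds")
# ~43 arcseconds per century, matching the anomalous advance identified by Le Verrier.
```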
A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. The Mars Global Surveyor probe around Mars has also been used. Astrophysical applications Gravitational lensing The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs. The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed. Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies. Gravitational-wave astronomy Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10⁻⁹ to 10⁻⁶ hertz frequency range, which originate from binary supermassive black holes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015. Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger. Black holes and other compact objects Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars.
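For the lensing geometry described above, a point-mass lens produces images separated by roughly twice the angular Einstein radius. The sketch below evaluates that radius for a galaxy-scale lens; the lens mass and distances are rough assumptions, and the flat-space distance shortcut ignores cosmological corrections.

```python
# Hedged illustrative sketch: angular Einstein radius for a simple point-mass lens,
# theta_E = sqrt( (4GM/c^2) * D_ls / (D_l * D_s) ). Lens mass and distances are
# rough assumptions; D_ls = D_s - D_l is a flat-space shortcut.
import math

G = 6.674e-11
c = 2.998e8
M_SUN = 1.989e30
GPC = 3.086e25                    # one gigaparsec in metres

M_lens = 1e12 * M_SUN             # galaxy-scale lens, assumption
D_l, D_s = 1.0 * GPC, 2.0 * GPC   # observer-lens and observer-source distances, assumption
D_ls = D_s - D_l                  # lens-source distance (flat-space shortcut, assumption)

theta_E = math.sqrt(4 * G * M_lens / c**2 * D_ls / (D_l * D_s))
print(f"Einstein radius ~ {theta_E * 180 / math.pi * 3600:.1f} arcseconds")
# Of order an arcsecond, typical of observed galaxy-galaxy strong lenses.
```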
Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures. Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed. General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory. Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events–and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry. Cosmology The current models of cosmology are based on Einstein's field equations, which include the cosmological constant $\Lambda$ since it has important influence on the large-scale dynamics of the cosmos, $R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa\,T_{\mu\nu}$, where $g_{\mu\nu}$ is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation. Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.
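The roughly 14-billion-year age quoted above can be reproduced from the Friedmann equation with a few lines of numerical integration. The Hubble constant and density parameters below are representative values chosen for illustration, not figures taken from this article.

```python
# Hedged sketch: age of a flat FLRW universe from the Friedmann equation,
# t0 = integral from a=0 to a=1 of da / (a * H(a)), for illustrative parameters.
import math

H0 = 67.7 * 1000 / 3.086e22        # Hubble constant, converted from km/s/Mpc to 1/s (assumed)
OMEGA_M, OMEGA_L = 0.31, 0.69      # matter and cosmological-constant fractions, assumed flat

def hubble(a):
    """H(a) for a flat universe with matter and a cosmological constant (radiation neglected)."""
    return H0 * math.sqrt(OMEGA_M / a**3 + OMEGA_L)

# Simple midpoint integration of dt = da / (a * H(a)) from a ~ 0 to a = 1 (today)
N = 100000
age = sum(1.0 / (a * hubble(a)) for a in ((i + 0.5) / N for i in range(N))) / N

print(f"age of the universe ~ {age / (3.156e7 * 1e9):.1f} billion years")
# Roughly 13.8 billion years for these parameter choices.
```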
An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10⁻³³ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below). Exotic solutions: time travel, warp drives Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. Stephen Hawking introduced the chronology protection conjecture, an assumption beyond those of standard general relativity that would prevent time travel. Some exact solutions in general relativity, such as the Alcubierre drive, provide examples of a warp drive, but these solutions require an exotic matter distribution and generally suffer from semiclassical instability. Advanced concepts Asymptotic symmetries The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group. In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present.
This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances. It turns out that the BMS symmetry, suitably modified, could be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries. Causal structure and global geometry In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event A can reach any other location X before light sent out at A to X. In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams. Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results. Horizons Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's horizon, is not a physical barrier. Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass–energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. 
Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple. Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for the black hole area to decrease as long as other processes ensure that entropy increases overall. As thermodynamical objects with nonzero temperature, black holes should emit thermal radiation. Semiclassical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below). There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation. Singularities Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well. Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. 
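To make the thermodynamic analogy above concrete, the sketch below evaluates the horizon radius, horizon area and Hawking temperature of a one-solar-mass black hole from the standard Schwarzschild and Hawking formulae; constants are approximate and the example is illustrative rather than taken from the article.

```python
# Hedged illustrative sketch: Schwarzschild radius, horizon area and Hawking
# temperature for a black hole of one solar mass, using r_s = 2GM/c^2 and
# T_H = hbar * c^3 / (8 * pi * G * M * k_B).
import math

G = 6.674e-11
c = 2.998e8
HBAR = 1.055e-34
K_B = 1.381e-23
M = 1.989e30                      # one solar mass, kg

r_s = 2 * G * M / c**2            # horizon (Schwarzschild) radius
area = 4 * math.pi * r_s**2       # horizon area, which the classical area law says never decreases
t_hawking = HBAR * c**3 / (8 * math.pi * G * M * K_B)

print(f"Schwarzschild radius : {r_s/1000:.2f} km")
print(f"Horizon area         : {area:.3e} m^2")
print(f"Hawking temperature  : {t_hawking:.2e} K")
# ~3 km, ~1e8 m^2 and ~6e-8 K; heavier holes are larger but colder (T ~ 1/M).
```

Because the Hawking temperature scales inversely with mass, astrophysical black holes are far colder than the cosmic microwave background, so their evaporation is negligible today; the thermodynamic interpretation nonetheless ties the area law to entropy as described above.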
However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity. Evolution equations Each solution of Einstein's equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories. To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity. Global and quasi-local quantities The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy. Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture. Relationship with quantum theory If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question. 
Quantum field theory in curved spacetime Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes. Quantum gravity The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist. Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability"). One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value-formulation of general relativity (cf. evolution equations above), the result is the Wheeler–deWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps. 
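For orientation, the Wheeler–DeWitt equation mentioned above can be written schematically (a sketch; the precise form depends on the choice of variables and on the regularization whose absence makes the equation ill-defined) as a constraint on a wave functional of spatial geometries rather than an evolution equation in an external time:

```latex
\hat{\mathcal{H}}\,\Psi[h_{ij}] = 0
```

Here $h_{ij}$ is the metric of a spatial slice and $\hat{\mathcal{H}}$ the quantized Hamiltonian constraint; the absence of an explicit time parameter is one facet of the so-called problem of time in canonical quantum gravity.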
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge calculus, dynamical triangulations, causal sets, twistor models or the path integral based models of quantum cosmology. All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available. Current status General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics. Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on 14 September 2015. A century after its introduction, general relativity remains a highly active area of research. 
General relativity
[ "Physics", "Astronomy" ]
12,128
[ "Concepts in astronomy", "General relativity", "Theory of relativity" ]
12,100
https://en.wikipedia.org/wiki/Graviton
In theories of quantum gravity, the graviton is the hypothetical elementary particle that mediates the force of gravitational interaction. There is no complete quantum field theory of gravitons due to an outstanding mathematical problem with renormalization in general relativity. In string theory, believed by some to be a consistent theory of quantum gravity, the graviton is a massless state of a fundamental string. If it exists, the graviton is expected to be massless because the gravitational force has a very long range, and appears to propagate at the speed of light. The graviton must be a spin-2 boson because the source of gravitation is the stress–energy tensor, a second-order tensor (compared with electromagnetism's spin-1 photon, the source of which is the four-current, a first-order tensor). Additionally, it can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field would couple to the stress–energy tensor in the same way gravitational interactions do. This result suggests that, if a massless spin-2 particle is discovered, it must be the graviton. Theory It is hypothesized that gravitational interactions are mediated by an as yet undiscovered elementary particle, dubbed the graviton. The three other known forces of nature are mediated by elementary particles: electromagnetism by the photon, the strong interaction by gluons, and the weak interaction by the W and Z bosons. All three of these forces appear to be accurately described by the Standard Model of particle physics. In the classical limit, a successful theory of gravitons would reduce to general relativity, which itself reduces to Newton's law of gravitation in the weak-field limit. History Albert Einstein discussed quantized gravitational radiation in 1916, the year following his publication of general relativity. The term graviton was coined in 1934 by Soviet physicists Dmitry Blokhintsev and . Paul Dirac reintroduced the term in a number of lectures in 1959, noting that the energy of the gravitational field should come in quanta. A mediation of the gravitational interaction by particles was anticipated by Pierre-Simon Laplace. Just like Newton's anticipation of photons, Laplace's anticipated "gravitons" had a greater speed than the speed of light in vacuum, which is the speed at which gravitons are expected to propagate in modern theories, and were not connected to quantum mechanics or special relativity, since those theories did not yet exist during Laplace's lifetime. Gravitons and renormalization When describing graviton interactions, the classical theory of Feynman diagrams and semiclassical corrections such as one-loop diagrams behave normally. However, Feynman diagrams with at least two loops lead to ultraviolet divergences. These infinite results cannot be removed because quantized general relativity is not perturbatively renormalizable, unlike quantum electrodynamics and models such as the Yang–Mills theory. Therefore, the perturbation method by which physicists calculate the probability that a particle will emit or absorb gravitons yields incalculable answers, and the theory loses predictive power. Those problems, and the limitations of the complementary approximation framework, indicate that a theory more unified than quantized general relativity is required to describe the behavior near the Planck scale. 
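A common heuristic for why the perturbative expansion fails (a sketch of my own, not taken from the article) is that the dimensionless strength of graviton interactions grows with energy, reaching order one at the Planck scale, so that higher-loop contributions are no longer suppressed.

```python
# Heuristic sketch: effective dimensionless gravitational coupling
# alpha_g(E) = G * E^2 / (hbar * c^5). It is minuscule at accelerator energies
# but reaches ~1 at the Planck energy, one way to see why perturbative
# quantized general relativity breaks down there.
G    = 6.67430e-11        # m^3 kg^-1 s^-2
hbar = 1.054571817e-34    # J*s
c    = 2.99792458e8       # m/s
GeV  = 1.602176634e-10    # joules per GeV

def alpha_g(energy_gev: float) -> float:
    E = energy_gev * GeV
    return G * E**2 / (hbar * c**5)

for E in (1.0, 1e4, 1.22e19):   # ~proton-mass scale, collider scale, Planck energy
    print(E, alpha_g(E))        # ~7e-39, ~7e-31, ~1
```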
Comparison with other forces Like the force carriers of the other forces (see photon, gluon, W and Z bosons), the graviton plays a role in general relativity, in defining the spacetime in which events take place. In some descriptions energy modifies the "shape" of spacetime itself, and gravity is a result of this shape, an idea which at first glance may appear hard to match with the idea of a force acting between particles. Because the diffeomorphism invariance of the theory does not allow any particular space-time background to be singled out as the "true" space-time background, general relativity is said to be background-independent. In contrast, the Standard Model is not background-independent, with Minkowski space enjoying a special status as the fixed background space-time. A theory of quantum gravity is needed in order to reconcile these differences. Whether this theory should be background-independent is an open question. The answer to this question will determine the understanding of what specific role gravitation plays in the fate of the universe. Energy and wavelength While gravitons are presumed to be massless, they would still carry energy, as does any other quantum particle. Photon energy and gluon energy are also carried by massless particles. It is unclear which variables might determine graviton energy, the amount of energy carried by a single graviton. Alternatively, if gravitons are massive at all, the analysis of gravitational waves yielded a new upper bound on the mass of gravitons. The graviton's Compton wavelength is at least , or about 1.6 light-years, corresponding to a graviton mass of no more than . This relation between wavelength and mass-energy is calculated with the Planck–Einstein relation, the same formula that relates electromagnetic wavelength to photon energy. Experimental observation Unambiguous detection of individual gravitons, though not prohibited by any fundamental law, has been thought to be impossible with any physically reasonable detector. The reason is the extremely low cross section for the interaction of gravitons with matter. For example, a detector with the mass of Jupiter and 100% efficiency, placed in close orbit around a neutron star, would only be expected to observe one graviton every 10 years, even under the most favorable conditions. It would be impossible to discriminate these events from the background of neutrinos, since the dimensions of the required neutrino shield would ensure collapse into a black hole. It has been proposed that detecting single gravitons would be possible by quantum sensing. Even quantum events may not indicate quantization of gravitational radiation. LIGO and Virgo collaborations' observations have directly detected gravitational waves. Others have postulated that graviton scattering yields gravitational waves as particle interactions yield coherent states. Although these experiments cannot detect individual gravitons, they might provide information about certain properties of the graviton. For example, if gravitational waves were observed to propagate slower than c (the speed of light in vacuum), that would imply that the graviton has mass (however, gravitational waves must propagate slower than c in a region with non-zero mass density if they are to be detectable). Observations of gravitational waves put an upper bound of on the graviton's mass. Solar system planetary trajectory measurements by space missions such as Cassini and MESSENGER give a comparable upper bound of . 
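As a worked illustration of the Compton-wavelength relation invoked above (a sketch using the 1.6 light-year figure quoted in the text; published bounds are stated with more precision), a lower bound on the wavelength translates into an upper bound on the graviton mass as follows.

```python
# Compton relation: lambda = h / (m * c)  =>  m = h / (lambda * c).
h  = 6.62607015e-34      # Planck constant, J*s
c  = 2.99792458e8        # speed of light, m/s
ly = 9.4607e15           # metres per light-year

lam  = 1.6 * ly                          # Compton wavelength lower bound, m
m_kg = h / (lam * c)                     # mass upper bound, ~1.5e-58 kg
m_eV = m_kg * c**2 / 1.602176634e-19     # same bound, ~1e-22 eV/c^2
print(m_kg, m_eV)
```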
The bounds obtained from gravitational waves and from planetary ephemerides need not agree, since they test different aspects of a potential graviton-based theory. Astronomical observations of the kinematics of galaxies, especially the galaxy rotation problem and modified Newtonian dynamics, might point toward gravitons having non-zero mass. Difficulties and outstanding issues Most theories containing gravitons suffer from severe problems. Attempts to extend the Standard Model or other quantum field theories by adding gravitons run into serious theoretical difficulties at energies close to or above the Planck scale. This is because of infinities arising due to quantum effects; technically, gravitation is not renormalizable. Since classical general relativity and quantum mechanics seem to be incompatible at such energies, from a theoretical point of view, this situation is not tenable. One possible solution is to replace particles with strings. String theories are quantum theories of gravity in the sense that they reduce to classical general relativity plus field theory at low energies, but are fully quantum mechanical, contain a graviton, and are thought to be mathematically consistent. See also Gravitino Dual graviton Gravitoelectromagnetism Planck mass Static forces and virtual-particle exchange Soft graviton theorem Polarizable vacuum 
Graviton
[ "Physics", "Astronomy" ]
1,688
[ "Physical phenomena", "Astronomical hypotheses", "Force carriers", "Unsolved problems in physics", "Bosons", "Quantum gravity", "Subatomic particles", "Fundamental interactions", "Hypothetical elementary particles", "String theory", "Physics beyond the Standard Model", "Matter" ]
12,240
https://en.wikipedia.org/wiki/Gold
Gold is a chemical element with the chemical symbol Au (from Latin ) and atomic number 79. In its pure form, it is a bright, slightly orange-yellow, dense, soft, malleable, and ductile metal. Chemically, gold is a transition metal, a group 11 element, and one of the noble metals. It is one of the least reactive chemical elements, being the second-lowest in the reactivity series. It is solid under standard conditions. Gold often occurs in its free elemental (native) state, as nuggets or grains, in rocks, veins, and alluvial deposits. It occurs in a solid solution series with the native element silver (as in electrum), naturally alloyed with other metals like copper and palladium, and as mineral inclusions such as within pyrite. Less commonly, it occurs in minerals as gold compounds, often with tellurium (gold tellurides). Gold is resistant to most acids, though it does dissolve in aqua regia (a mixture of nitric acid and hydrochloric acid), forming a soluble tetrachloroaurate anion. Gold is insoluble in nitric acid alone, which dissolves silver and base metals, a property long used to refine gold and confirm the presence of gold in metallic substances, giving rise to the term 'acid test'. Gold dissolves in alkaline solutions of cyanide, which are used in mining and electroplating. Gold also dissolves in mercury, forming amalgam alloys, and as the gold acts simply as a solute, this is not a chemical reaction. A relatively rare element, gold is a precious metal that has been used for coinage, jewelry, and other works of art throughout recorded history. In the past, a gold standard was often implemented as a monetary policy. Gold coins ceased to be minted as a circulating currency in the 1930s, and the world gold standard was abandoned for a fiat currency system after the Nixon shock measures of 1971. In 2023, the world's largest gold producer was China, followed by Russia and Australia. A total of around 201,296 tonnes of gold exists above ground. This is equal to a cube with each side measuring roughly . The world's consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry. Gold's high malleability, ductility, resistance to corrosion and most other chemical reactions, as well as its conductivity of electricity, have led to its continued use in corrosion-resistant electrical connectors in all types of computerized devices (its chief industrial use). Gold is also used in infrared shielding, the production of colored glass, gold leafing, and tooth restoration. Certain gold salts are still used as anti-inflammatory agents in medicine. Characteristics Gold is the most malleable of all metals. It can be drawn into a wire of single-atom width, and then stretched considerably before it breaks. Such nanowires distort via the formation, reorientation, and migration of dislocations and crystal twins without noticeable hardening. A single gram of gold can be beaten into a sheet of , and an avoirdupois ounce into . Gold leaf can be beaten thin enough to become semi-transparent. The transmitted light appears greenish-blue because gold strongly reflects yellow and red. Such semi-transparent sheets also strongly reflect infrared light, making them useful as infrared (radiant heat) shields in the visors of heat-resistant suits and in sun visors for spacesuits. Gold is a good conductor of heat and electricity. 
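A small arithmetic sketch (my own check, not from the article) relates the above-ground stock quoted in the lead to the size of the cube it would form, using the 19.3 g/cm3 density given in the next paragraph.

```python
# Side length of a cube containing all above-ground gold.
mass_kg   = 201_296 * 1_000        # ~201,296 tonnes quoted above, in kg
density   = 19_300                 # kg/m^3 (19.3 g/cm^3)
volume_m3 = mass_kg / density      # ~1.04e4 m^3
side_m    = volume_m3 ** (1 / 3)   # ~21.9 m per side
print(side_m)
```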
Gold has a density of 19.3 g/cm3, almost identical to that of tungsten at 19.25 g/cm3; as such, tungsten has been used in the counterfeiting of gold bars, such as by plating a tungsten bar with gold. By comparison, the density of lead is 11.34 g/cm3, and that of the densest element, osmium, is . Color Whereas most metals are gray or silvery white, gold is slightly reddish-yellow. This color is determined by the frequency of plasma oscillations among the metal's valence electrons, in the ultraviolet range for most metals but in the visible range for gold due to relativistic effects affecting the orbitals around gold atoms. Similar effects impart a golden hue to metallic caesium. Common colored gold alloys include the distinctive eighteen-karat rose gold created by the addition of copper. Alloys containing palladium or nickel are also important in commercial jewelry as these produce white gold alloys. Fourteen-karat gold-copper alloy is nearly identical in color to certain bronze alloys, and both may be used to produce police and other badges. Fourteen- and eighteen-karat gold alloys with silver alone appear greenish-yellow and are referred to as green gold. Blue gold can be made by alloying with iron, and purple gold can be made by alloying with aluminium. Less commonly, addition of manganese, indium, and other elements can produce more unusual colors of gold for various applications. Colloidal gold, used by electron-microscopists, is red if the particles are small; larger particles of colloidal gold are blue. Isotopes Gold has only one stable isotope, , which is also its only naturally occurring isotope, so gold is both a mononuclidic and monoisotopic element. Thirty-six radioisotopes have been synthesized, ranging in atomic mass from 169 to 205. The most stable of these is with a half-life of 186.1 days. The least stable is , which decays by proton emission with a half-life of 30 μs. Most of gold's radioisotopes with atomic masses below 197 decay by some combination of proton emission, α decay, and β+ decay. The exceptions are , which decays by electron capture, and , which decays most often by electron capture (93%) with a minor β− decay path (7%). All of gold's radioisotopes with atomic masses above 197 decay by β− decay. At least 32 nuclear isomers have also been characterized, ranging in atomic mass from 170 to 200. Within that range, only , , , , and do not have isomers. Gold's most stable isomer is with a half-life of 2.27 days. Gold's least stable isomer is with a half-life of only 7 ns. has three decay paths: β+ decay, isomeric transition, and alpha decay. No other isomer or isotope of gold has three decay paths. Synthesis The possible production of gold from a more common element, such as lead, has long been a subject of human inquiry, and the ancient and medieval discipline of alchemy often focused on it; however, the transmutation of the chemical elements did not become possible until the understanding of nuclear physics in the 20th century. The first synthesis of gold was conducted by Japanese physicist Hantaro Nagaoka, who synthesized gold from mercury in 1924 by neutron bombardment. An American team, working without knowledge of Nagaoka's prior study, conducted the same experiment in 1941, achieving the same result and showing that the isotopes of gold produced by it were all radioactive. In 1980, Glenn Seaborg transmuted several thousand atoms of bismuth into gold at the Lawrence Berkeley Laboratory. 
Gold can be manufactured in a nuclear reactor, but doing so is highly impractical and would cost far more than the value of the gold that is produced. Chemistry Although gold is the most noble of the noble metals, it still forms many diverse compounds. The oxidation state of gold in its compounds ranges from −1 to +5, but Au(I) and Au(III) dominate its chemistry. Au(I), referred to as the aurous ion, is the most common oxidation state with soft ligands such as thioethers, thiolates, and organophosphines. Au(I) compounds are typically linear. A good example is , which is the soluble form of gold encountered in mining. The binary gold halides, such as AuCl, form zigzag polymeric chains, again featuring linear coordination at Au. Most drugs based on gold are Au(I) derivatives. Au(III) (referred to as auric) is a common oxidation state, and is illustrated by gold(III) chloride, . The gold atom centers in Au(III) complexes, like other d8 compounds, are typically square planar, with chemical bonds that have both covalent and ionic character. Gold(I,III) chloride is also known, an example of a mixed-valence complex. Gold does not react with oxygen at any temperature and, up to 100 °C, is resistant to attack from ozone. Some free halogens react to form the corresponding gold halides. Gold is strongly attacked by fluorine at dull-red heat to form gold(III) fluoride, AuF3. Powdered gold reacts with chlorine at 180 °C to form gold(III) chloride, AuCl3. Gold reacts with bromine at 140 °C to form a combination of gold(III) bromide AuBr3 and gold(I) bromide AuBr, but reacts very slowly with iodine to form gold(I) iodide AuI:
2 Au + 3 F2 → 2 AuF3
2 Au + 3 Cl2 → 2 AuCl3
2 Au + 2 Br2 → AuBr3 + AuBr
2 Au + I2 → 2 AuI
Gold does not react with sulfur directly, but gold(III) sulfide can be made by passing hydrogen sulfide through a dilute solution of gold(III) chloride or chlorauric acid. Unlike sulfur, phosphorus reacts directly with gold at elevated temperatures to produce gold phosphide (Au2P3). Gold readily dissolves in mercury at room temperature to form an amalgam, and forms alloys with many other metals at higher temperatures. These alloys can be produced to modify the hardness and other metallurgical properties, to control melting point or to create exotic colors. Gold is unaffected by most acids. It does not react with hydrofluoric, hydrochloric, hydrobromic, hydriodic, sulfuric, or nitric acid. It does react with selenic acid, and is dissolved by aqua regia, a 1:3 mixture of nitric acid and hydrochloric acid. Nitric acid oxidizes the metal to +3 ions, but only in minute amounts, typically undetectable in the pure acid because of the chemical equilibrium of the reaction. However, the ions are removed from the equilibrium by hydrochloric acid, forming ions, or chloroauric acid, thereby enabling further oxidation:
2 Au + 6 H2SeO4 → Au2(SeO4)3 + 3 H2SeO3 + 3 H2O (at 200 °C)
Au + 4 HCl + HNO3 → HAuCl4 + NO↑ + 2 H2O
Gold is similarly unaffected by most bases. It does not react with aqueous, solid, or molten sodium or potassium hydroxide. It does, however, react with sodium or potassium cyanide under alkaline conditions when oxygen is present to form soluble complexes. Common oxidation states of gold include +1 (gold(I) or aurous compounds) and +3 (gold(III) or auric compounds). Gold ions in solution are readily reduced and precipitated as metal by adding any other metal as the reducing agent. 
The added metal is oxidized and dissolves, allowing the gold to be displaced from solution and be recovered as a solid precipitate. Rare oxidation states Less common oxidation states of gold include −1, +2, and +5. The −1 oxidation state occurs in aurides, compounds containing the anion. Caesium auride (CsAu), for example, crystallizes in the caesium chloride motif; rubidium, potassium, and tetramethylammonium aurides are also known. Gold has the highest electron affinity of any metal, at 222.8 kJ/mol, making a stable species, analogous to the halides. Gold also has a –1 oxidation state in covalent complexes with the group 4 transition metals, such as in titanium tetraauride and the analogous zirconium and hafnium compounds. These chemicals are expected to form gold-bridged dimers in a manner similar to titanium(IV) hydride. Gold(II) compounds are usually diamagnetic with Au–Au bonds such as [. The evaporation of a solution of in concentrated produces red crystals of gold(II) sulfate, . Originally thought to be a mixed-valence compound, it has been shown to contain cations, analogous to the better-known mercury(I) ion, . A gold(II) complex, the tetraxenonogold(II) cation, which contains xenon as a ligand, occurs in . In September 2023, a novel type of metal-halide perovskite material consisting of Au3+ and Au2+ cations in its crystal structure has been found. It has been shown to be unexpectedly stable at normal conditions. Gold pentafluoride, along with its derivative anion, , and its difluorine complex, gold heptafluoride, is the sole example of gold(V), the highest verified oxidation state. Some gold compounds exhibit aurophilic bonding, which describes the tendency of gold ions to interact at distances that are too long to be a conventional Au–Au bond but shorter than van der Waals bonding. The interaction is estimated to be comparable in strength to that of a hydrogen bond. Well-defined cluster compounds are numerous. In some cases, gold has a fractional oxidation state. A representative example is the octahedral species . Origin Gold production in the universe Gold is thought to have been produced in supernova nucleosynthesis, and from the collision of neutron stars, and to have been present in the dust from which the Solar System formed. Traditionally, gold in the universe is thought to have formed by the r-process (rapid neutron capture) in supernova nucleosynthesis, but more recently it has been suggested that gold and other elements heavier than iron may also be produced in quantity by the r-process in the collision of neutron stars. In both cases, satellite spectrometers at first only indirectly detected the resulting gold. However, in August 2017, the spectroscopic signatures of heavy elements, including gold, were observed by electromagnetic observatories in the GW170817 neutron star merger event, after gravitational wave detectors confirmed the event as a neutron star merger. Current astrophysical models suggest that this single neutron star merger event generated between 3 and 13 Earth masses of gold. This amount, along with estimations of the rate of occurrence of these neutron star merger events, suggests that such mergers may produce enough gold to account for most of the abundance of this element in the universe. Asteroid origin theories Because the Earth was molten when it was formed, almost all of the gold present in the early Earth probably sank into the planetary core. 
Therefore, as hypothesized in one model, most of the gold in the Earth's crust and mantle is thought to have been delivered to Earth by asteroid impacts during the Late Heavy Bombardment, about 4 billion years ago. Gold which is reachable by humans has, in one case, been associated with a particular asteroid impact. The asteroid that formed Vredefort impact structure 2.020 billion years ago is often credited with seeding the Witwatersrand basin in South Africa with the richest gold deposits on earth. However, this scenario is now questioned. The gold-bearing Witwatersrand rocks were laid down between 700 and 950 million years before the Vredefort impact. These gold-bearing rocks had furthermore been covered by a thick layer of Ventersdorp lavas and the Transvaal Supergroup of rocks before the meteor struck, and thus the gold did not actually arrive in the asteroid/meteorite. What the Vredefort impact achieved, however, was to distort the Witwatersrand basin in such a way that the gold-bearing rocks were brought to the present erosion surface in Johannesburg, on the Witwatersrand, just inside the rim of the original diameter crater caused by the meteor strike. The discovery of the deposit in 1886 launched the Witwatersrand Gold Rush. Some 22% of all the gold that is ascertained to exist today on Earth has been extracted from these Witwatersrand rocks. Mantle return theories Much of the rest of the gold on Earth is thought to have been incorporated into the planet since its very beginning, as planetesimals formed the mantle. In 2017, an international group of scientists established that gold "came to the Earth's surface from the deepest regions of our planet", the mantle, as evidenced by their findings at Deseado Massif in the Argentinian Patagonia. Occurrence On Earth, gold is found in ores in rock formed from the Precambrian time onward. It most often occurs as a native metal, typically in a metal solid solution with silver (i.e. as a gold/silver alloy). Such alloys usually have a silver content of 8–10%. Electrum is elemental gold with more than 20% silver, and is commonly known as white gold. Electrum's color runs from golden-silvery to silvery, dependent upon the silver content. The more silver, the lower the specific gravity. Native gold occurs as very small to microscopic particles embedded in rock, often together with quartz or sulfide minerals such as "fool's gold", which is a pyrite. These are called lode deposits. The metal in a native state is also found in the form of free flakes, grains or larger nuggets that have been eroded from rocks and end up in alluvial deposits called placer deposits. Such free gold is always richer at the exposed surface of gold-bearing veins, owing to the oxidation of accompanying minerals followed by weathering; and by washing of the dust into streams and rivers, where it collects and can be welded by water action to form nuggets. Gold sometimes occurs combined with tellurium as the minerals calaverite, krennerite, nagyagite, petzite and sylvanite (see telluride minerals), and as the rare bismuthide maldonite () and antimonide aurostibite (). Gold also occurs in rare alloys with copper, lead, and mercury: the minerals auricupride (), novodneprite () and weishanite (). A 2004 research paper suggests that microbes can sometimes play an important role in forming gold deposits, transporting and precipitating gold to form grains and nuggets that collect in alluvial deposits. 
A 2013 study has claimed water in faults vaporizes during an earthquake, depositing gold. When an earthquake strikes, it moves along a fault. Water often lubricates faults, filling in fractures and jogs. About below the surface, under very high temperatures and pressures, the water carries high concentrations of carbon dioxide, silica, and gold. During an earthquake, the fault jog suddenly opens wider. The water inside the void instantly vaporizes, flashing to steam and forcing silica, which forms the mineral quartz, and gold out of the fluids and onto nearby surfaces. Seawater The world's oceans contain gold. Measured concentrations of gold in the Atlantic and Northeast Pacific are 50–150 femtomol/L or 10–30 parts per quadrillion (about 10–30 g/km3). In general, gold concentrations for south Atlantic and central Pacific samples are the same (~50 femtomol/L) but less certain. Mediterranean deep waters contain slightly higher concentrations of gold (100–150 femtomol/L), which is attributed to wind-blown dust or rivers. At 10 parts per quadrillion, the Earth's oceans would hold 15,000 tonnes of gold. These figures are three orders of magnitude less than reported in the literature prior to 1988, indicating contamination problems with the earlier data. A number of people have claimed to be able to economically recover gold from sea water, but they were either mistaken or acted in an intentional deception. Prescott Jernegan ran a gold-from-seawater swindle in the United States in the 1890s, as did an English fraudster in the early 1900s. Fritz Haber did research on the extraction of gold from sea water in an effort to help pay Germany's reparations following World War I. Based on the published values of 2 to 64 ppb of gold in seawater, a commercially successful extraction seemed possible. After analysis of 4,000 water samples yielding an average of 0.004 ppb, it became clear that extraction would not be possible, and he ended the project. History The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period, . The oldest gold artifacts in the world are from Bulgaria and are dating back to the 5th millennium BC (4,600 BC to 4,200 BC), such as those found in the Varna Necropolis near Lake Varna and the Black Sea coast, thought to be the earliest "well-dated" finding of gold artifacts in history. Gold artifacts probably made their first appearance in Ancient Egypt at the very beginning of the pre-dynastic period, at the end of the fifth millennium BC and the start of the fourth, and smelting was developed during the course of the 4th millennium; gold artifacts appear in the archeology of Lower Mesopotamia during the early 4th millennium. As of 1990, gold artifacts found at the Wadi Qana cave cemetery of the 4th millennium BC in West Bank were the earliest from the Levant. Gold artifacts such as the golden hats and the Nebra disk appeared in Central Europe from the 2nd millennium BC Bronze Age. The oldest known map of a gold mine was drawn in the 19th Dynasty of Ancient Egypt (1320–1200 BC), whereas the first written reference to gold was recorded in the 12th Dynasty around 1900 BC. Egyptian hieroglyphs from as early as 2600 BC describe gold, which King Tushratta of the Mitanni claimed was "more plentiful than dirt" in Egypt. Egypt and especially Nubia had the resources to make them major gold-producing areas for much of history. 
One of the earliest known maps, known as the Turin Papyrus Map, shows the plan of a gold mine in Nubia together with indications of the local geology. The primitive working methods are described by both Strabo and Diodorus Siculus, and included fire-setting. Large mines were also present across the Red Sea in what is now Saudi Arabia. Gold is mentioned in the Amarna letters numbered 19 and 26 from around the 14th century BC. Gold is mentioned frequently in the Old Testament, starting with Genesis 2:11 (at Havilah), the story of the golden calf, and many parts of the temple including the Menorah and the golden altar. In the New Testament, it is included with the gifts of the magi in the first chapters of Matthew. The Book of Revelation 21:21 describes the city of New Jerusalem as having streets "made of pure gold, clear as crystal". Exploitation of gold in the south-east corner of the Black Sea is said to date from the time of Midas, and this gold was important in the establishment of what is probably the world's earliest coinage in Lydia around 610 BC. The legend of the golden fleece dating from eighth century BCE may refer to the use of fleeces to trap gold dust from placer deposits in the ancient world. From the 6th or 5th century BC, the Chu (state) circulated the Ying Yuan, one kind of square gold coin. In Roman metallurgy, new methods for extracting gold on a large scale were developed by introducing hydraulic mining methods, especially in Hispania from 25 BC onwards and in Dacia from 106 AD onwards. One of their largest mines was at Las Medulas in León, where seven long aqueducts enabled them to sluice most of a large alluvial deposit. The mines at Roşia Montană in Transylvania were also very large, and until very recently, still mined by opencast methods. They also exploited smaller deposits in Britain, such as placer and hard-rock deposits at Dolaucothi. The various methods they used are well described by Pliny the Elder in his encyclopedia Naturalis Historia written towards the end of the first century AD. During Mansa Musa's (ruler of the Mali Empire from 1312 to 1337) hajj to Mecca in 1324, he passed through Cairo in July 1324, and was reportedly accompanied by a camel train that included thousands of people and nearly a hundred camels where he gave away so much gold that it depressed the price in Egypt for over a decade, causing high inflation. A contemporary Arab historian remarked: The European exploration of the Americas was fueled in no small part by reports of the gold ornaments displayed in great profusion by Native American peoples, especially in Mesoamerica, Peru, Ecuador and Colombia. The Aztecs regarded gold as the product of the gods, calling it literally "god excrement" (teocuitlatl in Nahuatl), and after Moctezuma II was killed, most of this gold was shipped to Spain. However, for the indigenous peoples of North America gold was considered useless and they saw much greater value in other minerals which were directly related to their utility, such as obsidian, flint, and slate. El Dorado is applied to a legendary story in which precious stones were found in fabulous abundance along with gold coins. The concept of El Dorado underwent several transformations, and eventually accounts of the previous myth were also combined with those of a legendary lost city. 
El Dorado, was the term used by the Spanish Empire to describe a mythical tribal chief (zipa) of the Muisca native people in Colombia, who, as an initiation rite, covered himself with gold dust and submerged in Lake Guatavita. The legends surrounding El Dorado changed over time, as it went from being a man, to a city, to a kingdom, and then finally to an empire. Beginning in the early modern period, European exploration and colonization of West Africa was driven in large part by reports of gold deposits in the region, which was eventually referred to by Europeans as the "Gold Coast". From the late 15th to early 19th centuries, European trade in the region was primarily focused in gold, along with ivory and slaves. The gold trade in West Africa was dominated by the Ashanti Empire, who initially traded with the Portuguese before branching out and trading with British, French, Spanish and Danish merchants. British desires to secure control of West African gold deposits played a role in the Anglo-Ashanti wars of the late 19th century, which saw the Ashanti Empire annexed by Britain. Gold played a role in western culture, as a cause for desire and of corruption, as told in children's fables such as Rumpelstiltskin—where Rumpelstiltskin turns hay into gold for the peasant's daughter in return for her child when she becomes a princess—and the stealing of the hen that lays golden eggs in Jack and the Beanstalk. The top prize at the Olympic Games and many other sports competitions is the gold medal. 75% of the presently accounted for gold has been extracted since 1910, two-thirds since 1950. One main goal of the alchemists was to produce gold from other substances, such as lead — presumably by the interaction with a mythical substance called the philosopher's stone. Trying to produce gold led the alchemists to systematically find out what can be done with substances, and this laid the foundation for today's chemistry, which can produce gold (albeit uneconomically) by using nuclear transmutation. Their symbol for gold was the circle with a point at its center (☉), which was also the astrological symbol and the ancient Chinese character for the Sun. The Dome of the Rock is covered with an ultra-thin golden glassier. The Sikh Golden temple, the Harmandir Sahib, is a building covered with gold. Similarly the Wat Phra Kaew emerald Buddhist temple (wat) in Thailand has ornamental gold-leafed statues and roofs. Some European king and queen's crowns were made of gold, and gold was used for the bridal crown since antiquity. An ancient Talmudic text circa 100 AD describes Rachel, wife of Rabbi Akiva, receiving a "Jerusalem of Gold" (diadem). A Greek burial crown made of gold was found in a grave circa 370 BC. Etymology Gold is cognate with similar words in many Germanic languages, deriving via Proto-Germanic *gulþą from Proto-Indo-European *ǵʰelh₃- . The symbol Au is from the Latin . The Proto-Indo-European ancestor of aurum was *h₂é-h₂us-o-, meaning . This word is derived from the same root (Proto-Indo-European *h₂u̯es- ) as *h₂éu̯sōs, the ancestor of the Latin word . This etymological relationship is presumably behind the frequent claim in scientific publications that meant . Culture In popular culture gold is a high standard of excellence, often used in awards. Great achievements are frequently rewarded with gold, in the form of gold medals, gold trophies and other decorations. Winners of athletic events and other graded competitions are usually awarded a gold medal. 
Many awards such as the Nobel Prize are made from gold as well. Other award statues and prizes are depicted in gold or are gold plated (such as the Academy Awards, the Golden Globe Awards, the Emmy Awards, the Palme d'Or, and the British Academy Film Awards). Aristotle in his ethics used gold symbolism when referring to what is now known as the golden mean. Similarly, gold is associated with perfect or divine principles, such as in the case of the golden ratio and the Golden Rule. Gold is further associated with the wisdom of aging and fruition. The fiftieth wedding anniversary is golden. A person's most valued or most successful latter years are sometimes considered "golden years" or "golden jubilee". The height of a civilization is referred to as a golden age. Religion The first known prehistoric human usages of gold were religious in nature. In some forms of Christianity and Judaism, gold has been associated both with the sacred and evil. In the Book of Exodus, the Golden Calf is a symbol of idolatry, while in the Book of Genesis, Abraham was said to be rich in gold and silver, and Moses was instructed to cover the Mercy Seat of the Ark of the Covenant with pure gold. In Byzantine iconography the halos of Christ, Virgin Mary and the saints are often golden. In Islam, gold (along with silk) is often cited as being forbidden for men to wear. Abu Bakr al-Jazaeri, quoting a hadith, said that "[t]he wearing of silk and gold are forbidden on the males of my nation, and they are lawful to their women". This, however, has not been enforced consistently throughout history, e.g. in the Ottoman Empire. Further, small gold accents on clothing, such as in embroidery, may be permitted. In ancient Greek religion and mythology, Theia was seen as the goddess of gold, silver and other gemstones. According to Christopher Columbus, those who had something of gold were in possession of something of great value on Earth and a substance to even help souls to paradise. Wedding rings are typically made of gold. It is long lasting and unaffected by the passage of time and may aid in the ring symbolism of eternal vows before God and the perfection the marriage signifies. In Orthodox Christian wedding ceremonies, the wedded couple is adorned with a golden crown (though some opt for wreaths, instead) during the ceremony, an amalgamation of symbolic rites. On 24 August 2020, Israeli archaeologists discovered a trove of early Islamic gold coins near the central city of Yavne. Analysis of the extremely rare collection of 425 gold coins indicated that they were from the late 9th century. Dating to around 1,100 years back, the gold coins were from the Abbasid Caliphate. Production According to the United States Geological Survey in 2016, about of gold has been accounted for, of which 85% remains in active use. Mining and prospecting Since the 1880s, South Africa has been the source of a large proportion of the world's gold supply, and about 22% of the gold presently accounted is from South Africa. Production in 1970 accounted for 79% of the world supply, about 1,480 tonnes. In 2007 China (with 276 tonnes) overtook South Africa as the world's largest gold producer, the first time since 1905 that South Africa had not been the largest. In 2023, China was the world's leading gold-mining country, followed in order by Russia, Australia, Canada, the United States and Ghana. 
In South America, the controversial project Pascua Lama aims at exploitation of rich fields in the high mountains of Atacama Desert, at the border between Chile and Argentina. It has been estimated that up to one-quarter of the yearly global gold production originates from artisanal or small scale mining. The city of Johannesburg located in South Africa was founded as a result of the Witwatersrand Gold Rush which resulted in the discovery of some of the largest natural gold deposits in recorded history. The gold fields are confined to the northern and north-western edges of the Witwatersrand basin, which is a thick layer of archean rocks located, in most places, deep under the Free State, Gauteng and surrounding provinces. These Witwatersrand rocks are exposed at the surface on the Witwatersrand, in and around Johannesburg, but also in isolated patches to the south-east and south-west of Johannesburg, as well as in an arc around the Vredefort Dome which lies close to the center of the Witwatersrand basin. From these surface exposures the basin dips extensively, requiring some of the mining to occur at depths of nearly , making them, especially the Savuka and TauTona mines to the south-west of Johannesburg, the deepest mines on Earth. The gold is found only in six areas where archean rivers from the north and north-west formed extensive pebbly Braided river deltas before draining into the "Witwatersrand sea" where the rest of the Witwatersrand sediments were deposited. The Second Boer War of 1899–1901 between the British Empire and the Afrikaner Boers was at least partly over the rights of miners and possession of the gold wealth in South Africa. During the 19th century, gold rushes occurred whenever large gold deposits were discovered. The first documented discovery of gold in the United States was at the Reed Gold Mine near Georgeville, North Carolina in 1803. The first major gold strike in the United States occurred in a small north Georgia town called Dahlonega. Further gold rushes occurred in California, Colorado, the Black Hills, Otago in New Zealand, a number of locations across Australia, Witwatersrand in South Africa, and the Klondike in Canada. Grasberg mine located in Papua, Indonesia is the largest gold mine in the world. Extraction and refining Gold extraction is most economical in large, easily mined deposits. Ore grades as little as 0.5 parts per million (ppm) can be economical. Typical ore grades in open-pit mines are 1–5 ppm; ore grades in underground or hard rock mines are usually at least 3 ppm. Because ore grades of 30 ppm are usually needed before gold is visible to the naked eye, in most gold mines the gold is invisible. The average gold mining and extraction costs were about $317 per troy ounce in 2007, but these can vary widely depending on mining type and ore quality; global mine production amounted to 2,471.1 tonnes. After initial production, gold is often subsequently refined industrially by the Wohlwill process which is based on electrolysis or by the Miller process, that is chlorination in the melt. The Wohlwill process results in higher purity, but is more complex and is only applied in small-scale installations. Other methods of assaying and purifying smaller amounts of gold include parting and inquartation as well as cupellation, or refining methods based on the dissolution of gold in aqua regia. Recycling In 1997, recycled gold accounted for approximately 20% of the 2700 tons of gold supplied to the market. 
Jewelry companies such as Generation Collection and computer companies including Dell conduct recycling. As of 2020, the amount of carbon dioxide produced in mining a kilogram of gold is 16 tonnes, while recycling a kilogram of gold produces 53 kilograms of equivalent. Approximately 30 percent of the global gold supply is recycled and not mined as of 2020. Consumption The consumption of gold produced in the world is about 50% in jewelry, 40% in investments, and 10% in industry. According to the World Gold Council, China was the world's largest single consumer of gold in 2013, overtaking India. Pollution Gold production is associated with contribution to hazardous pollution. Low-grade gold ore may contain less than one ppm gold metal; such ore is ground and mixed with sodium cyanide to dissolve the gold. Cyanide is a highly poisonous chemical, which can kill living creatures when exposed in minute quantities. Many cyanide spills from gold mines have occurred in both developed and developing countries which killed aquatic life in long stretches of affected rivers. Environmentalists consider these events major environmental disasters. Up to thirty tons of used ore can be dumped as waste for producing one troy ounce of gold. Gold ore dumps are the source of many heavy elements such as cadmium, lead, zinc, copper, arsenic, selenium and mercury. When sulfide-bearing minerals in these ore dumps are exposed to air and water, the sulfide transforms into sulfuric acid which in turn dissolves these heavy metals facilitating their passage into surface water and ground water. This process is called acid mine drainage. These gold ore dumps contain long-term, highly hazardous waste. It was once common to use mercury to recover gold from ore, but today the use of mercury is largely limited to small-scale individual miners. Minute quantities of mercury compounds can reach water bodies, causing heavy metal contamination. Mercury can then enter into the human food chain in the form of methylmercury. Mercury poisoning in humans can cause severe brain damage. Gold extraction is also a highly energy-intensive industry, extracting ore from deep mines and grinding the large quantity of ore for further chemical extraction requires nearly 25 kWh of electricity per gram of gold produced. Monetary use Gold has been widely used throughout the world as money, for efficient indirect exchange (versus barter), and to store wealth in hoards. For exchange purposes, mints produce standardized gold bullion coins, bars and other units of fixed weight and purity. The first known coins containing gold were struck in Lydia, Asia Minor, around 600 BC. The talent coin of gold in use during the periods of Grecian history both before and during the time of the life of Homer weighed between 8.42 and 8.75 grams. From an earlier preference in using silver, European economies re-established the minting of gold as coinage during the thirteenth and fourteenth centuries. Bills (that mature into gold coin) and gold certificates (convertible into gold coin at the issuing bank) added to the circulating stock of gold standard money in most 19th century industrial economies. In preparation for World War I the warring nations moved to fractional gold standards, inflating their currencies to finance the war effort. 
Post-war, the victorious countries, most notably Britain, gradually restored gold-convertibility, but international flows of gold via bills of exchange remained embargoed; international shipments were made exclusively for bilateral trades or to pay war reparations. After World War II gold was replaced by a system of nominally convertible currencies related by fixed exchange rates following the Bretton Woods system. Gold standards and the direct convertibility of currencies to gold have been abandoned by world governments, led in 1971 by the United States' refusal to redeem its dollars in gold. Fiat currency now fills most monetary roles. Switzerland was the last country to tie its currency to gold; this was ended by a referendum in 1999. Central banks continue to keep a portion of their liquid reserves as gold in some form, and metals exchanges such as the London Bullion Market Association still clear transactions denominated in gold, including future delivery contracts. Today, gold mining output is declining. With the sharp growth of economies in the 20th century, and increasing foreign exchange, the world's gold reserves and their trading market have become a small fraction of all markets and fixed exchange rates of currencies to gold have been replaced by floating prices for gold and gold future contract. Though the gold stock grows by only 1% or 2% per year, very little metal is irretrievably consumed. Inventory above ground would satisfy many decades of industrial and even artisan uses at current prices. The gold proportion (fineness) of alloys is measured by karat (k). Pure gold (commercially termed fine gold) is designated as 24 karat, abbreviated 24k. English gold coins intended for circulation from 1526 into the 1930s were typically a standard 22k alloy called crown gold, for hardness (American gold coins for circulation after 1837 contain an alloy of 0.900 fine gold, or 21.6 kt). Although the prices of some platinum group metals can be much higher, gold has long been considered the most desirable of precious metals, and its value has been used as the standard for many currencies. Gold has been used as a symbol for purity, value, royalty, and particularly roles that combine these properties. Gold as a sign of wealth and prestige was ridiculed by Thomas More in his treatise Utopia. On that imaginary island, gold is so abundant that it is used to make chains for slaves, tableware, and lavatory seats. When ambassadors from other countries arrive, dressed in ostentatious gold jewels and badges, the Utopians mistake them for menial servants, paying homage instead to the most modestly dressed of their party. The ISO 4217 currency code of gold is XAU. Many holders of gold store it in form of bullion coins or bars as a hedge against inflation or other economic disruptions, though its efficacy as such has been questioned; historically, it has not proven itself reliable as a hedging instrument. Modern bullion coins for investment or collector purposes do not require good mechanical wear properties; they are typically fine gold at 24k, although the American Gold Eagle and the British gold sovereign continue to be minted in 22k (0.92) metal in historical tradition, and the South African Krugerrand, first released in 1967, is also 22k (0.92). The special issue Canadian Gold Maple Leaf coin contains the highest purity gold of any bullion coin, at 99.999% or 0.99999, while the popular issue Canadian Gold Maple Leaf coin has a purity of 99.99%. 
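A minimal sketch (assuming the standard convention that fineness equals the karat rating divided by 24) converts karat ratings to millesimal fineness and percent purity, matching the figures quoted above (22k ≈ 0.917, the crown gold standard; 0.900 fine ≈ 21.6k).

```python
# Karat <-> millesimal fineness, using the usual 24-karat = pure-gold convention.
def karat_to_fineness(karat: float) -> float:
    return karat / 24.0

def fineness_to_karat(fineness: float) -> float:
    return fineness * 24.0

for k in (24, 22, 18, 14, 10):
    f = karat_to_fineness(k)
    print(f"{k}k = {f:.3f} fine = {f * 100:.1f}% gold")

print(fineness_to_karat(0.900))   # 21.6, as cited for post-1837 US circulation coins
```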
In 2006, the United States Mint began producing the American Buffalo gold bullion coin with a purity of 99.99%. The Australian Gold Kangaroos were first coined in 1986 as the Australian Gold Nugget but changed the reverse design in 1989. Other modern coins include the Austrian Vienna Philharmonic bullion coin and the Chinese Gold Panda. Price Like other precious metals, gold is measured by troy weight and by grams. The proportion of gold in the alloy is measured by karat (k), with 24 karat (24k) being pure gold (100%), and lower karat numbers proportionally less (18k = 75%). The purity of a gold bar or coin can also be expressed as a decimal figure ranging from 0 to 1, known as the millesimal fineness, such as 0.995 being nearly pure. The price of gold is determined through trading in the gold and derivatives markets, but a procedure known as the Gold Fixing in London, originating in September 1919, provides a daily benchmark price to the industry. The afternoon fixing was introduced in 1968 to provide a price when US markets are open. More recently, gold has been valued at around $42 per gram ($1,300 per troy ounce). History Historically, gold coinage was widely used as currency; when paper money was introduced, it typically was a receipt redeemable for gold coin or bullion. In a monetary system known as the gold standard, a certain weight of gold was given the name of a unit of currency. For a long period, the United States government set the value of the US dollar so that one troy ounce was equal to $20.67 ($0.665 per gram), but in 1934 the dollar was devalued to $35.00 per troy ounce ($1.13/g). By 1961, it was becoming hard to maintain this price, and a pool of US and European banks agreed to manipulate the market to prevent further currency devaluation against increased gold demand. The largest gold depository in the world is that of the U.S. Federal Reserve Bank in New York, which holds about 3% of the gold known to exist and accounted for today, as does the similarly laden U.S. Bullion Depository at Fort Knox. In 2005 the World Gold Council estimated total global gold supply to be 3,859 tonnes and demand to be 3,754 tonnes, giving a surplus of 105 tonnes. After the Nixon shock of 15 August 1971, the price began to greatly increase, and between 1968 and 2000 the price of gold ranged widely, from a high of $850 per troy ounce ($27.33/g) on 21 January 1980, to a low of $252.90 per troy ounce ($8.13/g) on 21 June 1999 (London Gold Fixing). Prices increased rapidly from 2001, but the 1980 high was not exceeded until 3 January 2008, when a new maximum of $865.35 per troy ounce was set. Another record price was set on 17 March 2008, at $1023.50 per troy ounce ($32.91/g). On 2 December 2009, gold reached a new high, closing at $1,217.23. Gold further rallied, hitting new highs in May 2010 after the European Union debt crisis prompted further purchase of gold as a safe asset. On 1 March 2011, gold hit a new all-time high of $1,432.57, based on investor concerns regarding ongoing unrest in North Africa as well as in the Middle East. From April 2001 to August 2011, spot gold prices more than quintupled in value against the US dollar, hitting a new all-time high of $1,913.50 on 23 August 2011, prompting speculation that the long secular bear market had ended and a bull market had returned. However, the price then began a slow decline towards $1,200 per troy ounce in late 2014 and 2015.
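The per-gram figures quoted in the price history above follow directly from the per-troy-ounce prices, using the standard troy ounce of 31.1034768 grams (a value also quoted in the following paragraph). A minimal Python sketch of the conversion, checked against the figures above:

```python
TROY_OUNCE_GRAMS = 31.1034768  # grams per troy ounce

def per_troy_ounce_to_per_gram(price_per_ozt: float) -> float:
    """Convert a price quoted per troy ounce into the equivalent price per gram."""
    return price_per_ozt / TROY_OUNCE_GRAMS

# Prices mentioned in the history above:
for price in (20.67, 35.00, 850.00, 252.90, 1300.00):
    print(f"${price:>8.2f}/ozt = ${per_troy_ounce_to_per_gram(price):7.3f}/g")
# 20.67 -> 0.665, 35.00 -> 1.125, 850.00 -> 27.328, 252.90 -> 8.131, 1300.00 -> 41.796
```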
In August 2020, the gold price picked up to US$2060 per ounce after a total growth of 59% from August 2018 to October 2020, a period during which it outpaced the Nasdaq total return of 54%. Gold futures are traded on the COMEX exchange. These contracts are priced in USD per troy ounce (1 troy ounce = 31.1034768 grams). Other applications Jewelry Because of the softness of pure (24k) gold, it is usually alloyed with other metals for use in jewelry, altering its hardness and ductility, melting point, color and other properties. Alloys with lower karat rating, typically 22k, 18k, 14k or 10k, contain higher percentages of copper, silver, palladium or other base metals in the alloy. Nickel is toxic, and its release from nickel white gold is controlled by legislation in Europe. Palladium-gold alloys are more expensive than those using nickel. High-karat white gold alloys are more resistant to corrosion than are either pure silver or sterling silver. The Japanese craft of Mokume-gane exploits the color contrasts between laminated colored gold alloys to produce decorative wood-grain effects. By 2014, the gold jewelry industry was growing despite a dip in gold prices. Demand in the first quarter of 2014 pushed turnover to $23.7 billion according to a World Gold Council report. Gold solder is used for joining the components of gold jewelry by high-temperature hard soldering or brazing. If the work is to be of hallmarking quality, the gold solder alloy must match the fineness of the work, and alloy formulas are manufactured to color-match yellow and white gold. Gold solder is usually made in at least three melting-point ranges referred to as Easy, Medium and Hard. By using the hard, high-melting point solder first, followed by solders with progressively lower melting points, goldsmiths can assemble complex items with several separate soldered joints. Gold can also be made into thread and used in embroidery. Electronics Only 10% of the world consumption of new gold produced goes to industry, but by far the most important industrial use for new gold is in fabrication of corrosion-free electrical connectors in computers and other electrical devices. For example, according to the World Gold Council, a typical cell phone may contain 50 mg of gold, worth about three dollars. But since nearly one billion cell phones are produced each year, a gold value of US$2.82 in each phone adds up to US$2.82 billion in gold from just this application. (Prices updated to November 2022) Though gold is attacked by free chlorine, its good conductivity and general resistance to oxidation and corrosion in other environments (including resistance to non-chlorinated acids) have led to its widespread industrial use in the electronic era as a thin-layer coating on electrical connectors, thereby ensuring good connection. For example, gold is used in the connectors of the more expensive electronics cables, such as audio, video and USB cables. The benefit of using gold over other connector metals such as tin in these applications has been debated; gold connectors are often criticized by audio-visual experts as unnecessary for most consumers and seen as simply a marketing ploy. However, the use of gold in other applications, such as electronic sliding contacts in highly humid or corrosive atmospheres and contacts with a very high failure cost (certain computers, communications equipment, spacecraft, jet aircraft engines), remains very common.
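The cell-phone figures above imply a straightforward calculation. The gold price used below is an assumption back-calculated so that 50 mg of gold comes to about US$2.82, as stated in the text; spot prices vary, so the sketch is purely illustrative.

```python
GOLD_PRICE_USD_PER_GRAM = 56.40   # assumed price (roughly $1,750 per troy ounce)
GOLD_PER_PHONE_GRAMS = 0.050      # 50 mg of gold in a typical phone (from the text)
PHONES_PER_YEAR = 1_000_000_000   # "nearly one billion" phones produced each year

value_per_phone = GOLD_PER_PHONE_GRAMS * GOLD_PRICE_USD_PER_GRAM
total_value = value_per_phone * PHONES_PER_YEAR
print(f"gold value per phone:   ${value_per_phone:.2f}")            # ~$2.82
print(f"gold value, all phones: ${total_value / 1e9:.2f} billion")  # ~$2.82 billion
```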
Besides sliding electrical contacts, gold is also used in electrical contacts because of its resistance to corrosion, electrical conductivity, ductility and lack of toxicity. Switch contacts are generally subjected to more intense corrosion stress than are sliding contacts. Fine gold wires are used to connect semiconductor devices to their packages through a process known as wire bonding. The concentration of free electrons in gold metal is 5.91×10²² cm⁻³. Gold is highly conductive to electricity and has been used for electrical wiring in some high-energy applications (only silver and copper are more conductive per volume, but gold has the advantage of corrosion resistance). For example, gold electrical wires were used during some of the Manhattan Project's atomic experiments, but large high-current silver wires were used in the calutron isotope separator magnets in the project. It is estimated that 16% of the world's presently-accounted-for gold and 22% of the world's silver is contained in electronic technology in Japan. Medicine There are only two gold compounds currently employed as pharmaceuticals in modern medicine (sodium aurothiomalate and auranofin), used in the treatment of arthritis and other similar conditions in the US due to their anti-inflammatory properties. These drugs have been explored as a means to help reduce the pain and swelling of rheumatoid arthritis, and also (historically) against tuberculosis and some parasites. Some esotericists and forms of alternative medicine assign metallic gold a healing power, against the scientific consensus. Historically, metallic gold and gold compounds have long been used for medicinal purposes. Gold, usually as the metal, is perhaps the most anciently administered medicine (apparently by shamanic practitioners) and was known to Dioscorides. In medieval times, gold was often seen as beneficial for the health, in the belief that something so rare and beautiful could not be anything but healthy. In the 19th century gold had a reputation as an anxiolytic, a therapy for nervous disorders. Depression, epilepsy, migraine, and glandular problems such as amenorrhea and impotence were treated, and most notably alcoholism (Keeley, 1897). The apparent paradox of the actual toxicology of the substance suggests the possibility of serious gaps in the understanding of the action of gold in physiology. Only salts and radioisotopes of gold are of pharmacological value, since elemental (metallic) gold is inert to all chemicals it encounters inside the body (e.g., ingested gold cannot be attacked by stomach acid). Gold alloys are used in restorative dentistry, especially in tooth restorations, such as crowns and permanent bridges. The gold alloys' slight malleability facilitates the creation of a superior molar mating surface with other teeth and produces results that are generally more satisfactory than those produced by the creation of porcelain crowns. The use of gold crowns in more prominent teeth such as incisors is favored in some cultures and discouraged in others. Colloidal gold preparations (suspensions of gold nanoparticles) in water are intensely red-colored, and can be made with tightly controlled particle sizes up to a few tens of nanometers across by reduction of gold chloride with citrate or ascorbate ions. Colloidal gold is used in research applications in medicine, biology and materials science. The technique of immunogold labeling exploits the ability of the gold particles to adsorb protein molecules onto their surfaces.
Colloidal gold particles coated with specific antibodies can be used as probes for the presence and position of antigens on the surfaces of cells. In ultrathin sections of tissues viewed by electron microscopy, the immunogold labels appear as extremely dense round spots at the position of the antigen. Gold, or alloys of gold and palladium, is applied as a conductive coating to biological specimens and other non-conducting materials such as plastics and glass to be viewed in a scanning electron microscope. The coating, which is usually applied by sputtering with an argon plasma, has a triple role in this application. Gold's very high electrical conductivity drains electrical charge to earth, and its very high density provides stopping power for electrons in the electron beam, helping to limit the depth to which the electron beam penetrates the specimen. This improves definition of the position and topography of the specimen surface and increases the spatial resolution of the image. Gold also produces a high output of secondary electrons when irradiated by an electron beam, and these low-energy electrons are the signal source most commonly used in the scanning electron microscope. The isotope gold-198 (half-life 2.7 days) is used in nuclear medicine, in some cancer treatments and for treating other diseases. Cuisine Gold can be used in food and has the E number 175. In 2016, the European Food Safety Authority published an opinion on the re-evaluation of gold as a food additive. Concerns included the possible presence of minute amounts of gold nanoparticles in the food additive, and that gold nanoparticles have been shown to be genotoxic in mammalian cells in vitro. Gold leaf, flake or dust is used on and in some gourmet foods, notably sweets and drinks, as a decorative ingredient. Gold flake was used by the nobility in medieval Europe as a decoration in food and drinks. Danziger Goldwasser (German for "Gold water of Danzig"), or Goldwasser, is a traditional German herbal liqueur produced in what is today Gdańsk, Poland, and Schwabach, Germany, and contains flakes of gold leaf. There are also some expensive (c. $1000) cocktails which contain flakes of gold leaf. However, since metallic gold is inert to all body chemistry, it has no taste, it provides no nutrition, and it leaves the body unaltered. Vark is a foil composed of a pure metal that is sometimes gold, and is used for garnishing sweets in South Asian cuisine. Miscellanea Gold produces a deep, intense red color when used as a coloring agent in cranberry glass. In photography, gold toners are used to shift the color of silver bromide black-and-white prints towards brown or blue tones, or to increase their stability. Used on sepia-toned prints, gold toners produce red tones. Kodak published formulas for several types of gold toners, which use gold as the chloride. Gold is a good reflector of electromagnetic radiation such as infrared and visible light, as well as radio waves. It is used for the protective coatings on many artificial satellites, in infrared protective faceplates in thermal-protection suits and astronauts' helmets, and in electronic warfare planes such as the EA-6B Prowler. Gold is used as the reflective layer on some high-end CDs. Automobiles may use gold for heat shielding. McLaren uses gold foil in the engine compartment of its F1 model. Gold can be manufactured so thin that it appears semi-transparent. It is used in some aircraft cockpit windows for de-icing or anti-icing by passing electricity through it.
The heat produced by the resistance of the gold is enough to prevent ice from forming. Gold is attacked by and dissolves in alkaline solutions of potassium or sodium cyanide, to form the salt gold cyanide, a technique that has been used in extracting metallic gold from ores in the cyanide process. Gold cyanide is the electrolyte used in commercial electroplating of gold onto base metals and electroforming. Gold chloride (chloroauric acid) solutions are used to make colloidal gold by reduction with citrate or ascorbate ions. Gold chloride and gold oxide are used to make cranberry or red-colored glass, which, like colloidal gold suspensions, contains evenly sized spherical gold nanoparticles. Gold, when dispersed in nanoparticles, can act as a heterogeneous catalyst of chemical reactions. In recent years, gold has been used as a symbol of pride by the autism rights movement, as its symbol Au could be seen as similar to the word "autism". Toxicity Pure metallic (elemental) gold is non-toxic and non-irritating when ingested and is sometimes used as a food decoration in the form of gold leaf. Metallic gold is also a component of the alcoholic drinks Goldschläger, Gold Strike, and Goldwasser. Metallic gold is approved as a food additive in the EU (E175 in the Codex Alimentarius). Although the gold ion is toxic, the acceptance of metallic gold as a food additive is due to its relative chemical inertness, and resistance to being corroded or transformed into soluble salts (gold compounds) by any known chemical process which would be encountered in the human body. Soluble compounds (gold salts) such as gold chloride are toxic to the liver and kidneys. Common cyanide salts of gold such as potassium gold cyanide, used in gold electroplating, are toxic by virtue of both their cyanide and gold content. There are rare cases of lethal gold poisoning from potassium gold cyanide. Gold toxicity can be ameliorated with chelation therapy with an agent such as dimercaprol. Gold metal was voted Allergen of the Year in 2001 by the American Contact Dermatitis Society; gold contact allergies affect mostly women. Despite this, gold is a relatively non-potent contact allergen, in comparison with metals like nickel. A sample of the fungus Aspergillus niger was found growing from gold mining solution and was found to contain cyano complexes of metals such as gold, silver, copper, iron and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides. See also Bulk leach extractable gold, for sampling ores Chrysiasis (dermatological condition) Digital gold currency, form of electronic currency GFMS business consultancy Gold fingerprinting, use impurities to identify an alloy Gold standard in banking List of countries by gold production Tumbaga, alloy of gold and copper Iron pyrite, fool's gold Nordic gold, non-gold copper alloy References Further reading Bachmann, H. G. The lure of gold : an artistic and cultural history (2006) online Bernstein, Peter L. The Power of Gold: The History of an Obsession (2000) online Brands, H.W. The Age of Gold: The California Gold Rush and the New American Dream (2003) excerpt Buranelli, Vincent. Gold : an illustrated history (1979) online, a wide-ranging popular history Cassel, Gustav. "The restoration of the gold standard." Economica 9 (1923): 171–185. online Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939 (Oxford UP, 1992). Ferguson, Niall.
The Ascent of Money – Financial History of the World (2009) online Hart, Matthew. Gold: The Race for the World's Most Seductive Metal. New York: Simon & Schuster, 2013. Johnson, Harry G. "The gold rush of 1968 in retrospect and prospect". American Economic Review 59.2 (1969): 344–348. online Kwarteng, Kwasi. War and Gold: A Five-Hundred-Year History of Empires, Adventures, and Debt (2014) online Vilar, Pierre. A History of Gold and Money, 1450–1920 (1960). online Vilches, Elvira. New World Gold: Cultural Anxiety and Monetary Disorder in Early Modern Spain (2010). External links Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Gold www.rsc.org Gold at The Periodic Table of Videos (University of Nottingham) Getting Gold 1898 book, www.lateralscience.co.uk, www.epa.gov Gold element information – rsc.org Chemical elements Transition metals Noble metals Precious metals Cubic minerals Minerals in space group 225 Dental materials Electrical conductors Native element minerals E-number additives Symbols of Alaska Symbols of California Chemical elements with face-centered cubic structure Coinage metals and alloys Symbols of Victoria
Gold
[ "Physics", "Chemistry" ]
13,115
[ "Dental materials", "Chemical elements", "Coinage metals and alloys", "Materials", "Alloys", "Electrical conductors", "Atoms", "Matter" ]
12,241
https://en.wikipedia.org/wiki/Gallium
Gallium is a chemical element; it has the symbol Ga and atomic number 31. Discovered by the French chemist Paul-Émile Lecoq de Boisbaudran in 1875, gallium is in group 13 of the periodic table and is similar to the other metals of the group (aluminium, indium, and thallium). Elemental gallium is a relatively soft, silvery metal at standard temperature and pressure. In its liquid state, it becomes silvery white. If enough force is applied, solid gallium may fracture conchoidally. Since its discovery in 1875, gallium has widely been used to make alloys with low melting points. It is also used in semiconductors, as a dopant in semiconductor substrates. The melting point of gallium (29.7646 °C, 85.5763 °F, 302.9146 K) is used as a temperature reference point. Gallium alloys are used in thermometers as a non-toxic and environmentally friendly alternative to mercury, and can withstand higher temperatures than mercury. A melting point of −19 °C (−2.2 °F), well below the freezing point of water, is claimed for the alloy galinstan (62–95% gallium, 5–22% indium, and 0–16% tin by weight), but that may be the freezing point with the effect of supercooling. Gallium does not occur as a free element in nature, but rather as gallium(III) compounds in trace amounts in zinc ores (such as sphalerite) and in bauxite. Elemental gallium is a liquid at temperatures greater than 29.76 °C (85.57 °F), and will melt in a person's hands at normal human body temperature of 37 °C (98.6 °F). Gallium is predominantly used in electronics. Gallium arsenide, the primary chemical compound of gallium in electronics, is used in microwave circuits, high-speed switching circuits, and infrared circuits. Semiconducting gallium nitride and indium gallium nitride produce blue and violet light-emitting diodes and diode lasers. Gallium is also used in the production of artificial gadolinium gallium garnet for jewelry. Gallium is considered a technology-critical element by the United States National Library of Medicine and Frontiers Media. Gallium has no known natural role in biology. Gallium(III) behaves in a similar manner to ferric salts in biological systems and has been used in some medical applications, including pharmaceuticals and radiopharmaceuticals. Physical properties Elemental gallium is not found in nature, but it is easily obtained by smelting. Very pure gallium is a silvery blue metal that fractures conchoidally like glass. Gallium's volume expands by 3.10% when it changes from a liquid to a solid, so care must be taken when storing it in containers that may rupture when it changes state. Gallium shares the higher-density liquid state with a short list of other materials that includes water, silicon, germanium, bismuth, and plutonium. Gallium forms alloys with most metals. It readily diffuses into cracks or grain boundaries of some metals such as aluminium, aluminium–zinc alloys and steel, causing extreme loss of strength and ductility called liquid metal embrittlement. The melting point of gallium, at 302.9146 K (29.7646 °C, 85.5763 °F), is just above room temperature, and is approximately the same as the average summer daytime temperatures in Earth's mid-latitudes. This melting point (mp) is one of the formal temperature reference points in the International Temperature Scale of 1990 (ITS-90) established by the International Bureau of Weights and Measures (BIPM). The triple point of gallium, 302.9166 K (29.7666 °C, 85.5799 °F), is used by the US National Institute of Standards and Technology (NIST) in preference to the melting point.
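The Celsius, Fahrenheit and kelvin values quoted for gallium's melting and triple points are consistent with the standard temperature-scale conversions. The short Python sketch below checks them and also computes the boiling-to-melting-point ratio discussed in the next paragraph (the boiling point of 2676 K is taken from that paragraph); it is an illustration only.

```python
def c_to_k(t_c: float) -> float:
    """Celsius to kelvin."""
    return t_c + 273.15

def c_to_f(t_c: float) -> float:
    """Celsius to Fahrenheit."""
    return t_c * 9 / 5 + 32

GA_MELTING_C = 29.7646   # ITS-90 reference value quoted above
GA_TRIPLE_C = 29.7666    # triple point quoted above
GA_BOILING_K = 2676      # boiling point, mentioned in the following paragraph

print(c_to_k(GA_MELTING_C), c_to_f(GA_MELTING_C))  # 302.9146 K, ~85.5763 °F
print(c_to_k(GA_TRIPLE_C), c_to_f(GA_TRIPLE_C))    # 302.9166 K, ~85.5799 °F
# Ratio of boiling point to melting point on the absolute scale ("nearly nine times"):
print(round(GA_BOILING_K / c_to_k(GA_MELTING_C), 2))  # 8.83
```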
The melting point of gallium allows it to melt in the human hand, and then solidify if removed. The liquid metal has a strong tendency to supercool below its melting point/freezing point: Ga nanoparticles can be kept in the liquid state below 90 K. Seeding with a crystal helps to initiate freezing. Gallium is one of the four non-radioactive metals (with caesium, rubidium, and mercury) that are known to be liquid at, or near, normal room temperature. Of the four, gallium is the only one that is neither highly reactive (as are rubidium and caesium) nor highly toxic (as is mercury) and can, therefore, be used in metal-in-glass high-temperature thermometers. It is also notable for having one of the largest liquid ranges for a metal, and for having (unlike mercury) a low vapor pressure at high temperatures. Gallium's boiling point, 2676 K, is nearly nine times higher than its melting point on the absolute scale, the greatest ratio between boiling point and melting point of any element. Unlike mercury, liquid gallium metal wets glass and skin, along with most other materials (with the exceptions of quartz, graphite, gallium(III) oxide and PTFE), making it mechanically more difficult to handle even though it is substantially less toxic and requires far fewer precautions than mercury. Gallium painted onto glass is a brilliant mirror. For this reason as well as the metal contamination and freezing-expansion problems, samples of gallium metal are usually supplied in polyethylene packets within other containers. Gallium does not crystallize in any of the simple crystal structures. The stable phase under normal conditions is orthorhombic with 8 atoms in the conventional unit cell. Within a unit cell, each atom has only one nearest neighbor (at a distance of 244 pm). The remaining six unit cell neighbors are spaced 27, 30 and 39 pm farther away, and they are grouped in pairs with the same distance. Many stable and metastable phases are found as a function of temperature and pressure. The bonding between the two nearest neighbors is covalent; hence Ga₂ dimers are seen as the fundamental building blocks of the crystal. This explains the low melting point relative to the neighbor elements, aluminium and indium. This structure is strikingly similar to that of iodine and may form because of interactions between the single 4p electrons of gallium atoms, further away from the nucleus than the 4s electrons and the [Ar]3d¹⁰ core. This phenomenon recurs with mercury with its "pseudo-noble-gas" [Xe]4f¹⁴5d¹⁰6s² electron configuration, which is liquid at room temperature. The 3d¹⁰ electrons do not shield the outer electrons very well from the nucleus and hence the first ionisation energy of gallium is greater than that of aluminium. Ga₂ dimers do not persist in the liquid state and liquid gallium exhibits a complex low-coordinated structure in which each gallium atom is surrounded by 10 others, rather than the 11–12 neighbors typical of most liquid metals. The physical properties of gallium are highly anisotropic, i.e. have different values along the three major crystallographic axes a, b, and c, producing a significant difference between the linear (α) and volume thermal expansion coefficients. The properties of gallium are strongly temperature-dependent, particularly near the melting point. For example, the coefficient of thermal expansion increases by several hundred percent upon melting. Isotopes Gallium has 30 known isotopes, ranging in mass number from 60 to 89.
Only two isotopes are stable and occur naturally, gallium-69 and gallium-71. Gallium-69 is more abundant: it makes up about 60.1% of natural gallium, while gallium-71 makes up the remaining 39.9%. All the other isotopes are radioactive, with gallium-67 being the longest-lived (half-life 3.261 days). Isotopes lighter than gallium-69 usually decay through beta plus decay (positron emission) or electron capture to isotopes of zinc, while isotopes heavier than gallium-71 decay through beta minus decay (electron emission), possibly with delayed neutron emission, to isotopes of germanium. Gallium-70 can decay through both beta minus decay and electron capture. Gallium-67 is unique among the light isotopes in having only electron capture as a decay mode, as its decay energy is not sufficient to allow positron emission. Gallium-67 and gallium-68 (half-life 67.7 min) are both used in nuclear medicine. Chemical properties Gallium is found primarily in the +3 oxidation state. The +1 oxidation state is also found in some compounds, although it is less common than it is for gallium's heavier congeners indium and thallium. For example, the very stable GaCl2 contains both gallium(I) and gallium(III) and can be formulated as Ga(I)Ga(III)Cl4; in contrast, the monochloride is unstable above 0 °C, disproportionating into elemental gallium and gallium(III) chloride. Compounds containing Ga–Ga bonds are true gallium(II) compounds, such as GaS (which can be formulated as [Ga2]4+(S2−)2) and the dioxan complex Ga2Cl4(C4H8O2)2. Aqueous chemistry Strong acids dissolve gallium, forming gallium(III) salts such as Ga(NO3)3 (gallium nitrate). Aqueous solutions of gallium(III) salts contain the hydrated gallium ion, [Ga(H2O)6]3+. Gallium(III) hydroxide, Ga(OH)3, may be precipitated from gallium(III) solutions by adding ammonia. Dehydrating Ga(OH)3 at 100 °C produces gallium oxide hydroxide, GaO(OH). Alkaline hydroxide solutions dissolve gallium, forming gallate salts (not to be confused with identically named gallic acid salts) containing the Ga(OH)4− anion. Gallium hydroxide, which is amphoteric, also dissolves in alkali to form gallate salts. Although earlier work suggested [Ga(OH)6]3− as another possible gallate anion, it was not found in later work. Oxides and chalcogenides Gallium reacts with the chalcogens only at relatively high temperatures. At room temperature, gallium metal is not reactive with air and water because it forms a passive, protective oxide layer. At higher temperatures, however, it reacts with atmospheric oxygen to form gallium(III) oxide, Ga2O3. Reducing Ga2O3 with elemental gallium in vacuum at 500 °C to 700 °C yields the dark brown gallium(I) oxide, Ga2O. Ga2O is a very strong reducing agent, capable of reducing H2SO4 to H2S. It disproportionates at 800 °C back to gallium and Ga2O3. Gallium(III) sulfide, Ga2S3, has 3 possible crystal modifications. It can be made by the reaction of gallium with hydrogen sulfide (H2S) at 950 °C. Alternatively, Ga(OH)3 can be used at 747 °C: 2 Ga(OH)3 + 3 H2S → Ga2S3 + 6 H2O Reacting a mixture of alkali metal carbonates and Ga2O3 with H2S leads to the formation of thiogallates containing the [Ga2S4]2− anion. Strong acids decompose these salts, releasing H2S in the process. The mercury salt, HgGa2S4, can be used as a phosphor. Gallium also forms sulfides in lower oxidation states, such as gallium(II) sulfide and the green gallium(I) sulfide, the latter of which is produced from the former by heating to 1000 °C under a stream of nitrogen. The other binary chalcogenides, Ga2Se3 and Ga2Te3, have the zincblende structure. They are all semiconductors but are easily hydrolysed and have limited utility.
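The sulfide-forming reaction given above can be sanity-checked by counting atoms on each side of the equation. The following Python sketch does this with hand-written atom counts rather than any formula parsing; it is only a check of the equation as restored here, not a statement about the chemistry beyond what the text already says.

```python
from collections import Counter

# Atom counts for each species in  2 Ga(OH)3 + 3 H2S -> Ga2S3 + 6 H2O
GA_OH_3 = Counter({"Ga": 1, "O": 3, "H": 3})
H2S = Counter({"H": 2, "S": 1})
GA2S3 = Counter({"Ga": 2, "S": 3})
H2O = Counter({"H": 2, "O": 1})

def totals(side):
    """Sum the atom counts over (coefficient, species) pairs for one side."""
    out = Counter()
    for coeff, species in side:
        for element, count in species.items():
            out[element] += coeff * count
    return out

left = totals([(2, GA_OH_3), (3, H2S)])
right = totals([(1, GA2S3), (6, H2O)])
print(left == right)  # True: 2 Ga, 6 O, 12 H and 3 S appear on each side
```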
Nitrides and pnictides Gallium reacts with ammonia at 1050 °C to form gallium nitride, GaN. Gallium also forms binary compounds with phosphorus, arsenic, and antimony: gallium phosphide (GaP), gallium arsenide (GaAs), and gallium antimonide (GaSb). These compounds have the same structure as ZnS, and have important semiconducting properties. GaP, GaAs, and GaSb can be synthesized by the direct reaction of gallium with elemental phosphorus, arsenic, or antimony. They exhibit higher electrical conductivity than GaN. GaP can also be synthesized by reacting with phosphorus at low temperatures. Gallium forms ternary nitrides; for example: Li3N + GaN → Li3GaN2 Similar compounds with phosphorus and arsenic are possible: Li3GaP2 and Li3GaAs2. These compounds are easily hydrolyzed by dilute acids and water. Halides Gallium(III) oxide reacts with fluorinating agents such as HF or F2 to form gallium(III) fluoride, GaF3. It is an ionic compound strongly insoluble in water. However, it dissolves in hydrofluoric acid, in which it forms an adduct with water, GaF3·3H2O. Attempting to dehydrate this adduct forms GaF2OH·nH2O. The adduct reacts with ammonia to form GaF3·3NH3, which can then be heated to form anhydrous GaF3. Gallium trichloride is formed by the reaction of gallium metal with chlorine gas. Unlike the trifluoride, gallium(III) chloride exists as dimeric molecules, Ga2Cl6, with a melting point of 78 °C. Equivalent compounds are formed with bromine and iodine, Ga2Br6 and Ga2I6. Like the other group 13 trihalides, gallium(III) halides are Lewis acids, reacting as halide acceptors with alkali metal halides to form salts containing GaX4− anions, where X is a halogen. They also react with alkyl halides to form carbocations and GaX4−. When heated to a high temperature, gallium(III) halides react with elemental gallium to form the respective gallium(I) halides. For example, GaCl3 reacts with Ga to form GaCl: 2 Ga + GaCl3 → 3 GaCl (g) At lower temperatures, the equilibrium shifts toward the left and GaCl disproportionates back to elemental gallium and GaCl3. GaCl can also be produced by reacting Ga with HCl at 950 °C; the product can be condensed as a red solid. Gallium(I) compounds can be stabilized by forming adducts with Lewis acids. For example: GaCl + AlCl3 → Ga+[AlCl4]− The so-called "gallium(II) halides", GaX2, are actually adducts of gallium(I) halides with the respective gallium(III) halides, having the structure Ga+[GaX4]−. For example: GaCl + GaCl3 → Ga+[GaCl4]− Hydrides Like aluminium, gallium also forms a hydride, GaH3, known as gallane, which may be produced by reacting lithium gallanate (LiGaH4) with gallium(III) chloride at −30 °C: 3 LiGaH4 + GaCl3 → 3 LiCl + 4 GaH3 In the presence of dimethyl ether as solvent, GaH3 polymerizes to (GaH3)n. If no solvent is used, the dimer Ga2H6 (digallane) is formed as a gas. Its structure is similar to diborane, having two hydrogen atoms bridging the two gallium centers, unlike α-AlH3 in which aluminium has a coordination number of 6. Gallane is unstable above −10 °C, decomposing to elemental gallium and hydrogen. Organogallium compounds Organogallium compounds are of similar reactivity to organoindium compounds, less reactive than organoaluminium compounds, but more reactive than organothallium compounds. Alkylgalliums are monomeric. Lewis acidity decreases in the order Al > Ga > In and as a result organogallium compounds do not form bridged dimers as organoaluminium compounds do. Organogallium compounds are also less reactive than organoaluminium compounds. They do form stable peroxides. These alkylgalliums are liquids at room temperature, having low melting points, and are quite mobile and flammable.
Triphenylgallium is monomeric in solution, but its crystals form chain structures due to weak intermolecular Ga···C interactions. Gallium trichloride is a common starting reagent for the formation of organogallium compounds, such as in carbogallation reactions. Gallium trichloride reacts with lithium cyclopentadienide in diethyl ether to form the trigonal planar gallium cyclopentadienyl complex GaCp3. Gallium(I) forms complexes with arene ligands such as hexamethylbenzene. Because this ligand is quite bulky, the structure of the [Ga(η6-C6Me6)]+ cation is that of a half-sandwich. Less bulky ligands such as mesitylene allow two ligands to be attached to the central gallium atom in a bent sandwich structure. Benzene is even less bulky and allows the formation of dimers: an example is [Ga(η6-C6H6)2][GaCl4]·3C6H6. History In 1871, the existence of gallium was first predicted by Russian chemist Dmitri Mendeleev, who named it "eka-aluminium" from its position in his periodic table. He also predicted several properties of eka-aluminium that correspond closely to the real properties of gallium, such as its density, melting point, oxide character, and bonding in chloride. Comparison between Mendeleev's 1871 predictions and the known properties of gallium: atomic weight ~68 predicted versus 69.723 actual; density 5.9 g/cm3 versus 5.904 g/cm3; melting point low versus 29.767 °C; formula of oxide M2O3 versus Ga2O3; density of oxide 5.5 g/cm3 versus 5.88 g/cm3; nature of hydroxide amphoteric versus amphoteric. Mendeleev further predicted that eka-aluminium would be discovered by means of the spectroscope, and that metallic eka-aluminium would dissolve slowly in both acids and alkalis and would not react with air. He also predicted that M2O3 would dissolve in acids to give MX3 salts, that eka-aluminium salts would form basic salts, that eka-aluminium sulfate should form alums, and that anhydrous MCl3 should have a greater volatility than ZnCl2: all of these predictions turned out to be true. Gallium was discovered using spectroscopy by French chemist Paul-Émile Lecoq de Boisbaudran in 1875 from its characteristic spectrum (two violet lines) in a sample of sphalerite. Later that year, Lecoq obtained the free metal by electrolysis of the hydroxide in potassium hydroxide solution. He named the element "gallia", from the Latin Gallia meaning 'Gaul', a name for his native land of France. It was later claimed that, in a multilingual pun of a kind favoured by men of science in the 19th century, he had also named gallium after himself: le coq is French for 'the rooster', and the Latin word for 'rooster' is gallus. In an 1877 article, Lecoq denied this conjecture. Originally, de Boisbaudran determined the density of gallium as 4.7 g/cm3, the only property that failed to match Mendeleev's predictions; Mendeleev then wrote to him and suggested that he should remeasure the density, and de Boisbaudran then obtained the correct value of 5.9 g/cm3, which Mendeleev had predicted exactly. From its discovery in 1875 until the era of semiconductors, the primary uses of gallium were high-temperature thermometrics and metal alloys with unusual properties of stability or ease of melting (some such being liquid at room temperature). The development of gallium arsenide as a direct bandgap semiconductor in the 1960s ushered in the most important stage in the applications of gallium.
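As a rough illustration of how close the quantitative predictions in the comparison above were, the relative differences can be computed directly from the figures given there. The Python sketch below adds no data beyond those table values and is purely illustrative.

```python
# (property, Mendeleev's prediction, actual value), taken from the comparison above
predictions = [
    ("atomic weight", 68.0, 69.723),
    ("density (g/cm3)", 5.9, 5.904),
    ("density of oxide (g/cm3)", 5.5, 5.88),
]

for name, predicted, actual in predictions:
    relative_error = abs(actual - predicted) / actual * 100
    print(f"{name}: predicted {predicted}, actual {actual}, off by {relative_error:.1f}%")
# atomic weight off by ~2.5%, density by ~0.1%, oxide density by ~6.5%
```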
In the late 1960s, the electronics industry started using gallium on a commercial scale to fabricate light-emitting diodes, photovoltaics and semiconductors, while the metals industry used it to reduce the melting point of alloys. The first blue gallium nitride LEDs were developed in 1971–1973, but they were feeble. Only in the early 1990s did Shuji Nakamura manage to combine GaN with indium gallium nitride and develop the modern blue LED, now the basis of ubiquitous white LEDs, which Nichia commercialized in 1993. He and two other Japanese scientists received the Nobel Prize in Physics in 2014 for this work. Global gallium production grew slowly, from several tens of tonnes per year in the 1970s until about 2010, when it passed 100 t/yr and then accelerated rapidly, reaching about 450 t/yr by 2024. Occurrence Gallium does not exist as a free element in the Earth's crust, and the few high-content minerals, such as gallite (CuGaS2), are too rare to serve as a primary source. The abundance in the Earth's crust is approximately 16.9 ppm. It is the 34th most abundant element in the crust. This is comparable to the crustal abundances of lead, cobalt, and niobium. Yet unlike these elements, gallium does not form its own ore deposits with concentrations of > 0.1 wt.% in ore. Rather it occurs at trace concentrations similar to the crustal value in zinc ores, and at somewhat higher values (~ 50 ppm) in aluminium ores, from both of which it is extracted as a by-product. This lack of independent deposits is due to gallium's geochemical behaviour, which shows no strong enrichment in the processes relevant to the formation of most ore deposits. The United States Geological Survey (USGS) estimates that more than 1 million tons of gallium is contained in known reserves of bauxite and zinc ores. Some coal flue dusts contain small quantities of gallium, typically less than 1% by weight. However, these amounts are not extractable without mining of the host materials (see below). Thus, the availability of gallium is fundamentally determined by the rate at which bauxite, zinc ores, and coal are extracted. Production and availability Gallium is produced exclusively as a by-product during the processing of the ores of other metals. Its main source material is bauxite, the chief ore of aluminium, but minor amounts are also extracted from sulfidic zinc ores (sphalerite being the main host mineral). In the past, certain coals were an important source. During the processing of bauxite to alumina in the Bayer process, gallium accumulates in the sodium hydroxide liquor. From this it can be extracted by a variety of methods. The most recent is the use of ion-exchange resin. Achievable extraction efficiencies critically depend on the original concentration in the feed bauxite. At a typical feed concentration of 50 ppm, about 15% of the contained gallium is extractable. The remainder reports to the red mud and aluminium hydroxide streams. Gallium is removed from the ion-exchange resin in solution. Electrolysis then gives gallium metal. For semiconductor use, it is further purified with zone melting or single-crystal extraction from a melt (Czochralski process). Purities of 99.9999% are routinely achieved and commercially available. Its by-product status means that gallium production is constrained by the amount of bauxite, sulfidic zinc ores (and coal) extracted per year. Therefore, its availability needs to be discussed in terms of supply potential.
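The extraction figures above imply a rough yield per tonne of bauxite processed. The sketch below simply multiplies the quoted feed concentration by the quoted recovery fraction; the resulting bauxite-per-tonne-of-gallium figure is an illustrative derived number, not one stated in the source.

```python
FEED_CONCENTRATION_PPM = 50   # grams of gallium per tonne of bauxite (from the text)
RECOVERY_FRACTION = 0.15      # share of the contained gallium that is extractable

gallium_g_per_tonne_bauxite = FEED_CONCENTRATION_PPM * RECOVERY_FRACTION
print(f"{gallium_g_per_tonne_bauxite:.1f} g of gallium recovered per tonne of bauxite")  # 7.5 g

tonnes_bauxite_per_tonne_gallium = 1_000_000 / gallium_g_per_tonne_bauxite
print(f"about {tonnes_bauxite_per_tonne_gallium:,.0f} tonnes of bauxite per tonne of gallium")  # ~133,333
```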
The supply potential of a by-product is defined as that amount which is economically extractable from its host materials per year under current market conditions (i.e. technology and price). Reserves and resources are not relevant for by-products, since they cannot be extracted independently from the main-products. Recent estimates put the supply potential of gallium at a minimum of 2,100 t/yr from bauxite, 85 t/yr from sulfidic zinc ores, and potentially 590 t/yr from coal. These figures are significantly greater than current production (375 t in 2016). Thus, major future increases in the by-product production of gallium will be possible without significant increases in production costs or price. The average price for low-grade gallium was $120 per kilogram in 2016 and $135–140 per kilogram in 2017. In 2017, the world's production of low-grade gallium increased by 15% from 2016. China, Japan, South Korea, Russia, and Ukraine were the leading producers, while Germany ceased primary production of gallium in 2016. The yield of high-purity gallium was ca. 180 tons, mostly originating from China, Japan, Slovakia, the UK and the U.S. The 2017 world annual production capacity was estimated at 730 tons for low-grade and 320 tons for refined gallium. China also accounted for more than half of global LED production. As of July 2023, China accounted for between 80% and 95% of global gallium production. Applications Semiconductor applications dominate the commercial demand for gallium, accounting for 98% of the total. The next major application is for gadolinium gallium garnets. As of 2022, 44% of world use went to light fixtures and 36% to integrated circuits, with photovoltaics and magnets each accounting for about 7%. Semiconductors Extremely high-purity (>99.9999%) gallium is commercially available to serve the semiconductor industry. Gallium arsenide (GaAs) and gallium nitride (GaN) used in electronic components represented about 98% of the gallium consumption in the United States in 2007. About 66% of semiconductor gallium is used in the U.S. in integrated circuits (mostly gallium arsenide), such as the manufacture of ultra-high-speed logic chips and MESFETs for low-noise microwave preamplifiers in cell phones. About 20% of this gallium is used in optoelectronics. Worldwide, gallium arsenide makes up 95% of the annual global gallium consumption. It amounted to $7.5 billion in 2016, with 53% originating from cell phones, 27% from wireless communications, and the rest from automotive, consumer, fiber-optic, and military applications. The recent increase in GaAs consumption is mostly related to the emergence of 3G and 4G smartphones, which employ up to 10 times the amount of GaAs used in older models. Gallium arsenide and gallium nitride can also be found in a variety of optoelectronic devices, which had a market value of $15.3 billion in 2015 and $18.5 billion in 2016. Aluminium gallium arsenide (AlGaAs) is used in high-power infrared laser diodes. The semiconductors gallium nitride and indium gallium nitride are used in blue and violet optoelectronic devices, mostly laser diodes and light-emitting diodes. For example, gallium nitride 405 nm diode lasers are used as a violet light source for higher-density Blu-ray Disc compact data disc drives. Other major applications of gallium nitride are cable television transmission, commercial wireless infrastructure, power electronics, and satellites.
The GaN radio frequency device market alone was estimated at $370 million in 2016 and $420 million in 2016. Multijunction photovoltaic cells, developed for satellite power applications, are made by molecular-beam epitaxy or metalorganic vapour-phase epitaxy of thin films of gallium arsenide, indium gallium phosphide, or indium gallium arsenide. The Mars Exploration Rovers and several satellites use triple-junction gallium arsenide on germanium cells. Gallium is also a component in photovoltaic compounds (such as copper indium gallium selenium sulfide) used in solar panels as a cost-efficient alternative to crystalline silicon. Galinstan and other alloys Gallium readily alloys with most metals, and is used as an ingredient in low-melting alloys. The nearly eutectic alloy of gallium, indium, and tin is a room temperature liquid used in medical thermometers. This alloy, with the trade-name Galinstan (with the "-stan" referring to the tin, stannum in Latin), has a low melting point of −19 °C (−2.2 °F). It has been suggested that this family of alloys could also be used to cool computer chips in place of water, and it is often used as a replacement for thermal paste in high-performance computing. Gallium alloys have been evaluated as substitutes for mercury dental amalgams, but these materials have yet to see wide acceptance. Liquid alloys containing mostly gallium and indium have been found to convert gaseous CO2 into solid carbon and are being researched as potential methods for carbon capture and possibly carbon removal. Because gallium wets glass or porcelain, gallium can be used to create brilliant mirrors. When the wetting action of gallium alloys is not desired (as in Galinstan glass thermometers), the glass must be protected with a transparent layer of gallium(III) oxide. Due to their high surface tension and deformability, gallium-based liquid metals can be used to create actuators controlled through changes in surface tension. Researchers have demonstrated the potential of liquid metal actuators as artificial muscles in robotic actuation. The plutonium used in nuclear weapon pits is stabilized in the δ phase and made machinable by alloying with gallium. Biomedical applications Although gallium has no natural function in biology, gallium ions interact with processes in the body in a manner similar to iron(III). Because these processes include inflammation, a marker for many disease states, several gallium salts are used (or are in development) as pharmaceuticals and radiopharmaceuticals in medicine. Interest in the anticancer properties of gallium emerged when it was discovered that 67Ga(III) citrate injected in tumor-bearing animals localized to tumor sites. Clinical trials have shown gallium nitrate to have antineoplastic activity against non-Hodgkin's lymphoma and urothelial cancers. A new generation of gallium-ligand complexes such as tris(8-quinolinolato)gallium(III) (KP46) and gallium maltolate has emerged. Gallium nitrate (brand name Ganite) has been used as an intravenous pharmaceutical to treat hypercalcemia associated with tumor metastasis to bones. Gallium is thought to interfere with osteoclast function, and the therapy may be effective when other treatments have failed. Gallium maltolate, an oral, highly absorbable form of gallium(III) ion, is an anti-proliferative agent against pathologically proliferating cells, particularly cancer cells and some bacteria that accept it in place of ferric iron (Fe3+).
Researchers are conducting clinical and preclinical trials on this compound as a potential treatment for a number of cancers, infectious diseases, and inflammatory diseases. When gallium ions are mistakenly taken up in place of iron(III) by bacteria such as Pseudomonas, the ions interfere with respiration, and the bacteria die. This happens because iron is redox-active, allowing the transfer of electrons during respiration, while gallium is redox-inactive. A complex amine-phenol Ga(III) compound MR045 is selectively toxic to parasites resistant to chloroquine, a common drug against malaria. Both the Ga(III) complex and chloroquine act by inhibiting crystallization of hemozoin, a disposal product formed from the digestion of blood by the parasites. Radiogallium salts Gallium-67 salts such as gallium citrate and gallium nitrate are used as radiopharmaceutical agents in the nuclear medicine imaging known as gallium scan. The radioactive isotope 67Ga is used, and the compound or salt of gallium is unimportant. The body handles Ga3+ in many ways as though it were Fe3+, and the ion is bound (and concentrates) in areas of inflammation, such as infection, and in areas of rapid cell division. This allows such sites to be imaged by nuclear scan techniques. Gallium-68, a positron emitter with a half-life of 68 min, is now used as a diagnostic radionuclide in PET-CT when linked to pharmaceutical preparations such as DOTATOC, a somatostatin analogue used for neuroendocrine tumors investigation, and DOTA-TATE, a newer one, used for neuroendocrine metastasis and lung neuroendocrine cancer, such as certain types of microcytoma. Gallium-68's preparation as a pharmaceutical is chemical, and the radionuclide is extracted by elution from germanium-68, a synthetic radioisotope of germanium, in gallium-68 generators. Other uses Neutrino detection: Gallium is used for neutrino detection. Possibly the largest amount of pure gallium ever collected in a single location is the Gallium-Germanium Neutrino Telescope used by the SAGE experiment at the Baksan Neutrino Observatory in Russia. This detector contains 55–57 tonnes (~9 cubic metres) of liquid gallium. Another experiment was the GALLEX neutrino detector operated in the early 1990s in an Italian mountain tunnel. The detector contained 12.2 tons of watered gallium-71. Solar neutrinos caused a few atoms of 71Ga to become radioactive 71Ge, which were detected. This experiment showed that the solar neutrino flux is 40% less than theory predicted. This deficit (solar neutrino problem) was not explained until better solar neutrino detectors and theories were constructed (see SNO). Ion source: Gallium is also used as a liquid metal ion source for a focused ion beam. For example, a focused gallium-ion beam was used to create the world's smallest book, Teeny Ted from Turnip Town. Lubricants: Gallium serves as an additive in glide wax for skis and other low-friction surface materials. Flexible electronics: Materials scientists speculate that the properties of gallium could make it suitable for the development of flexible and wearable devices. Hydrogen generation: Gallium disrupts the protective oxide layer on aluminium, allowing water to react with the aluminium in AlGa to produce hydrogen gas. Humor: A well-known practical joke among chemists is to fashion gallium spoons and use them to serve tea to unsuspecting guests, since gallium has a similar appearance to its lighter homolog aluminium. The spoons then melt in the hot tea. 
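The 68-minute half-life quoted for gallium-68 in the radiopharmaceutical discussion above determines how quickly its activity falls off, which is why the isotope is obtained from an on-site generator and used promptly. A minimal Python sketch of the decay law, using only the half-life from the text:

```python
import math

HALF_LIFE_MIN = 68.0  # gallium-68 half-life, from the text above

def fraction_remaining(minutes: float) -> float:
    """Fraction of the original Ga-68 activity remaining after the given time."""
    return math.exp(-math.log(2) * minutes / HALF_LIFE_MIN)

for t in (30, 68, 120, 240):
    print(f"after {t:>3} min: {fraction_remaining(t) * 100:5.1f}% of the activity remains")
# ~73.7% after 30 min, 50.0% after 68 min, ~29.4% after 2 h, ~8.7% after 4 h
```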
Gallium in the ocean Advances in trace element testing have allowed scientists to discover traces of dissolved gallium in the Atlantic and Pacific Oceans. In recent years, dissolved gallium concentrations have also been reported in the Beaufort Sea. These reports reflect the possible profiles of Pacific and Atlantic Ocean waters. In the Pacific Ocean, typical dissolved gallium concentrations are between 4 and 6 pmol/kg at depths of less than about 150 m. In comparison, Atlantic waters contain 25–28 pmol/kg at depths greater than about 350 m. Gallium enters the oceans mainly through aeolian input, and its presence in seawater can be used to resolve the distribution of aluminium in the oceans. The reason for this is that gallium is geochemically similar to aluminium, just less reactive. Gallium also has a slightly longer surface water residence time than aluminium. Gallium has a dissolved profile similar to that of aluminium; because of this, gallium can be used as a tracer for aluminium. Gallium can also be used as a tracer of aeolian inputs of iron. Gallium is used as a tracer for iron in the northwest Pacific and the south and central Atlantic Oceans. For example, in the northwest Pacific, low-gallium surface waters in the subpolar region suggest that there is low dust input, which can subsequently explain the region's high-nutrient, low-chlorophyll behavior. Precautions Metallic gallium is not toxic. However, several gallium compounds are toxic; gallium halide complexes, for example, can be toxic. The Ga3+ ion of soluble gallium salts tends to form the insoluble hydroxide when injected in large doses; precipitation of this hydroxide resulted in nephrotoxicity in animals. In lower doses, soluble gallium is tolerated well and does not accumulate as a poison, instead being excreted mostly through urine. Excretion of gallium occurs in two phases: the first phase has a biological half-life of 1 hour, while the second has a biological half-life of 25 hours. Inhaled Ga2O3 particles are probably toxic. Notes References External links Gallium at The Periodic Table of Videos (University of Nottingham) Safety data sheet at acialloys.com High-resolution photographs of molten gallium, gallium crystals and gallium ingots under Creative Commons licence Textbook information regarding gallium Environmental effects of gallium Gallium Statistics and Information Gallium: A Smart Metal United States Geological Survey Thermal conductivity Physical and thermodynamical properties of liquid gallium (doc pdf) Chemical elements predicted by Dmitri Mendeleev Chemical elements Coolants Post-transition metals Articles containing video clips Materials that expand upon freezing Chemical elements with primitive orthorhombic structure
Gallium
[ "Physics", "Chemistry" ]
7,869
[ "Periodic table", "Physical phenomena", "Phase transitions", "Chemical elements", "Materials", "Materials that expand upon freezing", "Atoms", "Matter", "Chemical elements predicted by Dmitri Mendeleev" ]
12,339
https://en.wikipedia.org/wiki/Genetically%20modified%20organism
A genetically modified organism (GMO) is any organism whose genetic material has been altered using genetic engineering techniques. The exact definition of a genetically modified organism and what constitutes genetic engineering varies, with the most common being an organism altered in a way that "does not occur naturally by mating and/or natural recombination". A wide variety of organisms have been genetically modified (GM), including animals, plants, and microorganisms. Genetic modification can include the introduction of new genes or enhancing, altering, or knocking out endogenous genes. In some genetic modifications, genes are transferred within the same species, across species (creating transgenic organisms), and even across kingdoms. Creating a genetically modified organism is a multi-step process. Genetic engineers must isolate the gene they wish to insert into the host organism and combine it with other genetic elements, including a promoter and terminator region and often a selectable marker. A number of techniques are available for inserting the isolated gene into the host genome. Recent advancements using genome editing techniques, notably CRISPR, have made the production of GMOs much simpler. Herbert Boyer and Stanley Cohen made the first genetically modified organism in 1973, a bacterium resistant to the antibiotic kanamycin. The first genetically modified animal, a mouse, was created in 1974 by Rudolf Jaenisch, and the first plant was produced in 1983. In 1994, the Flavr Savr tomato was released, the first commercialized genetically modified food. The first genetically modified animal to be commercialized was the GloFish (2003) and the first genetically modified animal to be approved for food use was the AquAdvantage salmon in 2015. Bacteria are the easiest organisms to engineer and have been used for research, food production, industrial protein purification (including drugs), agriculture, and art. There is potential to use them for environmental purposes or as medicine. Fungi have been engineered with much the same goals. Viruses play an important role as vectors for inserting genetic information into other organisms. This use is especially relevant to human gene therapy. There are proposals to remove the virulent genes from viruses to create vaccines. Plants have been engineered for scientific research, to create new colors in plants, deliver vaccines, and to create enhanced crops. Genetically modified crops are publicly the most controversial GMOs, in spite of having the most human health and environmental benefits. Animals are generally much harder to transform and the vast majority are still at the research stage. Mammals are the best model organisms for humans. Livestock is modified with the intention of improving economically important traits such as growth rate, quality of meat, milk composition, disease resistance, and survival. Genetically modified fish are used for scientific research, as pets, and as a food source. Genetic engineering has been proposed as a way to control mosquitos, a vector for many deadly diseases. Although human gene therapy is still relatively new, it has been used to treat genetic disorders such as severe combined immunodeficiency and Leber's congenital amaurosis. Many objections have been raised over the development of GMOs, particularly their commercialization. Many of these involve GM crops and whether food produced from them is safe and what impact growing them will have on the environment. 
Other concerns are the objectivity and rigor of regulatory authorities, contamination of non-genetically modified food, control of the food supply, patenting of life, and the use of intellectual property rights. Although there is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, GM food safety is a leading issue with critics. Gene flow, impact on non-target organisms, and escape are the major environmental concerns. Countries have adopted regulatory measures to deal with these concerns. There are differences in the regulation for the release of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Key issues concerning regulators include whether GM food should be labeled and the status of gene-edited organisms. Definition The definition of a genetically modified organism (GMO) is not clear and varies widely between countries, international bodies, and other communities. At its broadest, the definition of a GMO can include anything that has had its genes altered, including by nature. Taking a less broad view, it can encompass every organism that has had its genes altered by humans, which would include all crops and livestock. In 1993, the Encyclopedia Britannica defined genetic engineering as "any of a wide range of techniques ... among them artificial insemination, in vitro fertilization (e.g., 'test-tube' babies), sperm banks, cloning, and gene manipulation." The European Union (EU) included a similarly broad definition in early reviews, specifically mentioning GMOs being produced by "selective breeding and other means of artificial selection". These definitions were promptly adjusted with a number of exceptions added as the result of pressure from scientific and farming communities, as well as developments in science. The EU definition later excluded traditional breeding, in vitro fertilization, induction of polyploidy, mutation breeding, and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. Another approach was the definition provided by the Food and Agriculture Organization, the World Health Organization, and the European Commission, stating that the organisms must be altered in a way that does "not occur naturally by mating and/or natural recombination". Progress in science, such as the discovery of horizontal gene transfer being a relatively common natural phenomenon, further added to the confusion on what "occurs naturally", which led to further adjustments and exceptions. There are examples of crops that fit this definition, but are not normally considered GMOs. For example, the grain crop triticale was fully developed in a laboratory in 1930 using various techniques to alter its genome. Genetically engineered organism (GEO) can be considered a more precise term than GMO when describing organisms' genomes that have been directly manipulated with biotechnology. The Cartagena Protocol on Biosafety used the synonym living modified organism (LMO) in 2000 and defined it as "any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology." Modern biotechnology is further defined as "In vitro nucleic acid techniques, including recombinant deoxyribonucleic acid (DNA) and direct injection of nucleic acid into cells or organelles, or fusion of cells beyond the taxonomic family." 
The term GMO was not commonly used by scientists to describe genetically engineered organisms until after its usage became common in popular media. The United States Department of Agriculture (USDA) considers GMOs to be plants or animals with heritable changes introduced by genetic engineering or traditional methods, while GEO specifically refers to organisms with genes introduced, eliminated, or rearranged using molecular biology, particularly recombinant DNA techniques, such as transgenesis. The definitions focus on the process more than the product, which means there could be GMOs and non-GMOs with very similar genotypes and phenotypes. This has led scientists to label it as a scientifically meaningless category, saying that it is impossible to group all the different types of GMOs under one common definition. It has also caused issues for organic institutions and groups looking to ban GMOs. It also poses problems as new processes are developed. The current definitions came in before genome editing became popular and there is some confusion as to whether gene-edited organisms are GMOs. The EU includes "organisms obtained by mutagenesis" in its GMO definition, but has excluded them from regulation based on their "long safety record" and because they have "conventionally been used in a number of applications". In contrast, the USDA has ruled that gene-edited organisms are not considered GMOs. Even greater inconsistency and confusion is associated with various "Non-GMO" or "GMO-free" labeling schemes in food marketing, where even products such as water or salt, which do not contain any organic substances or genetic material (and thus cannot be genetically modified by definition), are being labeled to create an impression of being "more healthy". Production Creating a genetically modified organism (GMO) is a multi-step process. Genetic engineers must isolate the gene they wish to insert into the host organism. This gene can be taken from a cell or artificially synthesized. If the chosen gene or the donor organism's genome has been well studied it may already be accessible from a genetic library. The gene is then combined with other genetic elements, including a promoter and terminator region and a selectable marker. A number of techniques are available for inserting the isolated gene into the host genome. Bacteria can be induced to take up foreign DNA, usually by exposure to heat shock or electroporation. DNA is generally inserted into animal cells using microinjection, where it can be injected through the cell's nuclear envelope directly into the nucleus, or through the use of viral vectors. In plants the DNA is often inserted using Agrobacterium-mediated recombination, biolistics or electroporation. As only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. In plants this is accomplished through tissue culture. In animals it is necessary to ensure that the inserted DNA is present in the embryonic stem cells. Further testing using PCR, Southern hybridization, and DNA sequencing is conducted to confirm that an organism contains the new gene. Traditionally the new genetic material was inserted randomly within the host genome. Gene targeting techniques, which create double-stranded breaks and take advantage of the cell's natural homologous recombination repair systems, have been developed to target insertion to exact locations. 
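As a purely illustrative aside (not part of the article's sources), the assembly and confirmation steps just described can be sketched with a toy script. All of the sequences below are made up, and real confirmation relies on PCR, Southern hybridization and DNA sequencing rather than a simple substring search.

# Minimal illustrative sketch only: hypothetical toy sequences stand in for real genetic elements.
PROMOTER = "TTGACAATTAATCATCG"    # hypothetical promoter sequence
GENE = "ATGGCTAAAGGTGAAGAATAA"    # hypothetical gene of interest
TERMINATOR = "AATAAAGCGGCCGC"     # hypothetical terminator region
MARKER = "ATGAGCCATATTCAACGGTAA"  # hypothetical selectable-marker gene

def build_construct(promoter, gene, terminator, marker):
    """Concatenate the genetic elements in the order described in the text."""
    return promoter + gene + terminator + marker

def contains_insert(host_read, insert):
    """Toy stand-in for confirming the new gene is present: a substring search."""
    return insert in host_read

construct = build_construct(PROMOTER, GENE, TERMINATOR, MARKER)
host_read = "GCTAGC" + construct + "CCATGG"   # pretend sequencing read from the transformed cell
print(contains_insert(host_read, GENE))       # True if the gene of interest is present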
Genome editing uses artificially engineered nucleases that create breaks at specific points. There are four families of engineered nucleases: meganucleases, zinc finger nucleases, transcription activator-like effector nucleases (TALENs), and the Cas9-guideRNA system (adapted from CRISPR). TALEN and CRISPR are the two most commonly used and each has its own advantages. TALENs have greater target specificity, while CRISPR is easier to design and more efficient. History Humans have domesticated plants and animals since around 12,000 BCE, using selective breeding or artificial selection (as contrasted with natural selection). The process of selective breeding, in which organisms with desired traits (and thus with the desired genes) are used to breed the next generation and organisms lacking the trait are not bred, is a precursor to the modern concept of genetic modification. Various advancements in genetics allowed humans to directly alter the DNA and therefore genes of organisms. In 1972, Paul Berg created the first recombinant DNA molecule when he combined DNA from a monkey virus with that of the lambda virus. Herbert Boyer and Stanley Cohen made the first genetically modified organism in 1973. They took a gene from a bacterium that provided resistance to the antibiotic kanamycin, inserted it into a plasmid and then induced other bacteria to incorporate the plasmid. The bacteria that had successfully incorporated the plasmid were then able to survive in the presence of kanamycin. Boyer and Cohen expressed other genes in bacteria. This included genes from the toad Xenopus laevis in 1974, creating the first GMO expressing a gene from an organism of a different kingdom. In 1974, Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal. However, it took another eight years before transgenic mice were developed that passed the transgene to their offspring. Genetically modified mice were created in 1984 that carried cloned oncogenes, predisposing them to developing cancer. Mice with genes removed (termed a knockout mouse) were created in 1989. The first transgenic livestock were produced in 1985 and the first animals to synthesize transgenic proteins in their milk were mice, in 1987. The mice were engineered to produce human tissue plasminogen activator, a protein involved in breaking down blood clots. In 1983, the first genetically engineered plant was developed by Michael W. Bevan, Richard B. Flavell and Mary-Dell Chilton. They infected tobacco with Agrobacterium transformed with an antibiotic resistance gene and through tissue culture techniques were able to grow a new plant containing the resistance gene. The gene gun was invented in 1987, allowing transformation of plants not susceptible to Agrobacterium infection. In 2000, vitamin A-enriched golden rice was the first plant developed with increased nutrient value. In 1976, Genentech, the first genetic engineering company, was founded by Herbert Boyer and Robert Swanson; a year later, the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978. The insulin produced by bacteria, branded Humulin, was approved for release by the Food and Drug Administration in 1982. In 1988, the first human antibodies were produced in plants. 
In 1987, a strain of Pseudomonas syringae became the first genetically modified organism to be released into the environment when strawberry and potato fields in California were sprayed with it. The first genetically modified crop, an antibiotic-resistant tobacco plant, was produced in 1982. China was the first country to commercialize transgenic plants, introducing a virus-resistant tobacco in 1992. In 1994, Calgene attained approval to commercially release the Flavr Savr tomato, the first genetically modified food. Also in 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialized in Europe. An insect-resistant potato was approved for release in the US in 1995, and by 1996 approval had been granted to commercially grow 8 transgenic crops and one flower crop (carnation) in 6 countries plus the EU. In 2010, scientists at the J. Craig Venter Institute announced that they had created the first synthetic bacterial genome. They named it Synthia and it was the world's first synthetic life form. The first genetically modified animal to be commercialized was the GloFish, a zebrafish with a fluorescent gene added that allows it to glow in the dark under ultraviolet light. It was released to the US market in 2003. In 2015, AquAdvantage salmon became the first genetically modified animal to be approved for food use. Approval is for fish raised in Panama and sold in the US. The salmon were transformed with a growth hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout, enabling them to grow year-round instead of only during spring and summer. Bacteria Bacteria were the first organisms to be genetically modified in the laboratory, due to the relative ease of modifying their chromosomes. This ease made them important tools for the creation of other GMOs. Genes and other genetic information from a wide range of organisms can be added to a plasmid and inserted into bacteria for storage and modification. Bacteria are cheap, easy to grow, clonal, multiply quickly and can be stored at −80 °C almost indefinitely. Once a gene is isolated it can be stored inside the bacteria, providing an unlimited supply for research. A large number of custom plasmids make manipulating DNA extracted from bacteria relatively easy. Their ease of use has made them great tools for scientists looking to study gene function and evolution. The simplest model organisms come from bacteria, with most of our early understanding of molecular biology coming from studying Escherichia coli. Scientists can easily manipulate and combine genes within the bacteria to create novel or disrupted proteins and observe the effect this has on various molecular systems. Researchers have combined the genes from bacteria and archaea, leading to insights on how these two diverged in the past. In the field of synthetic biology, they have been used to test various synthetic approaches, from synthesizing genomes to creating novel nucleotides. Bacteria have been used in the production of food for a long time, and specific strains have been developed and selected for that work on an industrial scale. They can be used to produce enzymes, amino acids, flavorings, and other compounds used in food production. With the advent of genetic engineering, new genetic changes can easily be introduced into these bacteria. 
Most food-producing bacteria are lactic acid bacteria, and this is where the majority of research into genetically engineering food-producing bacteria has gone. The bacteria can be modified to operate more efficiently, reduce toxic byproduct production, increase output, create improved compounds, and remove unnecessary pathways. Food products from genetically modified bacteria include alpha-amylase, which converts starch to simple sugars, chymosin, which clots milk protein for cheese making, and pectinesterase, which improves fruit juice clarity. The majority are produced in the US, and even though regulations are in place to allow production in Europe, as of 2015 no food products derived from bacteria were available there. Genetically modified bacteria are used to produce large amounts of proteins for industrial use. The bacteria are generally grown to a large volume before the gene encoding the protein is activated. The bacteria are then harvested and the desired protein purified from them. The high cost of extraction and purification has meant that only high value products have been produced at an industrial scale. The majority of these products are human proteins for use in medicine. Many of these proteins are impossible or difficult to obtain via natural methods and they are less likely to be contaminated with pathogens, making them safer. The first medicinal use of GM bacteria was to produce the protein insulin to treat diabetes. Other medicines produced include clotting factors to treat hemophilia, human growth hormone to treat various forms of dwarfism, interferon to treat some cancers, erythropoietin for anemic patients, and tissue plasminogen activator, which dissolves blood clots. Outside of medicine they have been used to produce biofuels. There is interest in developing an extracellular expression system within the bacteria to reduce costs and make the production of more products economical. With a greater understanding of the role that the microbiome plays in human health, there is a potential to treat diseases by genetically altering the bacteria to, themselves, be therapeutic agents. Ideas include altering gut bacteria so they destroy harmful bacteria, or using bacteria to replace or increase deficient enzymes or proteins. One research focus is to modify Lactobacillus, bacteria that naturally provide some protection against HIV, with genes that will further enhance this protection. If the bacteria do not form colonies inside the patient, the person must repeatedly ingest the modified bacteria in order to get the required doses. Enabling the bacteria to form a colony could provide a more long-term solution, but could also raise safety concerns as interactions between bacteria and the human body are less well understood than with traditional drugs. There are concerns that horizontal gene transfer to other bacteria could have unknown effects. As of 2018, clinical trials are underway testing the efficacy and safety of these treatments. For over a century, bacteria have been used in agriculture. Crops have been inoculated with Rhizobia (and more recently Azospirillum) to increase their production or to allow them to be grown outside their original habitat. Application of Bacillus thuringiensis (Bt) and other bacteria can help protect crops from insect infestation and plant diseases. With advances in genetic engineering, these bacteria have been manipulated for increased efficiency and expanded host range. 
Markers have also been added to aid in tracing the spread of the bacteria. The bacteria that naturally colonize certain crops have also been modified, in some cases to express the Bt genes responsible for pest resistance. Pseudomonas strains of bacteria cause frost damage by nucleating water into ice crystals around themselves. This led to the development of ice-minus bacteria, which have the ice-forming genes removed. When applied to crops they can compete with the non-modified bacteria and confer some frost resistance. Other uses for genetically modified bacteria include bioremediation, where the bacteria are used to convert pollutants into a less toxic form. Genetic engineering can increase the levels of the enzymes used to degrade a toxin or to make the bacteria more stable under environmental conditions. Bioart has also been created using genetically modified bacteria. In the 1980s, artist Joe Davis and geneticist Dana Boyd converted the Germanic symbol for femininity (ᛉ) into binary code and then into a DNA sequence, which was then expressed in Escherichia coli. This was taken a step further in 2012, when a whole book was encoded onto DNA. Paintings have also been produced using bacteria transformed with fluorescent proteins. Viruses Viruses are often modified so they can be used as vectors for inserting genetic information into other organisms. This process is called transduction and, if successful, the recipient of the introduced DNA becomes a GMO. Different viruses have different efficiencies and capabilities. Researchers can use this to control for various factors, including the target location, insert size, and duration of gene expression. Any dangerous sequences inherent in the virus must be removed, while those that allow the gene to be delivered effectively are retained. While viral vectors can be used to insert DNA into almost any organism it is especially relevant for its potential in treating human disease. Although primarily still at trial stages, there have been some successes using gene therapy to replace defective genes. This is most evident in curing patients with severe combined immunodeficiency arising from adenosine deaminase deficiency (ADA-SCID), although the development of leukemia in some ADA-SCID patients along with the death of Jesse Gelsinger in a 1999 trial set back the development of this approach for many years. In 2009, another breakthrough was achieved when an eight-year-old boy with Leber's congenital amaurosis regained normal eyesight and in 2016 GlaxoSmithKline gained approval to commercialize a gene therapy treatment for ADA-SCID. As of 2018, there are a substantial number of clinical trials underway, including treatments for hemophilia, glioblastoma, chronic granulomatous disease, cystic fibrosis and various cancers. The most common virus used for gene delivery comes from adenoviruses as they can carry up to 7.5 kb of foreign DNA and infect a relatively broad range of host cells, although they have been known to elicit immune responses in the host and only provide short term expression. Other common vectors are adeno-associated viruses, which have lower toxicity and longer-term expression, but can only carry about 4 kb of DNA. Herpes simplex viruses make promising vectors, having a carrying capacity of over 30 kb and providing long term expression, although they are less efficient at gene delivery than other vectors. 
The best vectors for long term integration of the gene into the host genome are retroviruses, but their propensity for random integration is problematic. Lentiviruses are a part of the same family as retroviruses with the advantage of infecting both dividing and non-dividing cells, whereas retroviruses only target dividing cells. Other viruses that have been used as vectors include alphaviruses, flaviviruses, measles viruses, rhabdoviruses, Newcastle disease virus, poxviruses, and picornaviruses. Most vaccines consist of viruses that have been attenuated, disabled, weakened or killed in some way so that their virulent properties are no longer effective. Genetic engineering could theoretically be used to create viruses with the virulent genes removed. This does not affect the viruses' infectivity, invokes a natural immune response and there is no chance that they will regain their virulence function, which can occur with some other vaccines. As such they are generally considered safer and more efficient than conventional vaccines, although concerns remain over non-target infection, potential side effects and horizontal gene transfer to other viruses. Another potential approach is to use vectors to create novel vaccines for diseases that have no vaccines available or whose existing vaccines do not work effectively, such as AIDS, malaria, and tuberculosis. The most effective vaccine against tuberculosis, the Bacillus Calmette–Guérin (BCG) vaccine, only provides partial protection. A modified vaccine expressing an M. tuberculosis antigen is able to enhance BCG protection. It has been shown to be safe to use in phase II trials, although not as effective as initially hoped. Other vector-based vaccines have already been approved and many more are being developed. Another potential use of genetically modified viruses is to alter them so they can directly treat diseases. This can be through expression of protective proteins or by directly targeting infected cells. In 2004, researchers reported that a genetically modified virus that exploits the selfish behavior of cancer cells might offer an alternative way of killing tumours. Since then, several researchers have developed genetically modified oncolytic viruses that show promise as treatments for various types of cancer. In 2017, researchers genetically modified a virus to express spinach defensin proteins. The virus was injected into orange trees to combat citrus greening disease that had reduced orange production by 70% since 2005. Natural viral diseases, such as myxomatosis and rabbit hemorrhagic disease, have been used to help control pest populations. Over time the surviving pests become resistant, leading researchers to look at alternative methods. Genetically modified viruses that make the target animals infertile through immunocontraception have been created in the laboratory, as well as others that target the developmental stage of the animal. There are concerns with using this approach regarding virus containment and cross-species infection. Sometimes the same virus can be modified for contrasting purposes. Genetic modification of the myxoma virus has been proposed to conserve European wild rabbits in the Iberian peninsula and to help regulate them in Australia. To protect the Iberian species from viral diseases, the myxoma virus was genetically modified to immunize the rabbits, while in Australia the same myxoma virus was genetically modified to lower fertility in the Australian rabbit population. 
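As a purely illustrative aside (not from the article's sources), the packaging limits quoted above (roughly 7.5 kb for adenoviruses, about 4 kb for adeno-associated viruses and over 30 kb for herpes simplex viruses) can be read as a simple size constraint when shortlisting a vector for a given insert. The sketch below treats those figures as rough assumptions and ignores the other factors mentioned in the text, such as expression duration, immune response and integration behaviour.

# Illustrative sketch: approximate packaging capacities in kilobases, taken from
# the figures quoted in the text and treated here as rough assumptions.
VECTOR_CAPACITY_KB = {
    "adenovirus": 7.5,             # up to ~7.5 kb of foreign DNA
    "adeno-associated virus": 4.0, # about 4 kb
    "herpes simplex virus": 30.0,  # over 30 kb
}

def vectors_that_fit(insert_kb):
    """Return the vectors whose rough capacity can accommodate an insert of this size."""
    return [name for name, capacity in VECTOR_CAPACITY_KB.items() if insert_kb <= capacity]

print(vectors_that_fit(6.0))   # ['adenovirus', 'herpes simplex virus']
print(vectors_that_fit(20.0))  # ['herpes simplex virus']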
Outside of biology, scientists have used a genetically modified virus to construct a lithium-ion battery and other nanostructured materials. It is possible to engineer bacteriophages to express modified proteins on their surface and join them up in specific patterns (a technique called phage display). These structures have potential uses for energy storage and generation, biosensing and tissue regeneration, with some new materials currently produced including quantum dots, liquid crystals, nanorings and nanofibres. The battery was made by engineering M13 bacteriophages so they would coat themselves in iron phosphate and then assemble themselves along a carbon nanotube. This created a highly conductive medium for use in a cathode, allowing energy to be transferred quickly. They could be constructed at lower temperatures with non-toxic chemicals, making them more environmentally friendly. Fungi Fungi can be used for many of the same processes as bacteria. For industrial applications, yeasts combine the bacterial advantages of being a single-celled organism that is easy to manipulate and grow with the advanced protein modifications found in eukaryotes. They can be used to produce large complex molecules for use in food, pharmaceuticals, hormones, and steroids. Yeast is important for wine production and as of 2016 two genetically modified yeasts involved in the fermentation of wine have been commercialized in the United States and Canada. One has increased malolactic fermentation efficiency, while the other prevents the production of dangerous ethyl carbamate compounds during fermentation. There have also been advances in the production of biofuel from genetically modified fungi. Fungi, being the most common pathogens of insects, make attractive biopesticides. Unlike bacteria and viruses they have the advantage of infecting the insects by contact alone, although they are outcompeted in efficiency by chemical pesticides. Genetic engineering can improve virulence, usually by adding more virulent proteins, increasing infection rate or enhancing spore persistence. Many of the disease-carrying vectors are susceptible to entomopathogenic fungi. Mosquitoes, vectors for a range of deadly diseases including malaria, yellow fever and dengue fever, are an attractive target for biological control. Mosquitoes can evolve quickly, so it becomes a balancing act of killing them before the Plasmodium they carry becomes infectious, but not so fast that the mosquitoes become resistant to the fungi. By genetically engineering fungi like Metarhizium anisopliae and Beauveria bassiana to delay the development of mosquito infectiousness, the selection pressure to evolve resistance is reduced. Another strategy is to add proteins to the fungi that block transmission of malaria or remove the Plasmodium altogether. Agaricus bisporus, the common white button mushroom, has been gene edited to resist browning, giving it a longer shelf life. The process used CRISPR to knock out a gene that encodes polyphenol oxidase. As the edit did not introduce any foreign DNA into the organism, it was not deemed to be regulated under existing GMO frameworks, making it the first CRISPR-edited organism to be approved for release. This has intensified debates as to whether gene-edited organisms should be considered genetically modified organisms and how they should be regulated. Plants Plants have been engineered for scientific research, to display new flower colors, deliver vaccines, and to create enhanced crops. 
Many plants are pluripotent, meaning that a single cell from a mature plant can be harvested and under the right conditions can develop into a new plant. This ability can be taken advantage of by genetic engineers; by selecting for cells that have been successfully transformed in an adult plant a new plant can then be grown that contains the transgene in every cell through a process known as tissue culture. Many of the advances in the field of genetic engineering have come from experimentation with tobacco. Major advances in tissue culture and plant cellular mechanisms for a wide range of plants have originated from systems developed in tobacco. It was the first plant to be altered using genetic engineering and is considered a model organism not only for genetic engineering, but for a range of other fields. As such, the transgenic tools and procedures are well established, making tobacco one of the easiest plants to transform. Another major model organism relevant to genetic engineering is Arabidopsis thaliana. Its small genome and short life cycle make it easy to manipulate and it contains many homologs to important crop species. It was the first plant sequenced, has a host of online resources available and can be transformed by simply dipping a flower in a transformed Agrobacterium solution. In research, plants are engineered to help discover the functions of certain genes. The simplest way to do this is to remove the gene and see what phenotype develops compared to the wild type form. Any differences are possibly the result of the missing gene. Unlike mutagenesis, genetic engineering allows targeted removal without disrupting other genes in the organism. Some genes are only expressed in certain tissues, so reporter genes, like GUS, can be attached to the gene of interest allowing visualization of its location. Another way to test a gene is to alter it slightly, return it to the plant, and see if it still has the same effect on the phenotype. Other strategies include attaching the gene to a strong promoter to see what happens when it is overexpressed, or forcing the gene to be expressed in a different location or at different developmental stages. Some genetically modified plants are purely ornamental. They are modified for flower color, fragrance, flower shape and plant architecture. The first genetically modified ornamentals to be commercialized had altered flower color. Carnations were released in 1997, with the most popular genetically modified organism, a blue rose (actually lavender or mauve), created in 2004. The roses are sold in Japan, the United States, and Canada. Other genetically modified ornamentals include Chrysanthemum and Petunia. As well as increasing aesthetic value there are plans to develop ornamentals that use less water or are resistant to the cold, which would allow them to be grown outside their natural environments. It has been proposed to genetically modify some plant species threatened by extinction to be resistant to invasive plants and diseases, such as the emerald ash borer in North America and the fungal disease, Ceratocystis platani, in European plane trees. The papaya ringspot virus devastated papaya trees in Hawaii in the twentieth century until transgenic papaya plants were given pathogen-derived resistance. However, genetic modification for conservation in plants remains mainly speculative. A unique concern is that a transgenic species may no longer bear enough resemblance to the original species to truly claim that the original species is being conserved. 
Instead, the transgenic species may be genetically different enough to be considered a new species, thus diminishing the conservation worth of genetic modification. Crops Genetically modified crops are genetically modified plants that are used in agriculture. The first crops developed were used for animal or human food and provide resistance to certain pests, diseases, environmental conditions, spoilage or chemical treatments (e.g. resistance to a herbicide). The second generation of crops aimed to improve the quality, often by altering the nutrient profile. Third generation genetically modified crops could be used for non-food purposes, including the production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation. There are three main aims to agricultural advancement: increased production, improved conditions for agricultural workers and sustainability. GM crops contribute by improving harvests through reducing insect pressure, increasing nutrient value and tolerating different abiotic stresses. Despite this potential, as of 2018, the commercialized crops are limited mostly to cash crops like cotton, soybean, maize and canola and the vast majority of the introduced traits provide either herbicide tolerance or insect resistance. Soybeans accounted for half of all genetically modified crops planted in 2014. Adoption by farmers has been rapid; between 1996 and 2013, the total surface area of land cultivated with GM crops increased by a factor of 100. Geographically, though, the spread has been uneven, with strong growth in the Americas and parts of Asia and little in Europe and Africa. Its socioeconomic spread has been more even, with approximately 54% of worldwide GM crops grown in developing countries in 2013. Although doubts have been raised, most studies have found growing GM crops to be beneficial to farmers through decreased pesticide use as well as increased crop yield and farm profit. The majority of GM crops have been modified to be resistant to selected herbicides, usually a glyphosate- or glufosinate-based one. Genetically modified crops engineered to resist herbicides are now more available than conventionally bred resistant varieties; in the USA 93% of soybeans and most of the GM maize grown is glyphosate tolerant. Most currently available genes used to engineer insect resistance come from the Bacillus thuringiensis bacterium and code for delta endotoxins. A few use the genes that encode for vegetative insecticidal proteins. The only gene commercially used to provide insect protection that does not originate from B. thuringiensis is the Cowpea trypsin inhibitor (CpTI). CpTI was first approved for use in cotton in 1999 and is currently undergoing trials in rice. Less than one percent of GM crops contained other traits, which include providing virus resistance, delaying senescence and altering the plants' composition. Golden rice is the most well-known GM crop aimed at increasing nutrient value. It has been engineered with three genes that biosynthesise beta-carotene, a precursor of vitamin A, in the edible parts of rice. It is intended to produce a fortified food to be grown and consumed in areas with a shortage of dietary vitamin A, a deficiency which each year is estimated to kill 670,000 children under the age of 5 and cause an additional 500,000 cases of irreversible childhood blindness. The original golden rice produced 1.6 μg/g of the carotenoids, with further development increasing this 23 times. 
It gained its first approvals for use as food in 2018. Plants and plant cells have been genetically engineered for production of biopharmaceuticals in bioreactors, a process known as pharming. Work has been done with the duckweed Lemna minor, the algae Chlamydomonas reinhardtii and the moss Physcomitrella patens. Biopharmaceuticals produced include cytokines, hormones, antibodies, enzymes and vaccines, most of which are accumulated in the plant seeds. Many drugs also contain natural plant ingredients, and the pathways that lead to their production have been genetically altered or transferred to other plant species to produce greater volume. Other options for bioreactors are biopolymers and biofuels. Unlike bacteria, plants can modify the proteins post-translationally, allowing them to make more complex molecules. They also pose less risk of being contaminated. Therapeutics have been cultured in transgenic carrot and tobacco cells, including a drug treatment for Gaucher's disease. Vaccine production and storage has great potential in transgenic plants. Vaccines are expensive to produce, transport, and administer, so having a system that could produce them locally would allow greater access to poorer and developing areas. As well as purifying vaccines expressed in plants it is also possible to produce edible vaccines in plants. Edible vaccines stimulate the immune system when ingested to protect against certain diseases. Being stored in plants reduces the long-term cost as they can be disseminated without the need for cold storage, do not need to be purified, and have long term stability. Being housed within plant cells also provides some protection from gut acids upon digestion. However, the cost of developing, regulating, and containing transgenic plants is high, leading to most current plant-based vaccine development being applied to veterinary medicine, where the controls are not as strict. Genetically modified crops have been proposed as one of the ways to reduce farming-related emissions due to higher yield, reduced use of pesticides, reduced use of tractor fuel and no tillage. According to a 2021 study, in the EU alone widespread adoption of GE crops would reduce greenhouse gas emissions by 33 million tons of CO2 equivalent, or 7.5% of total farming-related emissions. Animals The vast majority of genetically modified animals are at the research stage with the number close to entering the market remaining small. As of 2018, only three genetically modified animals have been approved, all in the USA. A goat and a chicken have been engineered to produce medicines, and a salmon has been engineered for increased growth. Despite the differences and difficulties in modifying them, the end aims are much the same as for plants. GM animals are created for research purposes, production of industrial or therapeutic products, agricultural uses, or improving their health. There is also a market for creating genetically modified pets. Mammals The process of genetically engineering mammals is slow, tedious, and expensive. However, new technologies are making genetic modifications easier and more precise. The first transgenic mammals were produced by injecting viral DNA into embryos and then implanting the embryos in females. The embryo would develop and it would be hoped that some of the genetic material would be incorporated into the reproductive cells. Researchers would then have to wait until the animal reached breeding age, after which offspring would be screened for the presence of the gene in every cell. 
The development of the CRISPR-Cas9 gene editing system as a cheap and fast way of directly modifying germ cells has effectively halved the amount of time needed to develop genetically modified mammals. Mammals are the best models for human disease, making genetically engineered ones vital to the discovery and development of cures and treatments for many serious diseases. Knocking out genes responsible for human genetic disorders allows researchers to study the mechanism of the disease and to test possible cures. Genetically modified mice have been the most common mammals used in biomedical research, as they are cheap and easy to manipulate. Pigs are also a good target as they have a similar body size and anatomical features, physiology, pathophysiological response and diet. Nonhuman primates are the most similar model organisms to humans, but there is less public acceptance towards using them as research animals. In 2009, scientists announced that they had successfully transferred a gene into a primate species (marmosets) for the first time. Their first research target for these marmosets was Parkinson's disease, but they were also considering amyotrophic lateral sclerosis and Huntington's disease. Human proteins expressed in mammals are more likely to be similar to their natural counterparts than those expressed in plants or microorganisms. Stable expression has been accomplished in sheep, pigs, rats and other animals. In 2009, the first human biological drug produced from such an animal, a goat, was approved. The drug, ATryn, is an anticoagulant which reduces the probability of blood clots during surgery or childbirth and is extracted from the goat's milk. Human alpha-1-antitrypsin is another protein that has been produced from goats and is used in treating humans with this deficiency. Another medicinal area is in creating pigs with greater capacity for human organ transplants (xenotransplantation). Pigs have been genetically modified so that their organs can no longer carry retroviruses or have modifications to reduce the chance of rejection. Chimeric pigs could carry fully human organs. The first human transplant of a genetically modified pig heart occurred in 2022, and the first of a pig kidney in 2024. Livestock are modified with the intention of improving economically important traits such as growth rate, quality of meat, milk composition, disease resistance and survival. Animals have been engineered to grow faster, be healthier and resist diseases. Modifications have also improved the wool production of sheep and udder health of cows. Goats have been genetically engineered to produce strong spiderweb-like silk proteins in their milk. A GM pig called Enviropig was created with the capability of digesting plant phosphorus more efficiently than conventional pigs. They could reduce water pollution since they excrete 30 to 70% less phosphorus in manure. Dairy cows have been genetically engineered to produce milk that would be the same as human breast milk. This could potentially benefit mothers who cannot produce breast milk but want their children to have breast milk rather than formula. Researchers have also developed a genetically engineered cow that produces allergy-free milk. Scientists have genetically engineered several organisms, including some mammals, to include green fluorescent protein (GFP), for research purposes. GFP and other similar reporter genes allow easy visualization and localization of the products of the genetic modification. 
Fluorescent pigs have been bred to study human organ transplants, regenerating ocular photoreceptor cells, and other topics. In 2011, green-fluorescent cats were created to help find therapies for HIV/AIDS and other diseases, as feline immunodeficiency virus is related to HIV. There have been suggestions that genetic engineering could be used to bring animals back from extinction. It involves changing the genome of a close living relative to resemble the extinct one and is currently being attempted with the passenger pigeon. Genes associated with the woolly mammoth have been added to the genome of an African elephant, although the lead researcher says he has no intention of creating live elephants, and transferring all the genes and reversing years of genetic evolution is a long way from being feasible. It is more likely that scientists could use this technology to conserve endangered animals by bringing back lost diversity or transferring evolved genetic advantages from adapted organisms to those that are struggling. Humans Gene therapy uses genetically modified viruses to deliver genes which can cure disease in humans. Although gene therapy is still relatively new, it has had some successes. It has been used to treat genetic disorders such as severe combined immunodeficiency and Leber's congenital amaurosis. Treatments are also being developed for a range of other currently incurable diseases, such as cystic fibrosis, sickle cell anemia, Parkinson's disease, cancer, diabetes, heart disease and muscular dystrophy. These treatments affect only somatic cells, meaning any changes would not be inheritable. Germline gene therapy results in any change being inheritable, which has raised concerns within the scientific community. In 2015, CRISPR was used to edit the DNA of non-viable human embryos. In November 2018, He Jiankui announced that he had edited the genomes of two human embryos, in an attempt to disable the CCR5 gene, which codes for a receptor that HIV uses to enter cells. He said that twin girls, Lulu and Nana, had been born a few weeks earlier and that they carried functional copies of CCR5 along with disabled CCR5 (mosaicism) and were still vulnerable to HIV. The work was widely condemned as unethical, dangerous, and premature. Fish Genetically modified fish are used for scientific research, as pets and as a food source. Aquaculture is a growing industry, currently providing over half the consumed fish worldwide. Through genetic engineering it is possible to increase growth rates, reduce food intake, remove allergenic properties, increase cold tolerance and provide disease resistance. Fish can also be used to detect aquatic pollution or function as bioreactors. Several groups have been developing zebrafish to detect pollution by attaching fluorescent proteins to genes activated by the presence of pollutants. The fish will then glow and can be used as environmental sensors. The GloFish is a brand of genetically modified fluorescent zebrafish with bright red, green, and orange fluorescent color. It was originally developed by one of the groups to detect pollution, but is now part of the ornamental fish trade, becoming the first genetically modified animal to become publicly available as a pet when it was introduced for sale in the USA in 2003. GM fish are widely used in basic research in genetics and development. 
Two species of fish, zebrafish and medaka, are most commonly modified because they have optically clear chorions (membranes in the egg), develop rapidly, and their one-cell embryos are easy to see and microinject with transgenic DNA. Zebrafish are model organisms for developmental processes, regeneration, genetics, behavior, disease mechanisms and toxicity testing. Their transparency allows researchers to observe developmental stages, intestinal functions and tumour growth. The generation of transgenic protocols (whole organism, cell or tissue specific, tagged with reporter genes) has increased the level of information gained by studying these fish. GM fish have been developed with promoters driving an over-production of growth hormone for use in the aquaculture industry to increase the speed of development and potentially reduce fishing pressure on wild stocks. This has resulted in dramatic growth enhancement in several species, including salmon, trout and tilapia. AquaBounty Technologies, a biotechnology company, has produced a salmon (called AquAdvantage salmon) that can mature in half the time of wild salmon. It obtained regulatory approval in 2015, the first non-plant GMO food to be commercialized. As of August 2017, GMO salmon is being sold in Canada. Sales in the US started in May 2021. Insects In biological research, transgenic fruit flies (Drosophila melanogaster) are model organisms used to study the effects of genetic changes on development. Fruit flies are often preferred over other animals due to their short life cycle and low maintenance requirements. They also have a relatively simple genome compared to many vertebrates, with typically only one copy of each gene, making phenotypic analysis easy. Drosophila have been used to study genetics and inheritance, embryonic development, learning, behavior, and aging. The discovery of transposons, in particular the P element, in Drosophila provided an early method to add transgenes to their genome, although this has been taken over by more modern gene-editing techniques. Due to their significance to human health, scientists are looking at ways to control mosquitoes through genetic engineering. Malaria-resistant mosquitoes have been developed in the laboratory by inserting a gene that reduces the development of the malaria parasite and then using homing endonucleases to rapidly spread that gene throughout the male population (known as a gene drive). This approach has been taken further by using the gene drive to spread a lethal gene. In trials the populations of Aedes aegypti mosquitoes, the single most important carrier of dengue fever and Zika virus, were reduced by between 80% and 90%. Another approach is to use the sterile insect technique, whereby males genetically engineered to be sterile outcompete viable males, to reduce population numbers. Other insect pests that make attractive targets are moths. Diamondback moths cause US$4 to $5 billion of damage each year worldwide. The approach is similar to the sterile technique tested on mosquitoes, where males are transformed with a gene that prevents any females born from reaching maturity. They underwent field trials in 2017. Genetically modified moths have previously been released in field trials. In this case, a strain of pink bollworm that had been sterilized with radiation was genetically engineered to express a red fluorescent protein, making it easier for researchers to monitor them. The silkworm, the larval stage of Bombyx mori, is an economically important insect in sericulture. 
Scientists are developing strategies to enhance silk quality and quantity. There is also potential to use the silk-producing machinery to make other valuable proteins. Proteins currently developed to be expressed by silkworms include human serum albumin, human collagen α-chain, mouse monoclonal antibody and N-glycanase. Silkworms have been created that produce spider silk, which is stronger but extremely difficult to harvest, and even novel silks. Other Systems have been developed to create transgenic organisms in a wide variety of other animals. Chickens have been genetically modified for a variety of purposes. This includes studying embryo development, preventing the transmission of bird flu and providing evolutionary insights using reverse engineering to recreate dinosaur-like phenotypes. A GM chicken that produces the drug Kanuma, an enzyme that treats a rare condition, in its eggs received US regulatory approval in 2015. Genetically modified frogs, in particular Xenopus laevis and Xenopus tropicalis, are used in developmental biology research. GM frogs can also be used as pollution sensors, especially for endocrine-disrupting chemicals. There are proposals to use genetic engineering to control cane toads in Australia. The nematode Caenorhabditis elegans is one of the major model organisms for researching molecular biology. RNA interference (RNAi) was discovered in C. elegans and can be induced simply by feeding the worms bacteria modified to express double-stranded RNA. It is also relatively easy to produce stable transgenic nematodes, and this, along with RNAi, constitutes the major tool set used in studying their genes. The most common use of transgenic nematodes has been studying gene expression and localization by attaching reporter genes. Transgenes can also be combined with RNAi techniques to rescue phenotypes, study gene function, image cell development in real time or control expression for different tissues or developmental stages. Transgenic nematodes have been used to study viruses, toxicology, diseases, and to detect environmental pollutants. The gene responsible for albinism in sea cucumbers has been found and used to engineer white sea cucumbers, a rare delicacy. The technology also opens the way to investigate the genes responsible for some of the cucumber's more unusual traits, including hibernating in summer, eviscerating their intestines, and dissolving their bodies upon death. Flatworms have the ability to regenerate themselves from a single cell. Until 2017 there was no effective way to transform them, which hampered research. By using microinjection and radiation, scientists have now created the first genetically modified flatworms. The bristle worm, a marine annelid, has been modified. It is of interest due to its reproductive cycle being synchronized with lunar phases, its regeneration capacity and its slow evolution rate. Cnidaria such as Hydra and the sea anemone Nematostella vectensis are attractive model organisms to study the evolution of immunity and certain developmental processes. Other animals that have been genetically modified include snails, geckos, turtles, crayfish, oysters, shrimp, clams, abalone and sponges. Regulation Genetically modified organisms are regulated by government agencies. This applies to research as well as the release of genetically modified organisms, including crops and food. The development of a regulatory framework concerning genetic engineering began in 1975, at Asilomar, California. 
The Asilomar meeting recommended a set of guidelines regarding the cautious use of recombinant technology and any products resulting from that technology. The Cartagena Protocol on Biosafety was adopted on 29 January 2000 and entered into force on 11 September 2003. It is an international treaty that governs the transfer, handling, and use of genetically modified organisms. One hundred and fifty-seven countries are members of the Protocol and many use it as a reference point for their own regulations. Universities and research institutes generally have a special committee that is responsible for approving any experiments that involve genetic engineering. Many experiments also need permission from a national regulatory group or legislation. All staff must be trained in the use of GMOs and all laboratories must gain approval from their regulatory agency to work with GMOs. The legislation covering GMOs is often derived from regulations and guidelines in place for the non-GMO version of the organism, although it is more stringent. There is a near-universal system for assessing the relative risks associated with GMOs and other agents to laboratory staff and the community. They are assigned to one of four risk categories based on their virulence, the severity of the disease, the mode of transmission, and the availability of preventive measures or treatments. There are four biosafety levels that a laboratory can fall into, ranging from level 1 (which is suitable for working with agents not associated with disease) to level 4 (working with life-threatening agents). Different countries use different nomenclature to describe the levels and can have different requirements for what can be done at each level. There are differences in the regulation for the release of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. Some nations have banned the release of GMOs or restricted their use, and others permit them with widely differing degrees of regulation. In 2016, thirty-eight countries officially banned or prohibited the cultivation of GMOs, and nine (Algeria, Bhutan, Kenya, Kyrgyzstan, Madagascar, Peru, Russia, Venezuela and Zimbabwe) banned their importation. Most countries that do not allow GMO cultivation do permit research using GMOs. Despite regulation, illegal releases have sometimes occurred, owing to weak enforcement. The European Union (EU) differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing. The cultivation of GMOs has triggered a debate about the market for GMOs in Europe. Depending on the coexistence regulations, incentives for cultivation of GM crops differ. US policy focuses less on the process than that of other countries; it looks at verifiable scientific risks and uses the concept of substantial equivalence. Whether gene-edited organisms should be regulated in the same way as genetically modified organisms is debated. US regulations treat them as separate and do not regulate them under the same conditions, while in Europe a GMO is any organism created using genetic engineering techniques. 
One of the key issues concerning regulators is whether GM products should be labeled. The European Commission says that mandatory labeling and traceability are needed to allow for informed choice, avoid potential false advertising and facilitate the withdrawal of products if adverse effects on health or the environment are discovered. The American Medical Association and the American Association for the Advancement of Science say that, absent scientific evidence of harm, even voluntary labeling is misleading and will falsely alarm consumers. Labeling of GMO products in the marketplace is required in 64 countries. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. In the U.S., the National Bioengineered Food Disclosure Standard (mandatory compliance date: January 1, 2022) requires the labeling of GM foods. In Canada, labeling of GM food is voluntary, while in Europe all food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labeled. In 2014, sales of products that had been labeled as non-GMO grew 30 percent to $1.1 billion. Controversy There is controversy over GMOs, especially with regard to their release outside laboratory environments. The dispute involves consumers, producers, biotechnology companies, governmental regulators, non-governmental organizations, and scientists. Many of these concerns involve GM crops and whether food produced from them is safe and what impact growing them will have on the environment. These controversies have led to litigation, international trade disputes, and protests, and to restrictive regulation of commercial products in some countries. Most concerns are around the health and environmental effects of GMOs. These include whether they may provoke an allergic reaction, whether the transgenes could transfer to human cells, and whether genes not approved for human consumption could outcross into the food supply. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. As late as the 1990s, gene flow into wild populations was thought to be unlikely and rare, and if it were to occur, easily eradicated. It was thought that this would add no additional environmental costs or risks – no effects were expected other than those already caused by pesticide applications. However, in the decades since, several such examples have been observed. Gene flow between GM crops and compatible plants, along with increased use of broad-spectrum herbicides, can increase the risk of herbicide resistant weed populations. Debate over the extent and consequences of gene flow intensified in 2001 when a paper was published showing transgenes had been found in landrace maize in Mexico, the crop's center of diversity. Gene flow from GM crops to other organisms has been found to generally be lower than what would occur naturally. In order to address some of these concerns, some GMOs have been developed with traits to help control their spread. 
To prevent the genetically modified salmon from inadvertently breeding with wild salmon, all the fish raised for food are female and triploid, 99% are reproductively sterile, and they are raised in areas where escaped salmon could not survive. Bacteria have also been modified to depend on nutrients that cannot be found in nature, and genetic use restriction technology has been developed, though not yet marketed, that causes the second generation of GM plants to be sterile. Other environmental and agronomic concerns include a decrease in biodiversity, an increase in secondary pests (non-targeted pests) and evolution of resistant insect pests. In the areas of China and the US with Bt crops, the overall biodiversity of insects has increased and the impact of secondary pests has been minimal. Resistance was found to be slow to evolve when best practice strategies were followed. The impact of Bt crops on beneficial non-target organisms became a public issue after a 1999 paper suggested they could be toxic to monarch butterflies. Follow-up studies have since shown that the toxicity levels encountered in the field were not high enough to harm the larvae. Accusations that scientists are "playing God" and other religious issues have been ascribed to the technology from the beginning. With the ability to genetically engineer humans now possible, there are ethical concerns over how far this technology should go, or if it should be used at all. Much debate revolves around where the line between treatment and enhancement is and whether the modifications should be inheritable. Other concerns include contamination of the non-genetically modified food supply, the rigor of the regulatory process, consolidation of control of the food supply in companies that make and sell GMOs, exaggeration of the benefits of genetic modification, and concerns over the use of herbicides such as glyphosate. Other issues raised include the patenting of life and the use of intellectual property rights. There are large differences in consumer acceptance of GMOs, with Europeans more likely to view GM food negatively than North Americans. GMOs arrived on the scene when public confidence in food safety in Europe was low, owing to recent food scares such as bovine spongiform encephalopathy and other scandals involving government regulation of products. This, along with campaigns run by various non-governmental organizations (NGOs), has been very successful in blocking or limiting the use of GM crops. NGOs like the Organic Consumers Association, the Union of Concerned Scientists, Greenpeace and other groups have said that risks have not been adequately identified and managed and that there are unanswered questions regarding the potential long-term impact on human health from food derived from GMOs. They propose mandatory labeling or a moratorium on such products. References External links ISAAA database GMO-Compass: Information on genetically modified organisms Molecular biology 1973 introductions Articles containing video clips
Genetically modified organism
[ "Chemistry", "Engineering", "Biology" ]
12,613
[ "Biochemistry", "Genetic engineering", "Genetically modified organisms", "Molecular biology" ]
12,385
https://en.wikipedia.org/wiki/Genetic%20code
The genetic code is the set of rules used by living cells to translate information encoded within genetic material (DNA or RNA sequences of nucleotide triplets or codons) into proteins. Translation is accomplished by the ribosome, which links proteinogenic amino acids in an order specified by messenger RNA (mRNA), using transfer RNA (tRNA) molecules to carry amino acids and to read the mRNA three nucleotides at a time. The genetic code is highly similar among all organisms and can be expressed in a simple table with 64 entries. The codons specify which amino acid will be added next during protein biosynthesis. With some exceptions, a three-nucleotide codon in a nucleic acid sequence specifies a single amino acid. The vast majority of genes are encoded with a single scheme (see the RNA codon table). That scheme is often called the canonical or standard genetic code, or simply the genetic code, though variant codes (such as in mitochondria) exist. History Efforts to understand how proteins are encoded began after DNA's structure was discovered in 1953. The key discoverers, English biophysicist Francis Crick and American biologist James Watson, working together at the Cavendish Laboratory of the University of Cambridge, hypothesised that information flows from DNA and that there is a link between DNA and proteins. Soviet-American physicist George Gamow was the first to give a workable scheme for protein synthesis from DNA. He postulated that sets of three bases (triplets) must be employed to encode the 20 standard amino acids used by living cells to build proteins, which would allow a maximum of 4³ = 64 amino acids. He referred to this DNA–protein interaction (the original genetic code) as the "diamond code". In 1954, Gamow created an informal scientific organisation, the RNA Tie Club, as suggested by Watson, for scientists of different persuasions who were interested in how proteins were synthesised from genes. However, the club could have only 20 permanent members to represent each of the 20 amino acids, and four additional honorary members to represent the four nucleotides of DNA. The first scientific contribution of the club, later recorded as "one of the most important unpublished articles in the history of science" and "the most famous unpublished paper in the annals of molecular biology", was made by Crick. Crick presented a type-written paper titled "On Degenerate Templates and the Adaptor Hypothesis: A Note for the RNA Tie Club" to the members of the club in January 1955, which "totally changed the way we thought about protein synthesis", as Watson recalled. The hypothesis states that the triplet code was not passed on to amino acids as Gamow thought, but carried by a different molecule, an adaptor, that interacts with amino acids. The adaptor was later identified as tRNA. Codons The Crick, Brenner, Barnett and Watts-Tobin experiment first demonstrated that codons consist of three DNA bases. Marshall Nirenberg and J. Heinrich Matthaei were the first to reveal the nature of a codon in 1961. They used a cell-free system to translate a poly-uracil RNA sequence (i.e., UUUUU...) and discovered that the polypeptide that they had synthesized consisted of only the amino acid phenylalanine. They thereby deduced that the codon UUU specified the amino acid phenylalanine. This was followed by experiments in Severo Ochoa's laboratory that demonstrated that the poly-adenine RNA sequence (AAAAA...) coded for the polypeptide poly-lysine and that the poly-cytosine RNA sequence (CCCCC...)
coded for the polypeptide poly-proline. Therefore, the codon AAA specified the amino acid lysine, and the codon CCC specified the amino acid proline. Using various copolymers, most of the remaining codons were then determined. Subsequent work by Har Gobind Khorana identified the rest of the genetic code. Shortly thereafter, Robert W. Holley determined the structure of transfer RNA (tRNA), the adapter molecule that facilitates the process of translating RNA into protein. This work was based upon Ochoa's earlier studies, which had earned the latter the Nobel Prize in Physiology or Medicine in 1959 for work on the enzymology of RNA synthesis. Extending this work, Nirenberg and Philip Leder revealed the code's triplet nature and deciphered its codons. In these experiments, various combinations of mRNA were passed through a filter that contained ribosomes, the components of cells that translate RNA into protein. Unique triplets promoted the binding of specific tRNAs to the ribosome. Leder and Nirenberg were able to determine the sequences of 54 out of 64 codons in their experiments. Khorana, Holley and Nirenberg received the Nobel Prize (1968) for their work. The three stop codons were named by discoverers Richard Epstein and Charles Steinberg. "Amber" was named after their friend Harris Bernstein, whose last name means "amber" in German. The other two stop codons were named "ochre" and "opal" in order to keep the "color names" theme. Expanded genetic codes (synthetic biology) Among a broad academic audience, the concept that the genetic code evolved from an original, ambiguous code to a well-defined ("frozen") code with a repertoire of 20 (+2) canonical amino acids is widely accepted. However, there are differing opinions, concepts, approaches and ideas about the best way to change it experimentally. Models have even been proposed that predict "entry points" for synthetic amino acid invasion of the genetic code. Since 2001, 40 non-natural amino acids have been added into proteins by creating a unique codon (recoding) and a corresponding transfer RNA:aminoacyl-tRNA synthetase pair to encode it with diverse physicochemical and biological properties, to be used as a tool for exploring protein structure and function or for creating novel or enhanced proteins. H. Murakami and M. Sisido extended some codons to have four and five bases. Steven A. Benner constructed a functional 65th (in vivo) codon. In 2015 N. Budisa, D. Söll and co-workers reported the full substitution of all 20,899 tryptophan residues (UGG codons) with unnatural thienopyrrole-alanine in the genetic code of the bacterium Escherichia coli. In 2016 the first stable semisynthetic organism was created. It was a (single-cell) bacterium with two synthetic bases (called X and Y). The bases survived cell division. In 2017, researchers in South Korea reported that they had engineered a mouse with an extended genetic code that can produce proteins with unnatural amino acids. In May 2019, researchers reported the creation of a new "Syn61" strain of the bacterium Escherichia coli. This strain has a fully synthetic genome that is refactored (all overlaps expanded), recoded (removing the use of three out of 64 codons completely), and further modified to remove the now unnecessary tRNAs and release factors. It is fully viable and grows 1.6× slower than its wild-type counterpart "MDS42". Features Reading frame A reading frame is defined by the initial triplet of nucleotides from which translation starts.
It sets the frame for a run of successive, non-overlapping codons, which is known as an "open reading frame" (ORF). For example, the string 5'-AAATGAACG-3' (see figure), if read from the first position, contains the codons AAA, TGA, and ACG ; if read from the second position, it contains the codons AAT and GAA ; and if read from the third position, it contains the codons ATG and AAC. Every sequence can, thus, be read in its 5' → 3' direction in three reading frames, each producing a possibly distinct amino acid sequence: in the given example, Lys (K)-Trp (W)-Thr (T), Asn (N)-Glu (E), or Met (M)-Asn (N), respectively (when translating with the vertebrate mitochondrial code). When DNA is double-stranded, six possible reading frames are defined, three in the forward orientation on one strand and three reverse on the opposite strand. Protein-coding frames are defined by a start codon, usually the first AUG (ATG) codon in the RNA (DNA) sequence. In eukaryotes, ORFs in exons are often interrupted by introns. Start and stop codons Translation starts with a chain-initiation codon or start codon. The start codon alone is not sufficient to begin the process. Nearby sequences such as the Shine-Dalgarno sequence in E. coli and initiation factors are also required to start translation. The most common start codon is AUG, which is read as methionine or as formylmethionine (in bacteria, mitochondria, and plastids). Alternative start codons depending on the organism include "GUG" or "UUG"; these codons normally represent valine and leucine, respectively, but as start codons they are translated as methionine or formylmethionine. The three stop codons have names: UAG is amber, UGA is opal (sometimes also called umber), and UAA is ochre. Stop codons are also called "termination" or "nonsense" codons. They signal release of the nascent polypeptide from the ribosome because no cognate tRNA has anticodons complementary to these stop signals, allowing a release factor to bind to the ribosome instead. Effect of mutations During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, mutations, can affect an organism's phenotype, especially if they occur within the protein coding sequence of a gene. Error rates are typically 1 error in every 10–100 million bases—due to the "proofreading" ability of DNA polymerases. Missense mutations and nonsense mutations are examples of point mutations that can cause genetic diseases such as sickle-cell disease and thalassemia respectively. Clinically important missense mutations generally change the properties of the coded amino acid residue among basic, acidic, polar or non-polar states, whereas nonsense mutations result in a stop codon. Mutations that disrupt the reading frame sequence by indels (insertions or deletions) of a non-multiple of 3 nucleotide bases are known as frameshift mutations. These mutations usually result in a completely different translation from the original, and likely cause a stop codon to be read, which truncates the protein. These mutations may impair the protein's function and are thus rare in in vivo protein-coding sequences. One reason inheritance of frameshift mutations is rare is that, if the protein being translated is essential for growth under the selective pressures the organism faces, absence of a functional protein may cause death before the organism becomes viable. Frameshift mutations may result in severe genetic diseases such as Tay–Sachs disease. 
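The frame bookkeeping described above is easy to reproduce in code. The short Python sketch below is an illustrative addition (not part of the original article; the helper name codons and the inserted base are our own choices): it splits the example string 5'-AAATGAACG-3' into the three possible forward reading frames and then shows how a single-base insertion, as in a frameshift mutation, changes every downstream codon.

def codons(seq, frame):
    """Split seq into successive, non-overlapping triplets starting at offset frame."""
    return [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]

seq = "AAATGAACG"                  # the 5'-AAATGAACG-3' example from the text
for frame in range(3):
    print(frame, codons(seq, frame))
# 0 ['AAA', 'TGA', 'ACG']
# 1 ['AAT', 'GAA']
# 2 ['ATG', 'AAC']

# A frameshift: inserting one (hypothetical) base near the start changes every downstream codon.
shifted = seq[:2] + "C" + seq[2:]
print(codons(shifted, 0))          # ['AAC', 'ATG', 'AAC'] instead of ['AAA', 'TGA', 'ACG']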
Although most mutations that change protein sequences are harmful or neutral, some mutations have benefits. These mutations may enable the mutant organism to withstand particular environmental stresses better than wild type organisms, or reproduce more quickly. In these cases a mutation will tend to become more common in a population through natural selection. Viruses that use RNA as their genetic material have rapid mutation rates, which can be an advantage, since these viruses thereby evolve rapidly, and thus evade the immune system defensive responses. In large populations of asexually reproducing organisms, for example, E. coli, multiple beneficial mutations may co-occur. This phenomenon is called clonal interference and causes competition among the mutations. Degeneracy Degeneracy is the redundancy of the genetic code. This term was given by Bernfield and Nirenberg. The genetic code has redundancy but no ambiguity (see the codon tables below for the full correlation). For example, although codons GAA and GAG both specify glutamic acid (redundancy), neither specifies another amino acid (no ambiguity). The codons encoding one amino acid may differ in any of their three positions. For example, the amino acid leucine is specified by YUR or CUN (UUA, UUG, CUU, CUC, CUA, or CUG) codons (difference in the first or third position indicated using IUPAC notation), while the amino acid serine is specified by UCN or AGY (UCA, UCG, UCC, UCU, AGU, or AGC) codons (difference in the first, second, or third position). A practical consequence of redundancy is that errors in the third position of the triplet codon cause only a silent mutation or an error that would not affect the protein because the hydrophilicity or hydrophobicity is maintained by equivalent substitution of amino acids; for example, a codon of NUN (where N = any nucleotide) tends to code for hydrophobic amino acids. NCN yields amino acid residues that are small in size and moderate in hydropathicity; NAN encodes average size hydrophilic residues. The genetic code is so well-structured for hydropathicity that a mathematical analysis (Singular Value Decomposition) of 12 variables (4 nucleotides x 3 positions) yields a remarkable correlation (C = 0.95) for predicting the hydropathicity of the encoded amino acid directly from the triplet nucleotide sequence, without translation. Note in the table, below, eight amino acids are not affected at all by mutations at the third position of the codon, whereas in the figure above, a mutation at the second position is likely to cause a radical change in the physicochemical properties of the encoded amino acid. Nevertheless, changes in the first position of the codons are more important than changes in the second position on a global scale. The reason may be that charge reversal (from a positive to a negative charge or vice versa) can only occur upon mutations in the first position of certain codons, but not upon changes in the second position of any codon. Such charge reversal may have dramatic consequences for the structure or function of a protein. This aspect may have been largely underestimated by previous studies. Codon usage bias The frequency of codons, also known as codon usage bias, can vary from species to species with functional implications for the control of translation. The codon varies by organism; for example, most common proline codon in E. coli is CCG, whereas in humans this is the least used proline codon. 
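The degeneracy just described can be checked directly from a codon table. The Python sketch below is an illustrative addition (not from the original article); it assumes the standard genetic code only, written in the conventional 64-letter form with bases ordered T, C, A, G and '*' marking stop codons. It counts how many codons map to each amino acid and measures how often a single-base change in the third codon position is silent.

from collections import Counter
from itertools import product

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {a + b + c: aa for (a, b, c), aa in zip(product(BASES, repeat=3), AMINO)}

# Degeneracy: number of codons per amino acid (or stop, '*').
print(Counter(CODE.values()))      # Leu, Ser and Arg have 6 codons each; Met and Trp have 1

# Fraction of third-position point substitutions that leave the amino acid unchanged.
silent = total = 0
for codon, aa in CODE.items():
    for base in BASES:
        if base != codon[2]:
            total += 1
            silent += CODE[codon[:2] + base] == aa
print(silent, "of", total, "third-position changes are silent")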
Alternative genetic codes Non-standard amino acids In some proteins, non-standard amino acids are substituted for standard stop codons, depending on associated signal sequences in the messenger RNA. For example, UGA can code for selenocysteine and UAG can code for pyrrolysine. Selenocysteine came to be seen as the 21st amino acid, and pyrrolysine as the 22nd. Both selenocysteine and pyrrolysine may be present in the same organism. Although the genetic code is normally fixed in an organism, the prokaryote Acetohalobium arabaticum can expand its genetic code from 20 to 21 amino acids (by including pyrrolysine) under different conditions of growth. Variations There was originally a simple and widely accepted argument that the genetic code should be universal: namely, that any variation in the genetic code would be lethal to the organism (although Crick had stated that viruses were an exception). This is known as the "frozen accident" argument for the universality of the genetic code. However, in his seminal paper on the origins of the genetic code in 1968, Francis Crick still stated that the universality of the genetic code in all organisms was an unproven assumption, and was probably not true in some instances. He predicted that "The code is universal (the same in all organisms) or nearly so". The first variation was discovered in 1979, by researchers studying human mitochondrial genes. Many slight variants were discovered thereafter, including various alternative mitochondrial codes. These minor variants, for example, involve translation of the codon UGA as tryptophan in Mycoplasma species, and translation of CUG as serine rather than leucine in yeasts of the "CTG clade" (such as Candida albicans). Because viruses must use the same genetic code as their hosts, modifications to the standard genetic code could interfere with viral protein synthesis or functioning. However, viruses such as totiviruses have adapted to the host's genetic code modification. In bacteria and archaea, GUG and UUG are common start codons. In rare cases, certain proteins may use alternative start codons. Surprisingly, variations in the interpretation of the genetic code exist also in human nuclear-encoded genes: In 2016, researchers studying the translation of malate dehydrogenase found that in about 4% of the mRNAs encoding this enzyme the stop codon is naturally used to encode the amino acids tryptophan and arginine. This type of recoding is induced by a high-readthrough stop codon context and it is referred to as functional translational readthrough. Despite these differences, all known naturally occurring codes are very similar. The coding mechanism is the same for all organisms: three-base codons, tRNA, ribosomes, single direction reading and translating single codons into single amino acids. The most extreme variations occur in certain ciliates where the meaning of stop codons depends on their position within mRNA. When close to the 3' end they act as terminators while in internal positions they either code for amino acids as in Condylostoma magnum or trigger ribosomal frameshifting as in Euplotes. The origins and variation of the genetic code, including the mechanisms behind the evolvability of the genetic code, have been widely studied, and some studies have been done experimentally evolving the genetic code of some organisms.
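The practical meaning of a variant code is easiest to see by translating the same sequence under two tables. The Python sketch below is an illustrative addition (not from the original article): the example sequence is invented, and only the well-known vertebrate mitochondrial reassignments (UGA to Trp, AUA to Met, AGA/AGG to stop), written here in DNA letters, are modelled on top of the standard table.

from itertools import product

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
STANDARD = {a + b + c: aa for (a, b, c), aa in zip(product(BASES, repeat=3), AMINO)}

# Vertebrate mitochondrial reassignments relative to the standard code.
VERT_MITO = dict(STANDARD, TGA="W", ATA="M", AGA="*", AGG="*")

def translate(dna, table):
    """Translate a DNA string codon by codon; '*' marks a stop."""
    return "".join(table[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

seq = "ATGATATGAAGA"                 # invented example: ATG ATA TGA AGA
print(translate(seq, STANDARD))      # MI*R  (Met-Ile-stop-Arg)
print(translate(seq, VERT_MITO))     # MMW*  (Met-Met-Trp-stop): same DNA, different protein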
Inference Variant genetic codes used by an organism can be inferred by identifying highly conserved genes encoded in that genome, and comparing its codon usage to the amino acids in homologous proteins of other organisms. For example, the program FACIL infers a genetic code by searching which amino acids in homologous protein domains are most often aligned to every codon. The resulting amino acid (or stop codon) probabilities for each codon are displayed in a genetic code logo. As of January 2022, the most complete survey of genetic codes is that of Shulgina and Eddy, who screened 250,000 prokaryotic genomes using their Codetta tool. This tool uses a similar approach to FACIL with a larger Pfam database. Despite the NCBI already providing 27 translation tables, the authors were able to find 5 new genetic code variations (corroborated by tRNA mutations) and correct several misattributions. Codetta was later used to analyze genetic code change in ciliates. Origin The genetic code is a key part of the history of life, according to one version of which self-replicating RNA molecules preceded life as we know it. This is the RNA world hypothesis. Under this hypothesis, any model for the emergence of the genetic code is intimately related to a model of the transfer from ribozymes (RNA enzymes) to proteins as the principal enzymes in cells. In line with the RNA world hypothesis, transfer RNA molecules appear to have evolved before modern aminoacyl-tRNA synthetases, so the latter cannot be part of the explanation of its patterns. A hypothetical randomly evolved genetic code further motivates a biochemical or evolutionary model for its origin. If amino acids were randomly assigned to triplet codons, there would be 1.5 × 10⁸⁴ possible genetic codes. This number is found by calculating the number of ways that 21 items (20 amino acids plus one stop) can be placed in 64 bins, wherein each item is used at least once. However, the distribution of codon assignments in the genetic code is nonrandom. In particular, the genetic code clusters certain amino acid assignments. Amino acids that share the same biosynthetic pathway tend to have the same first base in their codons. This could be an evolutionary relic of an early, simpler genetic code with fewer amino acids that later evolved to code a larger set of amino acids. It could also reflect steric and chemical properties that had another effect on the codon during its evolution. Amino acids with similar physical properties also tend to have similar codons, reducing the problems caused by point mutations and mistranslations. Given the non-random genetic triplet coding scheme, a tenable hypothesis for the origin of genetic code could address multiple aspects of the codon table, such as absence of codons for D-amino acids, secondary codon patterns for some amino acids, confinement of synonymous positions to third position, the small set of only 20 amino acids (instead of a number approaching 64), and the relation of stop codon patterns to amino acid coding patterns. Three main hypotheses address the origin of the genetic code. Many models belong to one of them or to a hybrid: Random freeze: the genetic code was randomly created. For example, early tRNA-like ribozymes may have had different affinities for amino acids, with codons emerging from another part of the ribozyme that exhibited random variability. Once enough peptides were coded for, any major random change in the genetic code would have been lethal; hence it became "frozen".
Stereochemical affinity: the genetic code is a result of a high affinity between each amino acid and its codon or anti-codon; the latter option implies that pre-tRNA molecules matched their corresponding amino acids by this affinity. Later during evolution, this matching was gradually replaced with matching by aminoacyl-tRNA synthetases. Optimality: the genetic code continued to evolve after its initial creation, so that the current code maximizes some fitness function, usually some kind of error minimization. Hypotheses have addressed a variety of scenarios: Chemical principles govern specific RNA interaction with amino acids. Experiments with aptamers showed that some amino acids have a selective chemical affinity for their codons. Experiments showed that of 8 amino acids tested, 6 show some RNA triplet-amino acid association. Biosynthetic expansion. The genetic code grew from a simpler earlier code through a process of "biosynthetic expansion". Primordial life "discovered" new amino acids (for example, as by-products of metabolism) and later incorporated some of these into the machinery of genetic coding. Although much circumstantial evidence has been found to suggest that fewer amino acid types were used in the past, precise and detailed hypotheses about which amino acids entered the code in what order are controversial. However, several studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, Thr may belong to a group of early-addition amino acids, whereas Cys, Met, Tyr, Trp, His, Phe may belong to a group of later-addition amino acids. Natural selection has led to codon assignments of the genetic code that minimize the effects of mutations. A recent hypothesis suggests that the triplet code was derived from codes that used longer than triplet codons (such as quadruplet codons). Longer than triplet decoding would increase codon redundancy and would be more error resistant. This feature could allow accurate decoding absent complex translational machinery such as the ribosome, such as before cells began making ribosomes. Information channels: Information-theoretic approaches model the process of translating the genetic code into corresponding amino acids as an error-prone information channel. The inherent noise (that is, the error) in the channel poses the organism with a fundamental question: how can a genetic code be constructed to withstand noise while accurately and efficiently translating information? These "rate-distortion" models suggest that the genetic code originated as a result of the interplay of the three conflicting evolutionary forces: the needs for diverse amino acids, for error-tolerance and for minimal resource cost. The code emerges at a transition when the mapping of codons to amino acids becomes nonrandom. The code's emergence is governed by the topology defined by the probable errors and is related to the map coloring problem. Game theory: Models based on signaling games combine elements of game theory, natural selection and information channels. Such models have been used to suggest that the first polypeptides were likely short and had non-enzymatic function. Game theoretic models suggested that the organization of RNA strings into cells may have been necessary to prevent "deceptive" use of the genetic code, i.e. preventing the ancient equivalent of viruses from overwhelming the RNA world. Stop codons: Codons for translational stops are also an interesting aspect to the problem of the origin of the genetic code. 
As an example for addressing stop codon evolution, it has been suggested that the stop codons are such that they are most likely to terminate translation early in the case of a frame shift error. In contrast, some stereochemical molecular models explain the origin of stop codons as "unassignable". See also List of genetic engineering software Codon tables References Further reading External links The Genetic Codes: Genetic Code Tables The Codon Usage Database — Codon frequency tables for many organisms History of deciphering the genetic code Gene expression Genetics Molecular genetics Molecular biology Protein biosynthesis
Genetic code
[ "Chemistry", "Biology" ]
5,418
[ "Protein biosynthesis", "Genetics", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
12,450
https://en.wikipedia.org/wiki/G%C3%B6del%27s%20completeness%20theorem
Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic. The completeness theorem applies to any first-order theory: If T is such a theory, and φ is a sentence (in the same language) and every model of T is a model of φ, then there is a (first-order) proof of φ using the statements of T as axioms. One sometimes says this as "anything true in all models is provable". (This does not contradict Gödel's incompleteness theorem, which is about a formula φu that is unprovable in a certain theory T but true in the "standard" model of the natural numbers: φu is false in some other, "non-standard" models of T.) The completeness theorem makes a close link between model theory, which deals with what is true in different models, and proof theory, which studies what can be formally proven in particular formal systems. It was first proved by Kurt Gödel in 1929. It was then simplified when Leon Henkin observed in his Ph.D. thesis that the hard part of the proof can be presented as the Model Existence Theorem (published in 1949). Henkin's proof was simplified by Gisbert Hasenjaeger in 1953. Preliminaries There are numerous deductive systems for first-order logic, including systems of natural deduction and Hilbert-style systems. Common to all deductive systems is the notion of a formal deduction. This is a sequence (or, in some cases, a finite tree) of formulae with a specially designated conclusion. The definition of a deduction is such that it is finite and that it is possible to verify algorithmically (by a computer, for example, or by hand) that a given sequence (or tree) of formulae is indeed a deduction. A first-order formula is called logically valid if it is true in every structure for the language of the formula (i.e. for any assignment of values to the variables of the formula). To formally state, and then prove, the completeness theorem, it is necessary to also define a deductive system. A deductive system is called complete if every logically valid formula is the conclusion of some formal deduction, and the completeness theorem for a particular deductive system is the theorem that it is complete in this sense. Thus, in a sense, there is a different completeness theorem for each deductive system. A converse to completeness is soundness, the fact that only logically valid formulas are provable in the deductive system. If some specific deductive system of first-order logic is sound and complete, then it is "perfect" (a formula is provable if and only if it is logically valid), thus equivalent to any other deductive system with the same quality (any proof in one system can be converted into the other). Statement We first fix a deductive system of first-order predicate calculus, choosing any of the well-known equivalent systems. Gödel's original proof assumed the Hilbert-Ackermann proof system. Gödel's original formulation The completeness theorem says that if a formula is logically valid then there is a finite deduction (a formal proof) of the formula. Thus, the deductive system is "complete" in the sense that no additional inference rules are required to prove all the logically valid formulae. A converse to completeness is soundness, the fact that only logically valid formulae are provable in the deductive system. 
Together with soundness (whose verification is easy), this theorem implies that a formula is logically valid if and only if it is the conclusion of a formal deduction. More general form The theorem can be expressed more generally in terms of logical consequence. We say that a sentence s is a syntactic consequence of a theory T, denoted T ⊢ s, if s is provable from T in our deductive system. We say that s is a semantic consequence of T, denoted T ⊨ s, if s holds in every model of T. The completeness theorem then says that for any first-order theory T with a well-orderable language, and any sentence s in the language of T, if T ⊨ s then T ⊢ s. Since the converse (soundness) also holds, it follows that T ⊨ s if and only if T ⊢ s, and thus that syntactic and semantic consequence are equivalent for first-order logic. This more general theorem is used implicitly, for example, when a sentence is shown to be provable from the axioms of group theory by considering an arbitrary group and showing that the sentence is satisfied by that group. Gödel's original formulation is deduced by taking the particular case of a theory without any axiom. Model existence theorem The completeness theorem can also be understood in terms of consistency, as a consequence of Henkin's model existence theorem. We say that a theory T is syntactically consistent if there is no sentence s such that both s and its negation ¬s are provable from T in our deductive system. The model existence theorem says that for any first-order theory T with a well-orderable language, if T is syntactically consistent then T has a model. Another version, with connections to the Löwenheim–Skolem theorem, says: every syntactically consistent, countable first-order theory has a finite or countable model. Given Henkin's theorem, the completeness theorem can be proved as follows: If T ⊨ s, then T ∪ {¬s} does not have models. By the contrapositive of Henkin's theorem, T ∪ {¬s} is then syntactically inconsistent. So a contradiction (⊥) is provable from T ∪ {¬s} in the deductive system. Hence T ∪ {¬s} ⊢ ⊥, and then by the properties of the deductive system, T ⊢ s. As a theorem of arithmetic The model existence theorem and its proof can be formalized in the framework of Peano arithmetic. Precisely, we can systematically define a model of any consistent effective first-order theory T in Peano arithmetic by interpreting each symbol of T by an arithmetical formula whose free variables are the arguments of the symbol. (In many cases, we will need to assume, as a hypothesis of the construction, that T is consistent, since Peano arithmetic may not prove that fact.) However, the definition expressed by this formula is not recursive (but is, in general, Δ2). Consequences An important consequence of the completeness theorem is that it is possible to recursively enumerate the semantic consequences of any effective first-order theory, by enumerating all the possible formal deductions from the axioms of the theory, and using this to produce an enumeration of their conclusions. This comes in contrast with the direct meaning of the notion of semantic consequence, which quantifies over all structures in a particular language, which is clearly not a recursive definition. Also, it makes the concept of "provability", and thus of "theorem", a clear concept that only depends on the chosen system of axioms of the theory, and not on the choice of a proof system. Relationship to the incompleteness theorems Gödel's incompleteness theorems show that there are inherent limitations to what can be proven within any given first-order theory in mathematics.
The "incompleteness" in their name refers to another meaning of complete (see model theory – Using the compactness and completeness theorems): A theory is complete (or decidable) if every sentence in the language of is either provable () or disprovable (). The first incompleteness theorem states that any which is consistent, effective and contains Robinson arithmetic ("Q") must be incomplete in this sense, by explicitly constructing a sentence which is demonstrably neither provable nor disprovable within . The second incompleteness theorem extends this result by showing that can be chosen so that it expresses the consistency of itself. Since cannot be proven in , the completeness theorem implies the existence of a model of in which is false. In fact, is a Π1 sentence, i.e. it states that some finitistic property is true of all natural numbers; so if it is false, then some natural number is a counterexample. If this counterexample existed within the standard natural numbers, its existence would disprove within ; but the incompleteness theorem showed this to be impossible, so the counterexample must not be a standard number, and thus any model of in which is false must include non-standard numbers. In fact, the model of any theory containing Q obtained by the systematic construction of the arithmetical model existence theorem, is always non-standard with a non-equivalent provability predicate and a non-equivalent way to interpret its own construction, so that this construction is non-recursive (as recursive definitions would be unambiguous). Also, if is at least slightly stronger than Q (e.g. if it includes induction for bounded existential formulas), then Tennenbaum's theorem shows that it has no recursive non-standard models. Relationship to the compactness theorem The completeness theorem and the compactness theorem are two cornerstones of first-order logic. While neither of these theorems can be proven in a completely effective manner, each one can be effectively obtained from the other. The compactness theorem says that if a formula φ is a logical consequence of a (possibly infinite) set of formulas Γ then it is a logical consequence of a finite subset of Γ. This is an immediate consequence of the completeness theorem, because only a finite number of axioms from Γ can be mentioned in a formal deduction of φ, and the soundness of the deductive system then implies φ is a logical consequence of this finite set. This proof of the compactness theorem is originally due to Gödel. Conversely, for many deductive systems, it is possible to prove the completeness theorem as an effective consequence of the compactness theorem. The ineffectiveness of the completeness theorem can be measured along the lines of reverse mathematics. When considered over a countable language, the completeness and compactness theorems are equivalent to each other and equivalent to a weak form of choice known as weak Kőnig's lemma, with the equivalence provable in RCA0 (a second-order variant of Peano arithmetic restricted to induction over Σ01 formulas). Weak Kőnig's lemma is provable in ZF, the system of Zermelo–Fraenkel set theory without axiom of choice, and thus the completeness and compactness theorems for countable languages are provable in ZF. 
However, the situation is different when the language is of arbitrarily large cardinality since then, though the completeness and compactness theorems remain provably equivalent to each other in ZF, they are also provably equivalent to a weak form of the axiom of choice known as the ultrafilter lemma. In particular, no theory extending ZF can prove either the completeness or compactness theorems over arbitrary (possibly uncountable) languages without also proving the ultrafilter lemma on a set of the same cardinality. Completeness in other logics The completeness theorem is a central property of first-order logic that does not hold for all logics. Second-order logic, for example, does not have a completeness theorem for its standard semantics (though it does have the completeness property for Henkin semantics), and the set of logically valid formulas in second-order logic is not recursively enumerable. The same is true of all higher-order logics. It is possible to produce sound deductive systems for higher-order logics, but no such system can be complete. Lindström's theorem states that first-order logic is the strongest (subject to certain constraints) logic satisfying both compactness and completeness. A completeness theorem can be proved for modal logic or intuitionistic logic with respect to Kripke semantics. Proofs Gödel's original proof of the theorem proceeded by reducing the problem to a special case for formulas in a certain syntactic form, and then handling this form with an ad hoc argument. In modern logic texts, Gödel's completeness theorem is usually proved with Henkin's proof, rather than with Gödel's original proof. Henkin's proof directly constructs a term model for any consistent first-order theory. James Margetson (2004) developed a computerized formal proof using the Isabelle theorem prover. Other proofs are also known. See also Gödel's incompleteness theorems Original proof of Gödel's completeness theorem References Further reading The first proof of the completeness theorem. The same material as the dissertation, except with briefer proofs, more succinct explanations, and omitting the lengthy introduction. Chapter 5: "Gödel's completeness theorem". External links Stanford Encyclopedia of Philosophy: "Kurt Gödel"—by Juliette Kennedy. MacTutor biography: Kurt Gödel. Detlovs, Vilnis, and Podnieks, Karlis, "Introduction to mathematical logic." Theorems in the foundations of mathematics Metatheorems Model theory Proof theory Completeness theorem
Gödel's completeness theorem
[ "Mathematics" ]
2,718
[ "Foundations of mathematics", "Proof theory", "Mathematical logic", "Model theory", "Mathematical problems", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
12,543
https://en.wikipedia.org/wiki/Groupoid
In mathematics, especially in category theory and homotopy theory, a groupoid (less often Brandt groupoid or virtual group) generalises the notion of group in several equivalent ways. A groupoid can be seen as a: Group with a partial function replacing the binary operation; Category in which every morphism is invertible. A category of this sort can be viewed as augmented with a unary operation on the morphisms, called inverse by analogy with group theory. A groupoid where there is only one object is a usual group. In the presence of dependent typing, a category in general can be viewed as a typed monoid, and similarly, a groupoid can be viewed as simply a typed group. The morphisms take one from one object to another, and form a dependent family of types, thus morphisms might be typed g : A → B, h : B → C, say. Composition is then a total function: ∘ : (B → C) × (A → B) → (A → C), so that h ∘ g : A → C. Special cases include: Setoids: sets that come with an equivalence relation, G-sets: sets equipped with an action of a group G. Groupoids are often used to reason about geometrical objects such as manifolds. Heinrich Brandt (1927) introduced groupoids implicitly via Brandt semigroups. Definitions Algebraic A groupoid can be viewed as an algebraic structure consisting of a set with a binary partial function. Precisely, it is a non-empty set G with a unary operation −1 : G → G, and a partial function ∗ : G × G ⇀ G. Here ∗ is not a binary operation because it is not necessarily defined for all pairs of elements of G. The precise conditions under which ∗ is defined are not articulated here and vary by situation. The operations ∗ and −1 have the following axiomatic properties: For all a, b, and c in G, Associativity: If a ∗ b and b ∗ c are defined, then (a ∗ b) ∗ c and a ∗ (b ∗ c) are defined and are equal. Conversely, if one of (a ∗ b) ∗ c or a ∗ (b ∗ c) is defined, then they are both defined (and they are equal to each other), and a ∗ b and b ∗ c are also defined. Inverse: a−1 ∗ a and a ∗ a−1 are always defined. Identity: If a ∗ b is defined, then a ∗ b ∗ b−1 = a, and a−1 ∗ a ∗ b = b. (The previous two axioms already show that these expressions are defined and unambiguous.) Two easy and convenient properties follow from these axioms: (a−1)−1 = a; If a ∗ b is defined, then (a ∗ b)−1 = b−1 ∗ a−1. Category-theoretic A groupoid is a small category in which every morphism is an isomorphism, i.e., invertible. More explicitly, a groupoid is a set G0 of objects with for each pair of objects x and y, a (possibly empty) set G(x,y) of morphisms (or arrows) from x to y; we write f : x → y to indicate that f is an element of G(x,y); for every object x, a designated element id_x of G(x, x); for each triple of objects x, y, and z, a function comp_{x,y,z} : G(y,z) × G(x,y) → G(x,z), written (g, f) ↦ g ∘ f; for each pair of objects x, y, a function inv : G(x,y) → G(y,x), written f ↦ f−1, satisfying, for any f : x → y, g : y → z, and h : z → w: f ∘ id_x = f and id_y ∘ f = f; (h ∘ g) ∘ f = h ∘ (g ∘ f); and f ∘ f−1 = id_y and f−1 ∘ f = id_x. If f is an element of G(x,y), then x is called the source of f, written s(f), and y is called the target of f, written t(f). A groupoid G is sometimes denoted as G1 ⇉ G0, where G1 is the set of all morphisms, and the two arrows represent the source and the target. More generally, one can consider a groupoid object in an arbitrary category admitting finite fiber products. Comparing the definitions The algebraic and category-theoretic definitions are equivalent, as we now show. Given a groupoid in the category-theoretic sense, let G be the disjoint union of all of the sets G(x,y) (i.e. the sets of morphisms from x to y). Then comp and inv become partial operations on G, and inv will in fact be defined everywhere. We define ∗ to be comp and −1 to be inv, which gives a groupoid in the algebraic sense. Explicit reference to G0 (and hence to id) can be dropped.
Conversely, given a groupoid G in the algebraic sense, define an equivalence relation on its elements by iff a ∗ a−1 = b ∗ b−1. Let G0 be the set of equivalence classes of , i.e. . Denote a ∗ a−1 by if with . Now define as the set of all elements f such that exists. Given and , their composite is defined as . To see that this is well defined, observe that since and exist, so does . The identity morphism on x is then , and the category-theoretic inverse of f is f−1. Sets in the definitions above may be replaced with classes, as is generally the case in category theory. Vertex groups and orbits Given a groupoid G, the vertex groups or isotropy groups or object groups in G are the subsets of the form G(x,x), where x is any object of G. It follows easily from the axioms above that these are indeed groups, as every pair of elements is composable and inverses are in the same vertex group. The orbit of a groupoid G at a point is given by the set containing every point that can be joined to x by a morphism in G. If two points and are in the same orbits, their vertex groups and are isomorphic: if is any morphism from to , then the isomorphism is given by the mapping . Orbits form a partition of the set X, and a groupoid is called transitive if it has only one orbit (equivalently, if it is connected as a category). In that case, all the vertex groups are isomorphic (on the other hand, this is not a sufficient condition for transitivity; see the section below for counterexamples). Subgroupoids and morphisms A subgroupoid of is a subcategory that is itself a groupoid. It is called wide or full if it is wide or full as a subcategory, i.e., respectively, if or for every . A groupoid morphism is simply a functor between two (category-theoretic) groupoids. Particular kinds of morphisms of groupoids are of interest. A morphism of groupoids is called a fibration if for each object of and each morphism of starting at there is a morphism of starting at such that . A fibration is called a covering morphism or covering of groupoids if further such an is unique. The covering morphisms of groupoids are especially useful because they can be used to model covering maps of spaces. It is also true that the category of covering morphisms of a given groupoid is equivalent to the category of actions of the groupoid on sets. Examples Topology Given a topological space , let be the set . The morphisms from the point to the point are equivalence classes of continuous paths from to , with two paths being equivalent if they are homotopic. Two such morphisms are composed by first following the first path, then the second; the homotopy equivalence guarantees that this composition is associative. This groupoid is called the fundamental groupoid of , denoted (or sometimes, ). The usual fundamental group is then the vertex group for the point . The orbits of the fundamental groupoid are the path-connected components of . Accordingly, the fundamental groupoid of a path-connected space is transitive, and we recover the known fact that the fundamental groups at any base point are isomorphic. Moreover, in this case, the fundamental groupoid and the fundamental groups are equivalent as categories (see the section below for the general theory). An important extension of this idea is to consider the fundamental groupoid where is a chosen set of "base points". Here is a (full) subgroupoid of , where one considers only paths whose endpoints belong to . The set may be chosen according to the geometry of the situation at hand. 
Equivalence relation If is a setoid, i.e. a set with an equivalence relation , then a groupoid "representing" this equivalence relation can be formed as follows: The objects of the groupoid are the elements of ; For any two elements and in , there is a single morphism from to (denote by ) if and only if ; The composition of and is . The vertex groups of this groupoid are always trivial; moreover, this groupoid is in general not transitive and its orbits are precisely the equivalence classes. There are two extreme examples: If every element of is in relation with every other element of , we obtain the pair groupoid of , which has the entire as set of arrows, and which is transitive. If every element of is only in relation with itself, one obtains the unit groupoid, which has as set of arrows, , and which is completely intransitive (every singleton is an orbit). Examples If is a smooth surjective submersion of smooth manifolds, then is an equivalence relation since has a topology isomorphic to the quotient topology of under the surjective map of topological spaces. If we write, then we get a groupoid which is sometimes called the banal groupoid of a surjective submersion of smooth manifolds. If we relax the reflexivity requirement and consider partial equivalence relations, then it becomes possible to consider semidecidable notions of equivalence on computable realisers for sets. This allows groupoids to be used as a computable approximation to set theory, called PER models. Considered as a category, PER models are a cartesian closed category with natural numbers object and subobject classifier, giving rise to the effective topos introduced by Martin Hyland. Čech groupoid A Čech groupoidp. 5 is a special kind of groupoid associated to an equivalence relation given by an open cover of some manifold . Its objects are given by the disjoint union and its arrows are the intersections The source and target maps are then given by the induced mapsand the inclusion mapgiving the structure of a groupoid. In fact, this can be further extended by settingas the -iterated fiber product where the represents -tuples of composable arrows. The structure map of the fiber product is implicitly the target map, sinceis a cartesian diagram where the maps to are the target maps. This construction can be seen as a model for some ∞-groupoids. Also, another artifact of this construction is k-cocyclesfor some constant sheaf of abelian groups can be represented as a functiongiving an explicit representation of cohomology classes. Group action If the group acts on the set , then we can form the action groupoid (or transformation groupoid) representing this group action as follows: The objects are the elements of ; For any two elements and in , the morphisms from to correspond to the elements of such that ; Composition of morphisms interprets the binary operation of . More explicitly, the action groupoid is a small category with and and with source and target maps and . It is often denoted (or for a right action). Multiplication (or composition) in the groupoid is then , which is defined provided . For in , the vertex group consists of those with , which is just the isotropy subgroup at for the given action (which is why vertex groups are also called isotropy groups). Similarly, the orbits of the action groupoid are the orbit of the group action, and the groupoid is transitive if and only if the group action is transitive. 
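For small examples the action groupoid can be computed explicitly. The Python sketch below is an illustrative addition (not from the original article; the group, the set and the helper names are our own choices): it realises the action of Z/2 on the set {-2, -1, 0, 1, 2} by negation, treats pairs (g, x) as arrows from x to g.x, and recovers the orbits and the vertex (isotropy) groups, anticipating the finite-set example mentioned later in the text.

from itertools import product

G = [0, 1]                         # Z/2 written additively; the element 1 acts by negation
X = [-2, -1, 0, 1, 2]

def act(g, x):
    return -x if g == 1 else x

# Arrows of the action groupoid: the pair (g, x) is a morphism from x to act(g, x).
arrows = list(product(G, X))

def compose(second, first):
    """(h, g.x) composed with (g, x) is (h + g, x); only defined when the ends match."""
    h, y = second
    g, x = first
    assert y == act(g, x), "arrows are not composable"
    return ((h + g) % 2, x)

def orbit(x):
    """All points reachable from x by some arrow."""
    return sorted({act(g, x) for g in G})

def vertex_group(x):
    """Group elements giving arrows from x back to itself (the isotropy group at x)."""
    return [g for g in G if act(g, x) == x]

print({x: orbit(x) for x in X})    # the distinct orbits are {-2, 2}, {-1, 1} and {0}
print(vertex_group(0))             # [0, 1]: all of Z/2 fixes 0
print(vertex_group(1))             # [0]: the isotropy group at 1 is trivial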
Another way to describe -sets is the functor category , where is the groupoid (category) with one element and isomorphic to the group . Indeed, every functor of this category defines a set and for every in (i.e. for every morphism in ) induces a bijection : . The categorical structure of the functor assures us that defines a -action on the set . The (unique) representable functor is the Cayley representation of . In fact, this functor is isomorphic to and so sends to the set which is by definition the "set" and the morphism of (i.e. the element of ) to the permutation of the set . We deduce from the Yoneda embedding that the group is isomorphic to the group , a subgroup of the group of permutations of . Finite set Consider the group action of on the finite set that takes each number to its negative, so and . The quotient groupoid is the set of equivalence classes from this group action , and has a group action of on it. Quotient variety Any finite group that maps to gives a group action on the affine space (since this is the group of automorphisms). Then, a quotient groupoid can be of the form , which has one point with stabilizer at the origin. Examples like these form the basis for the theory of orbifolds. Another commonly studied family of orbifolds are weighted projective spaces and subspaces of them, such as Calabi–Yau orbifolds. Fiber product of groupoids Given a diagram of groupoids with groupoid morphisms where and , we can form the groupoid whose objects are triples , where , , and in . Morphisms can be defined as a pair of morphisms where and such that for triples , there is a commutative diagram in of , and the . Homological algebra A two term complex of objects in a concrete Abelian category can be used to form a groupoid. It has as objects the set and as arrows the set ; the source morphism is just the projection onto while the target morphism is the addition of projection onto composed with and projection onto . That is, given , we have Of course, if the abelian category is the category of coherent sheaves on a scheme, then this construction can be used to form a presheaf of groupoids. Puzzles While puzzles such as the Rubik's Cube can be modeled using group theory (see Rubik's Cube group), certain puzzles are better modeled as groupoids. The transformations of the fifteen puzzle form a groupoid (not a group, as not all moves can be composed). This groupoid acts on configurations. Mathieu groupoid The Mathieu groupoid is a groupoid introduced by John Horton Conway acting on 13 points such that the elements fixing a point form a copy of the Mathieu group M12. Relation to groups If a groupoid has only one object, then the set of its morphisms forms a group. Using the algebraic definition, such a groupoid is literally just a group. Many concepts of group theory generalize to groupoids, with the notion of functor replacing that of group homomorphism. Every transitive/connected groupoid - that is, as explained above, one in which any two objects are connected by at least one morphism - is isomorphic to an action groupoid (as defined above) . By transitivity, there will only be one orbit under the action. Note that the isomorphism just mentioned is not unique, and there is no natural choice. Choosing such an isomorphism for a transitive groupoid essentially amounts to picking one object , a group isomorphism from to , and for each other than , a morphism in from to . 
If a groupoid is not transitive, then it is isomorphic to a disjoint union of groupoids of the above type, also called its connected components (possibly with different groups and sets for each connected component). In category-theoretic terms, each connected component of a groupoid is equivalent (but not isomorphic) to a groupoid with a single object, that is, a single group. Thus any groupoid is equivalent to a multiset of unrelated groups. In other words, for equivalence instead of isomorphism, one does not need to specify the sets , but only the groups . For example, The fundamental groupoid of is equivalent to the collection of the fundamental groups of each path-connected component of , but an isomorphism requires specifying the set of points in each component; The set with the equivalence relation is equivalent (as a groupoid) to one copy of the trivial group for each equivalence class, but an isomorphism requires specifying what each equivalence class is; The set equipped with an action of the group is equivalent (as a groupoid) to one copy of for each orbit of the action, but an isomorphism requires specifying what set each orbit is. The collapse of a groupoid into a mere collection of groups loses some information, even from a category-theoretic point of view, because it is not natural. Thus when groupoids arise in terms of other structures, as in the above examples, it can be helpful to maintain the entire groupoid. Otherwise, one must choose a way to view each in terms of a single group, and this choice can be arbitrary. In the example from topology, one would have to make a coherent choice of paths (or equivalence classes of paths) from each point to each point in the same path-connected component. As a more illuminating example, the classification of groupoids with one endomorphism does not reduce to purely group theoretic considerations. This is analogous to the fact that the classification of vector spaces with one endomorphism is nontrivial. Morphisms of groupoids come in more kinds than those of groups: we have, for example, fibrations, covering morphisms, universal morphisms, and quotient morphisms. Thus a subgroup of a group yields an action of on the set of cosets of in and hence a covering morphism from, say, to , where is a groupoid with vertex groups isomorphic to . In this way, presentations of the group can be "lifted" to presentations of the groupoid , and this is a useful way of obtaining information about presentations of the subgroup . For further information, see the books by Higgins and by Brown in the References. Category of groupoids The category whose objects are groupoids and whose morphisms are groupoid morphisms is called the groupoid category, or the category of groupoids, and is denoted by Grpd. The category Grpd is, like the category of small categories, Cartesian closed: for any groupoids we can construct a groupoid whose objects are the morphisms and whose arrows are the natural equivalences of morphisms. Thus if are just groups, then such arrows are the conjugacies of morphisms. The main result is that for any groupoids there is a natural bijection This result is of interest even if all the groupoids are just groups. Another important property of Grpd is that it is both complete and cocomplete. Relation to Cat The inclusion has both a left and a right adjoint: Here, denotes the localization of a category that inverts every morphism, and denotes the subcategory of all isomorphisms. 
Relation to sSet The nerve functor N : Grpd → sSet embeds Grpd as a full subcategory of the category of simplicial sets. The nerve of a groupoid is always a Kan complex. The nerve has a left adjoint π1 : sSet → Grpd. Here, π1(X) denotes the fundamental groupoid of the simplicial set X. Groupoids in Grpd There is an additional structure which can be derived from groupoids internal to the category of groupoids, double-groupoids. Because Grpd is a 2-category, these objects form a 2-category instead of a 1-category since there is extra structure. Essentially, these are groupoids G1, G0 with functors s, t : G1 → G0 and an embedding given by an identity functor i : G0 → G1. One way to think about these 2-groupoids is they contain objects, morphisms, and squares which can compose together vertically and horizontally. For example, given two squares that share a common edge morphism, they can be vertically conjoined, giving a diagram which can be converted into another square by composing the vertical arrows. There is a similar composition law for horizontal attachments of squares. Groupoids with geometric structures When studying geometrical objects, the arising groupoids often carry a topology, turning them into topological groupoids, or even some differentiable structure, turning them into Lie groupoids. These last objects can be also studied in terms of their associated Lie algebroids, in analogy to the relation between Lie groups and Lie algebras. Groupoids arising from geometry often possess further structures which interact with the groupoid multiplication. For instance, in Poisson geometry one has the notion of a symplectic groupoid, which is a Lie groupoid endowed with a compatible symplectic form. Similarly, one can have groupoids with a compatible Riemannian metric, or complex structure, etc. See also ∞-groupoid 2-group Homotopy type theory Inverse category Groupoid algebra (not to be confused with algebraic groupoid) R-algebroid Notes References Brown, Ronald, 1987, "From groups to groupoids: a brief survey", Bull. London Math. Soc. 19: 113–34. Reviews the history of groupoids up to 1987, starting with the work of Brandt on quadratic forms. The downloadable version updates the many references. —, 2006. Topology and groupoids. Booksurge. Revised and extended edition of a book previously published in 1968 and 1988. Groupoids are introduced in the context of their topological application. —, Higher dimensional group theory. Explains how the groupoid concept has led to higher-dimensional homotopy groupoids, having applications in homotopy theory and in group cohomology. Many references. F. Borceux, G. Janelidze, 2001, Galois theories. Cambridge Univ. Press. Shows how generalisations of Galois theory lead to Galois groupoids. Cannas da Silva, A., and A. Weinstein, Geometric Models for Noncommutative Algebras. Especially Part VI. Golubitsky, M., Ian Stewart, 2006, "Nonlinear dynamics of networks: the groupoid formalism", Bull. Amer. Math. Soc. 43: 305–64 Higgins, P. J., "The fundamental groupoid of a graph of groups", J. London Math. Soc. (2) 13 (1976) 145–149. Higgins, P. J. and Taylor, J., "The fundamental groupoid and the homotopy crossed complex of an orbit space", in Category theory (Gummersbach, 1981), Lecture Notes in Math., Volume 962. Springer, Berlin (1982), 115–122. Higgins, P. J., 1971. Categories and groupoids. Van Nostrand Notes in Mathematics. Republished in Reprints in Theory and Applications of Categories, No. 7 (2005) pp. 1–195; freely downloadable. Substantial introduction to category theory with special emphasis on groupoids.
Presents applications of groupoids in group theory, for example to a generalisation of Grushko's theorem, and in topology, e.g. fundamental groupoid. Mackenzie, K. C. H., 2005. General theory of Lie groupoids and Lie algebroids. Cambridge Univ. Press. Weinstein, Alan, "Groupoids: unifying internal and external symmetry – A tour through some examples". Also available in Postscript, Notices of the AMS, July 1996, pp. 744–752. Weinstein, Alan, "The Geometry of Momentum" (2002) R.T. Zivaljevic. "Groupoids in combinatorics—applications of a theory of local symmetries". In Algebraic and geometric combinatorics, volume 423 of Contemp. Math., 305–324. Amer. Math. Soc., Providence, RI (2006) Algebraic structures Category theory Homotopy theory
Groupoid
[ "Mathematics" ]
4,952
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Algebraic structures", "Category theory" ]
12,558
https://en.wikipedia.org/wiki/Galaxy
A galaxy is a system of stars, stellar remnants, interstellar gas, dust, and dark matter bound together by gravity. The word is derived from the Greek galaxias, literally 'milky', a reference to the Milky Way galaxy that contains the Solar System. Galaxies, averaging an estimated 100 million stars, range in size from dwarfs with less than a thousand stars, to the largest galaxies known – supergiants with one hundred trillion stars, each orbiting its galaxy's center of mass. Most of the mass in a typical galaxy is in the form of dark matter, with only a few percent of that mass visible in the form of stars and nebulae. Supermassive black holes are a common feature at the centres of galaxies. Galaxies are categorised according to their visual morphology as elliptical, spiral, or irregular. The Milky Way is an example of a spiral galaxy. It is estimated that there are between 200 billion (2×10^11) and 2 trillion galaxies in the observable universe. Most galaxies are 1,000 to 100,000 parsecs in diameter (approximately 3,000 to 300,000 light years) and are separated by distances on the order of millions of parsecs (or megaparsecs). For comparison, the Milky Way has a diameter of at least 26,800 parsecs (87,400 ly) and is separated from the Andromeda Galaxy, its nearest large neighbour, by just over 750,000 parsecs (2.5 million ly). The space between galaxies is filled with a tenuous gas (the intergalactic medium) with an average density of less than one atom per cubic metre. Most galaxies are gravitationally organised into groups, clusters and superclusters. The Milky Way is part of the Local Group, which it dominates along with the Andromeda Galaxy. The group is part of the Virgo Supercluster. At the largest scale, these associations are generally arranged into sheets and filaments surrounded by immense voids. Both the Local Group and the Virgo Supercluster are contained in a much larger cosmic structure named Laniakea. Etymology The word galaxy was borrowed via French and Medieval Latin from the Greek term for the Milky Way, galaxias kyklos, 'milky (circle)', named after its appearance as a milky band of light in the sky. In Greek mythology, Zeus places his son, born by a mortal woman, the infant Heracles, on Hera's breast while she is asleep so the baby will drink her divine milk and thus become immortal. Hera wakes up while breastfeeding and then realises she is nursing an unknown baby: she pushes the baby away, some of her milk spills, and it produces the band of light known as the Milky Way. In the astronomical literature, the capitalised word "Galaxy" is often used to refer to the Milky Way galaxy, to distinguish it from the other galaxies in the observable universe. The English term Milky Way can be traced back to a story by Geoffrey Chaucer. Galaxies were initially discovered telescopically and were known as spiral nebulae. Most 18th- to 19th-century astronomers considered them to be either unresolved star clusters or anagalactic nebulae, and they were just thought of as a part of the Milky Way, but their true composition and natures remained a mystery. Observations using larger telescopes of a few nearby bright galaxies, like the Andromeda Galaxy, began resolving them into huge conglomerations of stars, but based simply on the apparent faintness and sheer population of stars, the true distances of these objects placed them well beyond the Milky Way. For this reason they were popularly called island universes, but this term quickly fell into disuse, as the word universe implied the entirety of existence.
Instead, they became known simply as galaxies. Nomenclature Millions of galaxies have been catalogued, but only a few have well-established names, such as the Andromeda Galaxy, the Magellanic Clouds, the Whirlpool Galaxy, and the Sombrero Galaxy. Astronomers work with numbers from certain catalogues, such as the Messier catalogue, the NGC (New General Catalogue), the IC (Index Catalogue), the CGCG (Catalogue of Galaxies and of Clusters of Galaxies), the MCG (Morphological Catalogue of Galaxies), the UGC (Uppsala General Catalogue of Galaxies), and the PGC (Catalogue of Principal Galaxies, also known as LEDA). All the well-known galaxies appear in one or more of these catalogues but each time under a different number. For example, Messier 109 (or "M109") is a spiral galaxy having the number 109 in the catalogue of Messier. It also has the designations NGC 3992, UGC 6937, CGCG 269–023, MCG +09-20-044, and PGC 37617 (or LEDA 37617), among others. Millions of fainter galaxies are known by their identifiers in sky surveys such as the Sloan Digital Sky Survey. Observation history Milky Way Greek philosopher Democritus (450–370 BCE) proposed that the bright band on the night sky known as the Milky Way might consist of distant stars. Aristotle (384–322 BCE), however, believed the Milky Way was caused by "the ignition of the fiery exhalation of some stars that were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the World that is continuous with the heavenly motions." Neoplatonist philosopher Olympiodorus the Younger (–570 CE) was critical of this view, arguing that if the Milky Way was sublunary (situated between Earth and the Moon) it should appear different at different times and places on Earth, and that it should have parallax, which it did not. In his view, the Milky Way was celestial. According to Mohani Mohamed, Arabian astronomer Ibn al-Haytham (965–1037) made the first attempt at observing and measuring the Milky Way's parallax, and he thus "determined that because the Milky Way had no parallax, it must be remote from the Earth, not belonging to the atmosphere." Persian astronomer al-Biruni (973–1048) proposed the Milky Way galaxy was "a collection of countless fragments of the nature of nebulous stars." Andalusian astronomer Avempace ( 1138) proposed that it was composed of many stars that almost touched one another, and appeared to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars as evidence of this occurring when two objects were near. In the 14th century, Syrian-born Ibn Qayyim al-Jawziyya proposed the Milky Way galaxy was "a myriad of tiny stars packed together in the sphere of the fixed stars." Actual proof of the Milky Way consisting of many stars came in 1610 when the Italian astronomer Galileo Galilei used a telescope to study it and discovered it was composed of a huge number of faint stars. In 1750, English astronomer Thomas Wright, in his An Original Theory or New Hypothesis of the Universe, correctly speculated that it might be a rotating body of a huge number of stars held together by gravitational forces, akin to the Solar System but on a much larger scale, and that the resulting disk of stars could be seen as a band on the sky from a perspective inside it. In his 1755 treatise, Immanuel Kant elaborated on Wright's idea about the Milky Way's structure. 
The first project to describe the shape of the Milky Way and the position of the Sun was undertaken by William Herschel in 1785 by counting the number of stars in different regions of the sky. He produced a diagram of the shape of the galaxy with the Solar System close to the center. Using a refined approach, Kapteyn in 1920 arrived at the picture of a small (diameter about 15 kiloparsecs) ellipsoid galaxy with the Sun close to the center. A different method by Harlow Shapley based on the cataloguing of globular clusters led to a radically different picture: a flat disk with diameter approximately 70 kiloparsecs and the Sun far from the centre. Both analyses failed to take into account the absorption of light by interstellar dust present in the galactic plane; but after Robert Julius Trumpler quantified this effect in 1930 by studying open clusters, the present picture of the Milky Way galaxy emerged. Distinction from other nebulae A few galaxies outside the Milky Way are visible on a dark night to the unaided eye, including the Andromeda Galaxy, Large Magellanic Cloud, Small Magellanic Cloud, and the Triangulum Galaxy. In the 10th century, Persian astronomer Abd al-Rahman al-Sufi made the earliest recorded identification of the Andromeda Galaxy, describing it as a "small cloud". In 964, he probably mentioned the Large Magellanic Cloud in his Book of Fixed Stars, referring to "Al Bakr of the southern Arabs", since at a declination of about 70° south it was not visible where he lived. It was not well known to Europeans until Magellan's voyage in the 16th century. The Andromeda Galaxy was later independently noted by Simon Marius in 1612. In 1734, philosopher Emanuel Swedenborg in his Principia speculated that there might be other galaxies outside that were formed into galactic clusters that were minuscule parts of the universe that extended far beyond what could be seen. These views "are remarkably close to the present-day views of the cosmos." In 1745, Pierre Louis Maupertuis conjectured that some nebula-like objects were collections of stars with unique properties, including a glow exceeding the light its stars produced on their own, and repeated Johannes Hevelius's view that the bright spots were massive and flattened due to their rotation. In 1750, Thomas Wright correctly speculated that the Milky Way was a flattened disk of stars, and that some of the nebulae visible in the night sky might be separate Milky Ways. Toward the end of the 18th century, Charles Messier compiled a catalog containing the 109 brightest celestial objects having nebulous appearance. Subsequently, William Herschel assembled a catalog of 5,000 nebulae. In 1845, Lord Rosse examined the nebulae catalogued by Herschel and observed the spiral structure of Messier object M51, now known as the Whirlpool Galaxy. In 1912, Vesto M. Slipher made spectrographic studies of the brightest spiral nebulae to determine their composition. Slipher discovered that the spiral nebulae have high Doppler shifts, indicating that they are moving at a rate exceeding the velocity of the stars he had measured. He found that the majority of these nebulae are moving away from us. In 1917, Heber Doust Curtis observed nova S Andromedae within the "Great Andromeda Nebula", as the Andromeda Galaxy, Messier object M31, was then known. Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within this galaxy. 
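The 10-magnitude difference noted above carries the quantitative weight of Curtis's argument. The brief sketch below is an editorial illustration of the standard reasoning (not a calculation attributed to Curtis): if the novae in the Andromeda Nebula are intrinsically as bright as Galactic novae, the inverse-square law converts the magnitude difference into a distance ratio.

```python
# Editorial sketch: a difference of dm magnitudes corresponds to a flux ratio of
# 10**(dm/2.5) and, by the inverse-square law, a distance ratio of 10**(dm/5).
delta_m = 10                          # novae appearing ~10 magnitudes fainter
distance_ratio = 10 ** (delta_m / 5)
print(distance_ratio)                 # 100.0 -> roughly 100 times farther away
```

A factor of roughly one hundred relative to novae within the Milky Way is the scale behind the distance estimate quoted next.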
As a result, he was able to come up with a distance estimate of 150,000 parsecs. He became a proponent of the so-called "island universes" hypothesis, which holds that spiral nebulae are actually independent galaxies. In 1920 a debate took place between Harlow Shapley and Heber Curtis, the Great Debate, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. To support his claim that the Great Andromeda Nebula is an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift. In 1922, the Estonian astronomer Ernst Öpik gave a distance determination that supported the theory that the Andromeda Nebula is indeed a distant extra-galactic object. Using the new 100-inch Mt. Wilson telescope, Edwin Hubble was able to resolve the outer parts of some spiral nebulae as collections of individual stars and identified some Cepheid variables, thus allowing him to estimate the distance to the nebulae: they were far too distant to be part of the Milky Way. In 1926 Hubble produced a classification of galactic morphology that is used to this day. Multi-wavelength observation Advances in astronomy have always been driven by technology. After centuries of success in optical astronomy, recent decades have seen major progress in other regions of the electromagnetic spectrum. The dust present in the interstellar medium is opaque to visual light. It is more transparent to far-infrared, which can be used to observe the interior regions of giant molecular clouds and galactic cores in great detail. Infrared is also used to observe distant, red-shifted galaxies that were formed much earlier. Water vapor and carbon dioxide absorb a number of useful portions of the infrared spectrum, so high-altitude or space-based telescopes are used for infrared astronomy. The first non-visual study of galaxies, particularly active galaxies, was made using radio frequencies. The Earth's atmosphere is nearly transparent to radio between 5 MHz and 30 GHz. The ionosphere blocks signals below this range. Large radio interferometers have been used to map the active jets emitted from active nuclei. Ultraviolet and X-ray telescopes can observe highly energetic galactic phenomena. Ultraviolet flares are sometimes observed when a star in a distant galaxy is torn apart from the tidal forces of a nearby black hole. The distribution of hot gas in galactic clusters can be mapped by X-rays. The existence of supermassive black holes at the cores of galaxies was confirmed through X-ray astronomy. Modern research In 1944, Hendrik van de Hulst predicted that microwave radiation with wavelength of 21 cm would be detectable from interstellar atomic hydrogen gas; and in 1951 it was observed. This radiation is not affected by dust absorption, and so its Doppler shift can be used to map the motion of the gas in this galaxy. These observations led to the hypothesis of a rotating bar structure in the center of this galaxy. With improved radio telescopes, hydrogen gas could also be traced in other galaxies. In the 1970s, Vera Rubin uncovered a discrepancy between observed galactic rotation speed and that predicted by the visible mass of stars and gas. Today, the galaxy rotation problem is thought to be explained by the presence of large quantities of unseen dark matter. Beginning in the 1990s, the Hubble Space Telescope yielded improved observations. 
Among other things, its data helped establish that the missing dark matter in this galaxy could not consist solely of inherently faint and small stars. The Hubble Deep Field, an extremely long exposure of a relatively empty part of the sky, provided evidence that there are about 125 billion () galaxies in the observable universe. Improved technology in detecting the spectra invisible to humans (radio telescopes, infrared cameras, and x-ray telescopes) allows detection of other galaxies that are not detected by Hubble. Particularly, surveys in the Zone of Avoidance (the region of sky blocked at visible-light wavelengths by the Milky Way) have revealed a number of new galaxies. A 2016 study published in The Astrophysical Journal, led by Christopher Conselice of the University of Nottingham, used 20 years of Hubble images to estimate that the observable universe contained at least two trillion () galaxies. However, later observations with the New Horizons space probe from outside the zodiacal light reduced this to roughly 200 billion (). Types and morphology Galaxies come in three main types: ellipticals, spirals, and irregulars. A slightly more extensive description of galaxy types based on their appearance is given by the Hubble sequence. Since the Hubble sequence is entirely based upon visual morphological type (shape), it may miss certain important characteristics of galaxies such as star formation rate in starburst galaxies and activity in the cores of active galaxies. Many galaxies are thought to contain a supermassive black hole at their center. This includes the Milky Way, whose core region is called the Galactic Center. Ellipticals The Hubble classification system rates elliptical galaxies on the basis of their ellipticity, ranging from E0, being nearly spherical, up to E7, which is highly elongated. These galaxies have an ellipsoidal profile, giving them an elliptical appearance regardless of the viewing angle. Their appearance shows little structure and they typically have relatively little interstellar matter. Consequently, these galaxies also have a low portion of open clusters and a reduced rate of new star formation. Instead, they are dominated by generally older, more evolved stars that are orbiting the common center of gravity in random directions. The stars contain low abundances of heavy elements because star formation ceases after the initial burst. In this sense they have some similarity to the much smaller globular clusters. Type-cD galaxies The largest galaxies are the type-cD galaxies. First described in 1964 by a paper by Thomas A. Matthews and others, they are a subtype of the more general class of D galaxies, which are giant elliptical galaxies, except that they are much larger. They are popularly known as the supergiant elliptical galaxies and constitute the largest and most luminous galaxies known. These galaxies feature a central elliptical nucleus with an extensive, faint halo of stars extending to megaparsec scales. The profile of their surface brightnesses as a function of their radius (or distance from their cores) falls off more slowly than their smaller counterparts. The formation of these cD galaxies remains an active area of research, but the leading model is that they are the result of the mergers of smaller galaxies in the environments of dense clusters, or even those outside of clusters with random overdensities. 
These processes are the mechanisms that drive the formation of fossil groups or fossil clusters, where a large, relatively isolated, supergiant elliptical resides in the middle of the cluster and are surrounded by an extensive cloud of X-rays as the residue of these galactic collisions. Another older model posits the phenomenon of cooling flow, where the heated gases in clusters collapses towards their centers as they cool, forming stars in the process, a phenomenon observed in clusters such as Perseus, and more recently in the Phoenix Cluster. Shell galaxy A shell galaxy is a type of elliptical galaxy where the stars in its halo are arranged in concentric shells. About one-tenth of elliptical galaxies have a shell-like structure, which has never been observed in spiral galaxies. These structures are thought to develop when a larger galaxy absorbs a smaller companion galaxy—that as the two galaxy centers approach, they start to oscillate around a center point, and the oscillation creates gravitational ripples forming the shells of stars, similar to ripples spreading on water. For example, galaxy NGC 3923 has over 20 shells. Spirals Spiral galaxies resemble spiraling pinwheels. Though the stars and other visible material contained in such a galaxy lie mostly on a plane, the majority of mass in spiral galaxies exists in a roughly spherical halo of dark matter which extends beyond the visible component, as demonstrated by the universal rotation curve concept. Spiral galaxies consist of a rotating disk of stars and interstellar medium, along with a central bulge of generally older stars. Extending outward from the bulge are relatively bright arms. In the Hubble classification scheme, spiral galaxies are listed as type S, followed by a letter (a, b, or c) which indicates the degree of tightness of the spiral arms and the size of the central bulge. An Sa galaxy has tightly wound, poorly defined arms and possesses a relatively large core region. At the other extreme, an Sc galaxy has open, well-defined arms and a small core region. A galaxy with poorly defined arms is sometimes referred to as a flocculent spiral galaxy; in contrast to the grand design spiral galaxy that has prominent and well-defined spiral arms. The speed in which a galaxy rotates is thought to correlate with the flatness of the disc as some spiral galaxies have thick bulges, while others are thin and dense. In spiral galaxies, the spiral arms do have the shape of approximate logarithmic spirals, a pattern that can be theoretically shown to result from a disturbance in a uniformly rotating mass of stars. Like the stars, the spiral arms rotate around the center, but they do so with constant angular velocity. The spiral arms are thought to be areas of high-density matter, or "density waves". As stars move through an arm, the space velocity of each stellar system is modified by the gravitational force of the higher density. (The velocity returns to normal after the stars depart on the other side of the arm.) This effect is akin to a "wave" of slowdowns moving along a highway full of moving cars. The arms are visible because the high density facilitates star formation, and therefore they harbor many bright and young stars. Barred spiral galaxy A majority of spiral galaxies, including the Milky Way galaxy, have a linear, bar-shaped band of stars that extends outward to either side of the core, then merges into the spiral arm structure. 
In the Hubble classification scheme, these are designated by an SB, followed by a lower-case letter (a, b or c) which indicates the form of the spiral arms (in the same manner as the categorization of normal spiral galaxies). Bars are thought to be temporary structures that can occur as a result of a density wave radiating outward from the core, or else due to a tidal interaction with another galaxy. Many barred spiral galaxies are active, possibly as a result of gas being channeled into the core along the arms. Our own galaxy, the Milky Way, is a large disk-shaped barred-spiral galaxy about 30 kiloparsecs in diameter and a kiloparsec thick. It contains about two hundred billion (2×10^11) stars and has a total mass of about six hundred billion (6×10^11) times the mass of the Sun. Super-luminous spiral Recently, researchers described galaxies called super-luminous spirals. They are very large, with diameters of up to 437,000 light-years (compared to the Milky Way's 87,400 light-year diameter). With a mass of 340 billion solar masses, they generate a significant amount of ultraviolet and mid-infrared light. They are thought to have an increased star formation rate, around 30 times that of the Milky Way. Other morphologies Peculiar galaxies are galactic formations that develop unusual properties due to tidal interactions with other galaxies. A ring galaxy has a ring-like structure of stars and interstellar medium surrounding a bare core. A ring galaxy is thought to occur when a smaller galaxy passes through the core of a spiral galaxy. Such an event may have affected the Andromeda Galaxy, as it displays a multi-ring-like structure when viewed in infrared radiation. A lenticular galaxy is an intermediate form that has properties of both elliptical and spiral galaxies. These are categorized as Hubble type S0, and they possess ill-defined spiral arms with an elliptical halo of stars (barred lenticular galaxies receive Hubble classification SB0). Irregular galaxies are galaxies that can not be readily classified into an elliptical or spiral morphology. An Irr-I galaxy has some structure but does not align cleanly with the Hubble classification scheme. Irr-II galaxies do not possess any structure that resembles a Hubble classification, and may have been disrupted. Nearby examples of (dwarf) irregular galaxies include the Magellanic Clouds. A dark or "ultra diffuse" galaxy is an extremely-low-luminosity galaxy. It may be the same size as the Milky Way, but have a visible star count only one percent of the Milky Way's. Multiple mechanisms for producing this type of galaxy have been proposed, and it is possible that different dark galaxies formed by different means. One candidate explanation for the low luminosity is that the galaxy lost its star-forming gas at an early stage, resulting in old stellar populations. Dwarfs Despite the prominence of large elliptical and spiral galaxies, most galaxies are dwarf galaxies. They are relatively small when compared with other galactic formations, being about one hundredth the size of the Milky Way, with only a few billion stars. Blue compact dwarf galaxies contain large clusters of young, hot, massive stars. Ultra-compact dwarf galaxies have been discovered that are only 100 parsecs across. Many dwarf galaxies may orbit a single larger galaxy; the Milky Way has at least a dozen such satellites, with an estimated 300–500 yet to be discovered.
Most of the information we have about dwarf galaxies comes from observations of the Local Group, containing two spiral galaxies, the Milky Way and Andromeda, and many dwarf galaxies. These dwarf galaxies are classified as either irregular or dwarf elliptical/dwarf spheroidal galaxies. A study of 27 Milky Way neighbors found that in all dwarf galaxies, the central mass is approximately 10 million solar masses, regardless of whether it has thousands or millions of stars. This suggests that galaxies are largely formed by dark matter, and that the minimum size may indicate a form of warm dark matter incapable of gravitational coalescence on a smaller scale. Variants Interacting Interactions between galaxies are relatively frequent, and they can play an important role in galactic evolution. Near misses between galaxies result in warping distortions due to tidal interactions, and may cause some exchange of gas and dust. Collisions occur when two galaxies pass directly through each other and have sufficient relative momentum not to merge. The stars of interacting galaxies usually do not collide, but the gas and dust within the two galaxies interact, sometimes triggering star formation. A collision can severely distort the galaxies' shapes, forming bars, rings or tail-like structures. At the extreme of interactions are galactic mergers, where the galaxies' relative momentums are insufficient to allow them to pass through each other. Instead, they gradually merge to form a single, larger galaxy. Mergers can result in significant changes to the galaxies' original morphology. If one of the galaxies is much more massive than the other, the result is known as cannibalism, where the more massive galaxy remains relatively undisturbed, and the smaller one is torn apart. The Milky Way galaxy is currently in the process of cannibalizing the Sagittarius Dwarf Elliptical Galaxy and the Canis Major Dwarf Galaxy. Starburst Stars are created within galaxies from a reserve of cold gas that forms giant molecular clouds. Some galaxies have been observed to form stars at an exceptional rate, which is known as a starburst. If they continue to do so, they would consume their reserve of gas in a time span less than the galaxy's lifespan. Hence starburst activity usually lasts only about ten million years, a relatively brief period in a galaxy's history. Starburst galaxies were more common during the universe's early history, but still contribute an estimated 15% to total star production. Starburst galaxies are characterized by dusty concentrations of gas and the appearance of newly formed stars, including massive stars that ionize the surrounding clouds to create H II regions. These stars produce supernova explosions, creating expanding remnants that interact powerfully with the surrounding gas. These outbursts trigger a chain reaction of star-building that spreads throughout the gaseous region. Only when the available gas is nearly consumed or dispersed does the activity end. Starbursts are often associated with merging or interacting galaxies. The prototype example of such a starburst-forming interaction is M82, which experienced a close encounter with the larger M81. Irregular galaxies often exhibit spaced knots of starburst activity. Radio galaxy A radio galaxy is a galaxy with giant regions of radio emission extending well beyond its visible structure. These energetic radio lobes are powered by jets from its active galactic nucleus.
Radio galaxies are classified according to their Fanaroff–Riley classification. The FR I class have lower radio luminosity and exhibit structures which are more elongated; the FR II class have higher radio luminosity. The correlation of radio luminosity and structure suggests that the sources in these two types of galaxies may differ. Radio galaxies can also be classified as giant radio galaxies (GRGs), whose radio emissions can extend to scales of megaparsecs (3.26 million light-years). Alcyoneus is an FR II class low-excitation radio galaxy which has the largest observed radio emission, with lobed structures spanning 5 megaparsecs (16×10^6 ly). For comparison, another similarly sized giant radio galaxy is 3C 236, with lobes 15 million light-years across. It should however be noted that radio emissions are not always considered part of the main galaxy itself. A giant radio galaxy is a special class of objects characterized by the presence of radio lobes generated by relativistic jets powered by the central galaxy's supermassive black hole. Giant radio galaxies are different from ordinary radio galaxies in that they can extend to much larger scales, reaching upwards of several megaparsecs across, far larger than the diameters of their host galaxies. A "normal" radio galaxy does not have a source that is a supermassive black hole or monster neutron star; instead the source is synchrotron radiation from relativistic electrons accelerated by supernovae. These sources are comparatively short lived, making the radio spectrum from normal radio galaxies an especially good way to study star formation. Active galaxy Some observable galaxies are classified as "active" if they contain an active galactic nucleus (AGN). A significant portion of the galaxy's total energy output is emitted by the active nucleus instead of its stars, dust and interstellar medium. There are multiple classification and naming schemes for AGNs, but those in the lower ranges of luminosity are called Seyfert galaxies, while those with luminosities much greater than that of the host galaxy are known as quasi-stellar objects or quasars. Models of AGNs suggest that a significant fraction of their light is shifted to far-infrared frequencies because optical and UV emission in the nucleus is absorbed and re-emitted by dust and gas surrounding it. The standard model for an active galactic nucleus is based on an accretion disc that forms around a supermassive black hole (SMBH) at the galaxy's core region. The radiation from an active galactic nucleus results from the gravitational energy of matter as it falls toward the black hole from the disc. The AGN's luminosity depends on the SMBH's mass and the rate at which matter falls onto it. In about 10% of these galaxies, a diametrically opposed pair of energetic jets ejects particles from the galaxy core at velocities close to the speed of light. The mechanism for producing these jets is not well understood. Seyfert galaxy Seyfert galaxies are one of the two largest groups of active galaxies, along with quasars. They have quasar-like nuclei (very luminous, distant and bright sources of electromagnetic radiation) with very high surface brightnesses; but unlike quasars, their host galaxies are clearly detectable. Seen through a telescope, a Seyfert galaxy appears like an ordinary galaxy with a bright star superimposed atop the core. Seyfert galaxies are divided into two principal subtypes based on the frequencies observed in their spectra.
Quasar Quasars are the most energetic and distant members of active galactic nuclei. Extremely luminous, they were first identified as high redshift sources of electromagnetic energy, including radio waves and visible light, that appeared more similar to stars than to extended sources similar to galaxies. Their luminosity can be 100 times that of the Milky Way. The nearest known quasar, Markarian 231, is about 581 million light-years from Earth, while others have been discovered as far away as UHZ1, roughly 13.2 billion light-years distant. Quasars are noteworthy for providing the first demonstration of the phenomenon that gravity can act as a lens for light. Other AGNs Blazars are believed to be active galaxies with a relativistic jet pointed in the direction of Earth. A radio galaxy emits radio frequencies from relativistic jets. A unified model of these types of active galaxies explains their differences based on the observer's position. Possibly related to active galactic nuclei (as well as starburst regions) are low-ionization nuclear emission-line regions (LINERs). The emission from LINER-type galaxies is dominated by weakly ionized elements. The excitation sources for the weakly ionized lines include post-AGB stars, AGN, and shocks. Approximately one-third of nearby galaxies are classified as containing LINER nuclei. Luminous infrared galaxy Luminous infrared galaxies (LIRGs) are galaxies with luminosities—the measurement of electromagnetic power output—above 10^11 L☉ (solar luminosities). In most cases, most of their energy comes from large numbers of young stars which heat surrounding dust, which reradiates the energy in the infrared. Luminosity high enough to be a LIRG requires a star formation rate of at least 18 M☉ yr^−1. Ultra-luminous infrared galaxies (ULIRGs) are at least ten times more luminous still and form stars at rates >180 M☉ yr^−1. Many LIRGs also emit radiation from an AGN. Infrared galaxies emit more energy in the infrared than all other wavelengths combined, with peak emission typically at wavelengths of 60 to 100 microns. LIRGs are believed to be created from the strong interaction and merger of spiral galaxies. While uncommon in the local universe, LIRGs and ULIRGs were more prevalent when the universe was younger. Physical diameters Galaxies do not have a definite boundary by their nature, and are characterized by a gradually decreasing stellar density as a function of increasing distance from their center, making measurements of their true extents difficult. Nevertheless, astronomers over the past few decades have developed several criteria for defining the sizes of galaxies. Angular diameter As early as the time of Edwin Hubble in 1936, there have been attempts to characterize the diameters of galaxies. The earliest efforts were based on the observed angle subtended by the galaxy and its estimated distance, leading to an angular diameter (also called "metric diameter"). Isophotal diameter The isophotal diameter is introduced as a conventional way of measuring a galaxy's size based on its apparent surface brightness. Isophotes are curves in a diagram - such as a picture of a galaxy - that adjoin points of equal brightness, and are useful in defining the extent of the galaxy. The apparent brightness flux of a galaxy is measured in units of magnitudes per square arcsecond (mag/arcsec^2; sometimes expressed as mag arcsec^−2), which defines the brightness depth of the isophote.
To illustrate how this unit works, a typical galaxy has a brightness flux of 18 mag/arcsec^2 at its central region. This brightness is equivalent to the light of an 18th magnitude hypothetical point object (like a star) being spread out evenly in a one square arcsecond area of the sky. The isophotal diameter is typically defined as the region enclosing all the light down to 25 mag/arcsec^2 in the blue B-band, which is then referred to as the D25 standard. Effective radius (half-light) and its variations The half-light radius (also known as effective radius; Re) is a measure that is based on the galaxy's overall brightness flux. This is the radius within which half, or 50%, of the total brightness flux of the galaxy is emitted. This was first proposed by Gérard de Vaucouleurs in 1948. The choice of using 50% was arbitrary, but proved to be useful in further works by R. A. Fish in 1963, where he established a luminosity concentration law that relates the brightnesses of elliptical galaxies and their respective Re, and by José Luis Sérsic in 1968, who defined a mass-radius relation in galaxies. In defining Re, it is necessary that the overall brightness flux of the galaxy be captured, with a method employed by Bershady in 2000 suggesting measuring twice the radius at which the brightness flux at an arbitrarily chosen radius, defined as the local flux, divided by the overall average flux equals 0.2. Using the half-light radius allows a rough estimate of a galaxy's size, but is not particularly helpful in determining its morphology. Variations of this method exist. In particular, in the ESO-Uppsala Catalogue of Galaxies values of 50%, 70%, and 90% of the total blue light (the light detected through a B-band specific filter) had been used to calculate a galaxy's diameter. Petrosian magnitude First described by Vahe Petrosian in 1976, a modified version of this method has been used by the Sloan Digital Sky Survey (SDSS). This method employs a mathematical model on a galaxy whose radius is determined by the azimuthally (horizontal) averaged profile of its brightness flux. In particular, the SDSS employed the Petrosian magnitude in the R-band (658 nm, in the red part of the visible spectrum) to ensure that the brightness flux of a galaxy would be captured as much as possible while counteracting the effects of background noise. For a galaxy whose brightness profile is exponential, it is expected to capture all of its brightness flux, and about 80% for galaxies that follow de Vaucouleurs's law. Petrosian magnitudes have the advantage of being redshift and distance independent, allowing the measurement of the galaxy's apparent size since the Petrosian radius is defined in terms of the galaxy's overall luminous flux. A critique of an earlier version of this method has been issued by the Infrared Processing and Analysis Center, with the method causing errors (of up to 10%) in the values compared with those obtained using isophotal diameters. The use of Petrosian magnitudes also has the disadvantage of missing most of the light outside the Petrosian aperture, which is defined relative to the galaxy's overall brightness profile, especially for elliptical galaxies, with higher signal-to-noise ratios at higher distances and redshifts. A correction for this method has been issued by Graham et al. in 2005, based on the assumption that galaxies follow Sérsic's law. Near-infrared method This method has been used by 2MASS as an adaptation from the previously used methods of isophotal measurement.
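Before the 2MASS procedure is described in more detail below, a small numerical sketch may help tie together the isophotal (D25) and half-light definitions given above. The profile and its parameters are invented example values for an idealized exponential disk, not measurements of any real galaxy.

```python
# Hypothetical sketch of two galaxy-size measures for an idealized exponential disk,
#   I(r) = I0 * exp(-r / h),
# whose surface brightness in mag/arcsec^2 is mu(r) = mu0 + 1.0857 * (r / h).

import math

mu0 = 21.0   # central surface brightness in mag/arcsec^2 (assumed value)
h = 3.0      # exponential scale length in kpc (assumed value)

# Isophotal radius: where the profile fades to the D25 level of 25 mag/arcsec^2.
r25 = h * (25.0 - mu0) / 1.0857
print("D25 isophotal diameter ≈ %.1f kpc" % (2 * r25))

# Half-light (effective) radius: radius enclosing 50% of the total light.
# For an exponential disk the enclosed fraction is 1 - (1 + x) * exp(-x), x = r/h.
def enclosed_fraction(x):
    return 1.0 - (1.0 + x) * math.exp(-x)

lo, hi = 0.0, 20.0
for _ in range(60):                     # simple bisection
    mid = 0.5 * (lo + hi)
    if enclosed_fraction(mid) < 0.5:
        lo = mid
    else:
        hi = mid
print("half-light radius ≈ %.2f kpc" % (h * 0.5 * (lo + hi)))
```

For such a disk the half-light radius always comes out at about 1.68 scale lengths, while the D25 size depends on how bright the disk is relative to the 25 mag/arcsec^2 threshold.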
Since 2MASS operates in the near infrared, which has the advantage of being able to recognize dimmer, cooler, and older stars, it has a different form of approach compared to other methods that normally use B-filter. The detail of the method used by 2MASS has been described thoroughly in a document by Jarrett et al., with the survey measuring several parameters. The standard aperture ellipse (area of detection) is defined by the infrared isophote at the Ks band (roughly 2.2 μm wavelength) of 20 mag/arcsec2. Gathering the overall luminous flux of the galaxy has been employed by at least four methods: the first being a circular aperture extending 7 arcseconds from the center, an isophote at 20 mag/arcsec2, a "total" aperture defined by the radial light distribution that covers the supposed extent of the galaxy, and the Kron aperture (defined as 2.5 times the first-moment radius, an integration of the flux of the "total" aperture). Larger-scale structures Deep-sky surveys show that galaxies are often found in groups and clusters. Solitary galaxies that have not significantly interacted with other galaxies of comparable mass in the past few billion years are relatively scarce. Only about 5% of the galaxies surveyed are isolated in this sense. However, they may have interacted and even merged with other galaxies in the past, and may still be orbited by smaller satellite galaxies. On the largest scale, the universe is continually expanding, resulting in an average increase in the separation between individual galaxies (see Hubble's law). Associations of galaxies can overcome this expansion on a local scale through their mutual gravitational attraction. These associations formed early, as clumps of dark matter pulled their respective galaxies together. Nearby groups later merged to form larger-scale clusters. This ongoing merging process, as well as an influx of infalling gas, heats the intergalactic gas in a cluster to very high temperatures of 30–100 megakelvins. About 70–80% of a cluster's mass is in the form of dark matter, with 10–30% consisting of this heated gas and the remaining few percent in the form of galaxies. Most galaxies are gravitationally bound to a number of other galaxies. These form a fractal-like hierarchical distribution of clustered structures, with the smallest such associations being termed groups. A group of galaxies is the most common type of galactic cluster; these formations contain the majority of galaxies (as well as most of the baryonic mass) in the universe. To remain gravitationally bound to such a group, each member galaxy must have a sufficiently low velocity to prevent it from escaping (see Virial theorem). If there is insufficient kinetic energy, however, the group may evolve into a smaller number of galaxies through mergers. Clusters of galaxies consist of hundreds to thousands of galaxies bound together by gravity. Clusters of galaxies are often dominated by a single giant elliptical galaxy, known as the brightest cluster galaxy, which, over time, tidally destroys its satellite galaxies and adds their mass to its own. Superclusters contain tens of thousands of galaxies, which are found in clusters, groups and sometimes individually. At the supercluster scale, galaxies are arranged into sheets and filaments surrounding vast empty voids. 
Above this scale, the universe appears to be the same in all directions (isotropic and homogeneous), though this notion has been challenged in recent years by numerous findings of large-scale structures that appear to exceed this scale. The Hercules–Corona Borealis Great Wall, currently the largest structure in the universe found so far, is 10 billion light-years (three gigaparsecs) in length. The Milky Way galaxy is a member of an association named the Local Group, a relatively small group of galaxies that has a diameter of approximately one megaparsec. The Milky Way and the Andromeda Galaxy are the two brightest galaxies within the group; many of the other member galaxies are dwarf companions of these two. The Local Group itself is a part of a cloud-like structure within the Virgo Supercluster, a large, extended structure of groups and clusters of galaxies centered on the Virgo Cluster. In turn, the Virgo Supercluster is a portion of the Laniakea Supercluster. Magnetic fields Galaxies have magnetic fields of their own. A galaxy's magnetic field influences its dynamics in multiple ways, including affecting the formation of spiral arms and transporting angular momentum in gas clouds. The latter effect is particularly important, as it is a necessary factor for the gravitational collapse of those clouds, and thus for star formation. The typical average equipartition strength for spiral galaxies is about 10 μG (microgauss) or 1 nT (nanotesla). By comparison, the Earth's magnetic field has an average strength of about 0.3 G (Gauss) or 30 μT (microtesla). Radio-faint galaxies like M 31 and M33, the Milky Way's neighbors, have weaker fields (about 5 μG), while gas-rich galaxies with high star-formation rates, like M 51, M 83 and NGC 6946, have 15 μG on average. In prominent spiral arms, the field strength can be up to 25 μG, in regions where cold gas and dust are also concentrated. The strongest total equipartition fields (50–100 μG) were found in starburst galaxies—for example, in M 82 and the Antennae; and in nuclear starburst regions, such as the centers of NGC 1097 and other barred galaxies. Formation and evolution Formation Current models of the formation of galaxies in the early universe are based on the ΛCDM model. About 300,000 years after the Big Bang, atoms of hydrogen and helium began to form, in an event called recombination. Nearly all the hydrogen was neutral (non-ionized) and readily absorbed light, and no stars had yet formed. As a result, this period has been called the "dark ages". It was from density fluctuations (or anisotropic irregularities) in this primordial matter that larger structures began to appear. As a result, masses of baryonic matter started to condense within cold dark matter halos. These primordial structures allowed gases to condense into protogalaxies, large-scale gas clouds that were precursors to the first galaxies. As gas falls into the gravity of the dark matter halos, its pressure and temperature rise. To condense further, the gas must radiate energy. This process was slow in the early universe dominated by hydrogen atoms and molecules, which are inefficient radiators compared to heavier elements. As clumps of gas aggregate, forming rotating disks, temperatures and pressures continue to increase. Some places within the disk reach high enough density to form stars. Once protogalaxies began to form and contract, the first halo stars, called Population III stars, appeared within them.
These were composed of primordial gas, almost entirely of hydrogen and helium. Emission from the first stars heats the remaining gas, helping to trigger additional star formation; the ultraviolet light emission from the first generation of stars re-ionized the surrounding neutral hydrogen in expanding spheres eventually reaching the entire universe, an event called reionization. The most massive stars collapse in violent supernova explosions, releasing heavy elements ("metals") into the interstellar medium. This metal content is incorporated into Population II stars. Theoretical models for early galaxy formation have been verified and informed by a large number and variety of sophisticated astronomical observations. The photometric observations generally need spectroscopic confirmation due to the large number of mechanisms that can introduce systematic errors. For example, a high redshift (z ~ 16) photometric observation by James Webb Space Telescope (JWST) was later corrected to be closer to z ~ 5. Nevertheless, confirmed observations from the JWST and other observatories are accumulating, allowing systematic comparison of early galaxies to predictions of theory. Evidence for individual Population III stars in early galaxies is even more challenging to obtain. Even seemingly confirmed spectroscopic evidence may turn out to have other origins. For example, astronomers reported HeII emission evidence for Population III stars in the Cosmos Redshift 7 galaxy, with a redshift value of 6.60. Subsequent observations found metallic emission lines, OIII, inconsistent with an early-galaxy star. Evolution Once stars begin to form, emit radiation, and in some cases explode, the process of galaxy formation becomes very complex, involving interactions between the forces of gravity, radiation, and thermal energy. Many details are still poorly understood. Within a billion years of a galaxy's formation, key structures begin to appear. Globular clusters, the central supermassive black hole, and a galactic bulge of metal-poor Population II stars form. The creation of a supermassive black hole appears to play a key role in actively regulating the growth of galaxies by limiting the total amount of additional matter added. During this early epoch, galaxies undergo a major burst of star formation. During the following two billion years, the accumulated matter settles into a galactic disc. A galaxy will continue to absorb infalling material from high-velocity clouds and dwarf galaxies throughout its life. This matter is mostly hydrogen and helium. The cycle of stellar birth and death slowly increases the abundance of heavy elements, eventually allowing the formation of planets. Star formation rates in galaxies depend upon their local environment. Isolated 'void' galaxies have the highest rate per stellar mass, with 'field' galaxies associated with spiral galaxies having lower rates and galaxies in dense clusters having the lowest rates. The evolution of galaxies can be significantly affected by interactions and collisions. Mergers of galaxies were common during the early epoch, and the majority of galaxies were peculiar in morphology. Given the distances between the stars, the great majority of stellar systems in colliding galaxies will be unaffected. However, gravitational stripping of the interstellar gas and dust that makes up the spiral arms produces a long train of stars known as tidal tails. Examples of these formations can be seen in NGC 4676 or the Antennae Galaxies.
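The next paragraph quotes a Milky Way–Andromeda collision timescale of about five to six billion years. As a rough editorial check (assuming a constant closing speed, a straight-line approach, and the separation of about 2.5 million light-years quoted earlier in the article), the numbers are consistent:

```python
# Rough editorial check of the collision timescale quoted in the next paragraph,
# assuming constant closing speed and straight-line motion (a deliberate simplification).
KM_PER_LY = 9.461e12        # kilometres in one light-year
SECONDS_PER_YEAR = 3.156e7

distance_ly = 2.5e6         # Milky Way - Andromeda separation (~2.5 million ly)
closing_speed_km_s = 130    # approach speed quoted below, in km/s

time_s = distance_ly * KM_PER_LY / closing_speed_km_s
print(time_s / SECONDS_PER_YEAR / 1e9)   # ≈ 5.8, i.e. roughly 5.8 billion years
```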
The Milky Way galaxy and the nearby Andromeda Galaxy are moving toward each other at about 130 km/s, and—depending upon the lateral movements—the two might collide in about five to six billion years. Although the Milky Way has never collided with a galaxy as large as Andromeda before, it has collided and merged with other galaxies in the past. Cosmological simulations indicate that, 11 billion years ago, it merged with a particularly large galaxy that has been labeled the Kraken. Such large-scale interactions are rare. As time passes, mergers of two systems of equal size become less common. Most bright galaxies have remained fundamentally unchanged for the last few billion years, and the net rate of star formation probably also peaked about ten billion years ago. Future trends Spiral galaxies, like the Milky Way, produce new generations of stars as long as they have dense molecular clouds of interstellar hydrogen in their spiral arms. Elliptical galaxies are largely devoid of this gas, and so form few new stars. The supply of star-forming material is finite; once stars have converted the available supply of hydrogen into heavier elements, new star formation will come to an end. The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (1013–1014 years), as the smallest, longest-lived stars in the visible universe, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions. Gallery See also Bright early galaxies Dark galaxy Galactic orientation Galaxy formation and evolution Illustris project List of galaxies List of the most distant astronomical objects List of nearest galaxies List of largest galaxies Low surface brightness galaxy Outline of galaxies Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure Notes References Bibliography External links NASA/IPAC Extragalactic Database (NED) NED Redshift-Independent Distances An Atlas of The Universe Galaxies – Information and amateur observations Galaxy Zoo – citizen science galaxy classification project "A Flight Through the Universe, by the Sloan Digital Sky Survey" – animated video from Berkeley Lab Concepts in astronomy Articles containing video clips
Galaxy
[ "Physics", "Astronomy" ]
10,515
[ "Concepts in astronomy", "Galaxies", "Astronomical objects" ]
12,581
https://en.wikipedia.org/wiki/Glass
Glass is an amorphous (non-crystalline) solid. Because it is often transparent and chemically inert, glass has found widespread practical, technological, and decorative use in window panes, tableware, and optics. Some common objects made of glass are named after the material, e.g., a "glass" for drinking, "glasses" for vision correction, and a "magnifying glass". Glass is most often formed by rapid cooling (quenching) of the molten form. Some glasses such as volcanic glass are naturally occurring, and obsidian has been used to make arrowheads and knives since the Stone Age. Archaeological evidence suggests glassmaking dates back to at least 3600 BC in Mesopotamia, Egypt, or Syria. The earliest known glass objects were beads, perhaps created accidentally during metalworking or the production of faience, which is a form of pottery using lead glazes. Due to its ease of formability into any shape, glass has been traditionally used for vessels, such as bowls, vases, bottles, jars and drinking glasses. Soda–lime glass, containing around 70% silica, accounts for around 90% of modern manufactured glass. Glass can be coloured by adding metal salts or painted and printed with vitreous enamels, leading to its use in stained glass windows and other glass art objects. The refractive, reflective and transmission properties of glass make glass suitable for manufacturing optical lenses, prisms, and optoelectronics materials. Extruded glass fibres have applications as optical fibres in communications networks, thermal insulating material when matted as glass wool to trap air, or in glass-fibre reinforced plastic (fibreglass). Microscopic structure The standard definition of a glass (or vitreous solid) is a non-crystalline solid formed by rapid melt quenching. However, the term "glass" is often defined in a broader sense, to describe any non-crystalline (amorphous) solid that exhibits a glass transition when heated towards the liquid state. Glass is an amorphous solid. Although the atomic-scale structure of glass shares characteristics of the structure of a supercooled liquid, glass exhibits all the mechanical properties of a solid. As in other amorphous solids, the atomic structure of a glass lacks the long-range periodicity observed in crystalline solids. Due to chemical bonding constraints, glasses do possess a high degree of short-range order with respect to local atomic polyhedra. The notion that glass flows to an appreciable extent over extended periods well below the glass transition temperature is not supported by empirical research or theoretical analysis (see viscosity in solids). Though atomic motion at glass surfaces can be observed, and viscosity on the order of 10^17–10^18 Pa s can be measured in glass, such a high value reinforces the fact that glass would not change shape appreciably over even large periods of time. Formation from a supercooled liquid For melt quenching, if the cooling is sufficiently rapid (relative to the characteristic crystallization time) then crystallization is prevented and instead, the disordered atomic configuration of the supercooled liquid is frozen into the solid state at Tg. The tendency for a material to form a glass while quenched is called glass-forming ability. This ability can be predicted by the rigidity theory. Generally, a glass exists in a structurally metastable state with respect to its crystalline form, although in certain circumstances, for example in atactic polymers, there is no crystalline analogue of the amorphous phase.
Glass is sometimes considered to be a liquid due to its lack of a first-order phase transition where certain thermodynamic variables such as volume, entropy and enthalpy are discontinuous through the glass transition range. The glass transition may be described as analogous to a second-order phase transition where the intensive thermodynamic variables such as the thermal expansivity and heat capacity are discontinuous. However, the equilibrium theory of phase transformations does not hold for glass, and hence the glass transition cannot be classed as one of the classical equilibrium phase transformations in solids. Occurrence in nature Glass can form naturally from volcanic magma. Obsidian is a common volcanic glass with high silica (SiO2) content formed when felsic lava extruded from a volcano cools rapidly. Impactite is a form of glass formed by the impact of a meteorite, where Moldavite (found in central and eastern Europe), and Libyan desert glass (found in areas in the eastern Sahara, the deserts of eastern Libya and western Egypt) are notable examples. Vitrification of quartz can also occur when lightning strikes sand, forming hollow, branching rootlike structures called fulgurites. Trinitite is a glassy residue formed from the desert floor sand at the Trinity nuclear bomb test site. Edeowie glass, found in South Australia, is proposed to originate from Pleistocene grassland fires, lightning strikes, or hypervelocity impact by one or several asteroids or comets. History Naturally occurring obsidian glass was used by Stone Age societies as it fractures along very sharp edges, making it ideal for cutting tools and weapons. Glassmaking dates back at least 6000 years, long before humans had discovered how to smelt iron. Archaeological evidence suggests that the first true synthetic glass was made in Lebanon and the coastal north Syria, Mesopotamia or ancient Egypt. The earliest known glass objects, of the mid-third millennium BC, were beads, perhaps initially created as accidental by-products of metalworking (slags) or during the production of faience, a pre-glass vitreous material made by a process similar to glazing. Early glass was rarely transparent and often contained impurities and imperfections, and is technically faience rather than true glass, which did not appear until the 15th century BC. However, red-orange glass beads excavated from the Indus Valley Civilization dated before 1700 BC (possibly as early as 1900 BC) predate sustained glass production, which appeared around 1600 BC in Mesopotamia and 1500 BC in Egypt. During the Late Bronze Age, there was a rapid growth in glassmaking technology in Egypt and Western Asia. Archaeological finds from this period include coloured glass ingots, vessels, and beads. Much early glass production relied on grinding techniques borrowed from stoneworking, such as grinding and carving glass in a cold state. The term glass has its origins in the late Roman Empire, in the Roman glass making centre at Trier (located in current-day Germany) where the late-Latin term glesum originated, likely from a Germanic word for a transparent, lustrous substance. Glass objects have been recovered across the Roman Empire in domestic, funerary, and industrial contexts, as well as trade items in marketplaces in distant provinces. Examples of Roman glass have been found outside of the former Roman Empire in China, the Baltics, the Middle East, and India. 
The Romans perfected cameo glass, produced by etching and carving through fused layers of different colours to produce a design in relief on the glass object. In post-classical West Africa, Benin was a manufacturer of glass and glass beads. Glass was used extensively in Europe during the Middle Ages. Anglo-Saxon glass has been found across England during archaeological excavations of both settlement and cemetery sites. From the 10th century onwards, glass was employed in stained glass windows of churches and cathedrals, with famous examples at Chartres Cathedral and the Basilica of Saint-Denis. By the 14th century, architects were designing buildings with walls of stained glass such as Sainte-Chapelle, Paris, (1203–1248) and the East end of Gloucester Cathedral. With the change in architectural style during the Renaissance period in Europe, the use of large stained glass windows became much less prevalent, although stained glass had a major revival with Gothic Revival architecture in the 19th century. During the 13th century, the island of Murano, Venice, became a centre for glass making, building on medieval techniques to produce colourful ornamental pieces in large quantities. Murano glass makers developed the exceptionally clear colourless glass cristallo, so called for its resemblance to natural crystal, which was extensively used for windows, mirrors, ships' lanterns, and lenses. In the 13th, 14th, and 15th centuries, enamelling and gilding on glass vessels were perfected in Egypt and Syria. Towards the end of the 17th century, Bohemia became an important region for glass production, remaining so until the start of the 20th century. By the 17th century, glass in the Venetian tradition was also being produced in England. In about 1675, George Ravenscroft invented lead crystal glass, with cut glass becoming fashionable in the 18th century. Ornamental glass objects became an important art medium during the Art Nouveau period in the late 19th century. Throughout the 20th century, new mass production techniques led to the widespread availability of glass in much larger amounts, making it practical as a building material and enabling new applications of glass. In the 1920s a mould-etch process was developed, in which art was etched directly into the mould so that each cast piece emerged from the mould with the image already on the surface of the glass. This reduced manufacturing costs and, combined with a wider use of coloured glass, led to cheap glassware in the 1930s, which later became known as Depression glass. In the 1950s, Pilkington Bros., England, developed the float glass process, producing high-quality distortion-free flat sheets of glass by floating on molten tin. Modern multi-story buildings are frequently constructed with curtain walls made almost entirely of glass. Laminated glass has been widely applied to vehicles for windscreens. Optical glass for spectacles has been used since the Middle Ages. The production of lenses has become increasingly proficient, aiding astronomers as well as having other applications in medicine and science. Glass is also employed as the aperture cover in many solar energy collectors. In the 21st century, glass manufacturers have developed different brands of chemically strengthened glass for widespread application in touchscreens for smartphones, tablet computers, and many other types of information appliances. These include Gorilla Glass, developed and manufactured by Corning, AGC Inc.'s Dragontrail and Schott AG's Xensation. 
Physical properties Optical Glass is in widespread use in optical systems due to its ability to refract, reflect, and transmit light following geometrical optics. The most common and oldest applications of glass in optics are as lenses, windows, mirrors, and prisms. The key optical properties refractive index, dispersion, and transmission, of glass are strongly dependent on chemical composition and, to a lesser degree, its thermal history. Optical glass typically has a refractive index of 1.4 to 2.4, and an Abbe number (which characterises dispersion) of 15 to 100. The refractive index may be modified by high-density (refractive index increases) or low-density (refractive index decreases) additives. Glass transparency results from the absence of grain boundaries which diffusely scatter light in polycrystalline materials. Semi-opacity due to crystallization may be induced in many glasses by maintaining them for a long period at a temperature just insufficient to cause fusion. In this way, the crystalline, devitrified material, known as Réaumur's glass porcelain is produced. Although generally transparent to visible light, glasses may be opaque to other wavelengths of light. While silicate glasses are generally opaque to infrared wavelengths with a transmission cut-off at 4 μm, heavy-metal fluoride and chalcogenide glasses are transparent to infrared wavelengths of 7 to 18 μm. The addition of metallic oxides results in different coloured glasses as the metallic ions will absorb wavelengths of light corresponding to specific colours. Other In the manufacturing process, glasses can be poured, formed, extruded and moulded into forms ranging from flat sheets to highly intricate shapes. The finished product is brittle but can be laminated or tempered to enhance durability. Glass is typically inert, resistant to chemical attack, and can mostly withstand the action of water, making it an ideal material for the manufacture of containers for foodstuffs and most chemicals. Nevertheless, although usually highly resistant to chemical attack, glass will corrode or dissolve under some conditions. The materials that make up a particular glass composition affect how quickly the glass corrodes. Glasses containing a high proportion of alkali or alkaline earth elements are more susceptible to corrosion than other glass compositions. The density of glass varies with chemical composition with values ranging from for fused silica to for dense flint glass. Glass is stronger than most metals, with a theoretical tensile strength for pure, flawless glass estimated at due to its ability to undergo reversible compression without fracture. However, the presence of scratches, bubbles, and other microscopic flaws lead to a typical range of in most commercial glasses. Several processes such as toughening can increase the strength of glass. Carefully drawn flawless glass fibres can be produced with a strength of up to . Reputed flow The observation that old windows are sometimes found to be thicker at the bottom than at the top is often offered as supporting evidence for the view that glass flows over a timescale of centuries, the assumption being that the glass has exhibited the liquid property of flowing from one shape to another. This assumption is incorrect, as once solidified, glass stops flowing. 
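Before returning to the question of glass flow, the optical constants quoted earlier in this section (refractive index and Abbe number) can be made concrete with a short calculation. The sketch below computes the Abbe number from refractive indices at three standard wavelengths; the numerical values are representative of a common borosilicate crown glass and are assumptions for illustration, not figures taken from this article.

```python
# Abbe number V_d = (n_d - 1) / (n_F - n_C), a standard measure of dispersion.
# n_d, n_F, n_C are refractive indices at the helium d-line (587.6 nm),
# the hydrogen F-line (486.1 nm) and the hydrogen C-line (656.3 nm).

def abbe_number(n_d: float, n_F: float, n_C: float) -> float:
    """Return the Abbe number (higher value = lower dispersion)."""
    return (n_d - 1.0) / (n_F - n_C)

# Assumed, illustrative indices for a typical borosilicate crown glass:
n_d, n_F, n_C = 1.5168, 1.5224, 1.5143

print(f"refractive index n_d = {n_d}")
print(f"Abbe number V_d ≈ {abbe_number(n_d, n_F, n_C):.1f}")  # roughly 64, i.e. low dispersion
```

A dense flint glass would show a higher refractive index and a much lower Abbe number, i.e. stronger dispersion, which is one reason crown and flint elements are combined in achromatic lenses.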
The sags and ripples observed in old glass were already there the day it was made; manufacturing processes used in the past produced sheets with imperfect surfaces and non-uniform thickness (the near-perfect float glass used today only became widespread in the 1960s). A 2017 study computed the rate of flow of the medieval glass used in Westminster Abbey from the year 1268. The study found that the room temperature viscosity of this glass was roughly 10^24 Pa·s, which is about 10^16 times less viscous than a previous estimate made in 1998, which focused on soda-lime silicate glass. Even with this lower viscosity, the study authors calculated that the maximum flow rate of medieval glass is 1 nm per billion years, making it impossible to observe on a human timescale. Types Silicate glasses Silicon dioxide (SiO2) is a common fundamental constituent of glass. Fused quartz is a glass made from chemically pure silica. It has very low thermal expansion and excellent resistance to thermal shock, being able to survive immersion in water while red hot; it resists high temperatures (1000–1500 °C) and chemical weathering, and is very hard. It is also transparent to a wider spectral range than ordinary glass, extending from the visible further into both the UV and IR ranges, and is sometimes used where transparency to these wavelengths is necessary. Fused quartz is used for high-temperature applications such as furnace tubes, lighting tubes, melting crucibles, etc. However, its high melting temperature (1723 °C) and viscosity make it difficult to work with. Therefore, normally, other substances (fluxes) are added to lower the melting temperature and simplify glass processing. Soda–lime glass Sodium carbonate (Na2CO3, "soda") is a common additive and acts to lower the glass-transition temperature. However, sodium silicate is water-soluble, so lime (CaO, calcium oxide, generally obtained from limestone), along with magnesium oxide (MgO) and aluminium oxide (Al2O3), are commonly added to improve chemical durability. Soda–lime glasses, containing soda (Na2O), lime (CaO), magnesia (MgO), and alumina (Al2O3), account for over 75% of manufactured glass, containing about 70 to 74% silica by weight. Soda–lime–silicate glass is transparent, easily formed, and most suitable for window glass and tableware. However, it has a high thermal expansion and poor resistance to heat. Soda–lime glass is typically used for windows, bottles, light bulbs, and jars. Borosilicate glass Borosilicate glasses (e.g. Pyrex, Duran) typically contain 5–13% boron trioxide (B2O3). Borosilicate glasses have fairly low coefficients of thermal expansion (7740 Pyrex CTE is 3.25×10^−6/°C as compared to about 9×10^−6/°C for a typical soda–lime glass). They are, therefore, less subject to stress caused by thermal expansion and thus less vulnerable to cracking from thermal shock. They are commonly used for e.g. labware, household cookware, and sealed beam car head lamps. Lead glass The addition of lead(II) oxide into silicate glass lowers the melting point and viscosity of the melt. The high density of lead glass (silica + lead oxide (PbO) + potassium oxide (K2O) + soda (Na2O) + zinc oxide (ZnO) + alumina) results in a high electron density, and hence high refractive index, making the look of glassware more brilliant and causing noticeably more specular reflection and increased optical dispersion. Lead glass has a high elasticity, making the glassware more workable and giving rise to a clear "ring" sound when struck. However, lead glass cannot withstand high temperatures well. 
Lead oxide also facilitates the solubility of other metal oxides and is used in coloured glass. The viscosity decrease of lead glass melt is very significant (roughly 100 times in comparison with soda glass); this allows easier removal of bubbles and working at lower temperatures, hence its frequent use as an additive in vitreous enamels and glass solders. The large ionic radius of the Pb2+ ion renders it highly immobile and hinders the movement of other ions; lead glasses therefore have high electrical resistance, about two orders of magnitude higher than soda–lime glass (10^8.5 vs 10^6.5 Ω⋅cm, DC at 250 °C). Aluminosilicate glass Aluminosilicate glass typically contains 5–10% alumina (Al2O3). Aluminosilicate glass tends to be more difficult to melt and shape compared to borosilicate compositions but has excellent thermal resistance and durability. Aluminosilicate glass is extensively used for fibreglass, used for making glass-reinforced plastics (boats, fishing rods, etc.), top-of-stove cookware, and halogen bulb glass. Other oxide additives The addition of barium also increases the refractive index. Thorium oxide gives glass a high refractive index and low dispersion and was formerly used in producing high-quality lenses, but due to its radioactivity has been replaced by lanthanum oxide in modern eyeglasses. Iron can be incorporated into glass to absorb infrared radiation, for example in heat-absorbing filters for movie projectors, while cerium(IV) oxide can be used for glass that absorbs ultraviolet wavelengths. Fluorine lowers the dielectric constant of glass. Fluorine is highly electronegative and lowers the polarizability of the material. Fluoride silicate glasses are used in the manufacture of integrated circuits as an insulator. Glass-ceramics Glass-ceramic materials contain both non-crystalline glass and crystalline ceramic phases. They are formed by controlled nucleation and partial crystallisation of a base glass by heat treatment. Crystalline grains are often embedded within a non-crystalline intergranular phase of grain boundaries. Glass-ceramics exhibit advantageous thermal, chemical, biological, and dielectric properties as compared to metals or organic polymers. The most commercially important property of glass-ceramics is their imperviousness to thermal shock. Thus, glass-ceramics have become extremely useful for countertop cooking and industrial processes. The negative thermal expansion coefficient (CTE) of the crystalline ceramic phase can be balanced with the positive CTE of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net CTE near zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C. Fibreglass Fibreglass (also called glass fibre reinforced plastic, GRP) is a composite material made by reinforcing a plastic resin with glass fibres. It is made by melting glass and stretching the glass into fibres. These fibres are woven together into a cloth and left to set in a plastic resin. Fibreglass is lightweight and corrosion resistant and is a good insulator, enabling its use as building insulation material and in electronic housings for consumer products. Fibreglass was originally used in the United Kingdom and United States during World War II to manufacture radomes. Uses of fibreglass include building and construction materials, boat hulls, car body parts, and aerospace composite materials. 
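The statement above that a glass-ceramic reaches a net CTE near zero at roughly 70% crystallinity can be sketched with a simple rule-of-mixtures estimate. The expansion coefficients below are illustrative assumptions chosen only to show the arithmetic; they are not data for any particular commercial glass-ceramic.

```python
# Rule-of-mixtures estimate of the net coefficient of thermal expansion (CTE)
# of a glass-ceramic: net_cte = x * cte_crystal + (1 - x) * cte_glass,
# where x is the crystalline volume fraction.

def net_cte(x: float, cte_crystal: float, cte_glass: float) -> float:
    return x * cte_crystal + (1.0 - x) * cte_glass

def zero_cte_fraction(cte_crystal: float, cte_glass: float) -> float:
    """Crystalline fraction at which the net CTE crosses zero."""
    return cte_glass / (cte_glass - cte_crystal)

# Assumed illustrative values (per °C): a negative-CTE crystalline phase
# balanced against a positive-CTE residual glassy phase.
cte_crystal = -3.0e-6
cte_glass = +7.0e-6

x_zero = zero_cte_fraction(cte_crystal, cte_glass)
print(f"net CTE reaches zero at about {x_zero:.0%} crystalline")               # ~70%
print(f"net CTE at 50% crystalline: {net_cte(0.5, cte_crystal, cte_glass):.1e} /°C")
```

With these assumed coefficients the zero crossing falls at 70% crystalline, matching the order of magnitude quoted above; the actual fraction for a real product depends on the specific crystal and glass phases involved.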
Glass-fibre wool is an excellent thermal and sound insulation material, commonly used in buildings (e.g. attic and cavity wall insulation), and plumbing (e.g. pipe insulation), and soundproofing. It is produced by forcing molten glass through a fine mesh by centripetal force and breaking the extruded glass fibres into short lengths using a stream of high-velocity air. The fibres are bonded with an adhesive spray and the resulting wool mat is cut and packed in rolls or panels. Non-silicate glasses Besides common silica-based glasses many other inorganic and organic materials may also form glasses, including metals, aluminates, phosphates, borates, chalcogenides, fluorides, germanates (glasses based on GeO2), tellurites (glasses based on TeO2), antimonates (glasses based on Sb2O3), arsenates (glasses based on As2O3), titanates (glasses based on TiO2), tantalates (glasses based on Ta2O5), nitrates, carbonates, plastics, acrylic, and many other substances. Some of these glasses (e.g. Germanium dioxide (GeO2, Germania), in many respects a structural analogue of silica, fluoride, aluminate, phosphate, borate, and chalcogenide glasses) have physicochemical properties useful for their application in fibre-optic waveguides in communication networks and other specialised technological applications. Silica-free glasses may often have poor glass-forming tendencies. Novel techniques, including containerless processing by aerodynamic levitation (cooling the melt whilst it floats on a gas stream) or splat quenching (pressing the melt between two metal anvils or rollers), may be used to increase the cooling rate or to reduce crystal nucleation triggers. Amorphous metals In the past, small batches of amorphous metals with high surface area configurations (ribbons, wires, films, etc.) have been produced through the implementation of extremely rapid rates of cooling. Amorphous metal wires have been produced by sputtering molten metal onto a spinning metal disk. Several alloys have been produced in layers with thicknesses exceeding 1 millimetre. These are known as bulk metallic glasses (BMG). Liquidmetal Technologies sells several zirconium-based BMGs. Batches of amorphous steel have also been produced that demonstrate mechanical properties far exceeding those found in conventional steel alloys. Experimental evidence indicates that the system Al-Fe-Si may undergo a first-order transition to an amorphous form (dubbed "q-glass") on rapid cooling from the melt. Transmission electron microscopy (TEM) images indicate that q-glass nucleates from the melt as discrete particles with uniform spherical growth in all directions. While x-ray diffraction reveals the isotropic nature of q-glass, a nucleation barrier exists implying an interfacial discontinuity (or internal surface) between the glass and melt phases. Polymers Important polymer glasses include amorphous and glassy pharmaceutical compounds. These are useful because the solubility of the compound is greatly increased when it is amorphous compared to the same crystalline composition. Many emerging pharmaceuticals are practically insoluble in their crystalline forms. Many polymer thermoplastics familiar to everyday use are glasses. For many applications, like glass bottles or eyewear, polymer glasses (acrylic glass, polycarbonate or polyethylene terephthalate) are a lighter alternative to traditional glass. 
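To put a number on the claim above that polymer glasses are a lighter alternative to traditional glass for glazing, the sketch below compares the mass of an identical pane in soda-lime glass and in acrylic. The densities are typical handbook values assumed for illustration, not figures from this article.

```python
# Rough mass comparison of a window pane in soda-lime glass versus acrylic
# (poly(methyl methacrylate), "acrylic glass"). Densities are assumed typical values.

RHO_SODA_LIME = 2500.0   # kg/m^3, typical soda-lime glass
RHO_ACRYLIC = 1180.0     # kg/m^3, typical PMMA

def pane_mass(width_m: float, height_m: float, thickness_m: float, rho: float) -> float:
    """Mass of a rectangular pane of the given dimensions and density."""
    return width_m * height_m * thickness_m * rho

w, h, t = 1.0, 1.0, 0.004   # a 1 m x 1 m pane, 4 mm thick
print(f"soda-lime pane: {pane_mass(w, h, t, RHO_SODA_LIME):.1f} kg")  # ~10 kg
print(f"acrylic pane:   {pane_mass(w, h, t, RHO_ACRYLIC):.1f} kg")    # ~4.7 kg
```

For equal thickness the polymer pane comes out at less than half the mass, which is the practical sense in which it is "lighter", although stiffness, scratch resistance, and optical quality differ as well.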
Molecular liquids and molten salts Molecular liquids, electrolytes, molten salts, and aqueous solutions are mixtures of different molecules or ions that do not form a covalent network but interact only through weak van der Waals forces or transient hydrogen bonds. In a mixture of three or more ionic species of dissimilar size and shape, crystallization can be so difficult that the liquid can easily be supercooled into a glass. Examples include LiCl:RH2O (a solution of lithium chloride salt and water molecules) in the composition range 4<R<8. sugar glass, or Ca0.4K0.6(NO3)1.4. Glass electrolytes in the form of Ba-doped Li-glass and Ba-doped Na-glass have been proposed as solutions to problems identified with organic liquid electrolytes used in modern lithium-ion battery cells. Production Following the glass batch preparation and mixing, the raw materials are transported to the furnace. Soda–lime glass for mass production is melted in glass-melting furnaces. Smaller-scale furnaces for speciality glasses include electric melters, pot furnaces, and day tanks. After melting, homogenization and refining (removal of bubbles), the glass is formed. This may be achieved manually by glassblowing, which involves gathering a mass of hot semi-molten glass, inflating it into a bubble using a hollow blowpipe, and forming it into the required shape by blowing, swinging, rolling, or moulding. While hot, the glass can be worked using hand tools, cut with shears, and additional parts such as handles or feet attached by welding. Flat glass for windows and similar applications is formed by the float glass process, developed between 1953 and 1957 by Sir Alastair Pilkington and Kenneth Bickerstaff of the UK's Pilkington Brothers, who created a continuous ribbon of glass using a molten tin bath on which the molten glass flows unhindered under the influence of gravity. The top surface of the glass is subjected to nitrogen under pressure to obtain a polished finish. Container glass for common bottles and jars is formed by blowing and pressing methods. This glass is often slightly modified chemically (with more alumina and calcium oxide) for greater water resistance. Once the desired form is obtained, glass is usually annealed for the removal of stresses and to increase the glass's hardness and durability. Surface treatments, coatings or lamination may follow to improve the chemical durability (glass container coatings, glass container internal treatment), strength (toughened glass, bulletproof glass, windshields), or optical properties (insulated glazing, anti-reflective coating). New chemical glass compositions or new treatment techniques can be initially investigated in small-scale laboratory experiments. The raw materials for laboratory-scale glass melts are often different from those used in mass production because the cost factor has a low priority. In the laboratory mostly pure chemicals are used. Care must be taken that the raw materials have not reacted with moisture or other chemicals in the environment (such as alkali or alkaline earth metal oxides and hydroxides, or boron oxide), or that the impurities are quantified (loss on ignition). Evaporation losses during glass melting should be considered during the selection of the raw materials, e.g., sodium selenite may be preferred over easily evaporating selenium dioxide (SeO2). Also, more readily reacting raw materials may be preferred over relatively inert ones, such as aluminium hydroxide (Al(OH)3) over alumina (Al2O3). 
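The batch preparation step mentioned at the start of the production section can be illustrated with a small calculation that converts a simplified target oxide composition into raw-material masses, allowing for the CO2 lost when carbonate raw materials decompose in the melt. The target composition and the choice of sand, soda ash, and limestone as the only raw materials are simplifying assumptions; real batches include cullet, fining agents, minor oxides, and measured loss-on-ignition corrections.

```python
# Convert a simplified target soda-lime oxide composition (mass fractions) into
# raw-material masses for a 100 kg batch of glass, assuming:
#   SiO2 from sand, Na2O from soda ash (Na2CO3), CaO from limestone (CaCO3).
# Carbonates lose CO2 on melting, so more raw material is needed per kg of oxide.

MOLAR_MASS = {"Na2CO3": 105.99, "Na2O": 61.98, "CaCO3": 100.09, "CaO": 56.08}

target_oxides = {"SiO2": 0.74, "Na2O": 0.16, "CaO": 0.10}   # assumed composition
glass_mass_kg = 100.0

sand = glass_mass_kg * target_oxides["SiO2"]
soda_ash = glass_mass_kg * target_oxides["Na2O"] * MOLAR_MASS["Na2CO3"] / MOLAR_MASS["Na2O"]
limestone = glass_mass_kg * target_oxides["CaO"] * MOLAR_MASS["CaCO3"] / MOLAR_MASS["CaO"]

batch = sand + soda_ash + limestone
print(f"sand:      {sand:.1f} kg")        # 74.0 kg
print(f"soda ash:  {soda_ash:.1f} kg")    # ~27.4 kg
print(f"limestone: {limestone:.1f} kg")   # ~17.9 kg
print(f"total batch {batch:.1f} kg, of which {batch - glass_mass_kg:.1f} kg leaves as CO2")
```

The same bookkeeping is why laboratory melts quantify loss on ignition: the batch weighed in is noticeably heavier than the glass that comes out.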
Usually, the melts are carried out in platinum crucibles to reduce contamination from the crucible material. Glass homogeneity is achieved by homogenizing the raw materials mixture (glass batch), stirring the melt, and crushing and re-melting the first melt. The obtained glass is usually annealed to prevent breakage during processing. Colour Colour in glass may be obtained by addition of homogenously distributed electrically charged ions (or colour centres). While ordinary soda–lime glass appears colourless in thin section, iron(II) oxide (FeO) impurities produce a green tint in thick sections. Manganese dioxide (MnO2), which gives glass a purple colour, may be added to remove the green tint given by FeO. FeO and chromium(III) oxide (Cr2O3) additives are used in the production of green bottles. Iron (III) oxide, on the other-hand, produces yellow or yellow-brown glass. Low concentrations (0.025 to 0.1%) of cobalt oxide (CoO) produce rich, deep blue cobalt glass. Chromium is a very powerful colouring agent, yielding dark green. Sulphur combined with carbon and iron salts produces amber glass ranging from yellowish to almost black. A glass melt can also acquire an amber colour from a reducing combustion atmosphere. Cadmium sulfide produces imperial red, and combined with selenium can produce shades of yellow, orange, and red. Addition of copper(II) oxide (CuO) produces a turquoise colour in glass, in contrast to copper(I) oxide (Cu2O) which gives a dull red-brown colour. Uses Architecture and windows Soda–lime sheet glass is typically used as a transparent glazing material, typically as windows in external walls of buildings. Float or rolled sheet glass products are cut to size either by scoring and snapping the material, laser cutting, water jets, or diamond-bladed saw. The glass may be thermally or chemically tempered (strengthened) for safety and bent or curved during heating. Surface coatings may be added for specific functions such as scratch resistance, blocking specific wavelengths of light (e.g. infrared or ultraviolet), dirt-repellence (e.g. self-cleaning glass), or switchable electrochromic coatings. Structural glazing systems represent one of the most significant architectural innovations of modern times, where glass buildings now often dominate the skylines of many modern cities. These systems use stainless steel fittings countersunk into recesses in the corners of the glass panels allowing strengthened panes to appear unsupported creating a flush exterior. Structural glazing systems have their roots in iron and glass conservatories of the nineteenth century Tableware Glass is an essential component of tableware and is typically used for water, beer and wine drinking glasses. Wine glasses are typically stemware, i.e. goblets formed from a bowl, stem, and foot. Crystal or Lead crystal glass may be cut and polished to produce decorative drinking glasses with gleaming facets. Other uses of glass in tableware include decanters, jugs, plates, and bowls. Packaging The inert and impermeable nature of glass makes it a stable and widely used material for food and drink packaging as glass bottles and jars. Most container glass is soda–lime glass, produced by blowing and pressing techniques. Container glass has a lower magnesium oxide and sodium oxide content than flat glass, and a higher silica, calcium oxide, and aluminium oxide content. 
Its higher content of water-insoluble oxides imparts slightly higher chemical durability against water, which is advantageous for storing beverages and food. Glass packaging is sustainable, readily recycled, reusable and refillable. For electronics applications, glass can be used as a substrate in the manufacture of integrated passive devices, thin-film bulk acoustic resonators, and as a hermetic sealing material in device packaging, including very thin solely glass based encapsulation of integrated circuits and other semiconductors in high manufacturing volumes. Laboratories Glass is an important material in scientific laboratories for the manufacture of experimental apparatus because it is relatively cheap, readily formed into required shapes for experiment, easy to keep clean, can withstand heat and cold treatment, is generally non-reactive with many reagents, and its transparency allows for the observation of chemical reactions and processes. Laboratory glassware applications include flasks, Petri dishes, test tubes, pipettes, graduated cylinders, glass-lined metallic containers for chemical processing, fractionation columns, glass pipes, Schlenk lines, gauges, and thermometers. Although most standard laboratory glassware has been mass-produced since the 1920s, scientists still employ skilled glassblowers to manufacture bespoke glass apparatus for their experimental requirements. Optics Glass is a ubiquitous material in optics because of its ability to refract, reflect, and transmit light. These and other optical properties can be controlled by varying chemical compositions, thermal treatment, and manufacturing techniques. The many applications of glass in optics include glasses for eyesight correction, imaging optics (e.g. lenses and mirrors in telescopes, microscopes, and cameras), fibre optics in telecommunications technology, and integrated optics. Microlenses and gradient-index optics (where the refractive index is non-uniform) find application in e.g. reading optical discs, laser printers, photocopiers, and laser diodes. Modern Art The 19th century saw a revival in ancient glassmaking techniques including cameo glass, achieved for the first time since the Roman Empire, initially mostly for pieces in a neo-classical style. The Art Nouveau movement made great use of glass, with René Lalique, Émile Gallé, and Daum of Nancy in the first French wave of the movement, producing coloured vases and similar pieces, often in cameo glass or lustre glass techniques. Louis Comfort Tiffany in America specialised in stained glass, both secular and religious, in panels and his famous lamps. The early 20th century saw the large-scale factory production of glass art by firms such as Waterford and Lalique. Small studios may hand-produce glass artworks. Techniques for producing glass art include blowing, kiln-casting, fusing, slumping, pâte de verre, flame-working, hot-sculpting and cold-working. Cold work includes traditional stained glass work and other methods of shaping glass at room temperature. Objects made out of glass include vessels, paperweights, marbles, beads, sculptures and installation art. See also Aluminium oxynitride transparent ceramic Fire glass Flexible glass Glass in green buildings Kimberley points Prince Rupert's drop Smart glass References External links The Story of Glass Making in Canada from The Canadian Museum of Civilization. "How Your Glass Ware Is Made" by George W. Waltz, February 1951, Popular Science. 
All About Glass from the Corning Museum of Glass: a collection of articles, multimedia, and virtual books all about glass, including the Glass Dictionary. Amorphous solids Dielectrics Materials Packaging materials Sculpture materials Windows
Glass
[ "Physics", "Chemistry" ]
7,224
[ "Glass", "Unsolved problems in physics", "Homogeneous chemical mixtures", "Materials", "Dielectrics", "Amorphous solids", "Matter" ]
12,582
https://en.wikipedia.org/wiki/Gel%20electrophoresis
Gel electrophoresis is an electrophoresis method for separation and analysis of biomacromolecules (DNA, RNA, proteins, etc.) and their fragments, based on their size and charge through a gel. It is used in clinical chemistry to separate proteins by charge or size (IEF agarose, essentially size independent) and in biochemistry and molecular biology to separate a mixed population of DNA and RNA fragments by length, to estimate the size of DNA and RNA fragments or to separate proteins by charge. Nucleic acid molecules are separated by applying an electric field to move the negatively charged molecules through a gel matrix of agarose, polyacrylamide, or other substances. Shorter molecules move faster and migrate farther than longer ones because shorter molecules migrate more easily through the pores of the gel. This phenomenon is called sieving. Proteins are separated by the charge in agarose because the pores of the gel are too large to sieve proteins. Gel electrophoresis can also be used for the separation of nanoparticles. Gel electrophoresis uses a gel as an anticonvective medium or sieving medium during electrophoresis. Gels suppress the thermal convection caused by the application of the electric field and can also simply serve to maintain the finished separation so that a post electrophoresis stain can be applied. DNA gel electrophoresis is usually performed for analytical purposes, often after amplification of DNA via polymerase chain reaction (PCR), but may be used as a preparative technique prior to use of other methods such as mass spectrometry, RFLP, PCR, cloning, DNA sequencing, or southern blotting for further characterization. Physical basis Electrophoresis is a process that enables the sorting of molecules based on charge, size, or shape. Using an electric field, molecules (such as DNA) can be made to move through a gel made of agarose or polyacrylamide. The electric field consists of a negative charge at one end which pushes the molecules through the gel, and a positive charge at the other end that pulls the molecules through the gel. The molecules being sorted are dispensed into a well in the gel material. The gel is placed in an electrophoresis chamber, which is then connected to a power source. When the electric field is applied, the larger molecules move more slowly through the gel while the smaller molecules move faster. The different sized molecules form distinct bands on the gel. The term "gel" in this instance refers to the matrix used to contain, then separate the target molecules. In most cases, the gel is a crosslinked polymer whose composition and porosity are chosen based on the specific weight and composition of the target to be analyzed. When separating proteins or small nucleic acids (DNA, RNA, or oligonucleotides) the gel is usually composed of different concentrations of acrylamide and a cross-linker, producing different sized mesh networks of polyacrylamide. When separating larger nucleic acids (greater than a few hundred bases), the preferred matrix is purified agarose. In both cases, the gel forms a solid, yet porous matrix. Acrylamide, in contrast to polyacrylamide, is a neurotoxin and must be handled using appropriate safety precautions to avoid poisoning. Agarose is composed of long unbranched chains of uncharged carbohydrates without cross-links resulting in a gel with large pores allowing for the separation of macromolecules and macromolecular complexes. 
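As a rough quantitative sketch of the fields and speeds involved in moving molecules through a gel, the snippet below estimates the field strength in a typical gel tank and the resulting drift velocity from v = μE. The voltage, electrode spacing, and effective mobility are assumed values for illustration; effective mobilities in gels depend strongly on fragment size, gel percentage, and buffer.

```python
# Order-of-magnitude estimate of DNA migration in a horizontal agarose gel.
# Field strength E = V / d; drift velocity v = mu_eff * E.

voltage_V = 100.0          # assumed power-supply setting
electrode_gap_m = 0.10     # assumed 10 cm between electrodes

E = voltage_V / electrode_gap_m   # V/m, i.e. 1000 V/m (10 V/cm)
mu_eff = 1.0e-8                   # m^2/(V*s), assumed effective mobility in the gel

v = mu_eff * E                    # m/s
print(f"field strength: {E / 100:.0f} V/cm")
print(f"drift velocity: {v * 1000:.3f} mm/s ≈ {v * 3.6e6:.0f} mm per hour")
```

With these assumptions the bands advance a few centimetres per hour, which is consistent with routine run times of roughly one to a few hours for an analytical gel.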
Electrophoresis refers to the electromotive force (EMF) that is used to move the molecules through the gel matrix. By placing the molecules in wells in the gel and applying an electric field, the molecules will move through the matrix at different rates, determined largely by their mass when the charge-to-mass ratio (Z) of all species is uniform. However, when charges are not all uniform the electrical field generated by the electrophoresis procedure will cause the molecules to migrate differentially according to charge. Species that are net positively charged will migrate towards the cathode which is negatively charged (because this is an electrolytic rather than galvanic cell), whereas species that are net negatively charged will migrate towards the positively charged anode. Mass remains a factor in the speed with which these non-uniformly charged molecules migrate through the matrix toward their respective electrodes. If several samples have been loaded into adjacent wells in the gel, they will run parallel in individual lanes. Depending on the number of different molecules, each lane shows the separation of the components from the original mixture as one or more distinct bands, one band per component. Incomplete separation of the components can lead to overlapping bands, or indistinguishable smears representing multiple unresolved components. Bands in different lanes that end up at the same distance from the top contain molecules that passed through the gel at the same speed, which usually means they are approximately the same size. There are molecular weight size markers available that contain a mixture of molecules of known sizes. If such a marker was run on one lane in the gel parallel to the unknown samples, the bands observed can be compared to those of the unknown to determine their size. The distance a band travels is approximately inversely proportional to the logarithm of the size of the molecule (alternatively, this can be stated as the distance traveled is inversely proportional to the log of samples's molecular weight). There are limits to electrophoretic techniques. Since passing a current through a gel causes heating, gels may melt during electrophoresis. Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. There are also limitations in determining the molecular weight by SDS-PAGE, especially when trying to find the MW of an unknown protein. Certain biological variables are difficult or impossible to minimize and can affect electrophoretic migration. Such factors include protein structure, post-translational modifications, and amino acid composition. For example, tropomyosin is an acidic protein that migrates abnormally on SDS-PAGE gels. This is because the acidic residues are repelled by the negatively charged SDS, leading to an inaccurate mass-to-charge ratio and migration. Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons. Types of gel The types of gel most typically used are agarose and polyacrylamide gels. Each type of gel is well-suited to different types and sizes of the analyte. Polyacrylamide gels are usually used for proteins and have very high resolving power for small fragments of DNA (5-500 bp). 
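The approximately inverse relationship between migration distance and the logarithm of molecular size described above is what makes size estimation against a marker ladder possible. The sketch below fits a straight line to log10(size) versus distance for a hypothetical ladder and interpolates an unknown band; the ladder sizes and migration distances are invented for illustration.

```python
import math

# Hypothetical marker ladder: (fragment size in bp, migration distance in cm).
ladder = [(10000, 1.0), (5000, 1.9), (2000, 3.1), (1000, 4.0), (500, 4.9), (250, 5.8)]

# Least-squares fit of log10(size) = a * distance + b.
xs = [d for _, d in ladder]
ys = [math.log10(s) for s, _ in ladder]
n = len(ladder)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def estimate_size(distance_cm: float) -> float:
    """Estimate fragment size (bp) from its migration distance via the fitted line."""
    return 10 ** (a * distance_cm + b)

print(f"slope a = {a:.3f} log10(bp) per cm (negative: farther means smaller)")
print(f"unknown band at 3.5 cm ≈ {estimate_size(3.5):.0f} bp")  # falls between the 2000 and 1000 bp bands
```

In practice the relationship is only roughly log-linear over a limited size range, so the unknown should lie well within the span of the ladder bands used for the fit.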
Agarose gels, on the other hand, have lower resolving power for DNA but have a greater range of separation, and are therefore used for DNA fragments of usually 50–20,000 bp in size, but the resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). Polyacrylamide gels are run in a vertical configuration while agarose gels are typically run horizontally in a submarine mode. They also differ in their casting methodology, as agarose sets thermally, while polyacrylamide forms in a chemical polymerization reaction. Agarose Agarose gels are made from the natural polysaccharide polymers extracted from seaweed. Agarose gels are easily cast and handled compared to other matrices because the gel setting is a physical rather than chemical change. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator. Agarose gels do not have a uniform pore size, but are optimal for electrophoresis of proteins that are larger than 200 kDa. Agarose gel electrophoresis can also be used for the separation of DNA fragments ranging from 50 base pair to several megabases (millions of bases), the largest of which require specialized apparatus. The distance between DNA bands of different lengths is influenced by the percent agarose in the gel, with higher percentages requiring longer run times, sometimes days. Instead high percentage agarose gels should be run with a pulsed field electrophoresis (PFE), or field inversion electrophoresis. "Most agarose gels are made with between 0.7% (good separation or resolution of large 5–10kb DNA fragments) and 2% (good resolution for small 0.2–1kb fragments) agarose dissolved in electrophoresis buffer. Up to 3% can be used for separating very tiny fragments but a vertical polyacrylamide gel is more appropriate in this case. Low percentage gels are very weak and may break when you try to lift them. High percentage gels are often brittle and do not set evenly. 1% gels are common for many applications." Polyacrylamide Polyacrylamide gel electrophoresis (PAGE) is used for separating proteins ranging in size from 5 to 2,000 kDa due to the uniform pore size provided by the polyacrylamide gel. Pore size is controlled by modulating the concentrations of acrylamide and bis-acrylamide powder used in creating a gel. Care must be used when creating this type of gel, as acrylamide is a potent neurotoxin in its liquid and powdered forms. Traditional DNA sequencing techniques such as Maxam-Gilbert or Sanger methods used polyacrylamide gels to separate DNA fragments differing by a single base-pair in length so the sequence could be read. Most modern DNA separation methods now use agarose gels, except for particularly small DNA fragments. It is currently most often used in the field of immunology and protein analysis, often used to separate different proteins or isoforms of the same protein into separate bands. These can be transferred onto a nitrocellulose or PVDF membrane to be probed with antibodies and corresponding markers, such as in a western blot. Typically resolving gels are made in 6%, 8%, 10%, 12% or 15%. Stacking gel (5%) is poured on top of the resolving gel and a gel comb (which forms the wells and defines the lanes where proteins, sample buffer, and ladders will be placed) is inserted. The percentage chosen depends on the size of the protein that one wishes to identify or probe in the sample. The smaller the known weight, the higher the percentage that should be used. 
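The agarose-percentage guidance quoted above, together with the note that very small fragments are better resolved on polyacrylamide, can be condensed into a small rule-of-thumb helper. The cut-off values below simplify the ranges given above and should not be read as a validated protocol.

```python
def suggest_gel(fragment_bp: int) -> str:
    """Rule-of-thumb gel choice for resolving DNA fragments of a given size."""
    if fragment_bp < 500:
        return "polyacrylamide gel (vertical); high resolution for roughly 5-500 bp"
    if fragment_bp <= 1000:
        return "2% agarose (good resolution for roughly 0.2-1 kb fragments)"
    if fragment_bp <= 10000:
        return "0.7-1% agarose (1% is a common general-purpose choice)"
    return "low-percentage agarose with pulsed-field electrophoresis for very large DNA"

for size in (100, 800, 5000, 2_000_000):
    print(f"{size:>9} bp -> {suggest_gel(size)}")
```

The exact percentage chosen in a real protocol also depends on run time, buffer, and whether the bands of interest need to be excised afterwards.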
Changes in the buffer system of the gel can help to further resolve proteins of very small sizes. Starch Partially hydrolysed potato starch makes for another non-toxic medium for protein electrophoresis. The gels are slightly more opaque than acrylamide or agarose. Non-denatured proteins can be separated according to charge and size. They are visualised using Napthal Black or Amido Black staining. Typical starch gel concentrations are 5% to 10%. Gel conditions Denaturing Denaturing gels are run under conditions that disrupt the natural structure of the analyte, causing it to unfold into a linear chain. Thus, the mobility of each macromolecule depends only on its linear length and its mass-to-charge ratio. Thus, the secondary, tertiary, and quaternary levels of biomolecular structure are disrupted, leaving only the primary structure to be analyzed. Nucleic acids are often denatured by including urea in the buffer, while proteins are denatured using sodium dodecyl sulfate, usually as part of the SDS-PAGE process. For full denaturation of proteins, it is also necessary to reduce the covalent disulfide bonds that stabilize their tertiary and quaternary structure, a method called reducing PAGE. Reducing conditions are usually maintained by the addition of beta-mercaptoethanol or dithiothreitol. For a general analysis of protein samples, reducing PAGE is the most common form of protein electrophoresis. Denaturing conditions are necessary for proper estimation of molecular weight of RNA. RNA is able to form more intramolecular interactions than DNA which may result in change of its electrophoretic mobility. Urea, DMSO and glyoxal are the most often used denaturing agents to disrupt RNA structure. Originally, highly toxic methylmercury hydroxide was often used in denaturing RNA electrophoresis, but it may be method of choice for some samples. Denaturing gel electrophoresis is used in the DNA and RNA banding pattern-based methods temperature gradient gel electrophoresis (TGGE) and denaturing gradient gel electrophoresis (DGGE). Native Native gels are run in non-denaturing conditions so that the analyte's natural structure is maintained. This allows the physical size of the folded or assembled complex to affect the mobility, allowing for analysis of all four levels of the biomolecular structure. For biological samples, detergents are used only to the extent that they are necessary to lyse lipid membranes in the cell. Complexes remain—for the most part—associated and folded as they would be in the cell. One downside, however, is that complexes may not separate cleanly or predictably, as it is difficult to predict how the molecule's shape and size will affect its mobility. Addressing and solving this problem is a major aim of preparative native PAGE. Unlike denaturing methods, native gel electrophoresis does not use a charged denaturing agent. The molecules being separated (usually proteins or nucleic acids) therefore differ not only in molecular mass and intrinsic charge, but also the cross-sectional area, and thus experience different electrophoretic forces dependent on the shape of the overall structure. For proteins, since they remain in the native state they may be visualized not only by general protein staining reagents but also by specific enzyme-linked staining. A specific experiment example of an application of native gel electrophoresis is to check for enzymatic activity to verify the presence of the enzyme in the sample during protein purification. 
For example, for the protein alkaline phosphatase, the staining solution is a mixture of 4-chloro-2-2methylbenzenediazonium salt with 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline in Tris buffer. This stain is commercially sold as a kit for staining gels. If the protein is present, the mechanism of the reaction takes place in the following order: it starts with the de-phosphorylation of 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline by alkaline phosphatase (water is needed for the reaction). The phosphate group is released and replaced by an alcohol group from water. The electrophile 4- chloro-2-2 methylbenzenediazonium (Fast Red TR Diazonium salt) displaces the alcohol group forming the final product Red Azo dye. As its name implies, this is the final visible-red product of the reaction. In undergraduate academic experimentation of protein purification, the gel is usually run next to commercial purified samples to visualize the results and conclude whether or not purification was successful. Native gel electrophoresis is typically used in proteomics and metallomics. However, native PAGE is also used to scan genes (DNA) for unknown mutations as in single-strand conformation polymorphism. Buffers Buffers in gel electrophoresis are used to provide ions that carry a current and to maintain the pH at a relatively constant value. These buffers have plenty of ions in them, which is necessary for the passage of electricity through them. Something like distilled water or benzene contains few ions, which is not ideal for the use in electrophoresis. There are a number of buffers used for electrophoresis. The most common being, for nucleic acids Tris/Acetate/EDTA (TAE), Tris/Borate/EDTA (TBE). Many other buffers have been proposed, e.g. lithium borate, which is rarely used, based on Pubmed citations (LB), isoelectric histidine, pK matched goods buffers, etc.; in most cases the purported rationale is lower current (less heat) matched ion mobilities, which leads to longer buffer life. Borate is problematic; Borate can polymerize, or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity but provides the best resolution for larger DNA. This means a lower voltage and more time, but a better product. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; However, with its low conductivity, a much higher voltage could be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. As low as one base pair size difference could be resolved in 3% agarose gel with an extremely low conductivity medium (1 mM Lithium borate). Most SDS-PAGE protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus on a single sharp band in a process called isotachophoresis. Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which leads to a sieving effect that now determines the electrophoretic mobility of the proteins. Visualization After the electrophoresis is complete, the molecules in the gel can be stained to make them visible. 
DNA may be visualized using ethidium bromide which, when intercalated into DNA, fluoresces under ultraviolet light, while protein may be visualised using silver stain or Coomassie brilliant blue dye. Other methods may also be used to visualize the separation of the mixture's components on the gel. If the molecules to be separated contain radioactivity, for example in a DNA sequencing gel, an autoradiogram can be recorded of the gel. Photographs can be taken of gels, often using a Gel Doc system, and the images are commonly labelled for presentation and scientific records. Downstream processing After separation, an additional separation method may then be used, such as isoelectric focusing or SDS-PAGE. The gel will then be physically cut, and the protein complexes extracted from each portion separately. Each extract may then be analysed, such as by peptide mass fingerprinting or de novo peptide sequencing after in-gel digestion. This can provide a great deal of information about the identities of the proteins in a complex. Applications Estimation of the size of DNA molecules following restriction enzyme digestion, e.g. in restriction mapping of cloned DNA. Analysis of PCR products, e.g. in molecular genetic diagnosis or genetic fingerprinting. Separation of restricted genomic DNA prior to Southern transfer, or of RNA prior to Northern transfer. Gel electrophoresis is used in forensics, molecular biology, genetics, microbiology and biochemistry. The results can be analyzed quantitatively by visualizing the gel with UV light and a gel imaging device. The image is recorded with a computer-operated camera, and the intensity of the band or spot of interest is measured and compared against standards or markers loaded on the same gel. The measurement and analysis are mostly done with specialized software. Depending on the type of analysis being performed, other techniques are often implemented in conjunction with the results of gel electrophoresis, providing a wide range of field-specific applications. Nucleic acids In the case of nucleic acids, the direction of migration, from negative to positive electrodes, is due to the naturally occurring negative charge carried by their sugar-phosphate backbone. Double-stranded DNA fragments naturally behave as long rods, so their migration through the gel is relative to their size or, for cyclic fragments, their radius of gyration. Circular DNA such as plasmids, however, may show multiple bands: the speed of migration may depend on whether the molecule is relaxed or supercoiled. Single-stranded DNA or RNA tends to fold up into molecules with complex shapes and migrate through the gel in a complicated manner based on their tertiary structure. Therefore, agents that disrupt the hydrogen bonds, such as sodium hydroxide or formamide, are used to denature the nucleic acids and cause them to behave as long rods again. Gel electrophoresis of large DNA or RNA is usually done by agarose gel electrophoresis. See the "chain termination method" page for an example of a polyacrylamide DNA sequencing gel. Characterization through ligand interaction of nucleic acids or fragments may be performed by mobility shift affinity electrophoresis. Electrophoresis of RNA samples can be used to check for genomic DNA contamination and also for RNA degradation. RNA from eukaryotic organisms shows distinct bands of 28s and 18s rRNA, the 28s band being approximately twice as intense as the 18s band. 
Degraded RNA has less sharply defined bands, has a smeared appearance, and the intensity ratio is less than 2:1. Proteins Proteins, unlike nucleic acids, can have varying charges and complex shapes, therefore they may not migrate into the polyacrylamide gel at similar rates, or all when placing a negative to positive EMF on the sample. Proteins, therefore, are usually denatured in the presence of a detergent such as sodium dodecyl sulfate (SDS) that coats the proteins with a negative charge. Generally, the amount of SDS bound is relative to the size of the protein (usually 1.4g SDS per gram of protein), so that the resulting denatured proteins have an overall negative charge, and all the proteins have a similar charge-to-mass ratio. Since denatured proteins act like long rods instead of having a complex tertiary shape, the rate at which the resulting SDS coated proteins migrate in the gel is relative only to their size and not their charge or shape. Proteins are usually analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), by native gel electrophoresis, by preparative native gel electrophoresis (QPNC-PAGE), or by 2-D electrophoresis. Characterization through ligand interaction may be performed by electroblotting or by affinity electrophoresis in agarose or by capillary electrophoresis as for estimation of binding constants and determination of structural features like glycan content through lectin binding. Nanoparticles A novel application for gel electrophoresis is the separation or characterization of metal or metal oxide nanoparticles (e.g. Au, Ag, ZnO, SiO2) regarding the size, shape, or surface chemistry of the nanoparticles. The scope is to obtain a more homogeneous sample (e.g. narrower particle size distribution), which then can be used in further products/processes (e.g. self-assembly processes). For the separation of nanoparticles within a gel, the key parameter is the ratio of the particle size to the mesh size, whereby two migration mechanisms were identified: the unrestricted mechanism, where the particle size << mesh size, and the restricted mechanism, where particle size is similar to mesh size. History 1930s – first reports of the use of sucrose for gel electrophoresis; moving-boundary electrophoresis (Tiselius) 1950 – introduction of "zone electrophoresis" (Tiselius); paper electrophoresis 1955 – introduction of starch gels, mediocre separation (Smithies) 1959 – introduction of acrylamide gels; discontinuous electrophoresis (Ornstein and Davis); accurate control of parameters such as pore size and stability; and (Raymond and Weintraub) 1965 – introduction of free-flow electrophoresis (Hannig) 1966 – first use of agar gels 1969 – introduction of denaturing agents especially SDS separation of protein subunit (Weber and Osborn) 1970 – Lämmli separated 28 components of T4 phage using a stacking gel and SDS 1972 – agarose gels with ethidium bromide stain 1975 – 2-dimensional gels (O’Farrell); isoelectric focusing, then SDS gel electrophoresis 1977 – sequencing gels (Sanger) 1981 – introduction of capillary electrophoresis (Jorgenson and Lukacs) 1984 – pulsed-field gel electrophoresis enables separation of large DNA molecules (Schwartz and Cantor) 2004 – introduction of a standardized polymerization time for acrylamide gel solutions to optimize gel properties, in particular gel stability (Kastenholz) A 1959 book on electrophoresis by Milan Bier cites references from the 1800s. However, Oliver Smithies made significant contributions. 
Bier states: "The method of Smithies ... is finding wide application because of its unique separatory power." Taken in context, Bier clearly implies that Smithies' method is an improvement. See also History of electrophoresis Electrophoretic mobility shift assay Gel extraction Isoelectric focusing Pulsed field gel electrophoresis Nonlinear frictiophoresis Two-dimensional gel electrophoresis SDD-AGE QPNC-PAGE Zymography Fast parallel proteolysis Free-flow electrophoresis References External links Biotechniques Laboratory electrophoresis demonstration, from the University of Utah's Genetic Science Learning Center Discontinuous native protein gel electrophoresis Drinking straw electrophoresis How to run a DNA or RNA gel Animation of gel analysis of DNA restriction Step by step photos of running a gel and extracting DNA A typical method from wikiversity Protein methods Molecular biology Laboratory techniques Electrophoresis Polymerase chain reaction electrophoresis
Gel electrophoresis
[ "Chemistry", "Biology" ]
5,552
[ "Biochemistry methods", "Genetics techniques", "Biochemistry", "Polymerase chain reaction", "Instrumental analysis", "Protein methods", "Protein biochemistry", "Colloids", "Biochemical separation processes", "Molecular biology techniques", "Gels", "Molecular biology", "nan", "Electrophores...
12,608
https://en.wikipedia.org/wiki/Geodesy
Geodesy or geodetics is the science of measuring and representing the geometry, gravity, and spatial orientation of the Earth in temporally varying 3D. It is called planetary geodesy when studying other astronomical bodies, such as planets or circumplanetary systems. Geodesy is an earth science and many consider the study of Earth's shape and gravity to be central to that science. It is also a discipline of applied mathematics. Geodynamical phenomena, including crustal motion, tides, and polar motion, can be studied by designing global and national control networks, applying space geodesy and terrestrial geodetic techniques, and relying on datums and coordinate systems. Geodetic job titles include geodesist and geodetic surveyor. History Geodesy began in pre-scientific antiquity, so the very word geodesy comes from the Ancient Greek geodaisia (literally, "division of Earth"). Early ideas about the figure of the Earth held the Earth to be flat and the heavens a physical dome spanning over it. Two early arguments for a spherical Earth were that lunar eclipses appear to an observer as circular shadows and that Polaris appears lower and lower in the sky to a traveler headed south. Definition In English, geodesy refers to the science of measuring and representing geospatial information, while geomatics encompasses practical applications of geodesy on local and regional scales, including surveying. In German, geodesy can refer to either higher geodesy (literally, "geomensuration"), concerned with measuring Earth on the global scale, or engineering geodesy, which includes surveying — measuring parts or regions of Earth. For the longest time, geodesy was the science of measuring and understanding Earth's geometric shape, orientation in space, and gravitational field; however, geodetic science and operations are applied to other astronomical bodies in our Solar System also. To a large extent, Earth's shape is the result of rotation, which causes its equatorial bulge, and the competition of geological processes such as the collision of plates, as well as of volcanism, resisted by Earth's gravitational field. This applies to the solid surface, the liquid surface (dynamic sea surface topography), and Earth's atmosphere. For this reason, the study of Earth's gravitational field is called physical geodesy. Geoid and reference ellipsoid The geoid essentially is the figure of Earth abstracted from its topographical features. It is an idealized equilibrium surface of seawater, the mean sea level surface in the absence of currents and air pressure variations, and continued under the continental masses. Unlike a reference ellipsoid, the geoid is irregular and too complicated to serve as the computational surface for solving geometrical problems like point positioning. The geometrical separation between the geoid and a reference ellipsoid is called geoidal undulation, and it varies globally between ±110 m based on the GRS 80 ellipsoid. A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) a and flattening f. The quantity f = (a − b)/a, where b is the semi-minor axis (polar radius), is purely geometrical. The mechanical ellipticity of Earth (dynamical flattening, symbol J2) can be determined to high precision by observation of satellite orbit perturbations. 
Its relationship with geometrical flattening is indirect and depends on the internal density distribution or, in simplest terms, the degree of central concentration of mass. The 1980 Geodetic Reference System (GRS 80), adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG), posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. GRS 80 essentially constitutes the basis for geodetic positioning by the Global Positioning System (GPS) and is thus also in widespread use outside the geodetic community. Numerous systems used for mapping and charting are becoming obsolete as countries increasingly move to global, geocentric reference systems utilizing the GRS 80 reference ellipsoid. The geoid is a "realizable" surface, meaning it can be consistently located on Earth by suitable simple measurements from physical objects like a tide gauge. The geoid can, therefore, be considered a physical ("real") surface. The reference ellipsoid, however, has many possible instantiations and is not readily realizable, so it is an abstract surface. The third primary surface of geodetic interest — the topographic surface of Earth — is also realizable. Coordinate systems in space The locations of points in 3D space most conveniently are described by three cartesian or rectangular coordinates, X, Y, and Z. Since the advent of satellite positioning, such coordinate systems are typically geocentric, with the Z-axis aligned to Earth's (conventional or instantaneous) rotation axis. Before the era of satellite geodesy, the coordinate systems associated with a geodetic datum attempted to be geocentric, but with the origin differing from the geocenter by hundreds of meters due to regional deviations in the direction of the plumbline (vertical). These regional geodetic datums, such as ED 50 (European Datum 1950) or NAD 27 (North American Datum 1927), have ellipsoids associated with them that are regional "best fits" to the geoids within their areas of validity, minimizing the deflections of the vertical over these areas. It is only because GPS satellites orbit about the geocenter that this point becomes naturally the origin of a coordinate system defined by satellite geodetic means, as the satellite positions in space themselves get computed within such a system. Geocentric coordinate systems used in geodesy can be divided naturally into two classes: The inertial reference systems, where the coordinate axes retain their orientation relative to the fixed stars or, equivalently, to the rotation axes of ideal gyroscopes. The X-axis points to the vernal equinox. The co-rotating reference systems (also ECEF or "Earth Centred, Earth Fixed"), in which the axes are "attached" to the solid body of Earth. The X-axis lies within the Greenwich observatory's meridian plane. The coordinate transformation between these two systems to good approximation is described by (apparent) sidereal time, which accounts for variations in Earth's axial rotation (length-of-day variations). A more accurate description also accounts for polar motion as a phenomenon closely monitored by geodesists. Coordinate systems in the plane In geodetic applications like surveying and mapping, two general types of coordinate systems in the plane are in use: Plano-polar, with points in the plane defined by their distance, s, from a specified point along a ray having a direction α from a baseline or axis. Rectangular, with points defined by distances from two mutually perpendicular axes, x and y. 
Contrary to the mathematical convention, in geodetic practice, the x-axis points North and the y-axis East. One can intuitively use rectangular coordinates in the plane for one's current location, in which case the x-axis will point to the local north. More formally, such coordinates can be obtained from 3D coordinates using the artifice of a map projection. It is impossible to map the curved surface of Earth onto a flat map surface without deformation. The compromise most often chosen — called a conformal projection — preserves angles and length ratios so that small circles get mapped as small circles and small squares as squares. An example of such a projection is UTM (Universal Transverse Mercator). Within the map plane, we have rectangular coordinates x and y. In this case, the north direction used for reference is the map north, not the local north. The difference between the two is called meridian convergence. It is easy enough to "translate" between polar and rectangular coordinates in the plane: let, as above, direction and distance be α and s respectively; then we have x = s cos α and y = s sin α. The reverse transformation is given by s = √(x² + y²) and α = arctan(y/x), with the quadrant chosen according to the signs of x and y. Heights In geodesy, point or terrain heights are "above sea level" as an irregular, physically defined surface. Height systems in use are: Orthometric heights Dynamic heights Geopotential heights Normal heights Each system has its advantages and disadvantages. Both orthometric and normal heights are expressed in metres above sea level, whereas geopotential numbers are measures of potential energy (unit: m2 s−2) and not metric. The reference surface is the geoid, an equigeopotential surface approximating the mean sea level as described above. For normal heights, the reference surface is the so-called quasi-geoid, which has a few-metre separation from the geoid due to the density assumption in its continuation under the continental masses. One can relate these heights through the geoid undulation concept to ellipsoidal heights (also known as geodetic heights), representing the height of a point above the reference ellipsoid. Satellite positioning receivers typically provide ellipsoidal heights unless fitted with special conversion software based on a model of the geoid. Geodetic datums Because coordinates and heights of geodetic points always get obtained within a system that itself was constructed based on real-world observations, geodesists introduced the concept of a "geodetic datum" (plural datums): a physical (real-world) realization of a coordinate system used for describing point locations. This realization follows from choosing (therefore conventional) coordinate values for one or more datum points. In the case of height data, it suffices to choose one datum point — the reference benchmark, typically a tide gauge at the shore. Thus we have vertical datums, such as the NAVD 88 (North American Vertical Datum 1988), NAP (Normaal Amsterdams Peil), the Kronstadt datum, the Trieste datum, and numerous others. In both mathematics and geodesy, a coordinate system is a "coordinate system" per ISO terminology, whereas the International Earth Rotation and Reference Systems Service (IERS) uses the term "reference system" for the same. When coordinates are realized by choosing datum points and fixing a geodetic datum, ISO speaks of a "coordinate reference system", whereas IERS uses a "reference frame" for the same. The ISO term for a datum transformation again is a "coordinate transformation". 
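To make the ellipsoid and height relations above concrete, the following sketch (not part of the source article) computes the GRS 80 semi-minor axis from the quoted semi-major axis and flattening, and relates an orthometric height to an ellipsoidal height through a geoid undulation; the point values used are illustrative assumptions only.

```python
# Sketch (not from the source): GRS 80 ellipsoid geometry and the height relation
# described above. The example point and its geoid undulation are assumed values.

# GRS 80 defining constants quoted in the text
a = 6378137.0            # semi-major axis (equatorial radius), metres
f = 1 / 298.257          # flattening f = (a - b) / a

# Derived semi-minor axis (polar radius)
b = a * (1 - f)

# Relating an orthometric height H (above the geoid) to an ellipsoidal height h
# via the geoid undulation N: h ≈ H + N.
H = 125.0                # assumed orthometric height, metres
N = -32.5                # assumed geoid undulation at the point, metres
h = H + N                # ellipsoidal (geodetic) height

print(f"b     = {b:.3f} m")       # ≈ 6356752 m
print(f"a - b = {a - b:.3f} m")   # ≈ 21385 m of polar flattening
print(f"h     = {h:.1f} m")
```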
Positioning General geopositioning, or simply positioning, is the determination of the location of points on Earth, by myriad techniques. Geodetic positioning employs geodetic methods to determine a set of precise geodetic coordinates of a point on land, at sea, or in space. It may be done within a coordinate system (point positioning or absolute positioning) or relative to another point (relative positioning). One computes the position of a point in space from measurements linking terrestrial or extraterrestrial points of known location ("known points") with terrestrial ones of unknown location ("unknown points"). The computation may involve transformations between or among astronomical and terrestrial coordinate systems. Known points used in point positioning can be GNSS continuously operating reference stations or triangulation points of a higher-order network. Traditionally, geodesists built a hierarchy of networks to allow point positioning within a country. The highest in this hierarchy were triangulation networks, densified into the networks of traverses (polygons) into which local mapping and surveying measurements, usually collected using a measuring tape, a corner prism, and the red-and-white poles, are tied. Commonly used nowadays is GPS, except for specialized measurements (e.g., in underground or high-precision engineering). The higher-order networks are measured with static GPS, using differential measurement to determine vectors between terrestrial points. These vectors then get adjusted in a traditional network fashion. A global polyhedron of permanently operating GPS stations under the auspices of the IERS is the basis for defining a single global, geocentric reference frame that serves as the "zero-order" (global) reference to which national measurements are attached. Real-time kinematic positioning (RTK GPS) is employed frequently in survey mapping. In that measurement technique, unknown points can get quickly tied into nearby terrestrial known points. One purpose of point positioning is the provision of known points for mapping measurements, also known as (horizontal and vertical) control. There can be thousands of those geodetically determined points in a country, usually documented by national mapping agencies. Surveyors involved in real estate and insurance will use these to tie their local measurements. Geodetic problems In geometrical geodesy, there are two main problems: First geodetic problem (also known as direct or forward geodetic problem): given the coordinates of a point and the directional (azimuth) and distance to a second point, determine the coordinates of that second point. Second geodetic problem (also known as inverse or reverse geodetic problem): given the coordinates of two points, determine the azimuth and length of the (straight, curved, or geodesic) line connecting those points. The solutions to both problems in plane geometry reduce to simple trigonometry and are valid for small areas on Earth's surface; on a sphere, solutions become significantly more complex as, for example, in the inverse problem, the azimuths differ going between the two end points along the arc of the connecting great circle. The general solution is called the geodesic for the surface considered, and the differential equations for the geodesic are solvable numerically. On the ellipsoid of revolution, geodesics are expressible in terms of elliptic integrals, which are usually evaluated in terms of a series expansion — see, for example, Vincenty's formulae. 
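As a small illustration of the plane case just described, where both geodetic problems reduce to simple trigonometry, the following sketch (an assumed example, not from the source) solves the direct and inverse problems with the geodetic axis convention of x pointing north and y pointing east; the coordinates, azimuth, and distance are invented for the example.

```python
# Sketch (assumptions noted): the first (direct) and second (inverse) geodetic
# problems in the plane, using the geodetic convention described earlier:
# x points north, y points east, and the azimuth alpha is measured clockwise
# from north. Valid only for small areas where plane geometry is adequate.

import math

def direct(x1, y1, alpha_deg, s):
    """Direct problem: from a point, an azimuth and a distance, get the second point."""
    alpha = math.radians(alpha_deg)
    return x1 + s * math.cos(alpha), y1 + s * math.sin(alpha)

def inverse(x1, y1, x2, y2):
    """Inverse problem: from two points, get the distance and the azimuth."""
    dx, dy = x2 - x1, y2 - y1
    s = math.hypot(dx, dy)
    alpha = math.degrees(math.atan2(dy, dx)) % 360.0   # clockwise from north
    return s, alpha

# Example with assumed coordinates (metres)
x2, y2 = direct(1000.0, 2000.0, 60.0, 500.0)
s, alpha = inverse(1000.0, 2000.0, x2, y2)
print(x2, y2)       # 1250.0, ~2433.0
print(s, alpha)     # 500.0, 60.0 (recovered)
```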
Observational concepts As defined in geodesy (and also astronomy), some basic observational concepts like angles and coordinates include (most commonly from the viewpoint of a local observer): Plumbline or vertical: (the line along) the direction of local gravity. Zenith: the (direction to the) intersection of the upwards-extending gravity vector at a point and the celestial sphere. Nadir: the (direction to the) antipodal point where the downward-extending gravity vector intersects the (obscured) celestial sphere. Celestial horizon: a plane perpendicular to the gravity vector at a point. Azimuth: the direction angle within the plane of the horizon, typically counted clockwise from the north (in geodesy and astronomy) or the south (in France). Elevation: the angular height of an object above the horizon; alternatively: zenith distance equal to 90 degrees minus elevation. Local topocentric coordinates: azimuth (direction angle within the plane of the horizon), elevation angle (or zenith angle), distance. North celestial pole: the extension of Earth's (precessing and nutating) instantaneous spin axis extended northward to intersect the celestial sphere. (Similarly for the south celestial pole.) Celestial equator: the (instantaneous) intersection of Earth's equatorial plane with the celestial sphere. Meridian plane: any plane perpendicular to the celestial equator and containing the celestial poles. Local meridian: the plane which contains the direction to the zenith and the celestial pole. Measurements The reference surface (level) used to determine height differences and height reference systems is known as mean sea level. The traditional spirit level directly produces such (for practical purposes most useful) heights above sea level; the more economical use of GPS instruments for height determination requires precise knowledge of the figure of the geoid, as GPS only gives heights above the GRS80 reference ellipsoid. As geoid determination improves, one may expect that the use of GPS in height determination shall increase, too. The theodolite is an instrument used to measure horizontal and vertical (relative to the local vertical) angles to target points. In addition, the tachymeter determines, electronically or electro-optically, the distance to a target and is highly automated or even robotic in operations. Widely used for the same purpose is the method of free station position. Commonly for local detail surveys, tachymeters are employed, although the old-fashioned rectangular technique using an angle prism and steel tape is still an inexpensive alternative. As mentioned, also there are quick and relatively accurate real-time kinematic (RTK) GPS techniques. Data collected are tagged and recorded digitally for entry into Geographic Information System (GIS) databases. Geodetic GNSS (most commonly GPS) receivers directly produce 3D coordinates in a geocentric coordinate frame. One such frame is WGS84, as well as frames by the International Earth Rotation and Reference Systems Service (IERS). GNSS receivers have almost completely replaced terrestrial instruments for large-scale base network surveys. To monitor the Earth's rotation irregularities and plate tectonic motions and for planet-wide geodetic surveys, methods of very-long-baseline interferometry (VLBI) measuring distances to quasars, lunar laser ranging (LLR) measuring distances to prisms on the Moon, and satellite laser ranging (SLR) measuring distances to prisms on artificial satellites, are employed. 
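A minimal sketch of the local topocentric coordinates listed above, converting an observation given as azimuth, elevation angle, and slant distance into local north, east, and up components; the observation values are invented for illustration and the azimuth is taken clockwise from north, as in the geodetic convention.

```python
# Sketch: converting the local topocentric observables listed above (azimuth,
# elevation angle, slant distance) into local north/east/up components.
# The observation values below are assumed, not from the source.

import math

def topocentric_to_neu(azimuth_deg, elevation_deg, distance):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horizontal = distance * math.cos(el)      # projection onto the local horizon
    north = horizontal * math.cos(az)
    east = horizontal * math.sin(az)
    up = distance * math.sin(el)              # component along the local vertical
    return north, east, up

n, e, u = topocentric_to_neu(azimuth_deg=135.0, elevation_deg=30.0, distance=2000.0)
print(f"north={n:.1f} m, east={e:.1f} m, up={u:.1f} m")
# north ≈ -1224.7 m, east ≈ 1224.7 m, up = 1000.0 m
```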
Gravity is measured using gravimeters, of which there are two kinds. First are absolute gravimeters, based on measuring the acceleration of free fall (e.g., of a reflecting prism in a vacuum tube). They are used to establish vertical geospatial control or in the field. Second, relative gravimeters are spring-based and more common. They are used in gravity surveys over large areas — to establish the figure of the geoid over these areas. The most accurate relative gravimeters are called superconducting gravimeters, which are sensitive to one-thousandth of one-billionth of Earth-surface gravity. Twenty-some superconducting gravimeters are used worldwide in studying Earth's tides, rotation, interior, oceanic and atmospheric loading, as well as in verifying the Newtonian constant of gravitation. In the future, gravity and altitude might become measurable using the special-relativistic concept of time dilation as gauged by optical clocks. Units and measures on the ellipsoid Geographical latitude and longitude are stated in the units degree, minute of arc, and second of arc. They are angles, not metric measures, and describe the direction of the local normal to the reference ellipsoid of revolution. This direction is approximately the same as the direction of the plumbline, i.e., local gravity, which is also the normal to the geoid surface. For this reason, astronomical position determination – measuring the direction of the plumbline by astronomical means – works reasonably well when one also uses an ellipsoidal model of the figure of the Earth. One geographical mile, defined as one minute of arc on the equator, equals 1,855.32571922 m. One nautical mile is one minute of astronomical latitude. The radius of curvature of the ellipsoid varies with latitude, being the longest at the pole and the shortest at the equator same as with the nautical mile. A metre was originally defined as the 10-millionth part of the length from the equator to the North Pole along the meridian through Paris (the target was not quite reached in actual implementation, as it is off by 200 ppm in the current definitions). This situation means that one kilometre roughly equals (1/40,000) * 360 * 60 meridional minutes of arc, or 0.54 nautical miles. (This is not exactly so as the two units had been defined on different bases, so the international nautical mile is 1,852 m exactly, which corresponds to rounding the quotient from 1,000/0.54 m to four digits). Temporal changes Various techniques are used in geodesy to study temporally changing surfaces, bodies of mass, physical fields, and dynamical systems. Points on Earth's surface change their location due to a variety of mechanisms: Continental plate motion, plate tectonics The episodic motion of tectonic origin, especially close to fault lines Periodic effects due to tides and tidal loading Postglacial land uplift due to isostatic adjustment Mass variations due to hydrological changes, including the atmosphere, cryosphere, land hydrology, and oceans Sub-daily polar motion Length-of-day variability Earth's center-of-mass (geocenter) variations Anthropogenic movements such as reservoir construction or petroleum or water extraction Geodynamics is the discipline that studies deformations and motions of Earth's crust and its solidity as a whole. Often the study of Earth's irregular rotation is included in the above definition. Geodynamical studies require terrestrial reference frames realized by the stations belonging to the Global Geodetic Observing System (GGOS). 
Techniques for studying geodynamic phenomena on global scales include: Satellite positioning by GPS, GLONASS, Galileo, and BeiDou Very-long-baseline interferometry (VLBI) Satellite laser ranging (SLR) and lunar laser ranging (LLR) DORIS Regionally and locally precise leveling Precise tachymeters Monitoring of gravity change using land, airborne, shipborne, and spaceborne gravimetry Satellite altimetry based on microwave and laser observations for studying the ocean surface, sea level rise, and ice cover monitoring Interferometric synthetic aperture radar (InSAR) using satellite images. Notable geodesists See also Fundamentals Geodesy (book) Concepts and Techniques in Modern Geography Geodesics on an ellipsoid History of geodesy Physical geodesy Earth's circumference Physics Geosciences Governmental agencies National mapping agencies U.S. National Geodetic Survey National Geospatial-Intelligence Agency Ordnance Survey United States Coast and Geodetic Survey United States Geological Survey International organizations International Union of Geodesy and Geophysics (IUGG) International Association of Geodesy (IAG) International Federation of Surveyors (IFS) International Geodetic Student Organisation (IGSO) Other EPSG Geodetic Parameter Dataset Meridian arc Surveying References Further reading F. R. Helmert, Mathematical and Physical Theories of Higher Geodesy, Part 1, ACIC (St. Louis, 1964). This is an English translation of Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Vol 1 (Teubner, Leipzig, 1880). F. R. Helmert, Mathematical and Physical Theories of Higher Geodesy, Part 2, ACIC (St. Louis, 1964). This is an English translation of Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Vol 2 (Teubner, Leipzig, 1884). B. Hofmann-Wellenhof and H. Moritz, Physical Geodesy, Springer-Verlag Wien, 2005. (This text is an updated edition of the 1967 classic by W.A. Heiskanen and H. Moritz). W. Kaula, Theory of Satellite Geodesy : Applications of Satellites to Geodesy, Dover Publications, 2000. (This text is a reprint of the 1966 classic). Vaníček P. and E.J. Krakiwsky, Geodesy: the Concepts, pp. 714, Elsevier, 1986. Torge, W (2001), Geodesy (3rd edition), published by de Gruyter, . Thomas H. Meyer, Daniel R. Roman, and David B. Zilkoski. "What does height really mean?" (This is a series of four articles published in Surveying and Land Information Science, SaLIS.) "Part I: Introduction" SaLIS Vol. 64, No. 4, pages 223–233, December 2004. "Part II: Physics and gravity" SaLIS Vol. 65, No. 1, pages 5–15, March 2005. "Part III: Height systems" SaLIS Vol. 66, No. 2, pages 149–160, June 2006. "Part IV: GPS heighting" SaLIS Vol. 66, No. 3, pages 165–183, September 2006. External links Geodetic awareness guidance note, Geodesy Subcommittee, Geomatics Committee, International Association of Oil & Gas Producers Earth sciences Cartography Measurement Navigation Applied mathematics Articles containing video clips
Geodesy
[ "Physics", "Astronomy", "Mathematics" ]
5,099
[ "Applied and interdisciplinary physics", "Physical quantities", "Applied mathematics", "Quantity", "Measurement", "Size", "nan", "Geophysics", "Geodesy" ]
12,610
https://en.wikipedia.org/wiki/Grand%20Unified%20Theory
A Grand Unified Theory (GUT) is any model in particle physics that merges the electromagnetic, weak, and strong forces (the three gauge interactions of the Standard Model) into a single force at high energies. Although this unified force has not been directly observed, many GUT models theorize its existence. If the unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct. Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single combined electroweak interaction. GUT models predict that at even higher energy, the strong and electroweak interactions will unify into one electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. Unifying gravity with the electronuclear interaction would provide a more comprehensive theory of everything (TOE) rather than a Grand Unified Theory. Thus, GUTs are often seen as an intermediate step towards a TOE. The novel particles predicted by GUT models are expected to have extremely high masses—around the GUT scale of 10^16 GeV (just three orders of magnitude below the Planck scale of 10^19 GeV)—and so are well beyond the reach of any foreseen particle collider experiments. Therefore, the particles predicted by GUT models will be unable to be observed directly, and instead the effects of grand unification might be detected through indirect observations of the following: proton decay, electric dipole moments of elementary particles, or the properties of neutrinos. Some GUTs, such as the Pati–Salam model, predict the existence of magnetic monopoles. While GUTs might be expected to offer simplicity over the complications present in the Standard Model, realistic models remain complicated because they need to introduce additional fields and interactions, or even additional dimensions of space, in order to reproduce observed fermion masses and mixing angles. This difficulty, in turn, may be related to the existence of family symmetries beyond the conventional GUT models. Due to this and the lack of any observed effect of grand unification so far, there is no generally accepted GUT model. Models that do not unify the three interactions using one simple group as the gauge symmetry but do so using semisimple groups can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well. History Historically, the first true GUT, which was based on the simple Lie group SU(5), was proposed by Howard Georgi and Sheldon Glashow in 1974. The Georgi–Glashow model was preceded by the Pati–Salam model of Abdus Salam and Jogesh Pati, based on a semisimple Lie algebra, also in 1974; they pioneered the idea of unifying gauge interactions. The acronym GUT was first coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos; however, in the final version of their paper they opted for the less anatomical GUM (Grand Unification Mass). Nanopoulos later that year was the first to use the acronym in a paper. Motivation The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics. 
While the description of strong and weak interactions within the Standard Model is based on gauge symmetries governed by the simple symmetry groups SU(3) and SU(2), which allow only discrete charges, the remaining component, the weak hypercharge interaction, is described by an abelian U(1) symmetry, which in principle allows for arbitrary charge assignments. The observed charge quantization, namely the postulation that all known elementary particles carry electric charges which are exact multiples of one-third of the "elementary" charge, has led to the idea that hypercharge interactions and possibly the strong and weak interactions might be embedded in one Grand Unified interaction described by a single, larger simple symmetry group containing the Standard Model. This would automatically predict the quantized nature and values of all elementary particle charges. Since this also results in a prediction for the relative strengths of the fundamental interactions which we observe, in particular, the weak mixing angle, grand unification ideally reduces the number of independent input parameters but is also constrained by observations. Grand unification is reminiscent of the unification of electric and magnetic forces by Maxwell's field theory of electromagnetism in the 19th century, but its physical implications and mathematical structure are qualitatively different. Unification of matter particles SU(5) is the simplest GUT. The smallest simple Lie group which contains the standard model, and upon which the first Grand Unified Theory was based, is SU(5). Such group symmetries allow the reinterpretation of several known particles, including the photon, W and Z bosons, and gluon, as different states of a single particle field. However, it is not obvious that the simplest possible choices for the extended "Grand Unified" symmetry should yield the correct inventory of elementary particles. The fact that all currently known matter particles fit perfectly into three copies of the smallest group representations of SU(5) and immediately carry the correct observed charges, is one of the first and most important reasons why people believe that a Grand Unified Theory might actually be realized in nature. The two smallest irreducible representations of SU(5) are the 5 (the defining representation) and the 10. (These numbers indicate the dimension of the representation.) In the standard assignment, the 5 contains the charge conjugates of the right-handed down-type quark color triplet and a left-handed lepton isospin doublet, while the 10 contains the six up-type quark components, the left-handed down-type quark color triplet, and the right-handed electron. This scheme has to be replicated for each of the three known generations of matter. It is notable that the theory is anomaly free with this matter content. The hypothetical right-handed neutrinos are a singlet of SU(5), which means their mass is not forbidden by any symmetry; it does not require spontaneous electroweak symmetry breaking, which explains why their mass can be heavy (see seesaw mechanism). SO(10) The next simple Lie group which contains the standard model is SO(10). Here, the unification of matter is even more complete, since the irreducible 16-dimensional spinor representation contains both the 5 and the 10 of SU(5) and a right-handed neutrino, and thus the complete particle content of one generation of the extended standard model with neutrino masses. This is already the largest simple group that achieves the unification of matter in a scheme involving only the already known matter particles (apart from the Higgs sector). 
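The matter-unification counting described above can be checked with a few lines of bookkeeping; the sketch below (not from the source) tallies the Weyl-fermion components of one Standard Model generation and compares the total with the SU(5) and SO(10) representations.

```python
# Sketch: counting the Weyl-fermion components of one Standard Model generation
# and comparing the tally with the SU(5) and SO(10) representations discussed
# above. Purely bookkeeping; not quoted from the source text.

# components per multiplet, written as charge-conjugated left-handed fields,
# the usual GUT bookkeeping convention
generation = {
    "Q   (quark doublet)":   3 * 2,  # 3 colours x (up, down)
    "u^c (up antiquark)":    3,
    "d^c (down antiquark)":  3,
    "L   (lepton doublet)":  2,      # (neutrino, electron)
    "e^c (positron)":        1,
}

sm_count = sum(generation.values())
print(f"Standard Model generation: {sm_count} Weyl fermions")   # 15

# SU(5): one generation fits into a 5 plus a 10
print(f"SU(5) 5 + 10             : {5 + 10}")                   # 15

# SO(10): adding a right-handed neutrino fills the 16-dimensional spinor
print(f"SO(10) spinor            : 16 = {sm_count} + 1 (nu^c)")
```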
Since different standard model fermions are grouped together in larger representations, GUTs specifically predict relations among the fermion masses, such as between the electron and the down quark, the muon and the strange quark, and the tau lepton and the bottom quark for and . Some of these mass relations hold approximately, but most don't (see Georgi-Jarlskog mass relation). The boson matrix for is found by taking the matrix from the representation of and adding an extra row and column for the right-handed neutrino. The bosons are found by adding a partner to each of the 20 charged bosons (2 right-handed W bosons, 6 massive charged gluons and 12 X/Y type bosons) and adding an extra heavy neutral Z-boson to make 5 neutral bosons in total. The boson matrix will have a boson or its new partner in each row and column. These pairs combine to create the familiar 16D Dirac spinor matrices of . E6 In some forms of string theory, including E8 × E8 heterotic string theory, the resultant four-dimensional theory after spontaneous compactification on a six-dimensional Calabi–Yau manifold resembles a GUT based on the group E6. Notably E6 is the only exceptional simple Lie group to have any complex representations, a requirement for a theory to contain chiral fermions (namely all weakly-interacting fermions). Hence the other four (G2, F4, E7, and E8) can't be the gauge group of a GUT. Extended Grand Unified Theories Non-chiral extensions of the Standard Model with vectorlike split-multiplet particle spectra which naturally appear in the higher SU(N) GUTs considerably modify the desert physics and lead to the realistic (string-scale) grand unification for conventional three quark-lepton families even without using supersymmetry (see below). On the other hand, due to a new missing VEV mechanism emerging in the supersymmetric SU(8) GUT the simultaneous solution to the gauge hierarchy (doublet-triplet splitting) problem and problem of unification of flavor can be argued. GUTs with four families / generations, SU(8): Assuming 4 generations of fermions instead of 3 makes a total of types of particles. These can be put into representations of . This can be divided into which is the theory together with some heavy bosons which act on the generation number. GUTs with four families / generations, O(16): Again assuming 4 generations of fermions, the 128 particles and anti-particles can be put into a single spinor representation of . Symplectic groups and quaternion representations Symplectic gauge groups could also be considered. For example, (which is called in the article symplectic group) has a representation in terms of quaternion unitary matrices which has a dimensional real representation and so might be considered as a candidate for a gauge group. has 32 charged bosons and 4 neutral bosons. Its subgroups include so can at least contain the gluons and photon of . Although it's probably not possible to have weak bosons acting on chiral fermions in this representation. A quaternion representation of the fermions might be: A further complication with quaternion representations of fermions is that there are two types of multiplication: left multiplication and right multiplication which must be taken into account. It turns out that including left and right-handed quaternion matrices is equivalent to including a single right-multiplication by a unit quaternion which adds an extra SU(2) and so has an extra neutral boson and two more charged bosons. 
Thus the group of left- and right-handed quaternion matrices is which does include the standard model bosons: If is a quaternion valued spinor, is quaternion hermitian matrix coming from and is a pure vector quaternion (both of which are 4-vector bosons) then the interaction term is: Octonion representations It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3x3 hermitian matrix with certain additions for the diagonal elements then these matrices form an exceptional (Grassmann) Jordan algebra, which has the symmetry group of one of the exceptional Lie groups (, , , or ) depending on the details. Because they are fermions the anti-commutators of the Jordan algebra become commutators. It is known that has subgroup and so is big enough to include the Standard Model. An gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. To account for the 248 fermions in the lowest multiplet of , these would either have to include anti-particles (and so have baryogenesis), have new undiscovered particles, or have gravity-like (spin connection) bosons affecting elements of the particles spin direction. Each of these possesses theoretical problems. Beyond Lie groups Other structures have been suggested including Lie 3-algebras and Lie superalgebras. Neither of these fit with Yang–Mills theory. In particular Lie superalgebras would introduce bosons with incorrect statistics. Supersymmetry, however, does fit with Yang–Mills. Unification of forces and the role of supersymmetry The unification of forces is possible due to the energy scale dependence of force coupling parameters in quantum field theory called renormalization group "running", which allows parameters with vastly different values at usual energies to converge to a single value at a much higher energy scale. The renormalization group running of the three gauge couplings in the Standard Model has been found to nearly, but not quite, meet at the same point if the hypercharge is normalized so that it is consistent with or GUTs, which are precisely the GUT groups which lead to a simple fermion unification. This is a significant result, as other Lie groups lead to different normalizations. However, if the supersymmetric extension MSSM is used instead of the Standard Model, the match becomes much more accurate. In this case, the coupling constants of the strong and electroweak interactions meet at the grand unification energy, also known as the GUT scale: . It is commonly believed that this matching is unlikely to be a coincidence, and is often quoted as one of the main motivations to further investigate supersymmetric theories despite the fact that no supersymmetric partner particles have been experimentally observed. Also, most model builders simply assume supersymmetry because it solves the hierarchy problem—i.e., it stabilizes the electroweak Higgs mass against radiative corrections. Neutrino masses Since Majorana masses of the right-handed neutrino are forbidden by symmetry, GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism. 
These predictions are independent of the Georgi–Jarlskog mass relations, wherein some GUTs predict other fermion mass ratios. Proposed theories Several theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes all fundamental forces, including gravitation, is termed a theory of everything. Some common mainstream GUT models are: Pati–Salam model — Georgi–Glashow model — ; and Flipped — model; and Flipped — model; and Trinification — minimal left-right model — 331 model — chiral color Not quite GUTs: Technicolor models Little Higgs String theory Causal fermion systems M-theory Preons Loop quantum gravity Causal dynamical triangulation theory Note: These models refer to Lie algebras not to Lie groups. The Lie group could be just to take a random example. The most promising candidate is . (Minimal) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation. A number of other GUT models are based upon subgroups of . They are the minimal left-right model, , flipped and the Pati–Salam model. The GUT group contains , but models based upon it are significantly more complicated. The primary reason for studying models comes from heterotic string theory. GUT models generically predict the existence of topological defects such as monopoles, cosmic strings, domain walls, and others. But none have been observed. Their absence is known as the monopole problem in cosmology. Many GUT models also predict proton decay, although not the Pati–Salam model. As of now, proton decay has never been experimentally observed. The minimal experimental limit on the proton's lifetime pretty much rules out minimal and heavily constrains the other models. The lack of detected supersymmetry to date also constrains many models. Some GUT theories like and suffer from what is called the doublet-triplet problem. These theories predict that for each electroweak Higgs doublet, there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale here). In theory, unifying quarks with leptons, the Higgs doublet would also be unified with a Higgs triplet. Such triplets have not been observed. They would also cause extremely rapid proton decay (far below current experimental limits) and prevent the gauge coupling strengths from running together in the renormalization group. Most GUT models require a threefold replication of the matter fields. As such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the little hierarchy between the fermion masses for different generations. Ingredients A GUT model consists of a gauge group which is a compact Lie group, a connection form for that Lie group, a Yang–Mills action for that connection given by an invariant symmetric bilinear form over its Lie algebra (which is specified by a coupling constant for each factor), a Higgs sector consisting of a number of scalar fields taking on values within real/complex representations of the Lie group and chiral Weyl fermions taking on values within a complex rep of the Lie group. The Lie group contains the Standard Model group and the Higgs fields acquire VEVs leading to a spontaneous symmetry breaking to the Standard Model. The Weyl fermions represent matter. 
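As a rough numerical illustration of the renormalization-group running discussed in the section on unification of forces, the following sketch uses the standard one-loop Standard Model beta coefficients and common textbook values for the couplings at the Z mass (all inputs are assumptions, not taken from this article) to show how the three inverse couplings approach one another without quite meeting.

```python
# Sketch of the one-loop renormalization-group "running" discussed above.
# Beta coefficients are the standard one-loop Standard Model values with
# GUT-normalized hypercharge; the inputs at the Z mass are common textbook
# numbers, assumed here rather than taken from the source.

import math

M_Z = 91.19           # GeV
alpha_em_inv = 127.9  # inverse electromagnetic coupling at M_Z (assumed)
sin2_theta_w = 0.231  # weak mixing angle at M_Z (assumed)
alpha_s = 0.118       # strong coupling at M_Z (assumed)

# Inverse couplings at M_Z (alpha_1 uses the GUT normalization 5/3 * alpha_Y)
alpha_inv_mz = [
    (3.0 / 5.0) * (1.0 - sin2_theta_w) * alpha_em_inv,   # alpha_1^-1
    sin2_theta_w * alpha_em_inv,                          # alpha_2^-1
    1.0 / alpha_s,                                        # alpha_3^-1
]

# One-loop Standard Model beta coefficients (b_1, b_2, b_3)
b = [41.0 / 10.0, -19.0 / 6.0, -7.0]

def run(alpha_inv, b_i, mu):
    """alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2*pi) * ln(mu/M_Z)."""
    return alpha_inv - b_i / (2.0 * math.pi) * math.log(mu / M_Z)

for mu in (1e3, 1e10, 1e15, 1e16):
    values = [run(a, bi, mu) for a, bi in zip(alpha_inv_mz, b)]
    print(f"mu = {mu:.0e} GeV : " + ", ".join(f"{v:6.1f}" for v in values))
# The three inverse couplings drift toward one another near 10^13 to 10^16 GeV
# but do not quite meet, the near-miss described in the text.
```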
Current evidence The discovery of neutrino oscillations indicates that the Standard Model is incomplete, but there is currently no clear evidence that nature is described by any Grand Unified Theory. Neutrino oscillations have led to renewed interest toward certain GUT such as . One of the few possible experimental tests of certain GUT is proton decay and also fermion masses. There are a few more special tests for supersymmetric GUT. However, minimum proton lifetimes from research (at or exceeding the ~ year range) have ruled out simpler GUTs and most non-SUSY models. The maximum upper limit on proton lifetime (if unstable), is calculated at 6× years for SUSY models and 1.4× years for minimal non-SUSY GUTs. The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common length scale called the GUT scale and equal approximately to  GeV (slightly less than the Planck energy of  GeV), which is somewhat suggestive. This interesting numerical observation is called the gauge coupling unification, and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same by postulating, for instance, that ordinary (non supersymmetric) models break with an intermediate gauge scale, such as the one of Pati–Salam group. See also B − L quantum number Classical unified field theories Paradigm shift Physics beyond the Standard Model Theory of everything X and Y bosons Notes References Further reading Stephen Hawking, A Brief History of Time, includes a brief popular overview. External links The Algebra of Grand Unified Theories Particle physics Physical cosmology Physics beyond the Standard Model
Grand Unified Theory
[ "Physics", "Astronomy" ]
3,962
[ "Astronomical sub-disciplines", "Theoretical physics", "Unsolved problems in physics", "Astrophysics", "Particle physics", "Grand Unified Theory", "Physics beyond the Standard Model", "Physical cosmology" ]
12,644
https://en.wikipedia.org/wiki/Glycolysis
Glycolysis is the metabolic pathway that converts glucose (C6H12O6) into pyruvate and, in most organisms, occurs in the liquid part of cells (the cytosol). The free energy released in this process is used to form the high-energy molecules adenosine triphosphate (ATP) and reduced nicotinamide adenine dinucleotide (NADH). Glycolysis is a sequence of ten reactions catalyzed by enzymes. The wide occurrence of glycolysis in other species indicates that it is an ancient metabolic pathway. Indeed, the reactions that make up glycolysis and its parallel pathway, the pentose phosphate pathway, can occur in the oxygen-free conditions of the Archean oceans, also in the absence of enzymes, catalyzed by metal ions, meaning this is a plausible prebiotic pathway for abiogenesis. The most common type of glycolysis is the Embden–Meyerhof–Parnas (EMP) pathway, which was discovered by Gustav Embden, Otto Meyerhof, and Jakub Karol Parnas. Glycolysis also refers to other pathways, such as the Entner–Doudoroff pathway and various heterofermentative and homofermentative pathways. However, the discussion here will be limited to the Embden–Meyerhof–Parnas pathway. The glycolysis pathway can be separated into two phases: Investment phase – wherein ATP is consumed Yield phase – wherein more ATP is produced than originally consumed Overview The overall reaction of glycolysis is: C6H12O6 + 2 NAD+ + 2 ADP + 2 Pi → 2 pyruvate (CH3COCOO−) + 2 NADH + 2 H+ + 2 ATP + 2 H2O The use of symbols in this equation makes it appear unbalanced with respect to oxygen atoms, hydrogen atoms, and charges. Atom balance is maintained by the two phosphate (Pi) groups: Each exists in the form of a hydrogen phosphate anion (HPO42−), dissociating to contribute 2 H+ overall Each liberates an oxygen atom when it binds to an adenosine diphosphate (ADP) molecule, contributing 2O overall Charges are balanced by the difference between ADP and ATP. In the cellular environment, all three hydroxyl groups of ADP dissociate into −O− and H+, giving ADP3−, and this ion tends to exist in an ionic bond with Mg2+, giving ADPMg−. ATP behaves identically except that it has four hydroxyl groups, giving ATPMg2−. When these differences along with the true charges on the two phosphate groups are considered together, the net charges of −4 on each side are balanced. In high-oxygen (aerobic) conditions, eukaryotic cells can continue from glycolysis to metabolise the pyruvate through the citric acid cycle or the electron transport chain to produce significantly more ATP. Importantly, under low-oxygen (anaerobic) conditions, glycolysis is the only biochemical pathway in eukaryotes that can generate ATP, and, for many anaerobically respiring organisms, the most important producer of ATP. Therefore, many organisms have evolved fermentation pathways to recycle NAD+ to continue glycolysis to produce ATP for survival. These pathways include ethanol fermentation and lactic acid fermentation. History The modern understanding of the pathway of glycolysis took almost 100 years to fully learn. The combined results of many smaller experiments were required to understand the entire pathway. The first steps in understanding glycolysis began in the 19th century. For economic reasons, the French wine industry sought to investigate why wine sometimes turned distasteful, instead of fermenting into alcohol. The French scientist Louis Pasteur researched this issue during the 1850s. His experiments showed that alcohol fermentation occurs by the action of living microorganisms, yeasts, and that glucose consumption decreased under aerobic conditions (the Pasteur effect). 
The component steps of glycolysis were first analysed by the non-cellular fermentation experiments of Eduard Buchner during the 1890s. Buchner demonstrated that the conversion of glucose to ethanol was possible using a non-living extract of yeast, due to the action of enzymes in the extract. This experiment not only revolutionized biochemistry, but also allowed later scientists to analyze this pathway in a more controlled laboratory setting. In a series of experiments (1905–1911), scientists Arthur Harden and William Young discovered more pieces of glycolysis. They discovered the regulatory effects of ATP on glucose consumption during alcohol fermentation. They also shed light on the role of one compound as a glycolysis intermediate: fructose 1,6-bisphosphate. The elucidation of fructose 1,6-bisphosphate was accomplished by measuring CO2 levels when yeast juice was incubated with glucose. CO2 production increased rapidly, then slowed down. Harden and Young noted that this process would restart if an inorganic phosphate (Pi) was added to the mixture. Harden and Young deduced that this process produced organic phosphate esters, and further experiments allowed them to extract fructose diphosphate (F-1,6-DP). Arthur Harden and William Young, along with Nick Sheppard, determined, in a second experiment, that a heat-sensitive high-molecular-weight subcellular fraction (the enzymes) and a heat-insensitive low-molecular-weight cytoplasm fraction (ADP, ATP and NAD+ and other cofactors) are required together for fermentation to proceed. This experiment began by observing that dialyzed (purified) yeast juice could not ferment or even create a sugar phosphate. This mixture was rescued with the addition of undialyzed yeast extract that had been boiled. Boiling the yeast extract renders all proteins inactive (as it denatures them). The ability of boiled extract plus dialyzed juice to complete fermentation suggests that the cofactors were non-protein in character. In the 1920s Otto Meyerhof was able to link together some of the many individual pieces of glycolysis discovered by Buchner, Harden, and Young. Meyerhof and his team were able to extract different glycolytic enzymes from muscle tissue, and combine them to artificially create the pathway from glycogen to lactic acid. In one paper, Meyerhof and scientist Renate Junowicz-Kockolaty investigated the reaction that splits fructose 1,6-diphosphate into the two triose phosphates. Previous work proposed that the split occurred via 1,3-diphosphoglyceraldehyde plus an oxidizing enzyme and cozymase. Meyerhof and Junowicz found that the equilibrium constants for the isomerase and aldolase reactions were not affected by inorganic phosphates or any other cozymase or oxidizing enzymes. They further removed diphosphoglyceraldehyde as a possible intermediate in glycolysis. With all of these pieces available by the 1930s, Gustav Embden proposed a detailed, step-by-step outline of the pathway we now know as glycolysis. The biggest difficulties in determining the intricacies of the pathway were due to the very short lifetime and low steady-state concentrations of the intermediates of the fast glycolytic reactions. By the 1940s, Meyerhof, Embden and many other biochemists had finally completed the puzzle of glycolysis. The understanding of the isolated pathway has been expanded in the subsequent decades, to include further details of its regulation and integration with other metabolic pathways. 
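Before the step-by-step description in the next section, the net energy bookkeeping of the two phases introduced in the overview can be summarized in a short sketch; the per-step stoichiometry below is standard textbook material rather than a quotation from this article.

```python
# Bookkeeping sketch of the ATP and NADH balance of the Embden-Meyerhof-Parnas
# pathway, summarizing the two phases introduced in the overview. The per-step
# stoichiometry is standard textbook material, not taken verbatim from the source.

# (enzyme, ATP change, NADH change, copies per glucose)
steps = [
    ("hexokinase",                       -1,  0, 1),  # investment phase
    ("phosphofructokinase-1",            -1,  0, 1),  # investment phase
    ("glyceraldehyde-3-P dehydrogenase",  0, +1, 2),  # pay-off phase (x2 trioses)
    ("phosphoglycerate kinase",          +1,  0, 2),  # pay-off phase (x2 trioses)
    ("pyruvate kinase",                  +1,  0, 2),  # pay-off phase (x2 trioses)
]

atp = sum(d_atp * copies for _, d_atp, _, copies in steps)
nadh = sum(d_nadh * copies for _, _, d_nadh, copies in steps)

print(f"net ATP per glucose : {atp}")    # 2  (4 produced minus 2 invested)
print(f"net NADH per glucose: {nadh}")   # 2
```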
Sequence of reactions Summary of reactions Preparatory phase The first five steps of Glycolysis are regarded as the preparatory (or investment) phase, since they consume energy to convert the glucose into two three-carbon sugar phosphates (G3P). Once glucose enters the cell, the first step is phosphorylation of glucose by a family of enzymes called hexokinases to form glucose 6-phosphate (G6P). This reaction consumes ATP, but it acts to keep the glucose concentration inside the cell low, promoting continuous transport of blood glucose into the cell through the plasma membrane transporters. In addition, phosphorylation blocks the glucose from leaking out – the cell lacks transporters for G6P, and free diffusion out of the cell is prevented due to the charged nature of G6P. Glucose may alternatively be formed from the phosphorolysis or hydrolysis of intracellular starch or glycogen. In animals, an isozyme of hexokinase called glucokinase is also used in the liver, which has a much lower affinity for glucose (Km in the vicinity of normal glycemia), and differs in regulatory properties. The different substrate affinity and alternate regulation of this enzyme are a reflection of the role of the liver in maintaining blood sugar levels. Cofactors: Mg2+ G6P is then rearranged into fructose 6-phosphate (F6P) by glucose phosphate isomerase. Fructose can also enter the glycolytic pathway by phosphorylation at this point. The change in structure is an isomerization, in which the G6P has been converted to F6P. The reaction requires an enzyme, phosphoglucose isomerase, to proceed. This reaction is freely reversible under normal cell conditions. However, it is often driven forward because of a low concentration of F6P, which is constantly consumed during the next step of glycolysis. Under conditions of high F6P concentration, this reaction readily runs in reverse. This phenomenon can be explained through Le Chatelier's Principle. Isomerization to a keto sugar is necessary for carbanion stabilization in the fourth reaction step (below). The energy expenditure of another ATP in this step is justified in 2 ways: The glycolytic process (up to this step) becomes irreversible, and the energy supplied destabilizes the molecule. Because the reaction catalyzed by phosphofructokinase 1 (PFK-1) is coupled to the hydrolysis of ATP (an energetically favorable step) it is, in essence, irreversible, and a different pathway must be used to do the reverse conversion during gluconeogenesis. This makes the reaction a key regulatory point (see below). Furthermore, the second phosphorylation event is necessary to allow the formation of two charged groups (rather than only one) in the subsequent step of glycolysis, ensuring the prevention of free diffusion of substrates out of the cell. The same reaction can also be catalyzed by pyrophosphate-dependent phosphofructokinase (PFP or PPi-PFK), which is found in most plants, some bacteria, archea, and protists, but not in animals. This enzyme uses pyrophosphate (PPi) as a phosphate donor instead of ATP. It is a reversible reaction, increasing the flexibility of glycolytic metabolism. A rarer ADP-dependent PFK enzyme variant has been identified in archaean species. Cofactors: Mg2+ Destabilizing the molecule in the previous reaction allows the hexose ring to be split by aldolase into two triose sugars: dihydroxyacetone phosphate (a ketose), and glyceraldehyde 3-phosphate (an aldose). 
There are two classes of aldolases: class I aldolases, present in animals and plants, and class II aldolases, present in fungi and bacteria; the two classes use different mechanisms in cleaving the ketose ring. Electrons delocalized in the carbon-carbon bond cleavage associate with the alcohol group. The resulting carbanion is stabilized by the structure of the carbanion itself via resonance charge distribution and by the presence of a charged ion prosthetic group. Triosephosphate isomerase rapidly interconverts dihydroxyacetone phosphate with glyceraldehyde 3-phosphate (GADP) that proceeds further into glycolysis. This is advantageous, as it directs dihydroxyacetone phosphate down the same pathway as glyceraldehyde 3-phosphate, simplifying regulation. Pay-off phase The second half of glycolysis is known as the pay-off phase, characterised by a net gain of the energy-rich molecules ATP and NADH. Since glucose leads to two triose sugars in the preparatory phase, each reaction in the pay-off phase occurs twice per glucose molecule. This yields 2 NADH molecules and 4 ATP molecules, leading to a net gain of 2 NADH molecules and 2 ATP molecules from the glycolytic pathway per glucose. The aldehyde groups of the triose sugars are oxidised, and inorganic phosphate is added to them, forming 1,3-bisphosphoglycerate. The hydrogen is used to reduce two molecules of NAD+, a hydrogen carrier, to give NADH + H+ for each triose. Hydrogen atom balance and charge balance are both maintained because the phosphate (Pi) group actually exists in the form of a hydrogen phosphate anion (), which dissociates to contribute the extra H+ ion and gives a net charge of -3 on both sides. Here, arsenate (), an anion akin to inorganic phosphate may replace phosphate as a substrate to form 1-arseno-3-phosphoglycerate. This, however, is unstable and readily hydrolyzes to form 3-phosphoglycerate, the intermediate in the next step of the pathway. As a consequence of bypassing this step, the molecule of ATP generated from 1-3 bisphosphoglycerate in the next reaction will not be made, even though the reaction proceeds. As a result, arsenate is an uncoupler of glycolysis. This step is the enzymatic transfer of a phosphate group from 1,3-bisphosphoglycerate to ADP by phosphoglycerate kinase, forming ATP and 3-phosphoglycerate. At this step, glycolysis has reached the break-even point: 2 molecules of ATP were consumed, and 2 new molecules have now been synthesized. This step, one of the two substrate-level phosphorylation steps, requires ADP; thus, when the cell has plenty of ATP (and little ADP), this reaction does not occur. Because ATP decays relatively quickly when it is not metabolized, this is an important regulatory point in the glycolytic pathway. ADP actually exists as ADPMg−, and ATP as ATPMg2−, balancing the charges at −5 both sides. Cofactors: Mg2+ Phosphoglycerate mutase isomerises 3-phosphoglycerate into 2-phosphoglycerate. Enolase next converts 2-phosphoglycerate to phosphoenolpyruvate. This reaction is an elimination reaction involving an E1cB mechanism. Cofactors: 2 Mg2+, one "conformational" ion to coordinate with the carboxylate group of the substrate, and one "catalytic" ion that participates in the dehydration. A final substrate-level phosphorylation now forms a molecule of pyruvate and a molecule of ATP by means of the enzyme pyruvate kinase. This serves as an additional regulatory step, similar to the phosphoglycerate kinase step. 
Cofactors: Mg2+ Biochemical logic The existence of more than one point of regulation indicates that intermediates between those points enter and leave the glycolysis pathway by other processes. For example, in the first regulated step, hexokinase converts glucose into glucose-6-phosphate. Instead of continuing through the glycolysis pathway, this intermediate can be converted into glucose storage molecules, such as glycogen or starch. The reverse reaction, breaking down, e.g., glycogen, produces mainly glucose-6-phosphate; very little free glucose is formed in the reaction. The glucose-6-phosphate so produced can enter glycolysis after the first control point. In the second regulated step (the third step of glycolysis), phosphofructokinase converts fructose-6-phosphate into fructose-1,6-bisphosphate, which then is converted into glyceraldehyde-3-phosphate and dihydroxyacetone phosphate. The dihydroxyacetone phosphate can be removed from glycolysis by conversion into glycerol-3-phosphate, which can be used to form triglycerides. Conversely, triglycerides can be broken down into fatty acids and glycerol; the latter, in turn, can be converted into dihydroxyacetone phosphate, which can enter glycolysis after the second control point. Free energy changes The change in free energy, ΔG, for each step in the glycolysis pathway can be calculated using ΔG = ΔG°′ + RT ln Q, where Q is the reaction quotient. This requires knowing the concentrations of the metabolites. All of these values are available for erythrocytes, with the exception of the concentrations of NAD+ and NADH. The ratio of NAD+ to NADH in the cytoplasm is approximately 1000, which makes the oxidation of glyceraldehyde-3-phosphate (step 6) more favourable. Using the measured concentrations of each step, and the standard free energy changes, the actual free energy change can be calculated. (Neglecting this is very common—the ΔG of ATP hydrolysis in cells is not the standard free energy change of ATP hydrolysis quoted in textbooks). From measuring the physiological concentrations of metabolites in an erythrocyte it seems that about seven of the steps in glycolysis are in equilibrium for that cell type. Three of the steps—the ones with large negative free energy changes—are not in equilibrium and are referred to as irreversible; such steps are often subject to regulation. Step 5 is a side-reaction that can decrease or increase the concentration of the intermediate glyceraldehyde-3-phosphate. That compound is converted to dihydroxyacetone phosphate by the enzyme triose phosphate isomerase, which is a catalytically perfect enzyme; its rate is so fast that the reaction can be assumed to be in equilibrium. The fact that ΔG is not zero indicates that the actual concentrations in the erythrocyte are not accurately known. Regulation The enzymes that catalyse glycolysis are regulated via a range of biological mechanisms in order to control overall flux through the pathway. This is vital for both homeostasis in a static environment, and metabolic adaptation to a changing environment or need. The details of regulation for some enzymes are highly conserved between species, whereas others vary widely. Gene expression: Firstly, the cellular concentrations of glycolytic enzymes are modulated via regulation of gene expression via transcription factors, with several glycolysis enzymes themselves acting as regulatory protein kinases in the nucleus. 
Allosteric inhibition and activation by metabolites: In particular end-product inhibition of regulated enzymes by metabolites such as ATP serves as negative feedback regulation of the pathway. Allosteric inhibition and activation by Protein-protein interactions (PPI). Indeed, some proteins interact with and regulate multiple glycolytic enzymes. Post-translational modification (PTM). In particular, phosphorylation and dephosphorylation is a key mechanism of regulation of pyruvate kinase in the liver. Localization Regulation by insulin in animals In animals, regulation of blood glucose levels by the pancreas in conjunction with the liver is a vital part of homeostasis. The beta cells in the pancreatic islets are sensitive to the blood glucose concentration. A rise in the blood glucose concentration causes them to release insulin into the blood, which has an effect particularly on the liver, but also on fat and muscle cells, causing these tissues to remove glucose from the blood. When the blood sugar falls the pancreatic beta cells cease insulin production, but, instead, stimulate the neighboring pancreatic alpha cells to release glucagon into the blood. This, in turn, causes the liver to release glucose into the blood by breaking down stored glycogen, and by means of gluconeogenesis. If the fall in the blood glucose level is particularly rapid or severe, other glucose sensors cause the release of epinephrine from the adrenal glands into the blood. This has the same action as glucagon on glucose metabolism, but its effect is more pronounced. In the liver glucagon and epinephrine cause the phosphorylation of the key, regulated enzymes of glycolysis, fatty acid synthesis, cholesterol synthesis, gluconeogenesis, and glycogenolysis. Insulin has the opposite effect on these enzymes. The phosphorylation and dephosphorylation of these enzymes (ultimately in response to the glucose level in the blood) is the dominant manner by which these pathways are controlled in the liver, fat, and muscle cells. Thus the phosphorylation of phosphofructokinase inhibits glycolysis, whereas its dephosphorylation through the action of insulin stimulates glycolysis. Regulated Enzymes in Glycolysis The three regulatory enzymes are hexokinase (or glucokinase in the liver), phosphofructokinase, and pyruvate kinase. The flux through the glycolytic pathway is adjusted in response to conditions both inside and outside the cell. The internal factors that regulate glycolysis do so primarily to provide ATP in adequate quantities for the cell's needs. The external factors act primarily on the liver, fat tissue, and muscles, which can remove large quantities of glucose from the blood after meals (thus preventing hyperglycemia by storing the excess glucose as fat or glycogen, depending on the tissue type). The liver is also capable of releasing glucose into the blood between meals, during fasting, and exercise thus preventing hypoglycemia by means of glycogenolysis and gluconeogenesis. These latter reactions coincide with the halting of glycolysis in the liver. In addition hexokinase and glucokinase act independently of the hormonal effects as controls at the entry points of glucose into the cells of different tissues. Hexokinase responds to the glucose-6-phosphate (G6P) level in the cell, or, in the case of glucokinase, to the blood sugar level in the blood to impart entirely intracellular controls of the glycolytic pathway in different tissues (see below). 
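The different glucose affinities of hexokinase and glucokinase mentioned above can be illustrated with simple Michaelis–Menten arithmetic. The sketch below is a hedged toy calculation, not material from the article: the Km values (about 0.1 mM for hexokinase and about 8 mM for glucokinase) are rounded textbook figures used only for illustration, and the equal Vmax is an arbitrary normalization.

```python
# Toy comparison of hexokinase and glucokinase activity versus glucose concentration,
# using the Michaelis-Menten rate law v = Vmax * [S] / (Km + [S]).
# Km values are illustrative round numbers, not measurements from the article.

def michaelis_menten(s_mM, km_mM, vmax=1.0):
    """Reaction rate (arbitrary units) at substrate concentration s_mM."""
    return vmax * s_mM / (km_mM + s_mM)

KM_HEXOKINASE_mM = 0.1   # assumed low Km: near-saturated at normal blood glucose
KM_GLUCOKINASE_mM = 8.0  # assumed Km near or above normal glycaemia (~5 mM)

for glucose_mM in (1.0, 5.0, 10.0, 20.0):   # fasting, normal, post-meal, hyperglycaemic
    hk = michaelis_menten(glucose_mM, KM_HEXOKINASE_mM)
    gk = michaelis_menten(glucose_mM, KM_GLUCOKINASE_mM)
    print(f"glucose {glucose_mM:5.1f} mM: hexokinase {hk:.2f}, glucokinase {gk:.2f}")

# Hexokinase is close to saturation at all physiological glucose levels, while glucokinase
# activity keeps rising with blood glucose, which is why the liver enzyme can act as a sensor.
```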
When glucose has been converted into G6P by hexokinase or glucokinase, it can either be converted to glucose-1-phosphate (G1P) for conversion to glycogen, or it is alternatively converted by glycolysis to pyruvate, which enters the mitochondrion where it is converted into acetyl-CoA and then into citrate. Excess citrate is exported from the mitochondrion back into the cytosol, where ATP citrate lyase regenerates acetyl-CoA and oxaloacetate (OAA). The acetyl-CoA is then used for fatty acid synthesis and cholesterol synthesis, two important ways of utilizing excess glucose when its concentration is high in blood. The regulated enzymes catalyzing these reactions perform these functions when they have been dephosphorylated through the action of insulin on the liver cells. Between meals, during fasting, exercise or hypoglycemia, glucagon and epinephrine are released into the blood. This causes liver glycogen to be converted back to G6P, and then converted to glucose by the liver-specific enzyme glucose 6-phosphatase and released into the blood. Glucagon and epinephrine also stimulate gluconeogenesis, which converts non-carbohydrate substrates into G6P, which joins the G6P derived from glycogen, or substitutes for it when the liver glycogen stores have been depleted. This is critical for brain function, since the brain utilizes glucose as an energy source under most conditions. The simultaneous phosphorylation of, particularly, phosphofructokinase, but also, to a certain extent, pyruvate kinase, prevents glycolysis occurring at the same time as gluconeogenesis and glycogenolysis. Hexokinase and glucokinase All cells contain the enzyme hexokinase, which catalyzes the conversion of glucose that has entered the cell into glucose-6-phosphate (G6P). Since the cell membrane is impervious to G6P, hexokinase essentially acts to transport glucose into the cells from which it can then no longer escape. Hexokinase is inhibited by high levels of G6P in the cell. Thus the rate of entry of glucose into cells partially depends on how fast G6P can be disposed of by glycolysis, and by glycogen synthesis (in the cells which store glycogen, namely liver and muscles). Glucokinase, unlike hexokinase, is not inhibited by G6P. It occurs in liver cells, and will only phosphorylate the glucose entering the cell to form G6P, when the glucose in the blood is abundant. This being the first step in the glycolytic pathway in the liver, it therefore imparts an additional layer of control of the glycolytic pathway in this organ. Phosphofructokinase Phosphofructokinase is an important control point in the glycolytic pathway, since it is one of the irreversible steps and has key allosteric effectors, AMP and fructose 2,6-bisphosphate (F2,6BP). F2,6BP is a very potent activator of phosphofructokinase (PFK-1) that is synthesized when F6P is phosphorylated by a second phosphofructokinase (PFK2). In the liver, when blood sugar is low and glucagon elevates cAMP, PFK2 is phosphorylated by protein kinase A. The phosphorylation inactivates PFK2, and another domain on this protein becomes active as fructose bisphosphatase-2, which converts F2,6BP back to F6P. Both glucagon and epinephrine cause high levels of cAMP in the liver. The result of lower levels of liver F2,6BP is a decrease in activity of phosphofructokinase and an increase in activity of fructose 1,6-bisphosphatase, so that gluconeogenesis (in essence, "glycolysis in reverse") is favored. 
This is consistent with the role of the liver in such situations, since the response of the liver to these hormones is to release glucose to the blood. ATP competes with AMP for the allosteric effector site on the PFK enzyme. ATP concentrations in cells are much higher than those of AMP, typically 100-fold higher, but the concentration of ATP does not change more than about 10% under physiological conditions, whereas a 10% drop in ATP results in a 6-fold increase in AMP. Thus, the relevance of ATP as an allosteric effector is questionable. An increase in AMP is a consequence of a decrease in energy charge in the cell. Citrate inhibits phosphofructokinase when tested in vitro by enhancing the inhibitory effect of ATP. However, it is doubtful that this is a meaningful effect in vivo, because citrate in the cytosol is utilized mainly for conversion to acetyl-CoA for fatty acid and cholesterol synthesis. TIGAR, a p53-induced enzyme, is responsible for the regulation of phosphofructokinase and acts to protect against oxidative stress. TIGAR is a single enzyme with dual function that regulates F2,6BP. It can behave as a phosphatase (fructose-2,6-bisphosphatase) which cleaves the phosphate at carbon-2 producing F6P. It can also behave as a kinase (PFK2) adding a phosphate onto carbon-2 of F6P which produces F2,6BP. In humans, the TIGAR protein is encoded by the C12orf5 gene. The TIGAR enzyme will hinder the forward progression of glycolysis, by creating a build-up of fructose-6-phosphate (F6P) which is isomerized into glucose-6-phosphate (G6P). The accumulation of G6P will shunt carbons into the pentose phosphate pathway. Pyruvate kinase The final step of glycolysis is catalysed by pyruvate kinase to form pyruvate and another ATP. It is regulated by a range of different transcriptional, covalent and non-covalent regulation mechanisms, which can vary widely in different tissues. For example, in the liver, pyruvate kinase is regulated based on glucose availability. During fasting (no glucose available), glucagon activates protein kinase A which phosphorylates pyruvate kinase to inhibit it. An increase in blood sugar leads to secretion of insulin, which activates protein phosphatase 1, leading to dephosphorylation and re-activation of pyruvate kinase. These controls prevent pyruvate kinase from being active at the same time as the enzymes that catalyze the reverse reaction (pyruvate carboxylase and phosphoenolpyruvate carboxykinase), preventing a futile cycle. Conversely, the isoform of pyruvate kinase found in muscle is not affected by protein kinase A (which is activated by adrenaline in that tissue), so that glycolysis remains active in muscles even during fasting. Post-glycolysis processes The overall process of glycolysis is: Glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O If glycolysis were to continue indefinitely, all of the NAD+ would be used up, and glycolysis would stop. To allow glycolysis to continue, organisms must be able to oxidize NADH back to NAD+. How this is performed depends on which external electron acceptor is available. Anoxic regeneration of NAD+ One method of doing this is to simply have the pyruvate do the oxidation; in this process, pyruvate is converted to lactate (the conjugate base of lactic acid) in a process called lactic acid fermentation: Pyruvate + NADH + H+ → Lactate + NAD+ This process occurs in the bacteria involved in making yogurt (the lactic acid causes the milk to curdle). 
This process also occurs in animals under hypoxic (or partially anaerobic) conditions, found, for example, in overworked muscles that are starved of oxygen. In many tissues, this is a cellular last resort for energy; most animal tissue cannot tolerate anaerobic conditions for an extended period of time. Some organisms, such as yeast, convert NADH back to NAD+ in a process called ethanol fermentation. In this process, the pyruvate is converted first to acetaldehyde and carbon dioxide, and then to ethanol. Lactic acid fermentation and ethanol fermentation can occur in the absence of oxygen. This anaerobic fermentation allows many single-cell organisms to use glycolysis as their only energy source. Anoxic regeneration of NAD+ is only an effective means of energy production during short, intense exercise in vertebrates, for a period ranging from 10 seconds to 2 minutes during a maximal effort in humans. (At lower exercise intensities it can sustain muscle activity in diving animals, such as seals, whales and other aquatic vertebrates, for very much longer periods of time.) Under these conditions NAD+ is replenished by NADH donating its electrons to pyruvate to form lactate. This produces 2 ATP molecules per glucose molecule, or about 5% of glucose's energy potential (38 ATP molecules in bacteria). But the speed at which ATP is produced in this manner is about 100 times that of oxidative phosphorylation. The pH in the cytoplasm quickly drops when hydrogen ions accumulate in the muscle, eventually inhibiting the enzymes involved in glycolysis. The burning sensation in muscles during hard exercise can be attributed to the release of hydrogen ions during the shift to glucose fermentation from glucose oxidation to carbon dioxide and water, when aerobic metabolism can no longer keep pace with the energy demands of the muscles. These hydrogen ions form a part of lactic acid. The body falls back on this less efficient but faster method of producing ATP under low oxygen conditions. This is thought to have been the primary means of energy production in earlier organisms before oxygen reached high concentrations in the atmosphere between 2000 and 2500 million years ago, and thus would represent a more ancient form of energy production than the aerobic replenishment of NAD+ in cells. The liver in mammals gets rid of this excess lactate by transforming it back into pyruvate under aerobic conditions; see Cori cycle. Fermentation of pyruvate to lactate is sometimes also called "anaerobic glycolysis", however, glycolysis ends with the production of pyruvate regardless of the presence or absence of oxygen. In the above two examples of fermentation, NADH is oxidized by transferring two electrons to pyruvate. However, anaerobic bacteria use a wide variety of compounds as the terminal electron acceptors in cellular respiration: nitrogenous compounds, such as nitrates and nitrites; sulfur compounds, such as sulfates, sulfites, sulfur dioxide, and elemental sulfur; carbon dioxide; iron compounds; manganese compounds; cobalt compounds; and uranium compounds. Aerobic regeneration of NAD+ and further catabolism of pyruvate In aerobic eukaryotes, a complex mechanism has developed to use the oxygen in air as the final electron acceptor, in a process called oxidative phosphorylation. Aerobic prokaryotes, which lack mitochondria, use a variety of simpler mechanisms. 
Firstly, the NADH + H+ generated by glycolysis has to be transferred to the mitochondrion to be oxidized, and thus to regenerate the NAD+ necessary for glycolysis to continue. However, the inner mitochondrial membrane is impermeable to NADH and NAD+. Use is therefore made of two "shuttles" to transport the electrons from NADH across the mitochondrial membrane. They are the malate-aspartate shuttle and the glycerol phosphate shuttle. In the former, the electrons from NADH are transferred to cytosolic oxaloacetate to form malate. The malate then traverses the inner mitochondrial membrane into the mitochondrial matrix, where it is reoxidized by NAD+ forming intra-mitochondrial oxaloacetate and NADH. The oxaloacetate is then re-cycled to the cytosol via its conversion to aspartate which is readily transported out of the mitochondrion. In the glycerol phosphate shuttle, electrons from cytosolic NADH are transferred to dihydroxyacetone phosphate to form glycerol-3-phosphate which readily traverses the outer mitochondrial membrane. Glycerol-3-phosphate is then reoxidized to dihydroxyacetone phosphate, donating its electrons to FAD instead of NAD+. This reaction takes place on the inner mitochondrial membrane, allowing FADH2 to donate its electrons directly to coenzyme Q (ubiquinone) which is part of the electron transport chain which ultimately transfers electrons to molecular oxygen, with the formation of water, and the release of energy eventually captured in the form of ATP. The glycolytic end-product, pyruvate (plus NAD+) is converted to acetyl-CoA, CO2, and NADH + H+ within the mitochondria in a process called pyruvate decarboxylation. The resulting acetyl-CoA enters the citric acid cycle (or Krebs cycle), where the acetyl group of the acetyl-CoA is converted into carbon dioxide by two decarboxylation reactions with the formation of yet more intra-mitochondrial NADH + H+. The intra-mitochondrial NADH + H+ is oxidized to NAD+ by the electron transport chain, using oxygen as the final electron acceptor to form water. The energy released during this process is used to create a hydrogen ion (or proton) gradient across the inner membrane of the mitochondrion. Finally, the proton gradient is used to produce about 2.5 ATP for every NADH + H+ oxidized in a process called oxidative phosphorylation. Conversion of carbohydrates into fatty acids and cholesterol The pyruvate produced by glycolysis is an important intermediary in the conversion of carbohydrates into fatty acids and cholesterol. This occurs via the conversion of pyruvate into acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into the cytosol where the synthesis of fatty acids and cholesterol occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate (and then back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA can be carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids, or it can be combined with acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA), the substrate for HMG-CoA reductase, the rate-limiting step controlling the synthesis of cholesterol. 
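The ATP arithmetic described above (2 ATP from glycolysis itself, roughly 2.5 ATP per NADH oxidized, and a lower yield for electrons entering at the FADH2 level) can be put together in a short bookkeeping sketch. This is an illustrative estimate rather than a figure from the article: the per-carrier yields and carrier counts are the usual rounded textbook values, and the true total depends on which shuttle carries the cytosolic NADH.

```python
# Rough ATP bookkeeping for complete aerobic oxidation of one glucose molecule.
# Yields per carrier (2.5 ATP/NADH, 1.5 ATP/FADH2) are rounded textbook estimates.

ATP_PER_NADH = 2.5
ATP_PER_FADH2 = 1.5

def aerobic_atp_yield(shuttle="malate-aspartate"):
    substrate_level = 2 + 2        # 2 ATP from glycolysis + 2 GTP/ATP from the citric acid cycle
    cytosolic_nadh = 2             # from glycolysis (step 6), must be shuttled into the mitochondrion
    mitochondrial_nadh = 2 + 6     # 2 from pyruvate decarboxylation + 6 from the citric acid cycle
    fadh2 = 2                      # from the citric acid cycle (succinate dehydrogenase)

    if shuttle == "malate-aspartate":      # cytosolic NADH re-emerges as mitochondrial NADH
        carrier_atp = (cytosolic_nadh + mitochondrial_nadh) * ATP_PER_NADH + fadh2 * ATP_PER_FADH2
    elif shuttle == "glycerol phosphate":  # cytosolic NADH enters at the FADH2 level instead
        carrier_atp = mitochondrial_nadh * ATP_PER_NADH + (fadh2 + cytosolic_nadh) * ATP_PER_FADH2
    else:
        raise ValueError("unknown shuttle")
    return substrate_level + carrier_atp

for shuttle in ("malate-aspartate", "glycerol phosphate"):
    print(f"{shuttle}: ~{aerobic_atp_yield(shuttle):.0f} ATP per glucose")
# Compare with only 2 ATP per glucose when NAD+ is regenerated by fermentation.
```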
Cholesterol can be used as is, as a structural component of cellular membranes, or it can be used to synthesize the steroid hormones, bile salts, and vitamin D. Conversion of pyruvate into oxaloacetate for the citric acid cycle Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction (from the Greek meaning to "fill up"), increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in heart and skeletal muscle) are suddenly increased by activity. In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of oxaloacetate greatly increases the amounts of all the citric acid intermediates, thereby increasing the cycle's capacity to metabolize acetyl-CoA, converting its acetate component into CO2 and water, with the release of enough energy to form 11 ATP and 1 GTP molecule for each additional molecule of acetyl-CoA that combines with oxaloacetate in the cycle. To cataplerotically remove oxaloacetate from the citric acid cycle, malate can be transported from the mitochondrion into the cytoplasm, decreasing the amount of oxaloacetate that can be regenerated. Furthermore, citric acid intermediates are constantly used to form a variety of substances such as the purines, pyrimidines and porphyrins. Intermediates for other pathways This article concentrates on the catabolic role of glycolysis with regard to converting potential chemical energy to usable chemical energy during the oxidation of glucose to pyruvate. Many of the metabolites in the glycolytic pathway are also used by anabolic pathways, and, as a consequence, flux through the pathway is critical to maintain a supply of carbon skeletons for biosynthesis. The following metabolic pathways, among many others, are all strongly reliant on glycolysis as a source of metabolites: Pentose phosphate pathway, which begins with the dehydrogenation of glucose-6-phosphate, the first intermediate to be produced by glycolysis, produces various pentose sugars, and NADPH for the synthesis of fatty acids and cholesterol. Glycogen synthesis also starts with glucose-6-phosphate at the beginning of the glycolytic pathway. Glycerol, for the formation of triglycerides and phospholipids, is produced from the glycolytic intermediate dihydroxyacetone phosphate. Various post-glycolytic pathways: Fatty acid synthesis Cholesterol synthesis The citric acid cycle which in turn leads to: Amino acid synthesis Nucleotide synthesis Tetrapyrrole synthesis Although gluconeogenesis and glycolysis share many intermediates, the one is not functionally a branch or tributary of the other. There are two regulatory steps in both pathways which, when active in the one pathway, are automatically inactive in the other. The two processes can therefore not be simultaneously active. 
Indeed, if both sets of reactions were highly active at the same time the net result would be the hydrolysis of four high-energy phosphate bonds (two ATP and two GTP) per reaction cycle. NAD+ is the oxidizing agent in glycolysis, as it is in most other energy-yielding metabolic reactions (e.g. beta-oxidation of fatty acids, and during the citric acid cycle). The NADH thus produced is primarily used to ultimately transfer electrons to O2 to produce water, or, when O2 is not available, to produce compounds such as lactate or ethanol (see Anoxic regeneration of NAD+ above). NADH is rarely used for synthetic processes, the notable exception being gluconeogenesis. During fatty acid and cholesterol synthesis the reducing agent is NADPH. This difference exemplifies a general principle that NADPH is consumed during biosynthetic reactions, whereas NADH is generated in energy-yielding reactions. The source of the NADPH is two-fold. When malate is oxidatively decarboxylated by "NADP+-linked malic enzyme", pyruvate, CO2, and NADPH are formed. NADPH is also formed by the pentose phosphate pathway which converts glucose into ribose, which can be used in synthesis of nucleotides and nucleic acids, or it can be catabolized to pyruvate. Glycolysis in disease Diabetes Cellular uptake of glucose occurs in response to insulin signals, and glucose is subsequently broken down through glycolysis, lowering blood sugar levels. However, insulin resistance or low insulin levels seen in diabetes result in hyperglycemia, where glucose levels in the blood rise and glucose is not properly taken up by cells. Hepatocytes further contribute to this hyperglycemia through gluconeogenesis. Glycolysis in hepatocytes controls hepatic glucose production, and when glucose is overproduced by the liver without having a means of being broken down by the body, hyperglycemia results. Genetic diseases Glycolytic mutations are generally rare due to the importance of the metabolic pathway; the majority of occurring mutations result in an inability of the cell to respire, and therefore cause the death of the cell at an early stage. However, some mutations (glycogen storage diseases and other inborn errors of carbohydrate metabolism) are seen with one notable example being pyruvate kinase deficiency, leading to chronic hemolytic anemia. In combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, glycolysis is reduced by approximately 50%, which is caused by reduced lipoylation of mitochondrial enzymes such as the pyruvate dehydrogenase complex and α-ketoglutarate dehydrogenase complex. Cancer Malignant tumor cells perform glycolysis at a rate that is ten times faster than their noncancerous tissue counterparts. During their genesis, limited capillary support often results in hypoxia (decreased O2 supply) within the tumor cells. Thus, these cells rely on anaerobic metabolic processes such as glycolysis for ATP (adenosine triphosphate). Some tumor cells overexpress specific glycolytic enzymes which result in higher rates of glycolysis. Often these enzymes are isoenzymes of traditional glycolysis enzymes that vary in their susceptibility to traditional feedback inhibition. The increase in glycolytic activity ultimately counteracts the effects of hypoxia by generating sufficient ATP from this anaerobic pathway. This phenomenon was first described in 1930 by Otto Warburg and is referred to as the Warburg effect. 
The Warburg hypothesis claims that cancer is primarily caused by dysfunctionality in mitochondrial metabolism, rather than because of the uncontrolled growth of cells. A number of theories have been advanced to explain the Warburg effect. One such theory suggests that the increased glycolysis is a normal protective process of the body and that malignant change could be primarily caused by energy metabolism. This high glycolysis rate has important medical applications, as high aerobic glycolysis by malignant tumors is utilized clinically to diagnose and monitor treatment responses of cancers by imaging uptake of 2-18F-2-deoxyglucose (FDG) (a radioactive modified hexokinase substrate) with positron emission tomography (PET). There is ongoing research to affect mitochondrial metabolism and treat cancer by reducing glycolysis and thus starving cancerous cells in various new ways, including a ketogenic diet. Interactive pathway map The diagram below shows human protein names. Names in other organisms may be different and the number of isozymes (such as HK1, HK2, ...) is likely to be different too. Alternative nomenclature Some of the metabolites in glycolysis have alternative names and nomenclature. In part, this is because some of them are common to other pathways, such as the Calvin cycle. Structure of glycolysis components in Fischer projections and polygonal model The intermediates of glycolysis depicted in Fischer projections show the chemical changing step by step. Such image can be compared to polygonal model representation. See also Carbohydrate catabolism Citric acid cycle Cori cycle Fermentation (biochemistry) Gluconeogenesis Glycolytic oscillation Glycogenoses (glycogen storage diseases) Inborn errors of carbohydrate metabolism Pentose phosphate pathway Pyruvate decarboxylation Triose kinase References External links A Detailed Glycolysis Animation provided by IUBMB (Adobe Flash Required) The Glycolytic enzymes in Glycolysis at RCSB PDB Glycolytic cycle with animations at wdv.com Metabolism, Cellular Respiration and Photosynthesis - The Virtual Library of Biochemistry, Molecular Biology and Cell Biology The chemical logic behind glycolysis at ufp.pt Expasy biochemical pathways poster at ExPASy metpath: Interactive representation of glycolysis Biochemical reactions Carbohydrates Cellular respiration Metabolic pathways
Glycolysis
[ "Chemistry", "Biology" ]
10,402
[ "Biomolecules by chemical classification", "Carbohydrate metabolism", "Carbohydrates", "Cellular respiration", "Biochemistry", "Glycolysis", "Biochemical reactions", "Organic compounds", "Carbohydrate chemistry", "Metabolic pathways", "Metabolism" ]
12,666
https://en.wikipedia.org/wiki/Gluon
A gluon is a type of massless elementary particle that mediates the strong interaction between quarks, acting as the exchange particle for the interaction. Gluons are massless vector bosons, thereby having a spin of 1. Through the strong interaction, gluons bind quarks into groups according to quantum chromodynamics (QCD), forming hadrons such as protons and neutrons. Gluons carry the color charge of the strong interaction, thereby participating in the strong interaction as well as mediating it. Because gluons carry the color charge, QCD is more difficult to analyze compared to quantum electrodynamics (QED) where the photon carries no electric charge. The term was coined by Murray Gell-Mann in 1962 because of the particles' similarity to an adhesive or glue that keeps the nucleus together. Together with the quarks, these particles were referred to as partons by Richard Feynman. Properties The gluon is a vector boson, which means it has a spin of 1. While massive spin-1 particles have three polarization states, massless gauge bosons like the gluon have only two polarization states because gauge invariance requires the field polarization to be transverse to the direction that the gluon is traveling. In quantum field theory, unbroken gauge invariance requires that gauge bosons have zero mass. Experiments limit the gluon's rest mass (if any) to less than a few MeV/c². The gluon has negative intrinsic parity. Counting gluons There are eight independent types of gluons in QCD. This is unlike the photon of QED or the three W and Z bosons of the weak interaction. Additionally, gluons are subject to the color charge phenomenon. Quarks carry three types of color charge; antiquarks carry three types of anticolor. Gluons carry both color and anticolor. This gives nine possible combinations of color and anticolor in gluons. The following is a list of those combinations (and their schematic names): red–antired red–antigreen red–antiblue green–antired green–antigreen green–antiblue blue–antired blue–antigreen blue–antiblue These possible combinations are only effective states, not the actual observed color states of gluons. To understand how they are combined, it is necessary to consider the mathematics of color charge in more detail. Color singlet states The stable strongly interacting particles, including hadrons like the proton or the neutron, are observed to be "colorless". More precisely, they are in a "color singlet" state, and mathematically analogous to a spin singlet state. The states allow interaction with other color singlets, but not other color states; because long-range gluon interactions do not exist, this illustrates that gluons in the singlet state do not exist either. The color singlet state is: (rr̄ + bb̄ + gḡ)/√3. If one could measure the color of the state, there would be equal probabilities of it being red–antired, blue–antiblue, or green–antigreen. Eight color states There are eight remaining independent color states corresponding to the "eight types" or "eight colors" of gluons. Since the states can be mixed together, there are multiple ways of presenting these states. These are known as the "color octet", and a commonly used list is: (rb̄ + br̄)/√2, −i(rb̄ − br̄)/√2, (rḡ + gr̄)/√2, −i(rḡ − gr̄)/√2, (bḡ + gb̄)/√2, −i(bḡ − gb̄)/√2, (rr̄ − bb̄)/√2, and (rr̄ + bb̄ − 2gḡ)/√6. These are equivalent to the Gell-Mann matrices. The critical feature of these particular eight states is that they are linearly independent, and also independent of the singlet state, hence 3² − 1 or 2³. There is no way to add any combination of these states to produce any others. It is also impossible to add them to make rr̄, gḡ, or bb̄; otherwise the forbidden singlet state could also be made. 
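The counting argument above (nine colour–anticolour combinations, one forbidden singlet, eight physical gluon states) can be checked with a little linear algebra. The sketch below is an illustrative calculation and not part of the article: it represents each colour–anticolour combination as a 3×3 matrix, identifies the singlet with the trace (identity) part, and confirms that the traceless remainder is 8-dimensional, i.e. N² − 1 for SU(3).

```python
import numpy as np

N = 3  # three colours: r, b, g

# The nine colour-anticolour combinations correspond to the matrices E_ij = |i><j|.
basis = [np.eye(N)[:, [i]] @ np.eye(N)[[j], :] for i in range(N) for j in range(N)]

# The colour singlet is the normalized identity, (rr̄ + bb̄ + gḡ)/sqrt(3).
singlet = np.eye(N) / np.sqrt(N)

# Project each combination onto the part orthogonal to the singlet (i.e. make it traceless).
traceless = [m - np.trace(m) / N * np.eye(N) for m in basis]

# The dimension of the space spanned by the traceless parts is the number of gluons.
stacked = np.stack([m.flatten() for m in traceless])
print("independent gluon states:", np.linalg.matrix_rank(stacked))  # 8
print("expected N**2 - 1       :", N**2 - 1)                        # 8
```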
There are many other possible choices, but all are mathematically equivalent, at least equally complicated, and give the same physical results. Group theory details Formally, QCD is a gauge theory with SU(3) gauge symmetry. Quarks are introduced as spinors in Nf flavors, each in the fundamental representation (triplet, denoted 3) of the color gauge group, SU(3). The gluons are vectors in the adjoint representation (octets, denoted 8) of color SU(3). For a general gauge group, the number of force-carriers, like photons or gluons, is always equal to the dimension of the adjoint representation. For the simple case of SU(N), the dimension of this representation is N² − 1. In group theory, there are no color singlet gluons because quantum chromodynamics has an SU(3) rather than a U(3) symmetry. There is no known a priori reason for one group to be preferred over the other, but as discussed above, the experimental evidence supports SU(3). If the group were U(3), the ninth (colorless singlet) gluon would behave like a "second photon" and not like the other eight gluons. Confinement Since gluons themselves carry color charge, they participate in strong interactions. These gluon–gluon interactions constrain color fields to string-like objects called "flux tubes", which exert constant force when stretched. Due to this force, quarks are confined within composite particles called hadrons. This effectively limits the range of the strong interaction to about 10⁻¹⁵ meters, roughly the size of a nucleon. Beyond a certain distance, the energy of the flux tube binding two quarks increases linearly. At a large enough distance, it becomes energetically more favorable to pull a quark–antiquark pair out of the vacuum rather than increase the length of the flux tube. One consequence of the hadron-confinement property of gluons is that they are not directly involved in the nuclear forces between hadrons. The force mediators for these are other hadrons called mesons. Although in the normal phase of QCD single gluons may not travel freely, it is predicted that there exist hadrons that are formed entirely of gluons — called glueballs. There are also conjectures about other exotic hadrons in which real gluons (as opposed to virtual ones found in ordinary hadrons) would be primary constituents. Beyond the normal phase of QCD (at extreme temperatures and pressures), quark–gluon plasma forms. In such a plasma there are no hadrons; quarks and gluons become free particles. Experimental observations Quarks and gluons (colored) manifest themselves by fragmenting into more quarks and gluons, which in turn hadronize into normal (colorless) particles, correlated in jets. As revealed in 1978 summer conferences, the PLUTO detector at the electron-positron collider DORIS (DESY) produced the first evidence that the hadronic decays of the very narrow resonance Υ(9.46) could be interpreted as three-jet event topologies produced by three gluons. Later, published analyses by the same experiment confirmed this interpretation and also the spin = 1 nature of the gluon (see also the recollection and PLUTO experiments). In summer 1979, at higher energies at the electron-positron collider PETRA (DESY), again three-jet topologies were observed, now clearly visible and interpreted as qq̄-gluon bremsstrahlung, by TASSO, MARK-J and PLUTO experiments (later in 1980 also by JADE). The spin = 1 property of the gluon was confirmed in 1980 by TASSO and PLUTO experiments (see also the review). 
In 1991 a subsequent experiment at the LEP storage ring at CERN again confirmed this result. The gluons play an important role in the elementary strong interactions between quarks and gluons, described by QCD and studied particularly at the electron-proton collider HERA at DESY. The number and momentum distribution of the gluons in the proton (gluon density) have been measured by two experiments, H1 and ZEUS, in the years 1996–2007. The gluon contribution to the proton spin has been studied by the HERMES experiment at HERA. The gluon density in the proton (when behaving hadronically) also has been measured. Color confinement is verified by the failure of free quark searches (searches of fractional charges). Quarks are normally produced in pairs (quark + antiquark) to compensate the quantum color and flavor numbers; however at Fermilab single production of top quarks has been shown. No glueball has been demonstrated. Deconfinement was claimed in 2000 at CERN SPS in heavy-ion collisions, and it implies a new state of matter: quark–gluon plasma, less interactive than in the nucleus, almost as in a liquid. It was found at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven in the years 2004–2010 by four contemporaneous experiments. A quark–gluon plasma state has been confirmed at the CERN Large Hadron Collider (LHC) by the three experiments ALICE, ATLAS and CMS in 2010. Jefferson Lab's Continuous Electron Beam Accelerator Facility, in Newport News, Virginia, is one of 10 Department of Energy facilities doing research on gluons. The Virginia lab was competing with another facility – Brookhaven National Laboratory, on Long Island, New York – for funds to build a new electron-ion collider. In December 2019, the US Department of Energy selected the Brookhaven National Laboratory to host the electron-ion collider. See also Quark Hadron Meson Gauge boson Quark model Quantum chromodynamics Quark–gluon plasma Color confinement Glueball Gluon field Gluon field strength tensor Exotic hadrons Standard Model Three-jet event Deep inelastic scattering Quantum chromodynamics binding energy Special unitary group Hadronization Color charge Coupling constant Footnotes References Further reading Cambridge Handout 8 : Quantum Chromodynamics – Particle Physics External resources Big Think website, clear explanation of the QCD Octet Why are there eight gluons and not nine? Bosons Elementary particles Gauge bosons Gluons Quantum chromodynamics Force carriers Subatomic particles with spin 1
Gluon
[ "Physics" ]
2,174
[ "Matter", "Elementary particles", "Physical phenomena", "Force carriers", "Bosons", "Fundamental interactions", "Subatomic particles" ]
12,718
https://en.wikipedia.org/wiki/Griffith%27s%20experiment
Griffith's experiment, performed by Frederick Griffith and reported in 1928, was the first experiment suggesting that bacteria are capable of transferring genetic information through a process known as transformation. Griffith's findings were followed by research in the late 1930s and early 1940s that isolated DNA as the material that communicated this genetic information. Pneumonia was a serious cause of death in the wake of the post-WWI Spanish influenza pandemic, and Griffith was studying the possibility of creating a vaccine. Griffith used two strains of pneumococcus (Diplococcus pneumoniae) bacteria which infect mice – a type III-S (smooth) which was virulent, and a type II-R (rough) strain which was nonvirulent. The III-S strain synthesized a polysaccharide capsule that protected itself from the host's immune system, resulting in the death of the host, while the II-R strain did not have that protective capsule and was defeated by the host's immune system. A German bacteriologist, Fred Neufeld, had discovered the three pneumococcal types (Types I, II, and III) and discovered the quellung reaction to identify them in vitro. Until Griffith's experiment, bacteriologists believed that the types were fixed and unchangeable, from one generation to another. In this experiment, bacteria from the III-S strain were killed by heat, and their remains were added to II-R strain bacteria. While neither alone harmed the mice, the combination was able to kill its host. Griffith was also able to isolate both live II-R and live III-S strains of pneumococcus from the blood of these dead mice. Griffith concluded that the type II-R had been "transformed" into the lethal III-S strain by a "transforming principle" that was somehow part of the dead III-S strain bacteria. Scientific advances since then have revealed that the "transforming principle" Griffith observed was the DNA of the III-S strain bacteria. While the bacteria had been killed, the DNA had survived the heating process and was taken up by the II-R strain bacteria. The III-S strain DNA contains the genes that form the smooth protective polysaccharide capsule. Equipped with this gene, the former II-R strain bacteria were now protected from the host's immune system and could kill the host. The exact nature of the transforming principle (DNA) was verified in the experiments done by Avery, MacLeod and McCarty and by Hershey and Chase. Notes References (References the original experiment by Griffith. Original article and 35th anniversary reprint available.) Further reading Genetics experiments Genetics in the United Kingdom History of genetics Microbiology 1928 in biology
Griffith's experiment
[ "Chemistry", "Biology" ]
566
[ "Microbiology", "Microscopy" ]
12,778
https://en.wikipedia.org/wiki/Group%20velocity
The group velocity of a wave is the velocity with which the overall envelope shape of the wave's amplitudes—known as the modulation or envelope of the wave—propagates through space. For example, if a stone is thrown into the middle of a very still pond, a circular pattern of waves with a quiescent center appears in the water, also known as a capillary wave. The expanding ring of waves is the wave group or wave packet, within which one can discern individual waves that travel faster than the group as a whole. The amplitudes of the individual waves grow as they emerge from the trailing edge of the group and diminish as they approach the leading edge of the group. History The idea of a group velocity distinct from a wave's phase velocity was first proposed by W.R. Hamilton in 1839, and the first full treatment was by Rayleigh in his "Theory of Sound" in 1877. Definition and interpretation The group velocity is defined by the equation: vg = ∂ω/∂k, where ω is the wave's angular frequency (usually expressed in radians per second), and k is the angular wavenumber (usually expressed in radians per meter). The phase velocity is: vp = ω/k. The function ω(k), which gives ω as a function of k, is known as the dispersion relation. If ω is directly proportional to k, then the group velocity is exactly equal to the phase velocity. A wave of any shape will travel undistorted at this velocity. If ω is a linear function of k, but not directly proportional (ω = ak + b), then the group velocity and phase velocity are different. The envelope of a wave packet will travel at the group velocity, while the individual peaks and troughs within the envelope will move at the phase velocity. If ω is not a linear function of k, the envelope of a wave packet will become distorted as it travels. Since a wave packet contains a range of different frequencies (and hence different values of k), the group velocity ∂ω/∂k will be different for different values of k. Therefore, the envelope does not move at a single velocity, but its wavenumber components (k) move at different velocities, distorting the envelope. If the wavepacket has a narrow range of frequencies, and ω(k) is approximately linear over that narrow range, the pulse distortion will be small, in relation to the small nonlinearity. See further discussion below. For example, for deep water gravity waves, ω = √(gk), and hence vg = vp/2. This underlies the Kelvin wake pattern for the bow wave of all ships and swimming objects. Regardless of how fast they are moving, as long as their velocity is constant, on each side the wake forms an angle of 19.47° = arcsin(1/3) with the line of travel. Derivation One derivation of the formula for group velocity is as follows. Consider a wave packet as a function of position x and time t: α(x,t). Let A(k) be its Fourier transform at time t = 0, α(x,0) = ∫ dk A(k) e^(ikx). By the superposition principle, the wavepacket at any time t is α(x,t) = ∫ dk A(k) e^(i(kx − ωt)), where ω is implicitly a function of k. Assume that the wave packet α is almost monochromatic, so that A(k) is sharply peaked around a central wavenumber k0. Then, linearization gives ω(k) ≈ ω0 + (k − k0)ω0′, where ω0 = ω(k0) and ω0′ = (∂ω/∂k) evaluated at k = k0 (see next section for discussion of this step). Then, after some algebra, α(x,t) = e^(i(k0x − ω0t)) ∫ dk A(k) e^(i(k − k0)(x − ω0′t)). There are two factors in this expression. The first factor, e^(i(k0x − ω0t)), describes a perfect monochromatic wave with wavevector k0, with peaks and troughs moving at the phase velocity ω0/k0 within the envelope of the wavepacket. The other factor, ∫ dk A(k) e^(i(k − k0)(x − ω0′t)), gives the envelope of the wavepacket. This envelope function depends on position and time only through the combination (x − ω0′t). Therefore, the envelope of the wavepacket travels at velocity ω0′ = (∂ω/∂k) at k = k0, which explains the group velocity formula. 
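The result of the derivation can be checked numerically. The short sketch below is an illustrative demonstration and not from the article: it superposes two deep-water gravity waves with nearby wavenumbers, for which ω = √(gk), and compares the speed of the beat envelope with the phase speed of the carrier; the envelope should move at roughly half the carrier speed, as stated above.

```python
import numpy as np

g = 9.81                      # gravitational acceleration, m/s^2
k1, k2 = 1.00, 1.05           # two nearby wavenumbers, rad/m
w = lambda k: np.sqrt(g * k)  # deep-water dispersion relation

k0, dk = (k1 + k2) / 2, (k2 - k1)
w0, dw = (w(k1) + w(k2)) / 2, (w(k2) - w(k1))

phase_velocity = w0 / k0      # speed of individual crests
group_velocity = dw / dk      # speed of the beat envelope (finite-difference dw/dk)

print(f"phase velocity ≈ {phase_velocity:.3f} m/s")
print(f"group velocity ≈ {group_velocity:.3f} m/s")
print(f"ratio          ≈ {group_velocity / phase_velocity:.3f}  (expected ~0.5 for deep water)")

# Direct check on the superposed field: the envelope of
#   cos(k1*x - w(k1)*t) + cos(k2*x - w(k2)*t)
# is 2*cos(0.5*(dk*x - dw*t)), whose nodes travel at dw/dk, i.e. the group velocity.
```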
Other expressions For light, the refractive index n, vacuum wavelength λ0, and wavelength in the medium λ, are related by λ = λ0/n and vp = c/n, with vp the phase velocity. The group velocity, therefore, can be calculated by any of the following formulas: vg = c/(n + ω ∂n/∂ω) = c/(n − λ0 ∂n/∂λ0) = vp(1 + (λ/n) ∂n/∂λ). Dispersion Part of the previous derivation is the Taylor series approximation that: ω(k) ≈ ω0 + (k − k0)(∂ω/∂k) evaluated at k0. If the wavepacket has a relatively large frequency spread, or if the dispersion has sharp variations (such as due to a resonance), or if the packet travels over very long distances, this assumption is not valid, and higher-order terms in the Taylor expansion become important. As a result, the envelope of the wave packet not only moves, but also distorts, in a manner that can be described by the material's group velocity dispersion. Loosely speaking, different frequency-components of the wavepacket travel at different speeds, with the faster components moving towards the front of the wavepacket and the slower moving towards the back. Eventually, the wave packet gets stretched out. This is an important effect in the propagation of signals through optical fibers and in the design of high-power, short-pulse lasers. Relation to phase velocity, refractive index and transmission speed In three dimensions For waves traveling through three dimensions, such as light waves, sound waves, and matter waves, the formulas for phase and group velocity are generalized in a straightforward way: One dimension: vp = ω/k, vg = ∂ω/∂k. Three dimensions: vp = (ω/|k|) k̂, vg = ∇k ω, where ∇k ω means the gradient of the angular frequency ω as a function of the wave vector k, and k̂ is the unit vector in direction k. If the waves are propagating through an anisotropic (i.e., not rotationally symmetric) medium, for example a crystal, then the phase velocity vector and group velocity vector may point in different directions. In lossy or gainful media The group velocity is often thought of as the velocity at which energy or information is conveyed along a wave. In most cases this is accurate, and the group velocity can be thought of as the signal velocity of the waveform. However, if the wave is travelling through an absorptive or gainful medium, this does not always hold. In these cases the group velocity may not be a well-defined quantity, or may not be a meaningful quantity. In his text "Wave Propagation in Periodic Structures", Brillouin argued that in a lossy medium the group velocity ceases to have a clear physical meaning. An example concerning the transmission of electromagnetic waves through an atomic gas is given by Loudon. Another example is mechanical waves in the solar photosphere: The waves are damped (by radiative heat flow from the peaks to the troughs), and related to that, the energy velocity is often substantially lower than the waves' group velocity. Despite this ambiguity, a common way to extend the concept of group velocity to complex media is to consider spatially damped plane wave solutions inside the medium, which are characterized by a complex-valued wavevector. Then, the imaginary part of the wavevector is arbitrarily discarded and the usual formula for group velocity is applied to the real part of the wavevector, i.e., vg = ∂ω/∂(Re k). Or, equivalently, in terms of the real part of the complex refractive index, n, one has c/vg = n + ω ∂n/∂ω. It can be shown that this generalization of group velocity continues to be related to the apparent speed of the peak of a wavepacket. The above definition is not universal, however: alternatively one may consider the time damping of standing waves (real k, complex ω), or, allow group velocity to be a complex-valued quantity. 
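The expressions above relating group velocity to the refractive index can be evaluated for a concrete dispersion model. The sketch below is illustrative only and not from the article: it uses a simple two-term Cauchy-type model n(λ) = A + B/λ² with made-up coefficients of roughly glass-like magnitude, and computes vg = c/(n − λ0 dn/dλ0) by numerical differentiation.

```python
import numpy as np

c = 2.998e8  # speed of light in vacuum, m/s

def n(lam_um):
    """Illustrative Cauchy-type refractive index; A and B are made-up, glass-like values."""
    A, B = 1.50, 0.005       # B in um^2
    return A + B / lam_um**2

def group_velocity(lam_um, dlam=1e-4):
    """v_g = c / (n - lambda0 * dn/dlambda0), with the derivative estimated by central differences."""
    dn_dlam = (n(lam_um + dlam) - n(lam_um - dlam)) / (2 * dlam)
    return c / (n(lam_um) - lam_um * dn_dlam)

for lam in (0.4, 0.6, 0.8):  # vacuum wavelength in micrometres
    vp = c / n(lam)
    vg = group_velocity(lam)
    print(f"lambda0 = {lam} um:  v_p ≈ {vp:.3e} m/s,  v_g ≈ {vg:.3e} m/s  (v_g < v_p, normal dispersion)")
```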
Different considerations yield distinct velocities, yet all definitions agree for the case of a lossless, gainless medium. The above generalization of group velocity for complex media can behave strangely, and the example of anomalous dispersion serves as a good illustration. At the edges of a region of anomalous dispersion, vg becomes infinite (surpassing even the speed of light in vacuum), and vg may easily become negative (its sign opposes Re k) inside the band of anomalous dispersion. Superluminal group velocities Since the 1980s, various experiments have verified that it is possible for the group velocity (as defined above) of laser light pulses sent through lossy materials, or gainful materials, to significantly exceed the speed of light in vacuum, c. The peaks of wavepackets were also seen to move faster than c. In all these cases, however, there is no possibility that signals could be carried faster than the speed of light in vacuum, since the high value of vg does not help to speed up the true motion of the sharp wavefront that would occur at the start of any real signal. Essentially the seemingly superluminal transmission is an artifact of the narrow band approximation used above to define group velocity and happens because of resonance phenomena in the intervening medium. In a wide band analysis it is seen that the apparently paradoxical speed of propagation of the signal envelope is actually the result of local interference of a wider band of frequencies over many cycles, all of which propagate perfectly causally and at phase velocity. The result is akin to the fact that shadows can travel faster than light, even if the light causing them always propagates at light speed; since the phenomenon being measured is only loosely connected with causality, it does not necessarily respect the rules of causal propagation, even if it under normal circumstances does so and leads to a common intuition. See also Wave propagation Dispersion (water waves) Dispersion (optics) Wave propagation speed Group delay Group velocity dispersion Group delay dispersion Phase delay Phase velocity Signal velocity Slow light Front velocity Matter wave#Group velocity Soliton References Notes Further reading Crawford jr., Frank S. (1968). Waves (Berkeley Physics Course, Vol. 3), McGraw-Hill, Free online version External links Greg Egan has an excellent Java applet on his web site that illustrates the apparent difference in group velocity from phase velocity. Maarten Ambaum has a webpage with movie demonstrating the importance of group velocity to downstream development of weather systems. Phase vs. Group Velocity – Various Phase- and Group-velocity relations (animation) Radio frequency propagation Optical quantities Wave mechanics Physical quantities Mathematical physics
Group velocity
[ "Physics", "Mathematics" ]
2,010
[ "Physical phenomena", "Spectrum (physical sciences)", "Physical quantities", "Radio frequency propagation", "Applied mathematics", "Quantity", "Theoretical physics", "Classical mechanics", "Electromagnetic spectrum", "Waves", "Wave mechanics", "Optical quantities", "Mathematical physics", ...
12,832
https://en.wikipedia.org/wiki/G%20protein-coupled%20receptor
G protein-coupled receptors (GPCRs), also known as seven-(pass)-transmembrane domain receptors, 7TM receptors, heptahelical receptors, serpentine receptors, and G protein-linked receptors (GPLR), form a large group of evolutionarily related proteins that are cell surface receptors that detect molecules outside the cell and activate cellular responses. They are coupled with G proteins. They pass through the cell membrane seven times in the form of six loops (three extracellular loops interacting with ligand molecules, three intracellular loops interacting with G proteins, an N-terminal extracellular region and a C-terminal intracellular region) of amino acid residues, which is why they are sometimes referred to as seven-transmembrane receptors. Ligands can bind either to the extracellular N-terminus and loops (e.g. glutamate receptors) or to the binding site within transmembrane helices (rhodopsin-like family). They are all activated by agonists, although a spontaneous auto-activation of an empty receptor has also been observed. G protein-coupled receptors are found only in eukaryotes, including yeast, and choanoflagellates. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases. There are two principal signal transduction pathways involving the G protein-coupled receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway. When a ligand binds to the GPCR it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G protein by exchanging the GDP bound to the G protein for a GTP. The G protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13). GPCRs are an important drug target and approximately 34% of all Food and Drug Administration (FDA) approved drugs target 108 members of this family. The global sales volume for these drugs is estimated to be 180 billion US dollars . It is estimated that GPCRs are targets for about 50% of drugs currently on the market, mainly due to their involvement in signaling pathways related to many diseases i.e. mental, metabolic including endocrinological disorders, immunological including viral infections, cardiovascular, inflammatory, senses disorders, and cancer. The long ago discovered association between GPCRs and many endogenous and exogenous substances, resulting in e.g. analgesia, is another dynamically developing field of the pharmaceutical research. History and significance With the determination of the first structure of the complex between a G-protein coupled receptor (GPCR) and a G-protein trimer (Gαβγ) in 2011 a new chapter of GPCR research was opened for structural investigations of global switches with more than one protein being investigated. The previous breakthroughs involved determination of the crystal structure of the first GPCR, rhodopsin, in 2000 and the crystal structure of the first GPCR with a diffusible ligand (β2AR) in 2007. 
The way in which the seven transmembrane helices of a GPCR are arranged into a bundle was suspected based on the low-resolution model of frog rhodopsin from cryogenic electron microscopy studies of the two-dimensional crystals. The crystal structure of rhodopsin, that came up three years later, was not a surprise apart from the presence of an additional cytoplasmic helix H8 and a precise location of a loop covering retinal binding site. However, it provided a scaffold which was hoped to be a universal template for homology modeling and drug design for other GPCRs – a notion that proved to be too optimistic. Results 7 years later were surprising because the crystallization of β2-adrenergic receptor (β2AR) with a diffusible ligand revealed quite a different shape of the receptor extracellular side than that of rhodopsin. This area is important because it is responsible for the ligand binding and is targeted by many drugs. Moreover, the ligand binding site was much more spacious than in the rhodopsin structure and was open to the exterior. In the other receptors crystallized shortly afterwards the binding side was even more easily accessible to the ligand. New structures complemented with biochemical investigations uncovered mechanisms of action of molecular switches which modulate the structure of the receptor leading to activation states for agonists or to complete or partial inactivation states for inverse agonists. The 2012 Nobel Prize in Chemistry was awarded to Brian Kobilka and Robert Lefkowitz for their work that was "crucial for understanding how G protein-coupled receptors function". There have been at least seven other Nobel Prizes awarded for some aspect of G protein–mediated signaling. As of 2012, two of the top ten global best-selling drugs (Advair Diskus and Abilify) act by targeting G protein-coupled receptors. Classification The exact size of the GPCR superfamily is unknown, but at least 831 different human genes (or about 4% of the entire protein-coding genome) have been predicted to code for them from genome sequence analysis. Although numerous classification schemes have been proposed, the superfamily was classically divided into three main classes (A, B, and C) with no detectable shared sequence homology between classes. The largest class by far is class A, which accounts for nearly 85% of the GPCR genes. Of class A GPCRs, over half of these are predicted to encode olfactory receptors, while the remaining receptors are liganded by known endogenous compounds or are classified as orphan receptors. Despite the lack of sequence homology between classes, all GPCRs have a common structure and mechanism of signal transduction. The very large rhodopsin A group has been further subdivided into 19 subgroups (A1-A19). According to the classical A-F system, GPCRs can be grouped into six classes based on sequence homology and functional similarity: Class A (or 1) (Rhodopsin-like) Class B (or 2) (Secretin receptor family) Class C (or 3) (Metabotropic glutamate/pheromone) Class D (or 4) (Fungal mating pheromone receptors) Class E (or 5) (Cyclic AMP receptors) Class F (or 6) (Frizzled/Smoothened) More recently, an alternative classification system called GRAFS (Glutamate, Rhodopsin, Adhesion, Frizzled/Taste2, Secretin) has been proposed for vertebrate GPCRs. They correspond to classical classes C, A, B2, F, and B. 
An early study based on available DNA sequence suggested that the human genome encodes roughly 750 G protein-coupled receptors, about 350 of which detect hormones, growth factors, and other endogenous ligands. Approximately 150 of the GPCRs found in the human genome have unknown functions. Some web servers and bioinformatics prediction methods have been used for predicting the classification of GPCRs according to their amino acid sequence alone, by means of the pseudo amino acid composition approach. Physiological roles GPCRs are involved in a wide variety of physiological processes. Some examples of their physiological roles include: The visual sense: The opsins use a photoisomerization reaction to translate electromagnetic radiation into cellular signals. Rhodopsin, for example, uses the conversion of 11-cis-retinal to all-trans-retinal for this purpose. The gustatory sense (taste): GPCRs in taste cells mediate release of gustducin in response to bitter-, umami- and sweet-tasting substances. The sense of smell: Receptors of the olfactory epithelium bind odorants (olfactory receptors) and pheromones (vomeronasal receptors). Behavioral and mood regulation: Receptors in the mammalian brain bind several different neurotransmitters, including serotonin, dopamine, histamine, GABA, and glutamate. Regulation of immune system activity and inflammation: Chemokine receptors bind ligands that mediate intercellular communication between cells of the immune system; receptors such as histamine receptors bind inflammatory mediators and engage target cell types in the inflammatory response. GPCRs are also involved in immune modulation, e.g. regulating interleukin induction or suppressing TLR-induced immune responses from T cells. Autonomic nervous system transmission: Both the sympathetic and parasympathetic nervous systems are regulated by GPCR pathways, responsible for control of many automatic functions of the body such as blood pressure, heart rate, and digestive processes. Cell density sensing: A novel GPCR role in regulating cell density sensing. Homeostasis modulation (e.g., water balance). Involved in growth and metastasis of some types of tumors. Used in the endocrine system for peptide and amino-acid-derivative hormones that bind to GPCRs on the cell membrane of a target cell. This activates cAMP, which in turn activates several kinases, allowing for a cellular response such as transcription. Receptor structure GPCRs are integral membrane proteins that possess seven membrane-spanning domains or transmembrane helices. The extracellular parts of the receptor can be glycosylated. These extracellular loops also contain two highly conserved cysteine residues that form disulfide bonds to stabilize the receptor structure. Some seven-transmembrane helix proteins that resemble GPCRs (such as channelrhodopsin) may contain ion channels within their protein. In 2000, the first crystal structure of a mammalian GPCR, that of bovine rhodopsin, was solved. In 2007, the first structure of a human GPCR was solved. This human β2-adrenergic receptor structure proved highly similar to that of bovine rhodopsin. The structures of activated or agonist-bound GPCRs have also been determined. These structures indicate how ligand binding at the extracellular side of a receptor leads to conformational changes in the cytoplasmic side of the receptor. The biggest change is an outward movement of the cytoplasmic part of the 5th and 6th transmembrane helices (TM5 and TM6).
The structure of activated beta-2 adrenergic receptor in complex with Gs confirmed that the Gα binds to a cavity created by this movement. GPCRs exhibit a similar structure to some other proteins with seven transmembrane domains, such as microbial rhodopsins and adiponectin receptors 1 and 2 (ADIPOR1 and ADIPOR2). However, these 7TMH (7-transmembrane helices) receptors and channels do not associate with G proteins. In addition, ADIPOR1 and ADIPOR2 are oriented oppositely to GPCRs in the membrane (i.e. GPCRs usually have an extracellular N-terminus and a cytoplasmic C-terminus, whereas ADIPORs are inverted). Structure–function relationships In terms of structure, GPCRs are characterized by an extracellular N-terminus, followed by seven transmembrane (7-TM) α-helices (TM-1 to TM-7) connected by three intracellular (IL-1 to IL-3) and three extracellular loops (EL-1 to EL-3), and finally an intracellular C-terminus. The GPCR arranges itself into a tertiary structure resembling a barrel, with the seven transmembrane helices forming a cavity within the plasma membrane that serves as a ligand-binding domain that is often covered by EL-2. Ligands may also bind elsewhere, however, as is the case for bulkier ligands (e.g., proteins or large peptides), which instead interact with the extracellular loops, or, as illustrated by the class C metabotropic glutamate receptors (mGluRs), the N-terminal tail. The class C GPCRs are distinguished by their large N-terminal tail, which also contains a ligand-binding domain. Upon glutamate binding to an mGluR, the N-terminal tail undergoes a conformational change that leads to its interaction with the residues of the extracellular loops and TM domains. The eventual effect of all three types of agonist-induced activation is a change in the relative orientations of the TM helices (likened to a twisting motion) leading to a wider intracellular surface and "revelation" of residues of the intracellular helices and TM domains crucial to signal transduction function (i.e., G-protein coupling). Inverse agonists and antagonists may also bind to a number of different sites, but the eventual effect must be prevention of this TM helix reorientation. The structure of the N- and C-terminal tails of GPCRs may also serve important functions beyond ligand binding. For example, the C-terminus of M3 muscarinic receptors is sufficient, and the six-amino-acid polybasic (KKKRRK) domain in the C-terminus is necessary, for its preassembly with Gq proteins. In particular, the C-terminus often contains serine (Ser) or threonine (Thr) residues that, when phosphorylated, increase the affinity of the intracellular surface for the binding of scaffolding proteins called β-arrestins (β-arr). Once bound, β-arrestins both sterically prevent G-protein coupling and may recruit other proteins, leading to the creation of signaling complexes involved in extracellular-signal-regulated kinase (ERK) pathway activation or receptor endocytosis (internalization). As the phosphorylation of these Ser and Thr residues often occurs as a result of GPCR activation, the β-arr-mediated G-protein decoupling and internalization of GPCRs are important mechanisms of desensitization. In addition, internalized "mega-complexes" consisting of a single GPCR, β-arr (in the tail conformation), and heterotrimeric G protein exist and may account for G-protein signaling from endosomes. A final common structural theme among GPCRs is palmitoylation of one or more sites of the C-terminal tail or the intracellular loops.
Palmitoylation is the covalent modification of cysteine (Cys) residues via addition of hydrophobic acyl groups, and has the effect of targeting the receptor to cholesterol- and sphingolipid-rich microdomains of the plasma membrane called lipid rafts. As many of the downstream transducer and effector molecules of GPCRs (including those involved in negative feedback pathways) are also targeted to lipid rafts, this has the effect of facilitating rapid receptor signaling. GPCRs respond to extracellular signals mediated by a huge diversity of agonists, ranging from proteins to biogenic amines to protons, but all transduce this signal via a mechanism of G-protein coupling. This is made possible by a guanine-nucleotide exchange factor (GEF) domain primarily formed by a combination of IL-2 and IL-3 along with adjacent residues of the associated TM helices. Mechanism The G protein-coupled receptor is activated by an external signal in the form of a ligand or other signal mediator. This creates a conformational change in the receptor, causing activation of a G protein. Further effect depends on the type of G protein. G proteins are subsequently inactivated by GTPase activating proteins, known as RGS proteins. Ligand binding GPCRs include one or more receptors for the following ligands: sensory signal mediators (e.g., light and olfactory stimulatory molecules); adenosine, bombesin, bradykinin, endothelin, γ-aminobutyric acid (GABA), hepatocyte growth factor (HGF), melanocortins, neuropeptide Y, opioid peptides, opsins, somatostatin, GH, tachykinins, members of the vasoactive intestinal peptide family, and vasopressin; biogenic amines (e.g., dopamine, epinephrine, norepinephrine, histamine, serotonin, and melatonin); glutamate (metabotropic effect); glucagon; acetylcholine (muscarinic effect); chemokines; lipid mediators of inflammation (e.g., prostaglandins, prostanoids, platelet-activating factor, and leukotrienes); peptide hormones (e.g., calcitonin, C5a anaphylatoxin, follicle-stimulating hormone [FSH], gonadotropin-releasing hormone [GnRH], neurokinin, thyrotropin-releasing hormone [TRH], and oxytocin); and endocannabinoids. GPCRs that act as receptors for stimuli that have not yet been identified are known as orphan receptors. However, in contrast to other types of receptors that have been studied, wherein ligands bind externally to the membrane, the ligands of GPCRs typically bind within the transmembrane domain. However, protease-activated receptors are activated by cleavage of part of their extracellular domain. Conformational change The transduction of the signal through the membrane by the receptor is not completely understood. It is known that in the inactive state, the GPCR is bound to a heterotrimeric G protein complex. Binding of an agonist to the GPCR results in a conformational change in the receptor that is transmitted to the bound Gα subunit of the heterotrimeric G protein via protein domain dynamics. The activated Gα subunit exchanges GTP in place of GDP which in turn triggers the dissociation of Gα subunit from the Gβγ dimer and from the receptor. The dissociated Gα and Gβγ subunits interact with other intracellular proteins to continue the signal transduction cascade while the freed GPCR is able to rebind to another heterotrimeric G protein to form a new complex that is ready to initiate another round of signal transduction. It is believed that a receptor molecule exists in a conformational equilibrium between active and inactive biophysical states. 
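A minimal numerical sketch of such a two-state equilibrium, under assumed (not measured) parameters, is given below; it also anticipates how a bound ligand can shift the equilibrium, as elaborated in the following paragraph. The model treats the receptor as switching between an inactive state R and an active state R*, with a ligand that may bind either state with different affinity; all constants are illustrative assumptions.

```python
# Two-state receptor model: the receptor interconverts between an inactive state R
# and an active state R*, and a ligand A can bind either state with different affinity.
# All constants below are illustrative assumptions, not measured parameters.

def fraction_active(conc_ligand, L=0.01, kd_inactive=1e-6, kd_active=1e-8):
    """
    Fraction of receptors in the active state.
    L           -- intrinsic equilibrium constant [R*]/[R] with no ligand bound
    kd_inactive -- dissociation constant of the ligand for the inactive state (M)
    kd_active   -- dissociation constant of the ligand for the active state (M)
    A ligand with kd_active < kd_inactive stabilises R* (agonist-like);
    the reverse preference depletes R* (inverse-agonist-like);
    equal affinities leave the equilibrium untouched (neutral-antagonist-like).
    """
    inactive = 1 + conc_ligand / kd_inactive     # R + RA, relative to free R
    active = L * (1 + conc_ligand / kd_active)   # R* + R*A, relative to free R
    return active / (active + inactive)

for conc in (0.0, 1e-9, 1e-7, 1e-5):
    agonist = fraction_active(conc, kd_active=1e-8, kd_inactive=1e-6)
    inverse = fraction_active(conc, kd_active=1e-6, kd_inactive=1e-8)
    print(f"[ligand]={conc:8.1e} M  agonist-like: {agonist:.3f}  inverse-agonist-like: {inverse:.3f}")
```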
The binding of ligands to the receptor may shift the equilibrium toward the active receptor states. Three types of ligands exist: Agonists are ligands that shift the equilibrium in favour of active states; inverse agonists are ligands that shift the equilibrium in favour of inactive states; and neutral antagonists are ligands that do not affect the equilibrium. It is not yet known how exactly the active and inactive states differ from each other. G-protein activation/deactivation cycle When the receptor is inactive, the GEF domain may be bound to an also inactive α-subunit of a heterotrimeric G-protein. These "G-proteins" are a trimer of α, β, and γ subunits (known as Gα, Gβ, and Gγ, respectively) that is rendered inactive when reversibly bound to Guanosine diphosphate (GDP) (or, alternatively, no guanine nucleotide) but active when bound to guanosine triphosphate (GTP). Upon receptor activation, the GEF domain, in turn, allosterically activates the G-protein by facilitating the exchange of a molecule of GDP for GTP at the G-protein's α-subunit. The cell maintains a 10:1 ratio of cytosolic GTP:GDP so exchange for GTP is ensured. At this point, the subunits of the G-protein dissociate from the receptor, as well as each other, to yield a Gα-GTP monomer and a tightly interacting Gβγ dimer, which are now free to modulate the activity of other intracellular proteins. The extent to which they may diffuse, however, is limited due to the palmitoylation of Gα and the presence of an isoprenoid moiety that has been covalently added to the C-termini of Gγ. Because Gα also has slow GTP→GDP hydrolysis capability, the inactive form of the α-subunit (Gα-GDP) is eventually regenerated, thus allowing reassociation with a Gβγ dimer to form the "resting" G-protein, which can again bind to a GPCR and await activation. The rate of GTP hydrolysis is often accelerated due to the actions of another family of allosteric modulating proteins called regulators of G-protein signaling, or RGS proteins, which are a type of GTPase-activating protein, or GAP. In fact, many of the primary effector proteins (e.g., adenylate cyclases) that become activated/inactivated upon interaction with Gα-GTP also have GAP activity. Thus, even at this early stage in the process, GPCR-initiated signaling has the capacity for self-termination. Crosstalk GPCRs downstream signals have been shown to possibly interact with integrin signals, such as FAK. Integrin signaling will phosphorylate FAK, which can then decrease GPCR Gαs activity. Signaling If a receptor in an active state encounters a G protein, it may activate it. Some evidence suggests that receptors and G proteins are actually pre-coupled. For example, binding of G proteins to receptors affects the receptor's affinity for ligands. Activated G proteins are bound to GTP. Further signal transduction depends on the type of G protein. The enzyme adenylate cyclase is an example of a cellular protein that can be regulated by a G protein, in this case the G protein Gs. Adenylate cyclase activity is activated when it binds to a subunit of the activated G protein. Activation of adenylate cyclase ends when the G protein returns to the GDP-bound state. Adenylate cyclases (of which 9 membrane-bound and one cytosolic forms are known in humans) may also be activated or inhibited in other ways (e.g., Ca2+/calmodulin binding), which can modify the activity of these enzymes in an additive or synergistic fashion along with the G proteins. 
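The activation/deactivation cycle described above can be caricatured as a small kinetic simulation. The sketch below is hedged: the rate constants are round numbers chosen only to show the qualitative behaviour (receptor-driven GDP→GTP exchange switching Gα on, hydrolysis switching it off, and GAP/RGS-like acceleration sharply shortening the lifetime of the active state); they are not measured parameters of any particular G protein.

```python
# Toy kinetic sketch of the heterotrimeric G-protein cycle.
# Rate constants are illustrative round numbers, not measurements.

def simulate(k_exchange, k_hydrolysis, receptor_active=1.0,
             total_g=1.0, dt=0.01, t_end=60.0):
    """Euler integration of the fraction of Gα in the GTP-bound (active) form."""
    g_gtp = 0.0
    trace = []
    for i in range(int(t_end / dt)):
        g_gdp = total_g - g_gtp
        # receptor acts as a GEF: converts Gα-GDP to Gα-GTP
        activation = k_exchange * receptor_active * g_gdp
        # intrinsic (or GAP-accelerated) GTPase activity converts it back
        deactivation = k_hydrolysis * g_gtp
        g_gtp += (activation - deactivation) * dt
        trace.append((i * dt, g_gtp))
    return trace

slow = simulate(k_exchange=0.5, k_hydrolysis=0.02)   # intrinsic hydrolysis only
fast = simulate(k_exchange=0.5, k_hydrolysis=30.0)   # with assumed RGS/GAP acceleration

for t in (1.0, 10.0, 50.0):
    idx = int(t / 0.01) - 1
    print(f"t={t:5.1f}s  active Gα (no RGS): {slow[idx][1]:.3f}   (with RGS): {fast[idx][1]:.3f}")
```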
The signaling pathways activated through a GPCR are limited by the primary sequence and tertiary structure of the GPCR itself but ultimately determined by the particular conformation stabilized by a particular ligand, as well as the availability of transducer molecules. Currently, GPCRs are considered to utilize two primary types of transducers: G-proteins and β-arrestins. Because β-arr's have high affinity only to the phosphorylated form of most GPCRs (see above or below), the majority of signaling is ultimately dependent upon G-protein activation. However, the possibility for interaction does allow for G-protein-independent signaling to occur. G-protein-dependent signaling There are three main G-protein-mediated signaling pathways, mediated by four sub-classes of G-proteins distinguished from each other by sequence homology (Gαs, Gαi/o, Gαq/11, and Gα12/13). Each sub-class of G-protein consists of multiple proteins, each the product of multiple genes or splice variations that may imbue them with differences ranging from subtle to distinct with regard to signaling properties, but in general they appear reasonably grouped into four classes. Because the signal transducing properties of the various possible βγ combinations do not appear to radically differ from one another, these classes are defined according to the isoform of their α-subunit. While most GPCRs are capable of activating more than one Gα-subtype, they also show a preference for one subtype over another. When the subtype activated depends on the ligand that is bound to the GPCR, this is called functional selectivity (also known as agonist-directed trafficking, or conformation-specific agonism). However, the binding of any single particular agonist may also initiate activation of multiple different G-proteins, as it may be capable of stabilizing more than one conformation of the GPCR's GEF domain, even over the course of a single interaction. In addition, a conformation that preferably activates one isoform of Gα may activate another if the preferred is less available. Furthermore, feedback pathways may result in receptor modifications (e.g., phosphorylation) that alter the G-protein preference. Regardless of these various nuances, the GPCR's preferred coupling partner is usually defined according to the G-protein most obviously activated by the endogenous ligand under most physiological or experimental conditions. Gα signaling The effector of both the Gαs and Gαi/o pathways is the cyclic-adenosine monophosphate (cAMP)-generating enzyme adenylate cyclase, or AC. While there are ten different AC gene products in mammals, each with subtle differences in tissue distribution or function, all catalyze the conversion of cytosolic adenosine triphosphate (ATP) to cAMP, and all are directly stimulated by G-proteins of the Gαs class. In contrast, however, interaction with Gα subunits of the Gαi/o type inhibits AC from generating cAMP. Thus, a GPCR coupled to Gαs counteracts the actions of a GPCR coupled to Gαi/o, and vice versa. The level of cytosolic cAMP may then determine the activity of various ion channels as well as members of the ser/thr-specific protein kinase A (PKA) family. Thus cAMP is considered a second messenger and PKA a secondary effector. The effector of the Gαq/11 pathway is phospholipase C-β (PLCβ), which catalyzes the cleavage of membrane-bound phosphatidylinositol 4,5-bisphosphate (PIP2) into the second messengers inositol (1,4,5) trisphosphate (IP3) and diacylglycerol (DAG). 
IP3 acts on IP3 receptors found in the membrane of the endoplasmic reticulum (ER) to elicit Ca2+ release from the ER, while DAG diffuses along the plasma membrane where it may activate any membrane-localized forms of a second ser/thr kinase called protein kinase C (PKC). Since many isoforms of PKC are also activated by increases in intracellular Ca2+, both these pathways can also converge on each other to signal through the same secondary effector. Elevated intracellular Ca2+ also binds and allosterically activates proteins called calmodulins, which in turn bind and activate further enzymes such as Ca2+/calmodulin-dependent kinases. Gα12/13 signaling The effectors of the Gα12/13 pathway are Rho guanine-nucleotide exchange factors (RhoGEFs), which, when bound to activated Gα12/13, switch on the cytosolic small GTPase, Rho. Once bound to GTP, Rho can then go on to activate various proteins responsible for cytoskeleton regulation such as Rho-kinase (ROCK). Most GPCRs that couple to Gα12/13 also couple to other sub-classes, often Gαq/11. Gβγ signaling The above descriptions ignore the effects of Gβγ signalling, which can also be important, in particular in the case of activated Gαi/o-coupled GPCRs. The primary effectors of Gβγ are various ion channels, such as G-protein-regulated inwardly rectifying K+ channels (GIRKs), P/Q- and N-type voltage-gated Ca2+ channels, as well as some isoforms of AC and PLC, along with some phosphoinositide-3-kinase (PI3K) isoforms. G-protein-independent signaling Although they are classically thought of as working only together, GPCRs may signal through G-protein-independent mechanisms, and heterotrimeric G-proteins may play functional roles independent of GPCRs. GPCRs may signal independently through many proteins already mentioned for their roles in G-protein-dependent signaling such as β-arrs, GRKs, and Srcs. Such signaling has been shown to be physiologically relevant; for example, β-arrestin signaling mediated by the chemokine receptor CXCR3 was necessary for full-efficacy chemotaxis of activated T cells. In addition, further scaffolding proteins involved in subcellular localization of GPCRs (e.g., PDZ-domain-containing proteins) may also act as signal transducers. Most often the effector is a member of the MAPK family. Examples In the late 1990s, evidence began accumulating to suggest that some GPCRs are able to signal without G proteins. The ERK2 mitogen-activated protein kinase, a key signal transduction mediator downstream of receptor activation in many pathways, has been shown to be activated in response to cAMP-mediated receptor activation in the slime mold D. discoideum despite the absence of the associated G protein α- and β-subunits. In mammalian cells, the much-studied β2-adrenoceptor has been demonstrated to activate the ERK2 pathway after arrestin-mediated uncoupling of G-protein-mediated signaling. Therefore, it seems likely that some mechanisms previously believed to relate purely to receptor desensitisation are actually examples of receptors switching their signaling pathway, rather than simply being switched off. In kidney cells, the bradykinin receptor B2 has been shown to interact directly with a protein tyrosine phosphatase. The presence of a tyrosine-phosphorylated ITIM (immunoreceptor tyrosine-based inhibitory motif) sequence in the B2 receptor is necessary to mediate this interaction and subsequently the antiproliferative effect of bradykinin. GPCR-independent signaling by heterotrimeric G-proteins Although it is a relatively immature area of research, it appears that heterotrimeric G-proteins may also take part in non-GPCR signaling.
There is evidence for roles as signal transducers in nearly all other types of receptor-mediated signaling, including integrins, receptor tyrosine kinases (RTKs), cytokine receptors (JAK/STATs), as well as modulation of various other "accessory" proteins such as GEFs, guanine-nucleotide dissociation inhibitors (GDIs) and protein phosphatases. There may even be specific proteins of these classes whose primary function is as part of GPCR-independent pathways, termed activators of G-protein signalling (AGS). Both the ubiquity of these interactions and the importance of Gα vs. Gβγ subunits to these processes are still unclear. Details of cAMP and PIP2 pathways There are two principal signal transduction pathways involving the G protein-linked receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway. cAMP signal pathway The cAMP signal transduction contains five main characters: stimulative hormone receptor (Rs) or inhibitory hormone receptor (Ri); stimulative regulative G-protein (Gs) or inhibitory regulative G-protein (Gi); adenylyl cyclase; protein kinase A (PKA); and cAMP phosphodiesterase. Stimulative hormone receptor (Rs) is a receptor that can bind with stimulative signal molecules, while inhibitory hormone receptor (Ri) is a receptor that can bind with inhibitory signal molecules. Stimulative regulative G-protein is a G-protein linked to stimulative hormone receptor (Rs), and its α subunit upon activation could stimulate the activity of an enzyme or other intracellular metabolism. On the contrary, inhibitory regulative G-protein is linked to an inhibitory hormone receptor, and its α subunit upon activation could inhibit the activity of an enzyme or other intracellular metabolism. Adenylyl cyclase is a 12-transmembrane glycoprotein that catalyzes the conversion of ATP to cAMP with the help of cofactor Mg2+ or Mn2+. The cAMP produced is a second messenger in cellular metabolism and is an allosteric activator of protein kinase A. Protein kinase A is an important enzyme in cell metabolism due to its ability to regulate cell metabolism by phosphorylating specific committed enzymes in the metabolic pathway. It can also regulate specific gene expression, cellular secretion, and membrane permeability. The protein enzyme contains two catalytic subunits and two regulatory subunits. When there is no cAMP,the complex is inactive. When cAMP binds to the regulatory subunits, their conformation is altered, causing the dissociation of the regulatory subunits, which activates protein kinase A and allows further biological effects. These signals then can be terminated by cAMP phosphodiesterase, which is an enzyme that degrades cAMP to 5'-AMP and inactivates protein kinase A. Phosphatidylinositol signal pathway In the phosphatidylinositol signal pathway, the extracellular signal molecule binds with the G-protein receptor (Gq) on the cell surface and activates phospholipase C, which is located on the plasma membrane. The lipase hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP2) into two second messengers: inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 binds with the IP3 receptor in the membrane of the smooth endoplasmic reticulum and mitochondria to open Ca2+ channels. DAG helps activate protein kinase C (PKC), which phosphorylates many other proteins, changing their catalytic activities, leading to cellular responses. 
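The phosphatidylinositol branch just described can likewise be sketched as a toy simulation. Everything below is an illustrative assumption (arbitrary units, invented rate constants, and a deliberately simple "DAG and Ca2+ both required" rule for PKC, echoing the convergence discussed in the next sentence); it is not a quantitative model of the pathway.

```python
# Toy model of the phosphatidylinositol signal pathway (arbitrary units).
# Rate constants and the simple "DAG AND Ca2+" rule for PKC are illustrative assumptions.

def simulate_pi_pathway(plc_activity=1.0, dt=0.01, t_end=20.0):
    pip2, ip3, dag, ca, pkc = 1.0, 0.0, 0.0, 0.0, 0.0
    history = []
    for step in range(int(t_end / dt)):
        cleavage = 0.5 * plc_activity * pip2           # PLC hydrolyses PIP2 into IP3 + DAG
        ca_release = 2.0 * ip3 * (1.0 - ca)            # IP3-gated Ca2+ release from the ER
        pip2 += (-cleavage + 0.05 * (1.0 - pip2)) * dt  # slow PIP2 resynthesis
        ip3 += (cleavage - 0.3 * ip3) * dt              # IP3 is degraded over time
        dag += (cleavage - 0.3 * dag) * dt              # DAG is also turned over
        ca += (ca_release - 1.0 * ca) * dt              # pumps clear cytosolic Ca2+
        pkc = dag * ca                                   # PKC activity needs both inputs
        history.append((step * dt, ip3, dag, ca, pkc))
    return history

for t, ip3, dag, ca, pkc in simulate_pi_pathway()[::500]:
    print(f"t={t:5.1f}  IP3={ip3:.2f}  DAG={dag:.2f}  Ca2+={ca:.2f}  PKC activity={pkc:.2f}")
```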
The effects of Ca2+ are also remarkable: it cooperates with DAG in activating PKC and can activate the CaM kinase pathway, in which calcium-modulated protein calmodulin (CaM) binds Ca2+, undergoes a change in conformation, and activates CaM kinase II, which has unique ability to increase its binding affinity to CaM by autophosphorylation, making CaM unavailable for the activation of other enzymes. The kinase then phosphorylates target enzymes, regulating their activities. The two signal pathways are connected together by Ca2+-CaM, which is also a regulatory subunit of adenylyl cyclase and phosphodiesterase in the cAMP signal pathway. Receptor regulation GPCRs become desensitized when exposed to their ligand for a long period of time. There are two recognized forms of desensitization: 1) homologous desensitization, in which the activated GPCR is downregulated; and 2) heterologous desensitization, wherein the activated GPCR causes downregulation of a different GPCR. The key reaction of this downregulation is the phosphorylation of the intracellular (or cytoplasmic) receptor domain by protein kinases. Phosphorylation by cAMP-dependent protein kinases Cyclic AMP-dependent protein kinases (protein kinase A) are activated by the signal chain coming from the G protein (that was activated by the receptor) via adenylate cyclase and cyclic AMP (cAMP). In a feedback mechanism, these activated kinases phosphorylate the receptor. The longer the receptor remains active the more kinases are activated and the more receptors are phosphorylated. In β2-adrenoceptors, this phosphorylation results in the switching of the coupling from the Gs class of G-protein to the Gi class. cAMP-dependent PKA mediated phosphorylation can cause heterologous desensitisation in receptors other than those activated. Phosphorylation by GRKs The G protein-coupled receptor kinases (GRKs) are protein kinases that phosphorylate only active GPCRs. G-protein-coupled receptor kinases (GRKs) are key modulators of G-protein-coupled receptor (GPCR) signaling. They constitute a family of seven mammalian serine-threonine protein kinases that phosphorylate agonist-bound receptor. GRKs-mediated receptor phosphorylation rapidly initiates profound impairment of receptor signaling and desensitization. Activity of GRKs and subcellular targeting is tightly regulated by interaction with receptor domains, G protein subunits, lipids, anchoring proteins and calcium-sensitive proteins. Phosphorylation of the receptor can have two consequences: Translocation: The receptor is, along with the part of the membrane it is embedded in, brought to the inside of the cell, where it is dephosphorylated within the acidic vesicular environment and then brought back. This mechanism is used to regulate long-term exposure, for example, to a hormone, by allowing resensitisation to follow desensitisation. Alternatively, the receptor may undergo lysozomal degradation, or remain internalised, where it is thought to participate in the initiation of signalling events, the nature of which depending on the internalised vesicle's subcellular localisation. Arrestin linking: The phosphorylated receptor can be linked to arrestin molecules that prevent it from binding (and activating) G proteins, in effect switching it off for a short period of time. This mechanism is used, for example, with rhodopsin in retina cells to compensate for exposure to bright light. In many cases, arrestin's binding to the receptor is a prerequisite for translocation. 
For example, beta-arrestin bound to β2-adrenoreceptors acts as an adaptor for binding with clathrin, and with the beta-subunit of AP2 (clathrin adaptor molecules); thus, the arrestin here acts as a scaffold assembling the components needed for clathrin-mediated endocytosis of β2-adrenoreceptors. Mechanisms of GPCR signal termination As mentioned above, G-proteins may terminate their own activation due to their intrinsic GTP→GDP hydrolysis capability. However, this reaction proceeds at a slow rate (≈0.02 times/sec) and, thus, it would take around 50 seconds for any single G-protein to deactivate if other factors did not come into play. Indeed, there are around 30 isoforms of RGS proteins that, when bound to Gα through their GAP domain, accelerate the hydrolysis rate to ≈30 times/sec. This 1500-fold increase in rate allows the cell to respond to external signals with high speed, as well as spatial resolution due to the limited amount of second messenger that can be generated and the limited distance a G-protein can diffuse in 0.03 seconds. For the most part, the RGS proteins are promiscuous in their ability to deactivate G-proteins, and which RGS is involved in a given signaling pathway seems to be determined more by the tissue and GPCR involved than anything else. In addition, RGS proteins have the additional function of increasing the rate of GTP-GDP exchange at GPCRs (i.e., as a sort of co-GEF), further contributing to the time resolution of GPCR signaling. In addition, the GPCR may be desensitized itself. This can occur as a direct result of ligand occupation, wherein the change in conformation allows recruitment of G-protein-coupled receptor kinases (GRKs), which go on to phosphorylate various serine/threonine residues of IL-3 and the C-terminal tail. Upon GRK phosphorylation, the GPCR's affinity for β-arrestin (β-arrestin-1/2 in most tissues) is increased, at which point β-arrestin may bind and act both to sterically hinder G-protein coupling and to initiate the process of receptor internalization through clathrin-mediated endocytosis. Because only the liganded receptor is desensitized by this mechanism, it is called homologous desensitization. Alternatively, the affinity for β-arrestin may be increased in a ligand-occupation- and GRK-independent manner through phosphorylation of different ser/thr sites (but also of IL-3 and the C-terminal tail) by PKC and PKA. These phosphorylations are often sufficient to impair G-protein coupling on their own as well. PKC/PKA may, instead, phosphorylate GRKs, which can also lead to GPCR phosphorylation and β-arrestin binding in an occupation-independent manner. These latter two mechanisms allow for desensitization of one GPCR due to the activities of others, or heterologous desensitization. GRKs may also have GAP domains and so may contribute to inactivation through non-kinase mechanisms as well. A combination of these mechanisms may also occur. Once β-arrestin is bound to a GPCR, it undergoes a conformational change allowing it to serve as a scaffolding protein for an adaptor complex termed AP-2, which in turn recruits another protein called clathrin. If enough receptors in the local area recruit clathrin in this manner, they aggregate and the membrane buds inwardly as a result of interactions between the molecules of clathrin, forming a clathrin-coated pit. Once the pit has been pinched off the plasma membrane due to the actions of two other proteins called amphiphysin and dynamin, it is now an endocytic vesicle.
At this point, the adapter molecules and clathrin have dissociated, and the receptor is either trafficked back to the plasma membrane or targeted to lysosomes for degradation. At any point in this process, the β-arrestins may also recruit other proteins—such as the non-receptor tyrosine kinase (nRTK), c-SRC—which may activate ERK1/2, or other mitogen-activated protein kinase (MAPK) signaling through, for example, phosphorylation of the small GTPase, Ras, or recruit the proteins of the ERK cascade directly (i.e., Raf-1, MEK, ERK-1/2) at which point signaling is initiated due to their close proximity to one another. Another target of c-SRC are the dynamin molecules involved in endocytosis. Dynamins polymerize around the neck of an incoming vesicle, and their phosphorylation by c-SRC provides the energy necessary for the conformational change allowing the final "pinching off" from the membrane. GPCR cellular regulation Receptor desensitization is mediated through a combination phosphorylation, β-arr binding, and endocytosis as described above. Downregulation occurs when endocytosed receptor is embedded in an endosome that is trafficked to merge with an organelle called a lysosome. Because lysosomal membranes are rich in proton pumps, their interiors have low pH (≈4.8 vs. the pH≈7.2 cytosol), which acts to denature the GPCRs. In addition, lysosomes contain many degradative enzymes, including proteases, which can function only at such low pH, and so the peptide bonds joining the residues of the GPCR together may be cleaved. Whether or not a given receptor is trafficked to a lysosome, detained in endosomes, or trafficked back to the plasma membrane depends on a variety of factors, including receptor type and magnitude of the signal. GPCR regulation is additionally mediated by gene transcription factors. These factors can increase or decrease gene transcription and thus increase or decrease the generation of new receptors (up- or down-regulation) that travel to the cell membrane. Receptor oligomerization G-protein-coupled receptor oligomerisation is a widespread phenomenon. One of the best-studied examples is the metabotropic GABAB receptor. This so-called constitutive receptor is formed by heterodimerization of GABABR1 and GABABR2 subunits. Expression of the GABABR1 without the GABABR2 in heterologous systems leads to retention of the subunit in the endoplasmic reticulum. Expression of the GABABR2 subunit alone, meanwhile, leads to surface expression of the subunit, although with no functional activity (i.e., the receptor does not bind agonist and cannot initiate a response following exposure to agonist). Expression of the two subunits together leads to plasma membrane expression of functional receptor. It has been shown that GABABR2 binding to GABABR1 causes masking of a retention signal of functional receptors. Origin and diversification of the superfamily Signal transduction mediated by the superfamily of GPCRs dates back to the origin of multicellularity. Mammalian-like GPCRs are found in fungi, and have been classified according to the GRAFS classification system based on GPCR fingerprints. Identification of the superfamily members across the eukaryotic domain, and comparison of the family-specific motifs, have shown that the superfamily of GPCRs have a common origin. Characteristic motifs indicate that three of the five GRAFS families, Rhodopsin, Adhesion, and Frizzled, evolved from the Dictyostelium discoideum cAMP receptors before the split of opisthokonts. 
Later, the Secretin family evolved from the Adhesion GPCR family before the split of nematodes. Insect GPCRs appear to be in their own group, and Taste2 is identified as descending from Rhodopsin. Note that the Secretin/Adhesion split is based on presumed function rather than signature, as the classical Class B (7tm_2) is used to identify both in the studies. See also G protein-coupled receptors database List of MeSH codes (D12.776) Metabotropic receptor Orphan receptor Pepducins, a class of drug candidates targeted at GPCRs Receptor activated solely by a synthetic ligand, a technique for control of cell signaling through synthetic GPCRs TOG superfamily References Further reading External links GPCR Cell Line; GPCR-HGmod, a database of 3D structural models of all human G-protein coupled receptors, built by the GPCR-I-TASSER pipeline Biochemistry Integral membrane proteins Molecular biology Protein families Signal transduction Protein superfamilies
G protein-coupled receptor
[ "Chemistry", "Biology" ]
9,970
[ "Protein classification", "Signal transduction", "G protein-coupled receptors", "nan", "Molecular biology", "Biochemistry", "Protein families", "Neurochemistry", "Protein superfamilies" ]
12,841
https://en.wikipedia.org/wiki/G%20protein
G proteins, also known as guanine nucleotide-binding proteins, are a family of proteins that act as molecular switches inside cells, and are involved in transmitting signals from a variety of stimuli outside a cell to its interior. Their activity is regulated by factors that control their ability to bind to and hydrolyze guanosine triphosphate (GTP) to guanosine diphosphate (GDP). When they are bound to GTP, they are 'on', and, when they are bound to GDP, they are 'off'. G proteins belong to the larger group of enzymes called GTPases. There are two classes of G proteins. The first function as monomeric small GTPases (small G-proteins), while the second function as heterotrimeric G protein complexes. The latter class of complexes is made up of alpha (Gα), beta (Gβ) and gamma (Gγ) subunits. In addition, the beta and gamma subunits can form a stable dimeric complex referred to as the beta-gamma complex . Heterotrimeric G proteins located within the cell are activated by G protein-coupled receptors (GPCRs) that span the cell membrane. Signaling molecules bind to a domain of the GPCR located outside the cell, and an intracellular GPCR domain then in turn activates a particular G protein. Some active-state GPCRs have also been shown to be "pre-coupled" with G proteins, whereas in other cases a collision coupling mechanism is thought to occur. The G protein triggers a cascade of further signaling events that finally results in a change in cell function. G protein-coupled receptors and G proteins working together transmit signals from many hormones, neurotransmitters, and other signaling factors. G proteins regulate metabolic enzymes, ion channels, transporter proteins, and other parts of the cell machinery, controlling transcription, motility, contractility, and secretion, which in turn regulate diverse systemic functions such as embryonic development, learning and memory, and homeostasis. History G proteins were discovered in 1980 when Alfred G. Gilman and Martin Rodbell investigated stimulation of cells by adrenaline. They found that when adrenaline binds to a receptor, the receptor does not stimulate enzymes (inside the cell) directly. Instead, the receptor stimulates a G protein, which then stimulates an enzyme. An example is adenylate cyclase, which produces the second messenger cyclic AMP. For this discovery, they won the 1994 Nobel Prize in Physiology or Medicine. Nobel prizes have been awarded for many aspects of signaling by G proteins and GPCRs. These include receptor antagonists, neurotransmitters, neurotransmitter reuptake, G protein-coupled receptors, G proteins, second messengers, the enzymes that trigger protein phosphorylation in response to cAMP, and consequent metabolic processes such as glycogenolysis. Prominent examples include (in chronological order of awarding): The 1947 Nobel Prize in Physiology or Medicine to Carl Cori, Gerty Cori and Bernardo Houssay, for their discovery of how glycogen is broken down to glucose and resynthesized in the body, for use as a store and source of energy. Glycogenolysis is stimulated by numerous hormones and neurotransmitters including adrenaline. The 1970 Nobel Prize in Physiology or Medicine to Julius Axelrod, Bernard Katz and Ulf von Euler for their work on the release and reuptake of neurotransmitters. The 1971 Nobel Prize in Physiology or Medicine to Earl Sutherland for discovering the key role of adenylate cyclase, which produces the second messenger cyclic AMP. The 1988 Nobel Prize in Physiology or Medicine to George H. 
Hitchings, Sir James Black and Gertrude Elion "for their discoveries of important principles for drug treatment" targeting GPCRs. The 1992 Nobel Prize in Physiology or Medicine to Edwin G. Krebs and Edmond H. Fischer for describing how reversible phosphorylation works as a switch to activate proteins, and to regulate various cellular processes including glycogenolysis. The 1994 Nobel Prize in Physiology or Medicine to Alfred G. Gilman and Martin Rodbell for their discovery of "G-proteins and the role of these proteins in signal transduction in cells". The 2000 Nobel Prize in Physiology or Medicine to Eric Kandel, Arvid Carlsson and Paul Greengard, for research on neurotransmitters such as dopamine, which act via GPCRs. The 2004 Nobel Prize in Physiology or Medicine to Richard Axel and Linda B. Buck for their work on G protein-coupled olfactory receptors. The 2012 Nobel Prize in Chemistry to Brian Kobilka and Robert Lefkowitz for their work on GPCR function. Function G proteins are important signal transducing molecules in cells. "Malfunction of GPCR [G Protein-Coupled Receptor] signaling pathways are involved in many diseases, such as diabetes, blindness, allergies, depression, cardiovascular defects, and certain forms of cancer. It is estimated that about 30% of the modern drugs' cellular targets are GPCRs." The human genome encodes roughly 800 G protein-coupled receptors, which detect photons of light, hormones, growth factors, drugs, and other endogenous ligands. Approximately 150 of the GPCRs found in the human genome still have unknown functions. Whereas G proteins are activated by G protein-coupled receptors, they are inactivated by RGS proteins (for "Regulator of G protein signalling"). Receptors stimulate GTP binding (turning the G protein on). RGS proteins stimulate GTP hydrolysis (creating GDP, thus turning the G protein off). Diversity All eukaryotes use G proteins for signaling and have evolved a large diversity of G proteins. For instance, humans encode 18 different Gα proteins, 5 Gβ proteins, and 12 Gγ proteins. Signaling G protein can refer to two distinct families of proteins. Heterotrimeric G proteins, sometimes referred to as the "large" G proteins, are activated by G protein-coupled receptors and are made up of alpha (α), beta (β), and gamma (γ) subunits. "Small" G proteins (20-25kDa) belong to the Ras superfamily of small GTPases. These proteins are homologous to the alpha (α) subunit found in heterotrimers, but are in fact monomeric, consisting of only a single unit. However, like their larger relatives, they also bind GTP and GDP and are involved in signal transduction. Heterotrimeric Different types of heterotrimeric G proteins share a common mechanism. They are activated in response to a conformational change in the GPCR, exchanging GDP for GTP, and dissociating in order to activate other proteins in a particular signal transduction pathway. The specific mechanisms, however, differ between protein types. Mechanism Receptor-activated G proteins are bound to the inner surface of the cell membrane. They consist of the Gα and the tightly associated Gβγ subunits. There are four main families of Gα subunits: Gαs (G stimulatory), Gαi (G inhibitory), Gαq/11, and Gα12/13. They behave differently in the recognition of the effector molecule, but share a similar mechanism of activation. 
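Because the four Gα families are distinguished mainly by which effector they engage, their dispatch logic can be summarized as a simple lookup table. The sketch below only encodes couplings described elsewhere in this text (Gαs and Gαi acting on adenylate cyclase, Gαq/11 on phospholipase C-beta, Gα12/13 on Rho signaling); the table and function names are hypothetical, not any real library's API.

```python
# Hypothetical lookup table of canonical Gα-family couplings, as summarized in the text.
GA_FAMILY_EFFECTS = {
    "Gas":     {"effector": "adenylate cyclase", "action": "stimulates", "messenger": "cAMP up"},
    "Gai":     {"effector": "adenylate cyclase", "action": "inhibits", "messenger": "cAMP down"},
    "Gaq/11":  {"effector": "phospholipase C-beta", "action": "stimulates", "messenger": "IP3 + DAG"},
    "Ga12/13": {"effector": "Rho guanine-nucleotide exchange factors", "action": "stimulates",
                "messenger": "Rho-GTP (cytoskeletal signaling)"},
}

def describe_coupling(family):
    """Return a one-line summary of the canonical effect of a Gα family."""
    info = GA_FAMILY_EFFECTS[family]
    return f"{family} {info['action']} {info['effector']} -> {info['messenger']}"

for fam in GA_FAMILY_EFFECTS:
    print(describe_coupling(fam))
```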
Activation When a ligand activates the G protein-coupled receptor, it induces a conformational change in the receptor that allows the receptor to function as a guanine nucleotide exchange factor (GEF) that exchanges GDP for GTP. In the traditional view of heterotrimeric GPCR activation, the GTP (or GDP) is bound to the Gα subunit. This exchange triggers the dissociation of the Gα subunit (which is bound to GTP) from the Gβγ dimer and the receptor as a whole. However, models which suggest molecular rearrangement, reorganization, and pre-complexing of effector molecules are beginning to be accepted. Both Gα-GTP and Gβγ can then activate different signaling cascades (or second messenger pathways) and effector proteins, while the receptor is able to activate the next G protein. Termination The Gα subunit will eventually hydrolyze the attached GTP to GDP by its inherent enzymatic activity, allowing it to re-associate with Gβγ and start a new cycle. A group of proteins called regulators of G protein signalling (RGSs), which act as GTPase-activating proteins (GAPs), are specific for Gα subunits. These proteins accelerate the hydrolysis of GTP to GDP, thus terminating the transduced signal. In some cases, the effector itself may possess intrinsic GAP activity, which can then help deactivate the pathway. This is true in the case of phospholipase C-beta, which possesses GAP activity within its C-terminal region. This is an alternate form of regulation for the Gα subunit. Such Gα GAPs do not have catalytic residues (specific amino acid sequences) to activate the Gα protein. They work instead by lowering the required activation energy for the reaction to take place. Specific mechanisms Gαs Gαs activates the cAMP-dependent pathway by stimulating the production of cyclic AMP (cAMP) from ATP. This is accomplished by direct stimulation of the membrane-associated enzyme adenylate cyclase. cAMP can then act as a second messenger that goes on to interact with and activate protein kinase A (PKA). PKA can phosphorylate a myriad of downstream targets. The cAMP-dependent pathway is used as a signal transduction pathway for many hormones including: ADH – Promotes water retention by the kidneys (created by the magnocellular neurosecretory cells of the posterior pituitary) GHRH – Stimulates the synthesis and release of GH (somatotropic cells of the anterior pituitary) GHIH – Inhibits the synthesis and release of GH (somatotropic cells of the anterior pituitary) CRH – Stimulates the synthesis and release of ACTH (anterior pituitary) ACTH – Stimulates the synthesis and release of cortisol (zona fasciculata of the adrenal cortex in the adrenal glands) TSH – Stimulates the synthesis and release of a majority of T4 (thyroid gland) LH – Stimulates follicular maturation and ovulation in women; or testosterone production and spermatogenesis in men FSH – Stimulates follicular development in women; or spermatogenesis in men PTH – Increases blood calcium levels. This is accomplished via the parathyroid hormone 1 receptor (PTH1) in the kidneys and bones, or via the parathyroid hormone 2 receptor (PTH2) in the central nervous system and brain, as well as the bones and kidneys. Calcitonin – Decreases blood calcium levels (via the calcitonin receptor in the intestines, bones, kidneys, and brain) Glucagon – Stimulates glycogen breakdown in the liver hCG – Promotes cellular differentiation, and is potentially involved in apoptosis. Epinephrine – Released by the adrenal medulla during the fasting state, when the body is under metabolic duress.
It stimulates glycogenolysis, in addition to the actions of glucagon. Gαi Gαi inhibits the production of cAMP from ATP. e.g. somatostatin, prostaglandins Gαq/11 Gαq/11 stimulates the membrane-bound phospholipase C beta, which then cleaves phosphatidylinositol 4,5-bisphosphate (PIP2) into two second messengers, inositol trisphosphate (IP3) and diacylglycerol (DAG). IP3 induces calcium release from the endoplasmic reticulum. DAG activates protein kinase C. The Inositol Phospholipid Dependent Pathway is used as a signal transduction pathway for many hormones including: Epinephrine ADH (Vasopressin/AVP) – Induces the synthesis and release of glucocorticoids (Zona fasciculata of adrenal cortex); Induces vasoconstriction (V1 Cells of Posterior pituitary) TRH – Induces the synthesis and release of TSH (Anterior pituitary gland) TSH – Induces the synthesis and release of a small amount of T4 (Thyroid Gland) Angiotensin II – Induces Aldosterone synthesis and release (zona glomerulosa of adrenal cortex in kidney) GnRH – Induces the synthesis and release of FSH and LH (Anterior Pituitary) Gα12/13 Gα12/13 are involved in Rho family GTPase signaling (see Rho family of GTPases). This is through the RhoGEF superfamily involving the RhoGEF domain of the proteins' structures). These are involved in control of cell cytoskeleton remodeling, and thus in regulating cell migration. Gβ, Gγ The Gβγ complexes sometimes also have active functions. Examples include coupling to and activating G protein-coupled inwardly-rectifying potassium channels. Small GTPases Small GTPases, also known as small G-proteins, bind GTP and GDP likewise, and are involved in signal transduction. These proteins are homologous to the alpha (α) subunit found in heterotrimers, but exist as monomers. They are small (20-kDa to 25-kDa) proteins that bind to guanosine triphosphate (GTP). This family of proteins is homologous to the Ras GTPases and is also called the Ras superfamily GTPases. Lipidation In order to associate with the inner leaflet of the plasma membrane, many G proteins and small GTPases are lipidated, that is, covalently modified with lipid extensions. They may be myristoylated, palmitoylated or prenylated. References External links Peripheral membrane proteins Cell signaling Signal transduction EC 3.6
G protein
[ "Chemistry", "Biology" ]
2,990
[ "Biochemistry", "Neurochemistry", "G proteins", "Signal transduction" ]
12,858
https://en.wikipedia.org/wiki/Galvanization
Galvanization (also spelled galvanisation) is the process of applying a protective zinc coating to steel or iron, to prevent rusting. The most common method is hot-dip galvanizing, in which the parts are coated by submerging them in a bath of hot, molten zinc. Protective action The zinc coating, when intact, prevents corrosive substances from reaching the underlying iron. Additional electroplating such as a chromate conversion coating may be applied to provide further surface passivation to the substrate material. History and etymology The process is named after the Italian physician, physicist, biologist and philosopher Luigi Galvani (9 September 1737 – 4 December 1798). The earliest known example of galvanized iron was discovered on 17th-century Indian armour in the Royal Armouries Museum collection in the United Kingdom. The term "galvanized" can also be used metaphorically of any stimulus which results in activity by a person or group of people. In modern usage, the term "galvanizing" has largely come to be associated with zinc coatings, to the exclusion of other metals. Galvanic paint, a precursor to hot-dip galvanizing, was patented by Stanislas Sorel, of Paris, on June 10, 1837, as an adoption of a term from a highly fashionable field of contemporary science, despite having no evident relation to it. Methods Hot-dip galvanizing deposits a thick, robust layer of zinc iron alloys on the surface of a steel item. In the case of automobile bodies, where additional decorative coatings of paint will be applied, a thinner form of galvanizing is applied by electrogalvanizing. The hot-dip process generally does not reduce strength to a measurable degree, with the exception of high-strength steels where hydrogen embrittlement can become a problem. Thermal diffusion galvanizing, or Sherardizing, provides a zinc diffusion coating on iron- or copper-based materials. Eventual corrosion Galvanized steel can last for many decades if other supplementary measures are maintained, such as paint coatings and additional sacrificial anodes. Corrosion in non-salty environments is caused mainly by levels of sulfur dioxide in the air. Galvanized construction steel This is the most common use for galvanized metal; hundreds of thousands of tons of steel products are galvanized annually worldwide. In developed countries, most larger cities have several galvanizing factories, and many items of steel manufacture are galvanized for protection. Typically these include street furniture, building frameworks, balconies, verandahs, staircases, ladders, walkways, and more. Hot dip galvanized steel is also used for making steel frames as a basic construction material for steel frame buildings. Galvanized piping In the early 20th century, galvanized piping swiftly took the place of previously used cast iron and lead in cold-water plumbing. Practically, galvanized piping rusts from the inside out, building up layers of plaque on the inside of the piping, causing both water pressure problems and eventual pipe failure. These plaques can flake off, leading to visible impurities in water and a slight metallic taste. The life expectancy of galvanized piping is about 40–50 years, but it may vary on how well the pipes were built and installed. Pipe longevity also depends on the thickness of zinc in the original galvanizing, which ranges on a scale from G01 to G360. 
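As a rough illustration of why zinc thickness matters, the time before a galvanized coating needs first maintenance is often approximated as the coating thickness divided by the local zinc corrosion rate. The sketch below applies that simple proportionality with invented example numbers; real service-life estimates depend on the environment (for example, airborne sulfur dioxide or salinity) and on published corrosion-rate data.

```python
# Rough service-life estimate for a galvanized coating (illustrative numbers only).

def estimated_life_years(coating_thickness_um, corrosion_rate_um_per_year):
    """Time for the zinc layer to be consumed, assuming a constant corrosion rate."""
    return coating_thickness_um / corrosion_rate_um_per_year

# Hypothetical example: an 85 micrometre hot-dip coating in two assumed environments.
for environment, rate in [("rural (assumed 1 um/year)", 1.0),
                          ("industrial (assumed 4 um/year)", 4.0)]:
    print(f"{environment}: ~{estimated_life_years(85, rate):.0f} years")
```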
See also Electroplating Aluminized steel Cathodic protection Corrugated galvanized iron Galvanic corrosion Galvannealed – galvanization and annealing Prepainted metal Rust Rustproofing Sendzimir process Sherardizing Corrosion Sacrificial metal Corrosion engineering References External links Chemical processes Corrosion prevention Metal plating Zinc Bimetal
Galvanization
[ "Chemistry", "Materials_science" ]
782
[ "Corrosion prevention", "Metallurgical processes", "Metallurgy", "Coatings", "Corrosion", "Bimetal", "Chemical processes", "nan", "Chemical process engineering", "Metal plating" ]
12,891
https://en.wikipedia.org/wiki/Gene%20therapy
Gene therapy is a medical technology that aims to produce a therapeutic effect through the manipulation of gene expression or through altering the biological properties of living cells. The first attempt at modifying human DNA was performed in 1980, by Martin Cline, but the first successful nuclear gene transfer in humans, approved by the National Institutes of Health, was performed in May 1989. The first therapeutic use of gene transfer as well as the first direct insertion of human DNA into the nuclear genome was performed by French Anderson in a trial starting in September 1990. Between 1989 and December 2018, over 2,900 clinical trials were conducted, with more than half of them in phase I. In 2003, Gendicine became the first gene therapy to receive regulatory approval. Since that time, further gene therapy drugs were approved, such as alipogene tiparvovec (2012), Strimvelis (2016), tisagenlecleucel (2017), voretigene neparvovec (2017), patisiran (2018), onasemnogene abeparvovec (2019), idecabtagene vicleucel (2021), nadofaragene firadenovec, valoctocogene roxaparvovec and etranacogene dezaparvovec (all 2022). Most of these approaches utilize adeno-associated viruses (AAVs) and lentiviruses for performing gene insertions, in vivo and ex vivo, respectively. AAVs are characterized by stabilizing the viral capsid, lower immunogenicity, ability to transduce both dividing and nondividing cells, the potential to integrate site specifically and to achieve long-term expression in the in-vivo treatment. ASO / siRNA approaches such as those conducted by Alnylam and Ionis Pharmaceuticals require non-viral delivery systems, and utilize alternative mechanisms for trafficking to liver cells by way of GalNAc transporters. Not all medical procedures that introduce alterations to a patient's genetic makeup can be considered gene therapy. Bone marrow transplantation and organ transplants in general have been found to introduce foreign DNA into patients. Background Gene therapy was first conceptualized in the 1960s, when the feasibility of adding new genetic functions to mammalian cells began to be researched. Several methods to do so were tested, including injecting genes with a micropipette directly into a living mammalian cell, and exposing cells to a precipitate of DNA that contained the desired genes. Scientists theorized that a virus could also be used as a vehicle, or vector, to deliver new genes into cells. One of the first scientists to report the successful direct incorporation of functional DNA into a mammalian cell was biochemist Dr. Lorraine Marquardt Kraus (6 September 1922 – 1 July 2016) at the University of Tennessee Health Science Center in Memphis, Tennessee. In 1961, she managed to genetically alter the hemoglobin of cells from bone marrow taken from a patient with sickle cell anaemia. She did this by incubating the patient's cells in tissue culture with DNA extracted from a donor with normal hemoglobin. In 1968, researchers Theodore Friedmann, Jay Seegmiller, and John Subak-Sharpe at the National Institutes of Health (NIH), Bethesda, in the United States successfully corrected genetic defects associated with Lesch-Nyhan syndrome, a debilitating neurological disease, by adding foreign DNA to cultured cells collected from patients suffering from the disease. 
The first attempt, an unsuccessful one, at gene therapy (as well as the first case of medical transfer of foreign genes into humans not counting organ transplantation) was performed by geneticist Martin Cline of the University of California, Los Angeles in California, United States on 10 July 1980. Cline claimed that one of the genes in his patients was active six months later, though he never published this data or had it verified. After extensive research on animals throughout the 1980s and a 1989 bacterial gene tagging trial on humans, the first gene therapy widely accepted as a success was demonstrated in a trial that started on 14 September 1990, when Ashanthi DeSilva was treated for ADA-SCID. The first somatic treatment that produced a permanent genetic change was initiated in 1993. The goal was to cure malignant brain tumors by using recombinant DNA to transfer a gene making the tumor cells sensitive to a drug that in turn would cause the tumor cells to die. The nucleic acid polymers introduced are either translated into proteins, interfere with target gene expression, or possibly correct genetic mutations. The most common form uses DNA that encodes a functional, therapeutic gene to replace a mutated gene. The polymer molecule is packaged within a "vector", which carries the molecule inside cells. Early clinical failures led to dismissals of gene therapy. Clinical successes since 2006 regained researchers' attention, although it was still largely an experimental technique. These include treatment of the retinal diseases Leber's congenital amaurosis and choroideremia, X-linked SCID, ADA-SCID, adrenoleukodystrophy, chronic lymphocytic leukemia (CLL), acute lymphocytic leukemia (ALL), multiple myeloma, haemophilia, and Parkinson's disease. Between 2013 and April 2014, US companies invested over $600 million in the field. The first commercial gene therapy, Gendicine, was approved in China in 2003 for the treatment of certain cancers. In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia. In 2012, alipogene tiparvovec, a treatment for a rare inherited disorder, lipoprotein lipase deficiency, became the first treatment to be approved for clinical use in either the European Union or the United States after its endorsement by the European Commission. Following early advances in genetic engineering of bacteria, cells, and small animals, scientists started considering how to apply it to medicine. Two main approaches were considered – replacing or disrupting defective genes. Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis, haemophilia, muscular dystrophy, thalassemia, and sickle cell anemia. Alipogene tiparvovec treats one such disease, caused by a defect in lipoprotein lipase. DNA must be administered, reach the damaged cells, enter the cell, and either express or disrupt a protein. Multiple delivery techniques have been explored. The initial approach incorporated DNA into an engineered virus to deliver the DNA into a chromosome. Naked DNA approaches have also been explored, especially in the context of vaccine development. Generally, efforts focused on administering a gene that causes a needed protein to be expressed. More recently, increased understanding of nuclease function has led to more direct DNA editing, using techniques such as zinc finger nucleases and CRISPR. The vector incorporates genes into chromosomes.
The expressed nucleases then knock out and replace genes in the chromosome. These approaches involve removing cells from patients, editing a chromosome and returning the transformed cells to patients. Gene editing is a potential approach to alter the human genome to treat genetic diseases, viral diseases, and cancer. These approaches are being studied in clinical trials. Classification Breadth of definition In 1986, a meeting at the Institute of Medicine defined gene therapy as the addition or replacement of a gene in a targeted cell type. In the same year, the FDA announced that it had jurisdiction over approving "gene therapy" without defining the term. The FDA added a very broad definition in 1993 of any treatment that would 'modify or manipulate the expression of genetic material or to alter the biological properties of living cells'. In 2018 this was narrowed to 'products that mediate their effects by transcription or translation of transferred genetic material or by specifically altering host (human) genetic sequences'. Writing in 2018, in the Journal of Law and the Biosciences, Sherkow et al. argued for a narrower definition of gene therapy than the FDA's in light of new technology, one that would consist of any treatment that intentionally and permanently modified a cell's genome, with the definition of genome including episomes outside the nucleus but excluding changes due to episomes that are lost over time. This definition would also exclude introducing cells that did not derive from the patients themselves, but include ex vivo approaches, and would not depend on the vector used. During the COVID-19 pandemic, some academics insisted that the mRNA vaccines for COVID were not gene therapy in order to prevent the spread of incorrect information that the vaccine could alter DNA, while other academics maintained that the vaccines were a gene therapy because they introduced genetic material into a cell. Fact-checkers, such as Full Fact, Reuters, PolitiFact, and FactCheck.org, said that calling the vaccines a gene therapy was incorrect. Podcast host Joe Rogan was criticized for calling mRNA vaccines gene therapy, as was British politician Andrew Bridgen, with fact-checker Full Fact calling for Bridgen to be removed from the Conservative Party for this and other statements. Genes present or added Gene therapy encapsulates many forms of adding different nucleic acids to a cell. Gene augmentation adds a new protein-coding gene to a cell. One form of gene augmentation is gene replacement therapy, a treatment for monogenic recessive disorders in which a single gene is not functional and an additional functional copy is added. For diseases caused by multiple genes or a dominant gene, gene silencing or gene editing approaches are more appropriate, but gene addition, a form of gene augmentation in which a new gene is added, may improve a cell's function without modifying the genes that cause a disorder. Cell types Gene therapy may be classified into two types by the type of cell it affects: somatic cell and germline gene therapy. In somatic cell gene therapy (SCGT), the therapeutic genes are transferred into any cell other than a gamete, germ cell, gametocyte, or undifferentiated stem cell. Any such modifications affect the individual patient only, and are not inherited by offspring. Somatic gene therapy represents mainstream basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid) is used to treat disease. 
Over 600 clinical trials utilizing SCGT are underway in the US. Most focus on severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia, and cystic fibrosis. Such single gene disorders are good candidates for somatic cell therapy. The complete correction of a genetic disorder or the replacement of multiple genes is not yet possible. Only a few of the trials are in the advanced stages. In germline gene therapy (GGT), germ cells (sperm or egg cells) are modified by the introduction of functional genes into their genomes. Modifying a germ cell causes all the organism's cells to contain the modified gene. The change is therefore heritable and passed on to later generations. Australia, Canada, Germany, Israel, Switzerland, and the Netherlands prohibit GGT for application in human beings, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations and higher risks versus SCGT. The US has no federal controls specifically addressing human genetic modification (beyond FDA regulations for therapies in general). In vivo versus ex vivo therapies In in vivo gene therapy, a vector (typically, a virus) is introduced to the patient, which then achieves the desired biological effect by passing the genetic material (e.g. for a missing protein) into the patient's cells. In ex vivo gene therapies, such as CAR-T therapeutics, the patient's own cells (autologous) or healthy donor cells (allogeneic) are modified outside the body (hence, ex vivo) using a vector to express a particular protein, such as a chimeric antigen receptor. In vivo gene therapy is seen as simpler, since it does not require the harvesting of mitotic cells. However, ex vivo gene therapies are better tolerated and less associated with severe immune responses. The death of Jesse Gelsinger in a trial of an adenovirus-vectored treatment for ornithine transcarbamylase deficiency due to a systemic inflammatory reaction led to a temporary halt on gene therapy trials across the United States. , in vivo and ex vivo therapeutics are both seen as safe. Gene editing The concept of gene therapy is to fix a genetic problem at its source. If, for instance, a mutation in a certain gene causes the production of a dysfunctional protein resulting (usually recessively) in an inherited disease, gene therapy could be used to deliver a copy of this gene that does not contain the deleterious mutation and thereby produces a functional protein. This strategy is referred to as gene replacement therapy and could be employed to treat inherited retinal diseases. While the concept of gene replacement therapy is mostly suitable for recessive diseases, novel strategies have been suggested that are capable of also treating conditions with a dominant pattern of inheritance. The introduction of CRISPR gene editing has opened new doors for its application and utilization in gene therapy, as instead of pure replacement of a gene, it enables correction of the particular genetic defect. Solutions to medical hurdles, such as the eradication of latent human immunodeficiency virus (HIV) reservoirs and correction of the mutation that causes sickle cell disease, may be available as a therapeutic option in the future. Prosthetic gene therapy aims to enable cells of the body to take over functions they physiologically do not carry out. One example is the so-called vision restoration gene therapy, that aims to restore vision in patients with end-stage retinal diseases. 
In end-stage retinal diseases, the photoreceptors, the primary light-sensitive cells of the retina, are irreversibly lost. By means of prosthetic gene therapy, light-sensitive proteins are delivered into the remaining cells of the retina to render them light sensitive and thereby enable them to signal visual information towards the brain. In vivo, gene editing systems using CRISPR have been used in studies with mice to treat cancer and have been effective at reducing tumors. In vitro, the CRISPR system has been used to treat tumors caused by HPV. Adeno-associated virus and lentivirus-based vectors have been used to introduce the genetic material for the CRISPR system. Vectors The delivery of DNA into cells can be accomplished by multiple methods. The two major classes are recombinant viruses (sometimes called biological nanoparticles or viral vectors) and naked DNA or DNA complexes (non-viral methods). Viruses In order to replicate, viruses introduce their genetic material into the host cell, tricking the host's cellular machinery into using it as blueprints for viral proteins. Retroviruses go a stage further by having their genetic material copied into the nuclear genome of the host cell. Scientists exploit this by substituting part of a virus's genetic material with therapeutic DNA or RNA. Like the genetic material (DNA or RNA) in viruses, therapeutic genetic material can be designed to simply serve as a temporary blueprint that degrades naturally, as in non-integrative vectors, or to enter the host's nucleus and become a permanent part of the host's nuclear DNA in infected cells. A number of viruses have been used for human gene therapy, including lentivirus, adenoviruses, herpes simplex, vaccinia, and adeno-associated virus. Adenovirus viral vectors (Ad) temporarily modify a cell's genetic expression with genetic material that is not integrated into the host cell's DNA. As of 2017, such vectors were used in 20% of trials for gene therapy. Adenovirus vectors are mostly used in cancer treatments and novel genetic vaccines such as the Ebola vaccine, vaccines used in clinical trials for HIV and SARS-CoV-2, or cancer vaccines. Lentiviral vectors based on lentivirus, a retrovirus, can modify a cell's nuclear genome to permanently express a gene, although the vectors can be modified to prevent integration. Retroviruses were used in 18% of trials before 2018. Libmeldy is an ex vivo stem cell treatment for metachromatic leukodystrophy which uses a lentiviral vector and was approved by the European Medicines Agency in 2020. Adeno-associated virus (AAV) is a virus that is incapable of transmission between cells unless the cell is infected by another virus, a helper virus. Adenovirus and the herpes viruses act as helper viruses for AAV. AAV persists within the cell outside of the cell's nuclear genome for an extended period of time through the formation of concatemers, mostly organized as episomes. Genetic material from AAV vectors is integrated into the host cell's nuclear genome at a low frequency, likely mediated by the DNA-modifying enzymes of the host cell. Animal models suggest that integration of AAV genetic material into the host cell's nuclear genome may cause hepatocellular carcinoma, a form of liver cancer. Several AAV investigational agents have been explored in the treatment of wet age-related macular degeneration by both intravitreal and subretinal approaches as a potential application of AAV gene therapy for human disease. 
Non-viral Non-viral vectors for gene therapy present certain advantages over viral methods, such as large scale production and low host immunogenicity. However, non-viral methods initially produced lower levels of transfection and gene expression, and thus lower therapeutic efficacy. Newer technologies offer promise of solving these problems, with the advent of increased cell-specific targeting and subcellular trafficking control. Methods for non-viral gene therapy include the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles. These therapeutics can be administered directly or through scaffold enrichment. More recent approaches, such as those performed by companies such as Ligandal, offer the possibility of creating cell-specific targeting technologies for a variety of gene therapy modalities, including RNA, DNA and gene editing tools such as CRISPR. Other companies, such as Arbutus Biopharma and Arcturus Therapeutics, offer non-viral, non-cell-targeted approaches that mainly exhibit liver trophism. In more recent years, startups such as Sixfold Bio, GenEdit, and Spotlight Therapeutics have begun to solve the non-viral gene delivery problem. Non-viral techniques offer the possibility of repeat dosing and greater tailorability of genetic payloads, which in the future will be more likely to take over viral-based delivery systems. Companies such as Editas Medicine, Intellia Therapeutics, CRISPR Therapeutics, Casebia, Cellectis, Precision Biosciences, bluebird bio, Excision BioTherapeutics, and Sangamo have developed non-viral gene editing techniques, however frequently still use viruses for delivering gene insertion material following genomic cleavage by guided nucleases. These companies focus on gene editing, and still face major delivery hurdles. BioNTech, Moderna Therapeutics and CureVac focus on delivery of mRNA payloads, which are necessarily non-viral delivery problems. Alnylam, Dicerna Pharmaceuticals, and Ionis Pharmaceuticals focus on delivery of siRNA (antisense oligonucleotides) for gene suppression, which also necessitate non-viral delivery systems. In academic contexts, a number of laboratories are working on delivery of PEGylated particles, which form serum protein coronas and chiefly exhibit LDL receptor mediated uptake in cells in vivo. Treatment Cancer There have been attempts to treat cancer using gene therapy. As of 2017, 65% of gene therapy trials were for cancer treatment. Adenovirus vectors are useful for some cancer gene therapies because adenovirus can transiently insert genetic material into a cell without permanently altering the cell's nuclear genome. These vectors can be used to cause antigens to be added to cancers causing an immune response, or hinder angiogenesis by expressing certain proteins. An Adenovirus vector is used in the commercial products Gendicine and Oncorine. Another commercial product, Rexin G, uses a retrovirus-based vector and selectively binds to receptors that are more expressed in tumors. One approach, suicide gene therapy, works by introducing genes encoding enzymes that will cause a cancer cell to die. Another approach is the use oncolytic viruses, such as Oncorine, which are viruses that selectively reproduce in cancerous cells leaving other cells unaffected. 
mRNA has been suggested as a non-viral vector for cancer gene therapy that would temporarily change a cancerous cell's function to create antigens or kill the cancerous cells, and there have been several trials. Afamitresgene autoleucel, sold under the brand name Tecelra, is an autologous T cell immunotherapy used for the treatment of synovial sarcoma. It is a T cell receptor (TCR) gene therapy. It is the first FDA-approved engineered cell therapy for a solid tumor. It uses a self-inactivating lentiviral vector to express a T-cell receptor specific for MAGE-A4, a melanoma-associated antigen. Genetic diseases Gene therapy approaches to replace a faulty gene with a healthy gene have been proposed and are being studied for treating some genetic diseases. As of 2017, 11.1% of gene therapy clinical trials targeted monogenic diseases. Diseases such as sickle cell disease, which are caused by autosomal recessive mutations and in which normal phenotype or cell function may be restored in affected cells by a normal copy of the mutated gene, may be good candidates for gene therapy treatment. The risks and benefits related to gene therapy for sickle cell disease are not known. Gene therapy has been used in the eye. The eye is especially suitable for adeno-associated virus vectors. Voretigene neparvovec is an approved gene therapy for the treatment of Leber's congenital amaurosis caused by mutations in the RPE65 gene. Alipogene tiparvovec, a treatment for pancreatitis caused by a genetic condition, and Zolgensma, for the treatment of spinal muscular atrophy, both use an adeno-associated virus vector. Infectious diseases As of 2017, 7% of genetic therapy trials targeted infectious diseases. 69.2% of trials targeted HIV, 11% hepatitis B or C, and 7.1% malaria. List of gene therapies for treatment of disease Some genetic therapies have been approved by the U.S. Food and Drug Administration (FDA), the European Medicines Agency (EMA), and for use in Russia and China. Adverse effects, contraindications and hurdles for use Some of the unsolved problems include: Off-target effects – The possibility of unwanted, likely harmful, changes to the genome presents a large barrier to the widespread implementation of this technology. Improvements to the specificity of gRNAs and Cas enzymes, as well as refinement of the CRISPR delivery method, present viable solutions to this issue. It is likely that different diseases will benefit from different delivery methods. Short-lived nature – Before gene therapy can become a permanent cure for a condition, the therapeutic DNA introduced into target cells must remain functional and the cells containing the therapeutic DNA must be stable. Problems with integrating therapeutic DNA into the nuclear genome and the rapidly dividing nature of many cells prevent it from achieving long-term benefits. Patients require multiple treatments. Immune response – Any time a foreign object is introduced into human tissues, the immune system is stimulated to attack the invader. Stimulating the immune system in a way that reduces gene therapy effectiveness is possible. The immune system's enhanced response to viruses that it has seen before reduces the effectiveness of repeated treatments. Problems with viral vectors – Viral vectors carry the risks of toxicity, inflammatory responses, and gene control and targeting issues. 
Multigene disorders – Some commonly occurring disorders, such as heart disease, high blood pressure, Alzheimer's disease, arthritis, and diabetes, are affected by variations in multiple genes, which complicate gene therapy. Some therapies may breach the Weismann barrier (between soma and germ-line) protecting the testes, potentially modifying the germline, falling afoul of regulations in countries that prohibit the latter practice. Insertional mutagenesis – If the DNA is integrated in a sensitive spot in the genome, for example in a tumor suppressor gene, the therapy could induce a tumor. This has occurred in clinical trials for X-linked severe combined immunodeficiency (X-SCID) patients, in which hematopoietic stem cells were transduced with a corrective transgene using a retrovirus, and this led to the development of T cell leukemia in 3 of 20 patients. One possible solution is to add a functional tumor suppressor gene to the DNA to be integrated. This may be problematic since the longer the DNA is, the harder it is to integrate into cell genomes. CRISPR technology allows researchers to make much more precise genome changes at exact locations. Cost – alipogene tiparvovec (Glybera), for example, at a cost of $1.6 million per patient, was reported in 2013, to be the world's most expensive drug. Deaths Three patients' deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger, who died in 1999, because of immune rejection response. One X-SCID patient died of leukemia in 2003. In 2007, a rheumatoid arthritis patient died from an infection; the subsequent investigation concluded that the death was not related to gene therapy. Regulations Regulations covering genetic modification are part of general guidelines about human-involved biomedical research. There are no international treaties which are legally binding in this area, but there are recommendations for national laws from various bodies. The Helsinki Declaration (Ethical Principles for Medical Research Involving Human Subjects) was amended by the World Medical Association's General Assembly in 2008. This document provides principles physicians and researchers must consider when involving humans as research subjects. The Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001, provides a legal baseline for all countries. HUGO's document emphasizes human freedom and adherence to human rights, and offers recommendations for somatic gene therapy, including the importance of recognizing public concerns about such research. United States No federal legislation lays out protocols or restrictions about human genetic engineering. This subject is governed by overlapping regulations from local and federal agencies, including the Department of Health and Human Services, the FDA and NIH's Recombinant DNA Advisory Committee. Researchers seeking federal funds for an investigational new drug application, (commonly the case for somatic human genetic engineering,) must obey international and federal guidelines for the protection of human subjects. NIH serves as the main gene therapy regulator for federally funded research. Privately funded research is advised to follow these regulations. NIH provides funding for research that develops or enhances genetic engineering techniques and to evaluate the ethics and quality in current research. 
The NIH maintains a mandatory registry of human genetic engineering research protocols that includes all federally funded projects. An NIH advisory committee published a set of guidelines on gene manipulation. The guidelines discuss lab safety as well as human test subjects and various experimental types that involve genetic changes. Several sections specifically pertain to human genetic engineering, including Section III-C-1. This section describes required review processes and other aspects when seeking approval to begin clinical research involving genetic transfer into a human patient. The protocol for a gene therapy clinical trial must be approved by the NIH's Recombinant DNA Advisory Committee prior to any clinical trial beginning; this is different from any other kind of clinical trial. As with other kinds of drugs, the FDA regulates the quality and safety of gene therapy products and supervises how these products are used clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials, must be reviewed and approved by the FDA and an Institutional Review Board. Gene doping Athletes may adopt gene therapy technologies to improve their performance. Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports. Genetic enhancement Genetic engineering could be used to cure diseases, but also to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases. For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery. Another theorist claims that moral concerns limit but do not prohibit germline engineering. A 2020 issue of the journal Bioethics was devoted to moral issues surrounding germline genetic engineering in people. Possible regulatory schemes include a complete ban, provision to everyone, or professional self-regulation. The American Medical Association's Council on Ethical and Judicial Affairs stated that "genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics." As early in the history of biotechnology as 1990, there have been scientists opposed to attempts to modify the human germline using these new tools, and such concerns have continued as technology progressed. With the advent of new techniques like CRISPR, in March 2015 a group of scientists urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited. 
In April 2015, researchers sparked controversy when they reported results of basic research to edit the DNA of non-viable human embryos using CRISPR. A committee of the American National Academy of Sciences and National Academy of Medicine gave qualified support to human genome editing in 2017 once answers have been found to safety and efficiency problems "but only for serious conditions under stringent oversight." History 1970s and earlier In 1972, Friedmann and Roblin authored a paper in Science titled "Gene therapy for human genetic disease?". Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those with genetic defects. 1980s In 1984, a retrovirus vector system was designed that could efficiently insert foreign genes into mammalian chromosomes. 1990s The first approved gene therapy clinical research in the US took place on 14 September 1990, at the National Institutes of Health (NIH), under the direction of William French Anderson. Four-year-old Ashanti DeSilva received treatment for a genetic defect that left her with adenosine deaminase deficiency (ADA-SCID), a severe immune system deficiency. The defective gene of the patient's blood cells was replaced by the functional variant. Ashanti's immune system was partially restored by the therapy. Production of the missing enzyme was temporarily stimulated, but the new cells with functional genes were not generated. She led a normal life only with the regular injections performed every two months. The effects were successful, but temporary. Cancer gene therapy was introduced in 1992/93 (Trojan et al. 1993). The treatment of glioblastoma multiforme, the malignant brain tumor whose outcome is always fatal, was done using a vector expressing antisense IGF-I RNA (clinical trial approved by NIH protocol no.1602 24 November 1993, and by the FDA in 1994). This therapy also represents the beginning of cancer immunogene therapy, a treatment which proves to be effective due to the anti-tumor mechanism of IGF-I antisense, which is related to strong immune and apoptotic phenomena. In 1992, Claudio Bordignon, working at the Vita-Salute San Raffaele University, performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases. In 2002, this work led to the publication of the first successful gene therapy treatment for ADA-SCID. The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or "bubble boy" disease) from 2000 and 2002, was questioned when two of the ten children treated at the trial's Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the US, the United Kingdom, France, Italy, and Germany. In 1993, Andrew Gobea was born with SCID following prenatal genetic screening. Blood was removed from his mother's placenta and umbilical cord immediately after birth, to acquire stem cells. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cell chromosomes. Stem cells containing the working ADA gene were injected into Andrew's blood. Injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells), produced by stem cells, made ADA enzymes using the ADA gene. After four years more treatment was needed. 
In 1996, Luigi Naldini and Didier Trono developed a new class of gene therapy vectors based on HIV capable of infecting non-dividing cells that have since then been widely used in clinical and research settings, pioneering lentivirals vector in gene therapy. Jesse Gelsinger's death in 1999 impeded gene therapy research in the US. As a result, the FDA suspended several clinical trials pending the reevaluation of ethical and procedural practices. 2000s The modified gene therapy strategy of antisense IGF-I RNA (NIH n˚ 1602) using antisense / triple helix anti-IGF-I approach was registered in 2002, by Wiley gene therapy clinical trial - n˚ 635 and 636. The approach has shown promising results in the treatment of six different malignant tumors: glioblastoma, cancers of liver, colon, prostate, uterus, and ovary (Collaborative NATO Science Programme on Gene Therapy USA, France, Poland n˚ LST 980517 conducted by J. Trojan) (Trojan et al., 2012). This anti-gene antisense/triple helix therapy has proven to be efficient, due to the mechanism stopping simultaneously IGF-I expression on translation and transcription levels, strengthening anti-tumor immune and apoptotic phenomena. 2002 Sickle cell disease can be treated in mice. The mice – which have essentially the same defect that causes human cases – used a viral vector to induce production of fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF temporarily alleviates sickle cell symptoms. The researchers demonstrated this treatment to be a more permanent means to increase therapeutic HbF production. A new gene therapy approach repaired errors in messenger RNA derived from defective genes. This technique has the potential to treat thalassaemia, cystic fibrosis and some cancers. Researchers created liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane. 2003 In 2003, a research team inserted genes into the brain for the first time. They used liposomes coated in a polymer called polyethylene glycol, which unlike viral vectors, are small enough to cross the blood–brain barrier. Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced. Gendicine is a cancer gene therapy that delivers the tumor suppressor gene p53 using an engineered adenovirus. In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma. 2006 In March, researchers announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and damages the immune system. The study is the first to show that gene therapy can treat the myeloid system. In May, a team reported a way to prevent the immune system from rejecting a newly delivered gene. Similar to organ transplantation, gene therapy has been plagued by this problem. The immune system normally recognizes the new gene as foreign and rejects the cells carrying it. The research utilized a newly uncovered network of genes regulated by molecules known as microRNAs. This natural function selectively obscured their therapeutic gene in immune system cells and protected it from discovery. 
Mice infected with the gene containing an immune-cell microRNA target sequence did not reject the gene. In August, scientists successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells. In November, researchers reported on the use of VRX496, a gene-based immunotherapy for the treatment of HIV that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In a phase I clinical trial, five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens were treated. A single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. All five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in a US human clinical trial. 2007 In May 2007, researchers announced the first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007. 2008 Leber's congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in April. Delivery of recombinant adeno-associated virus (AAV) carrying RPE65 yielded positive results. In May, two more groups reported positive results in independent clinical trials using gene therapy to treat the condition. In all three clinical trials, patients recovered functional vision without apparent side-effects. 2009 In September researchers were able to give trichromatic vision to squirrel monkeys. In November 2009, researchers halted a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder. 2010s 2010 An April paper reported that gene therapy addressed achromatopsia (color blindness) in dogs by targeting cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young specimens. The therapy was less efficient for older dogs. In September it was announced that an 18-year-old male patient in France with beta thalassemia major had been successfully treated. Beta thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions. The technique used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007. The patient's haemoglobin levels were stable at 9 to 10 g/dL. About a third of the hemoglobin contained the form introduced by the viral vector and blood transfusions were not needed. Further clinical trials were planned. Bone marrow transplants are the only cure for thalassemia, but 75% of patients do not find a matching donor. Cancer immunogene therapy using modified antigene, antisense/triple helix approach was introduced in South America in 2010/11 in La Sabana University, Bogota (Ethical Committee 14 December 2010, no P-004-10). Considering the ethical aspect of gene diagnostic and gene therapy targeting IGF-I, the IGF-I expressing tumors i.e. lung and epidermis cancers were treated (Trojan et al. 2016). 
2011 In 2007 and 2008, a man (Timothy Ray Brown) was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation) with double-delta-32 mutation which disables the CCR5 receptor. This cure was accepted by the medical community in 2011. It required complete ablation of existing bone marrow, which is very debilitating. In August two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia (CLL). The therapy used genetically modified T cells to attack cells that expressed the CD19 protein to fight the disease. In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free. Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction. In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia; it delivers the gene encoding for VEGF. Neovasculogen is a plasmid encoding the CMV promoter and the 165 amino acid form of VEGF. 2012 The FDA approved Phase I clinical trials on thalassemia major patients in the US for 10 participants in July. The study was expected to continue until 2015. In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment used Alipogene tiparvovec (Glybera) to compensate for lipoprotein lipase deficiency, which can cause severe pancreatitis. The recommendation was endorsed by the European Commission in November 2012, and commercial rollout began in late 2014. Alipogene tiparvovec was expected to cost around $1.6 million per treatment in 2012, revised to $1 million in 2015, making it the most expensive medicine in the world at the time. , only the patients treated in clinical trials and a patient who paid the full price for treatment have received the drug. In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission "or very close to it" three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells. 2013 In March researchers reported that three of five adult subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells with CD19 genes on their surface, i.e. all B cells, cancerous or not. The researchers believed that the patients' immune systems would make normal T cells and B cells after a couple of months. They were also given bone marrow. One patient relapsed and died and one died of a blood clot unrelated to the disease. Following encouraging Phase I trials, in April, researchers announced they were starting Phase II clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients at several hospitals to combat heart disease. The therapy was designed to increase the levels of SERCA2, a protein in heart muscles, improving muscle function. The U.S. Food and Drug Administration (FDA) granted this a breakthrough therapy designation to accelerate the trial and approval process. 
In 2016, it was reported that no improvement was found from the CUPID 2 trial. In July researchers reported promising results for six children with two severe hereditary diseases who had been treated with a partially deactivated lentivirus to replace a faulty gene, with follow-up after 7–32 months. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills. The other children had Wiskott–Aldrich syndrome, which leaves them open to infection, autoimmune diseases, and cancer. Follow-up trials with gene therapy on another six children with Wiskott–Aldrich syndrome were also reported as promising. In October researchers reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children were making progress. In 2014, a further 18 children with ADA-SCID were cured by gene therapy. ADA-SCID children have no functioning immune system and are sometimes known as "bubble children". Also in October researchers reported that they had treated six people with haemophilia in early 2011 using an adeno-associated virus. Over two years later all six were producing clotting factor. 2014 In January researchers reported that six choroideremia patients had been treated with an adeno-associated virus carrying a copy of REP1. Over a six-month to two-year period all had improved their sight. By 2016, 32 patients had been treated with positive results and researchers were hopeful the treatment would be long-lasting. Choroideremia is an inherited genetic eye disease with no approved treatment, leading to loss of sight. In March researchers reported that 12 HIV patients had been treated since 2009 in a trial with a genetically engineered virus carrying a rare mutation (CCR5 deficiency) known to protect against HIV, with promising results. Clinical trials of gene therapy for sickle cell disease were started in 2014. In February LentiGlobin BB305, a gene therapy treatment undergoing clinical trials for treatment of beta thalassemia, gained FDA "breakthrough" status after several patients were able to forgo the frequent blood transfusions usually required to treat the disease. In March researchers delivered a recombinant gene encoding a broadly neutralizing antibody into monkeys infected with simian HIV; the monkeys' cells produced the antibody, which cleared them of HIV. The technique is named immunoprophylaxis by gene transfer (IGT). Animal tests for antibodies to Ebola, malaria, influenza, and hepatitis were underway. In March, scientists, including an inventor of CRISPR, Jennifer Doudna, urged a worldwide moratorium on germline gene therapy, writing "scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans" until the full implications "are discussed among scientific and governmental organizations". In December, scientists of major world academies called for a moratorium on inheritable human genome edits, including those related to CRISPR-Cas9 technologies, but said that basic research, including embryo gene editing, should continue. 2015 Researchers successfully treated a boy with epidermolysis bullosa using skin grafts grown from his own skin cells, genetically altered to repair the mutation that caused his disease. 
In November, researchers announced that they had treated a baby girl, Layla Richards, with an experimental treatment using donor T cells genetically engineered using TALEN to attack cancer cells. One year after the treatment she was still free of her cancer (a highly aggressive form of acute lymphoblastic leukaemia [ALL]). Children with highly aggressive ALL normally have a very poor prognosis and Layla's disease had been regarded as terminal before the treatment. 2016 In April the Committee for Medicinal Products for Human Use of the European Medicines Agency endorsed a gene therapy treatment called Strimvelis and the European Commission approved it in June. This treats children born with adenosine deaminase deficiency and who have no functioning immune system. This was the second gene therapy treatment to be approved in Europe. In October, Chinese scientists reported they had started a trial to genetically modify T cells from 10 adult patients with lung cancer and reinject the modified T cells back into their bodies to attack the cancer cells. The T cells had the PD-1 protein (which stops or slows the immune response) removed using CRISPR-Cas9. A 2016 Cochrane systematic review looking at data from four trials on topical cystic fibrosis transmembrane conductance regulator (CFTR) gene therapy does not support its clinical use as a mist inhaled into the lungs to treat cystic fibrosis patients with lung infections. One of the four trials did find weak evidence that liposome-based CFTR gene transfer therapy may lead to a small respiratory improvement for people with CF. This weak evidence is not enough to make a clinical recommendation for routine CFTR gene therapy. 2017 In February Kite Pharma announced results from a clinical trial of CAR-T cells in around a hundred people with advanced non-Hodgkin lymphoma. In March, French scientists reported on clinical research of gene therapy to treat sickle cell disease. In August, the FDA approved tisagenlecleucel for acute lymphoblastic leukemia. Tisagenlecleucel is an adoptive cell transfer therapy for B-cell acute lymphoblastic leukemia; T cells from a person with cancer are removed, genetically engineered to make a specific T-cell receptor (a chimeric T cell receptor, or "CAR-T") that reacts to the cancer, and are administered back to the person. The T cells are engineered to target a protein called CD19 that is common on B cells. This is the first form of gene therapy to be approved in the United States. In October, a similar therapy called axicabtagene ciloleucel was approved for non-Hodgkin lymphoma. In October, biophysicist and biohacker Josiah Zayner claimed to have performed the very first in-vivo human genome editing in the form of a self-administered therapy. On 13 November, medical scientists working with Sangamo Therapeutics, headquartered in Richmond, California, announced the first ever in-body human gene editing therapy. The treatment, designed to permanently insert a healthy version of the flawed gene that causes Hunter syndrome, was given to 44-year-old Brian Madeux and is part of the world's first study to permanently edit DNA inside the human body. The success of the gene insertion was later confirmed. Clinical trials by Sangamo involving gene editing using zinc finger nuclease (ZFN) are ongoing. In December the results of using an adeno-associated virus with blood clotting factor VIII to treat nine haemophilia A patients were published. 
Six of the seven patients on the high dose regime increased their levels of blood clotting factor VIII to normal. The low and medium dose regimes had no effect on the patients' blood clotting levels. In December, the FDA approved voretigene neparvovec, the first in vivo gene therapy, for the treatment of blindness due to Leber's congenital amaurosis. The price of this treatment is for both eyes. 2019 In May, the FDA approved onasemnogene abeparvovec (Zolgensma) for treating spinal muscular atrophy in children under two years of age. The list price of Zolgensma was set at per dose, making it the most expensive drug ever. In May, the EMA approved betibeglogene autotemcel (Zynteglo) for treating beta thalassemia for people twelve years of age and older. In July, Allergan and Editas Medicine announced a phase I/II clinical trial of AGN-151587 for the treatment of Leber congenital amaurosis 10. This is one of the first studies of a CRISPR-based in vivo human gene editing therapy, where the editing takes place inside the human body. The first injection of the CRISPR-Cas system was confirmed in March 2020. Exagamglogene autotemcel, a CRISPR-based human gene editing therapy, was used for sickle cell disease and thalassemia in clinical trials. 2020s 2020 In May, onasemnogene abeparvovec (Zolgensma) was approved by the European Union for the treatment of spinal muscular atrophy in people who either have clinical symptoms of SMA type 1 or who have no more than three copies of the SMN2 gene, irrespective of body weight or age. In August, Audentes Therapeutics reported that three out of 17 children with X-linked myotubular myopathy participating in the clinical trial of an AAV8-based gene therapy treatment, AT132, had died. It was suggested that the treatment, whose dosage is based on body weight, exerts a disproportionately toxic effect on heavier patients, since the three patients who died were heavier than the others. The trial was put on clinical hold. On 15 October, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorisation for the medicinal product Libmeldy (an autologous CD34+ cell enriched population that contains hematopoietic stem and progenitor cells transduced ex vivo using a lentiviral vector encoding the human arylsulfatase A gene), a gene therapy for the treatment of children with the "late infantile" (LI) or "early juvenile" (EJ) forms of metachromatic leukodystrophy (MLD). The active substance of Libmeldy consists of the child's own stem cells which have been modified to contain working copies of the ARSA gene. When the modified cells are injected back into the patient as a one-time infusion, the cells are expected to start producing the ARSA enzyme that breaks down the build-up of sulfatides in the nerve cells and other cells of the patient's body. Libmeldy was approved for medical use in the EU in December 2020. On 15 October, Lysogene, a French biotechnology company, reported the death of a patient who had received LYS-SAF302, an experimental gene therapy treatment for mucopolysaccharidosis type IIIA (Sanfilippo syndrome type A). 
2021 In May, a new method using an altered version of HIV as a lentivirus vector was reported in the treatment of 50 children with ADA-SCID, obtaining positive results in 48 of them. This method is expected to be safer than the retroviral vectors commonly used in previous studies of SCID, where the development of leukemia was usually observed; it had already been used in 2019, but in a smaller group with X-SCID. In June a clinical trial on six patients affected with transthyretin amyloidosis reported a reduction in the concentration of misfolded transthyretin (TTR) protein in serum through CRISPR-based inactivation of the TTR gene in liver cells, observing mean reductions of 52% and 87% in the lower and higher dose groups. This was done in vivo, without taking cells out of the patient to edit them and reinfuse them later. In July results of a small phase I gene therapy study were published, reporting restoration of dopamine production in seven patients between 4 and 9 years old affected by aromatic L-amino acid decarboxylase deficiency (AADC deficiency). 2022 In February, the first ever gene therapy for Tay–Sachs disease was announced; it uses an adeno-associated virus to deliver a correct copy of the HEXA gene, whose mutation causes the disease, to brain cells. Only two children were part of a compassionate use trial, presenting improvements over the natural course of the disease and no vector-related adverse events. In May, eladocagene exuparvovec was recommended for approval by the European Commission. In July results of a gene therapy candidate for haemophilia B called FLT180 were announced; it uses an adeno-associated virus (AAV) to restore the clotting factor IX (FIX) protein. Normal levels of the protein were observed with low doses of the therapy, but immunosuppression was needed to decrease the risk of vector-related immune responses. In December, a 13-year-old girl who had been diagnosed with T-cell acute lymphoblastic leukaemia was successfully treated at Great Ormond Street Hospital (GOSH) in the first documented use of therapeutic gene editing for this purpose, after undergoing six months of an experimental treatment when all other treatments had failed. The procedure included reprogramming a healthy T-cell to destroy the cancerous T-cells to first rid her of leukaemia, and then rebuilding her immune system using healthy immune cells. The GOSH team used base editing and had previously treated a case of acute lymphoblastic leukaemia in 2015 using TALENs. 2023 In May 2023, the FDA approved beremagene geperpavec for the treatment of wounds in people with dystrophic epidermolysis bullosa (DEB). It is applied as a topical gel that delivers a herpes-simplex virus type 1 (HSV-1) vector encoding the collagen type VII alpha 1 chain (COL7A1) gene that is dysfunctional in those affected by DEB. One trial found that 65% of the Vyjuvek-treated wounds were completely closed at 24 weeks, compared with only 26% of the placebo-treated wounds. Its use as an eyedrop has also been reported, with good results, for a patient with DEB who had vision loss due to widespread blistering. In June 2023, the FDA gave an accelerated approval to Elevidys for Duchenne muscular dystrophy (DMD), only for boys 4 to 5 years old as they are more likely to benefit from the therapy, which consists of a one-time intravenous infusion of a virus (AAV rh74 vector) that delivers a functioning "microdystrophin" gene (138 kDa) into the muscle cells to act in place of the normal dystrophin (427 kDa) that is found mutated in this disease. 
In July 2023, it was reported that a new method had been developed to affect gene expression through direct current. In December 2023, two gene therapies were approved for sickle cell disease: exagamglogene autotemcel and lovotibeglogene autotemcel. 2024 In November 2024, the FDA granted accelerated approval for eladocagene exuparvovec-tneq (Kebilidi, PTC Therapeutics), a direct-to-brain gene therapy for aromatic L-amino acid decarboxylase deficiency. It uses a recombinant adeno-associated virus serotype 2 (rAAV2) to deliver a functioning DOPA decarboxylase (DDC) gene directly into the putamen, increasing the AADC enzyme and restoring dopamine production. It is administered through a stereotactic surgical procedure. List of gene therapies Gene therapy for color blindness Gene therapy for epilepsy Gene therapy for osteoarthritis Gene therapy in Parkinson's disease Gene therapy of the human retina List of gene therapies References Further reading External links Applied genetics Approved gene therapies Bioethics Biotechnology Medical genetics Molecular biology Molecular genetics Gene delivery 1989 introductions 1996 introductions 1989 in biotechnology Genetic engineering
Gene therapy
[ "Chemistry", "Technology", "Engineering", "Biology" ]
12,502
[ "Bioethics", "Genetics techniques", "Biological engineering", "Biochemistry", "Genetic engineering", "Biotechnology", "Molecular biology techniques", "Molecular genetics", "Gene therapy", "nan", "Molecular biology", "Ethics of science and technology", "Gene delivery" ]
17,521,962
https://en.wikipedia.org/wiki/Certified%20wireless%20network%20expert
The Certified Wireless Network Expert (CWNE) is the highest-level certification in the CWNP program started in 2001 by Planet3 Wireless. It certifies the ability to design, install, secure, optimize and troubleshoot IEEE 802.11 wireless networks. Certification track The CWNE credential is the final step in a four-level certification process. It validates the applicant's real-world application of the principles covered by the other CWNP certification exams, including wireless protocol analysis, security, advanced design, spectrum analysis, wired network administration, and troubleshooting. CWNE Requirements The requirements for earning the CWNE certification changed on October 1, 2010, when the CWNE exam (PW0-300) was retired. The new requirements for the CWNE certification are: Valid and current CWSP, CWAP, CWISA, and CWDP certifications (requires CWNA). Three (3) years of documented enterprise Wi-Fi implementation experience. Three (3) professional endorsements. One (1) other current, valid professional networking certification. Documentation of three (3) enterprise Wi-Fi projects in which you participated or which you led, in the form of 500-word essays. Recertification Like most other CWNP certifications, the CWNE certification is valid for three (3) years. The certification may be renewed by reporting at least sixty (60) hours of approved Continuing Education (CE). Passing the most current version of either the CWSP, CWAP, or CWDP exam, which was the only recertification requirement prior to the change, is now worth twenty (20) CE hours. See also Professional certification (Computer technology) References External links Official CWNP Site Wireless networking Information technology qualifications
Certified wireless network expert
[ "Technology", "Engineering" ]
354
[ "Wireless networking", "Computer occupations", "Computer networks engineering", "Information technology qualifications" ]
2,249,718
https://en.wikipedia.org/wiki/Representation%20theory%20of%20the%20Lorentz%20group
The Lorentz group is a Lie group of symmetries of the spacetime of special relativity. This group can be realized as a collection of matrices, linear transformations, or unitary operators on some Hilbert space; it has a variety of representations. This group is significant because special relativity together with quantum mechanics are the two physical theories that are most thoroughly established, and the conjunction of these two theories is the study of the infinite-dimensional unitary representations of the Lorentz group. These have both historical importance in mainstream physics, as well as connections to more speculative present-day theories. Development The full theory of the finite-dimensional representations of the Lie algebra of the Lorentz group is deduced using the general framework of the representation theory of semisimple Lie algebras. The finite-dimensional representations of the connected component of the full Lorentz group are obtained by employing the Lie correspondence and the matrix exponential. The full finite-dimensional representation theory of the universal covering group (and also the spin group, a double cover) of is obtained, and explicitly given in terms of action on a function space in representations of and . The representatives of time reversal and space inversion are given in space inversion and time reversal, completing the finite-dimensional theory for the full Lorentz group. The general properties of the (m, n) representations are outlined. Action on function spaces is considered, with the action on spherical harmonics and the Riemann P-functions appearing as examples. The infinite-dimensional case of irreducible unitary representations are realized for the principal series and the complementary series. Finally, the Plancherel formula for is given, and representations of are classified and realized for Lie algebras. The development of the representation theory has historically followed the development of the more general theory of representation theory of semisimple groups, largely due to Élie Cartan and Hermann Weyl, but the Lorentz group has also received special attention due to its importance in physics. Notable contributors are physicist E. P. Wigner and mathematician Valentine Bargmann with their Bargmann–Wigner program, one conclusion of which is, roughly, a classification of all unitary representations of the inhomogeneous Lorentz group amounts to a classification of all possible relativistic wave equations. The classification of the irreducible infinite-dimensional representations of the Lorentz group was established by Paul Dirac's doctoral student in theoretical physics, Harish-Chandra, later turned mathematician, in 1947. The corresponding classification for was published independently by Bargmann and Israel Gelfand together with Mark Naimark in the same year. Applications Many of the representations, both finite-dimensional and infinite-dimensional, are important in theoretical physics. Representations appear in the description of fields in classical field theory, most importantly the electromagnetic field, and of particles in relativistic quantum mechanics, as well as of both particles and quantum fields in quantum field theory and of various objects in string theory and beyond. The representation theory also provides the theoretical ground for the concept of spin. The theory enters into general relativity in the sense that in small enough regions of spacetime, physics is that of special relativity. 
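As a concrete, minimal illustration of the Lie correspondence and the matrix exponential invoked in the development outline above (a standard textbook computation, not taken from this article, written in one common basis convention for the generators), exponentiating a single boost generator in the defining four-dimensional representation already produces a familiar Lorentz boost:

\[
K_1 =
\begin{pmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix},
\qquad
e^{\xi K_1} = I + K_1 \sinh\xi + K_1^{2}\,(\cosh\xi - 1) =
\begin{pmatrix}
\cosh\xi & \sinh\xi & 0 & 0 \\
\sinh\xi & \cosh\xi & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix},
\]

which is a boost along the x-axis with rapidity \(\xi\); the exponential series collapses because \(K_1^{3} = K_1\), splitting into the hyperbolic sine and cosine terms shown. The same exponentiation procedure, applied to the generators of any finite-dimensional representation of the Lie algebra, yields the corresponding representation of the connected component of the group.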
The finite-dimensional irreducible non-unitary representations together with the irreducible infinite-dimensional unitary representations of the inhomogeneous Lorentz group, the Poincare group, are the representations that have direct physical relevance. Infinite-dimensional unitary representations of the Lorentz group appear by restriction of the irreducible infinite-dimensional unitary representations of the Poincaré group acting on the Hilbert spaces of relativistic quantum mechanics and quantum field theory. But these are also of mathematical interest and of potential direct physical relevance in other roles than that of a mere restriction. There were speculative theories, (tensors and spinors have infinite counterparts in the expansors of Dirac and the expinors of Harish-Chandra) consistent with relativity and quantum mechanics, but they have found no proven physical application. Modern speculative theories potentially have similar ingredients per below. Classical field theory While the electromagnetic field together with the gravitational field are the only classical fields providing accurate descriptions of nature, other types of classical fields are important too. In the approach to quantum field theory (QFT) referred to as second quantization, the starting point is one or more classical fields, where e.g. the wave functions solving the Dirac equation are considered as classical fields prior to (second) quantization. While second quantization and the Lagrangian formalism associated with it is not a fundamental aspect of QFT, it is the case that so far all quantum field theories can be approached this way, including the standard model. In these cases, there are classical versions of the field equations following from the Euler–Lagrange equations derived from the Lagrangian using the principle of least action. These field equations must be relativistically invariant, and their solutions (which will qualify as relativistic wave functions according to the definition below) must transform under some representation of the Lorentz group. The action of the Lorentz group on the space of field configurations (a field configuration is the spacetime history of a particular solution, e.g. the electromagnetic field in all of space over all time is one field configuration) resembles the action on the Hilbert spaces of quantum mechanics, except that the commutator brackets are replaced by field theoretical Poisson brackets. Relativistic quantum mechanics For the present purposes the following definition is made: A relativistic wave function is a set of functions on spacetime which transforms under an arbitrary proper Lorentz transformation as where is an -dimensional matrix representative of belonging to some direct sum of the representations to be introduced below. The most useful relativistic quantum mechanics one-particle theories (there are no fully consistent such theories) are the Klein–Gordon equation and the Dirac equation in their original setting. They are relativistically invariant and their solutions transform under the Lorentz group as Lorentz scalars () and bispinors () respectively. The electromagnetic field is a relativistic wave function according to this definition, transforming under . The infinite-dimensional representations may be used in the analysis of scattering. Quantum field theory In quantum field theory, the demand for relativistic invariance enters, among other ways in that the S-matrix necessarily must be Poincaré invariant. 
This has the implication that there is one or more infinite-dimensional representation of the Lorentz group acting on Fock space. One way to guarantee the existence of such representations is the existence of a Lagrangian description (with modest requirements imposed, see the reference) of the system using the canonical formalism, from which a realization of the generators of the Lorentz group may be deduced. The transformations of field operators illustrate the complementary role played by the finite-dimensional representations of the Lorentz group and the infinite-dimensional unitary representations of the Poincare group, witnessing the deep unity between mathematics and physics. For illustration, consider the definition an -component field operator: A relativistic field operator is a set of operator valued functions on spacetime which transforms under proper Poincaré transformations according to Here is the unitary operator representing on the Hilbert space on which is defined and is an -dimensional representation of the Lorentz group. The transformation rule is the second Wightman axiom of quantum field theory. By considerations of differential constraints that the field operator must be subjected to in order to describe a single particle with definite mass and spin (or helicity), it is deduced that where are interpreted as creation and annihilation operators respectively. The creation operator transforms according to and similarly for the annihilation operator. The point to be made is that the field operator transforms according to a finite-dimensional non-unitary representation of the Lorentz group, while the creation operator transforms under the infinite-dimensional unitary representation of the Poincare group characterized by the mass and spin of the particle. The connection between the two are the wave functions, also called coefficient functions that carry both the indices operated on by Lorentz transformations and the indices operated on by Poincaré transformations. This may be called the Lorentz–Poincaré connection. To exhibit the connection, subject both sides of equation to a Lorentz transformation resulting in for e.g. , where is the non-unitary Lorentz group representative of and is a unitary representative of the so-called Wigner rotation associated to and that derives from the representation of the Poincaré group, and is the spin of the particle. All of the above formulas, including the definition of the field operator in terms of creation and annihilation operators, as well as the differential equations satisfied by the field operator for a particle with specified mass, spin and the representation under which it is supposed to transform, and also that of the wave function, can be derived from group theoretical considerations alone once the frameworks of quantum mechanics and special relativity is given. Speculative theories In theories in which spacetime can have more than dimensions, the generalized Lorentz groups of the appropriate dimension take the place of . The requirement of Lorentz invariance takes on perhaps its most dramatic effect in string theory. Classical relativistic strings can be handled in the Lagrangian framework by using the Nambu–Goto action. This results in a relativistically invariant theory in any spacetime dimension. 
But as it turns out, the theory of open and closed bosonic strings (the simplest string theory) is impossible to quantize in such a way that the Lorentz group is represented on the space of states (a Hilbert space) unless the dimension of spacetime is 26. The corresponding result for superstring theory is again deduced demanding Lorentz invariance, but now with supersymmetry. In these theories the Poincaré algebra is replaced by a supersymmetry algebra which is a -graded Lie algebra extending the Poincaré algebra. The structure of such an algebra is to a large degree fixed by the demands of Lorentz invariance. In particular, the fermionic operators (grade ) belong to a or representation space of the (ordinary) Lorentz Lie algebra. The only possible dimension of spacetime in such theories is 10. Finite-dimensional representations Representation theory of groups in general, and Lie groups in particular, is a very rich subject. The Lorentz group has some properties that makes it "agreeable" and others that make it "not very agreeable" within the context of representation theory; the group is simple and thus semisimple, but is not connected, and none of its components are simply connected. Furthermore, the Lorentz group is not compact. For finite-dimensional representations, the presence of semisimplicity means that the Lorentz group can be dealt with the same way as other semisimple groups using a well-developed theory. In addition, all representations are built from the irreducible ones, since the Lie algebra possesses the complete reducibility property. But, the non-compactness of the Lorentz group, in combination with lack of simple connectedness, cannot be dealt with in all the aspects as in the simple framework that applies to simply connected, compact groups. Non-compactness implies, for a connected simple Lie group, that no nontrivial finite-dimensional unitary representations exist. Lack of simple connectedness gives rise to spin representations of the group. The non-connectedness means that, for representations of the full Lorentz group, time reversal and reversal of spatial orientation have to be dealt with separately. History The development of the finite-dimensional representation theory of the Lorentz group mostly follows that of representation theory in general. Lie theory originated with Sophus Lie in 1873. By 1888 the classification of simple Lie algebras was essentially completed by Wilhelm Killing. In 1913 the theorem of highest weight for representations of simple Lie algebras, the path that will be followed here, was completed by Élie Cartan. Richard Brauer was during the period of 1935–38 largely responsible for the development of the Weyl-Brauer matrices describing how spin representations of the Lorentz Lie algebra can be embedded in Clifford algebras. The Lorentz group has also historically received special attention in representation theory, see History of infinite-dimensional unitary representations below, due to its exceptional importance in physics. Mathematicians Hermann Weyl and Harish-Chandra and physicists Eugene Wigner and Valentine Bargmann made substantial contributions both to general representation theory and in particular to the Lorentz group. Physicist Paul Dirac was perhaps the first to manifestly knit everything together in a practical application of major lasting importance with the Dirac equation in 1928. 
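For orientation before the detailed treatment that follows (a sketch in a common physics convention; signs and factors of i differ between references), with rotation generators $J_i$ and boost generators $K_i$ the Lorentz Lie algebra reads

\[
[J_i, J_j] = i\epsilon_{ijk}J_k, \qquad [J_i, K_j] = i\epsilon_{ijk}K_k, \qquad [K_i, K_j] = -\,i\epsilon_{ijk}J_k ,
\]

and passing to the complexification and forming

\[
A_i = \tfrac{1}{2}\,(J_i + iK_i), \qquad B_i = \tfrac{1}{2}\,(J_i - iK_i)
\]

yields two mutually commuting copies of the angular momentum algebra: $[A_i,A_j]=i\epsilon_{ijk}A_k$, $[B_i,B_j]=i\epsilon_{ijk}B_k$, $[A_i,B_j]=0$. The finite-dimensional irreducible representations are accordingly labeled by a pair of half-integers $(m,n)$, one spin for each copy, with dimension $(2m+1)(2n+1)$; for example $(0,0)$ is the scalar, $(\tfrac12,0)$ and $(0,\tfrac12)$ are the two Weyl spinor representations, and $(\tfrac12,\tfrac12)$ is the four-vector representation.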
The Lie algebra This section addresses the irreducible complex linear representations of the complexification of the Lie algebra of the Lorentz group. A convenient basis for is given by the three generators of rotations and the three generators of boosts. They are explicitly given in conventions and Lie algebra bases. The Lie algebra is complexified, and the basis is changed to the components of its two ideals The components of and separately satisfy the commutation relations of the Lie algebra and, moreover, they commute with each other, where are indices which each take values , and is the three-dimensional Levi-Civita symbol. Let and denote the complex linear span of and respectively. One has the isomorphisms where is the complexification of The utility of these isomorphisms comes from the fact that all irreducible representations of , and hence all irreducible complex linear representations of are known. The irreducible complex linear representation of is isomorphic to one of the highest weight representations. These are explicitly given in complex linear representations of The unitarian trick The Lie algebra is the Lie algebra of It contains the compact subgroup with Lie algebra The latter is a compact real form of Thus from the first statement of the unitarian trick, representations of are in one-to-one correspondence with holomorphic representations of By compactness, the Peter–Weyl theorem applies to , and hence orthonormality of irreducible characters may be appealed to. The irreducible unitary representations of are precisely the tensor products of irreducible unitary representations of . By appeal to simple connectedness, the second statement of the unitarian trick is applied. The objects in the following list are in one-to-one correspondence: Holomorphic representations of Smooth representations of Real linear representations of Complex linear representations of Tensor products of representations appear at the Lie algebra level as either of where is the identity operator. Here, the latter interpretation, which follows from , is intended. The highest weight representations of are indexed by for . (The highest weights are actually , but the notation here is adapted to that of ) The tensor products of two such complex linear factors then form the irreducible complex linear representations of Finally, the -linear representations of the real forms of the far left, , and the far right, in are obtained from the -linear representations of characterized in the previous paragraph. The (μ, ν)-representations of sl(2, C) The complex linear representations of the complexification of obtained via isomorphisms in , stand in one-to-one correspondence with the real linear representations of The set of all real linear irreducible representations of are thus indexed by a pair . The complex linear ones, corresponding precisely to the complexification of the real linear representations, are of the form , while the conjugate linear ones are the . All others are real linear only. The linearity properties follow from the canonical injection, the far right in , of into its complexification. Representations on the form or are given by real matrices (the latter are not irreducible). Explicitly, the real linear -representations of are where are the complex linear irreducible representations of and their complex conjugate representations. (The labeling is usually in the mathematics literature , but half-integers are chosen here to conform with the labeling for the Lie algebra.) 
Here the tensor product is interpreted in the former sense of . These representations are concretely realized below. The (m, n)-representations of so(3; 1) Via the displayed isomorphisms in and knowledge of the complex linear irreducible representations of upon solving for and , all irreducible representations of and, by restriction, those of are obtained. The representations of obtained this way are real linear (and not complex or conjugate linear) because the algebra is not closed upon conjugation, but they are still irreducible. Since is semisimple, all its representations can be built up as direct sums of the irreducible ones. Thus the finite dimensional irreducible representations of the Lorentz algebra are classified by an ordered pair of half-integers and , conventionally written as one of where is a finite-dimensional vector space. These are, up to a similarity transformation, uniquely given by where is the -dimensional unit matrix and are the -dimensional irreducible representations of also termed spin matrices or angular momentum matrices. These are explicitly given as where denotes the Kronecker delta. In components, with , , the representations are given by Common representations The representation is the one-dimensional trivial representation and is carried by relativistic scalar field theories. Fermionic supersymmetry generators transform under one of the or representations (Weyl spinors). The four-momentum of a particle (either massless or massive) transforms under the representation, a four-vector. A physical example of a (1,1) traceless symmetric tensor field is the traceless part of the energy–momentum tensor . Off-diagonal direct sums Since for any irreducible representation for which it is essential to operate over the field of complex numbers, the direct sum of representations and have particular relevance to physics, since it permits to use linear operators over real numbers. is the bispinor representation. See also Dirac spinor and Weyl spinors and bispinors below. is the Rarita–Schwinger field representation. would be the symmetry of the hypothesized gravitino. It can be obtained from the representation. is the representation of a parity-invariant 2-form field (a.k.a. curvature form). The electromagnetic field tensor transforms under this representation. The group The approach in this section is based on theorems that, in turn, are based on the fundamental Lie correspondence. The Lie correspondence is in essence a dictionary between connected Lie groups and Lie algebras. The link between them is the exponential mapping from the Lie algebra to the Lie group, denoted If for some vector space is a representation, a representation of the connected component of is defined by This definition applies whether the resulting representation is projective or not. Surjectiveness of exponential map for SO(3, 1) From a practical point of view, it is important whether the first formula in can be used for all elements of the group. It holds for all , however, in the general case, e.g. for , not all are in the image of . But is surjective. One way to show this is to make use of the isomorphism the latter being the Möbius group. It is a quotient of (see the linked article). The quotient map is denoted with The map is onto. Apply with being the differential of at the identity. Then Since the left hand side is surjective (both and are), the right hand side is surjective and hence is surjective. 
Finally, recycle the argument once more, but now with the known isomorphism between and to find that is onto for the connected component of the Lorentz group. Fundamental group The Lorentz group is doubly connected, i. e. is a group with two equivalence classes of loops as its elements. Projective representations Since has two elements, some representations of the Lie algebra will yield projective representations. Once it is known whether a representation is projective, formula applies to all group elements and all representations, including the projective ones — with the understanding that the representative of a group element will depend on which element in the Lie algebra (the in ) is used to represent the group element in the standard representation. For the Lorentz group, the -representation is projective when is a half-integer. See . For a projective representation of , it holds that since any loop in traversed twice, due to the double connectedness, is contractible to a point, so that its homotopy class is that of a constant map. It follows that is a double-valued function. It is not possible to consistently choose a sign to obtain a continuous representation of all of , but this is possible locally around any point. The covering group SL(2, C) Consider as a real Lie algebra with basis where the sigmas are the Pauli matrices. From the relations is obtained which are exactly on the form of the -dimensional version of the commutation relations for (see conventions and Lie algebra bases below). Thus, the map , , extended by linearity is an isomorphism. Since is simply connected, it is the universal covering group of . A geometric view Let be a path from to , denote its homotopy class by and let be the set of all such homotopy classes. Define the set and endow it with the multiplication operation where is the path multiplication of and : With this multiplication, becomes a group isomorphic to the universal covering group of . Since each has two elements, by the above construction, there is a 2:1 covering map . According to covering group theory, the Lie algebras and of are all isomorphic. The covering map is simply given by . An algebraic view For an algebraic view of the universal covering group, let act on the set of all Hermitian matrices by the operation The action on is linear. An element of may be written in the form The map is a group homomorphism into Thus is a 4-dimensional representation of . Its kernel must in particular take the identity matrix to itself, and therefore . Thus for in the kernel so, by Schur's lemma, is a multiple of the identity, which must be since . The space is mapped to Minkowski space , via The action of on preserves determinants. The induced representation of on via the above isomorphism, given by preserves the Lorentz inner product since This means that belongs to the full Lorentz group . By the main theorem of connectedness, since is connected, its image under in is connected, and hence is contained in . It can be shown that the Lie map of is a Lie algebra isomorphism: The map is also onto. Thus , since it is simply connected, is the universal covering group of , isomorphic to the group of above. Non-surjectiveness of exponential mapping for SL(2, C) The exponential mapping is not onto. The matrix is in but there is no such that . 
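A standard example illustrating this failure of surjectivity (offered here as a representative case, not necessarily the matrix originally displayed) is

\[
\begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix} \in \mathrm{SL}(2,\mathbb{C}) .
\]

Its trace is $-2$, so any $X \in \mathfrak{sl}(2,\mathbb{C})$ with $e^X$ equal to it would need eigenvalues $\pm i\pi$ (mod $2\pi i$) and hence be diagonalizable, which would force $e^X = -I$; but the matrix above is a nontrivial Jordan block distinct from $-I$, so no such $X$ exists.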
In general, if is an element of a connected Lie group with Lie algebra then, by , The matrix can be written Realization of representations of and and their Lie algebras The complex linear representations of and are more straightforward to obtain than the representations. They can be (and usually are) written down from scratch. The holomorphic group representations (meaning the corresponding Lie algebra representation is complex linear) are related to the complex linear Lie algebra representations by exponentiation. The real linear representations of are exactly the -representations. They can be exponentiated too. The -representations are complex linear and are (isomorphic to) the highest weight-representations. These are usually indexed with only one integer (but half-integers are used here). The mathematics convention is used in this section for convenience. Lie algebra elements differ by a factor of and there is no factor of in the exponential mapping compared to the physics convention used elsewhere. Let the basis of be This choice of basis, and the notation, is standard in the mathematical literature. Complex linear representations The irreducible holomorphic -dimensional representations can be realized on the space of homogeneous polynomial of degree in 2 variables the elements of which are The action of is given by The associated -action is, using and the definition above, for the basis elements of With a choice of basis for , these representations become matrix Lie algebras. Real linear representations The -representations are realized on a space of polynomials in homogeneous of degree in and homogeneous of degree in The representations are given by By employing again it is found that In particular for the basis elements, Properties of the (m, n) representations The representations, defined above via (as restrictions to the real form ) of tensor products of irreducible complex linear representations and of are irreducible, and they are the only irreducible representations. Irreducibility follows from the unitarian trick and that a representation of is irreducible if and only if , where are irreducible representations of . Uniqueness follows from that the are the only irreducible representations of , which is one of the conclusions of the theorem of the highest weight. Dimension The representations are -dimensional. This follows easiest from counting the dimensions in any concrete realization, such as the one given in representations of and . For a Lie general algebra the Weyl dimension formula, applies, where is the set of positive roots, is the highest weight, and is half the sum of the positive roots. The inner product is that of the Lie algebra invariant under the action of the Weyl group on the Cartan subalgebra. The roots (really elements of ) are via this inner product identified with elements of For the formula reduces to , where the present notation must be taken into account. The highest weight is . By taking tensor products, the result follows. Faithfulness If a representation of a Lie group is not faithful, then is a nontrivial normal subgroup. There are three relevant cases. is non-discrete and abelian. is non-discrete and non-abelian. is discrete. In this case , where is the center of . In the case of , the first case is excluded since is semi-simple. The second case (and the first case) is excluded because is simple. For the third case, is isomorphic to the quotient But is the center of It follows that the center of is trivial, and this excludes the third case. 
The conclusion is that every representation and every projective representation for finite-dimensional vector spaces are faithful. By using the fundamental Lie correspondence, the statements and the reasoning above translate directly to Lie algebras with (abelian) nontrivial non-discrete normal subgroups replaced by (one-dimensional) nontrivial ideals in the Lie algebra, and the center of replaced by the center of The center of any semisimple Lie algebra is trivial and is semi-simple and simple, and hence has no non-trivial ideals. A related fact is that if the corresponding representation of is faithful, then the representation is projective. Conversely, if the representation is non-projective, then the corresponding representation is not faithful, but is . Non-unitarity The Lie algebra representation is not Hermitian. Accordingly, the corresponding (projective) representation of the group is never unitary. This is due to the non-compactness of the Lorentz group. In fact, a connected simple non-compact Lie group cannot have any nontrivial unitary finite-dimensional representations. There is a topological proof of this. Let , where is finite-dimensional, be a continuous unitary representation of the non-compact connected simple Lie group . Then where is the compact subgroup of consisting of unitary transformations of . The kernel of is a normal subgroup of . Since is simple, is either all of , in which case is trivial, or is trivial, in which case is faithful. In the latter case is a diffeomorphism onto its image, and is a Lie group. This would mean that is an embedded non-compact Lie subgroup of the compact group . This is impossible with the subspace topology on since all embedded Lie subgroups of a Lie group are closed If were closed, it would be compact, and then would be compact, contrary to assumption. In the case of the Lorentz group, this can also be seen directly from the definitions. The representations of and used in the construction are Hermitian. This means that is Hermitian, but is anti-Hermitian. The non-unitarity is not a problem in quantum field theory, since the objects of concern are not required to have a Lorentz-invariant positive definite norm. Restriction to SO(3) The representation is, however, unitary when restricted to the rotation subgroup , but these representations are not irreducible as representations of SO(3). A Clebsch–Gordan decomposition can be applied showing that an representation have -invariant subspaces of highest weight (spin) , where each possible highest weight (spin) occurs exactly once. A weight subspace of highest weight (spin) is -dimensional. So for example, the (, ) representation has spin 1 and spin 0 subspaces of dimension 3 and 1 respectively. Since the angular momentum operator is given by , the highest spin in quantum mechanics of the rotation sub-representation will be and the "usual" rules of addition of angular momenta and the formalism of 3-j symbols, 6-j symbols, etc. applies. Spinors It is the -invariant subspaces of the irreducible representations that determine whether a representation has spin. From the above paragraph, it is seen that the representation has spin if is half-integer. The simplest are and , the Weyl-spinors of dimension . Then, for example, and are a spin representations of dimensions and respectively. According to the above paragraph, there are subspaces with spin both and in the last two cases, so these representations cannot likely represent a single physical particle which must be well-behaved under . 
It cannot be ruled out in general, however, that representations with multiple subrepresentations with different spin can represent physical particles with well-defined spin. It may be that there is a suitable relativistic wave equation that projects out unphysical components, leaving only a single spin. Construction of pure spin representations for any (under ) from the irreducible representations involves taking tensor products of the Dirac-representation with a non-spin representation, extraction of a suitable subspace, and finally imposing differential constraints. Dual representations The following theorems are applied to examine whether the dual representation of an irreducible representation is isomorphic to the original representation: The set of weights of the dual representation of an irreducible representation of a semisimple Lie algebra is, including multiplicities, the negative of the set of weights for the original representation. Two irreducible representations are isomorphic if and only if they have the same highest weight. For each semisimple Lie algebra there exists a unique element of the Weyl group such that if is a dominant integral weight, then is again a dominant integral weight. If is an irreducible representation with highest weight , then has highest weight . Here, the elements of the Weyl group are considered as orthogonal transformations, acting by matrix multiplication, on the real vector space of roots. If is an element of the Weyl group of a semisimple Lie algebra, then . In the case of the Weyl group is . It follows that each is isomorphic to its dual The root system of is shown in the figure to the right. The Weyl group is generated by where is reflection in the plane orthogonal to as ranges over all roots. Inspection shows that so . Using the fact that if are Lie algebra representations and , then , the conclusion for is Complex conjugate representations If is a representation of a Lie algebra, then is a representation, where the bar denotes entry-wise complex conjugation in the representative matrices. This follows from that complex conjugation commutes with addition and multiplication. In general, every irreducible representation of can be written uniquely as , where with holomorphic (complex linear) and anti-holomorphic (conjugate linear). For since is holomorphic, is anti-holomorphic. Direct examination of the explicit expressions for and in equation below shows that they are holomorphic and anti-holomorphic respectively. Closer examination of the expression also allows for identification of and for as Using the above identities (interpreted as pointwise addition of functions), for yields where the statement for the group representations follow from . It follows that the irreducible representations have real matrix representatives if and only if . Reducible representations on the form have real matrices too. The adjoint representation, the Clifford algebra, and the Dirac spinor representation In general representation theory, if is a representation of a Lie algebra then there is an associated representation of on , also denoted , given by Likewise, a representation of a group yields a representation on of , still denoted , given by If and are the standard representations on and if the action is restricted to then the two above representations are the adjoint representation of the Lie algebra and the adjoint representation of the group respectively. 
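In standard notation (recalled here for convenience; this is general Lie theory, not specific to the Lorentz group), these adjoint actions are

\[
\mathrm{Ad}(g)\,X = g\,X\,g^{-1}, \qquad \mathrm{ad}(X)\,Y = [X, Y],
\]

where $g$ is a group element and $X, Y$ are Lie algebra elements; the second map is the differential of the first at the identity.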
The corresponding representations (some or ) always exist for any matrix Lie group, and are paramount for investigation of the representation theory in general, and for any given Lie group in particular. Applying this to the Lorentz group, if is a projective representation, then direct calculation using shows that the induced representation on is a proper representation, i.e. a representation without phase factors. In quantum mechanics this means that if or is a representation acting on some Hilbert space , then the corresponding induced representation acts on the set of linear operators on . As an example, the induced representation of the projective spin representation on is the non-projective 4-vector (, ) representation. For simplicity, consider only the "discrete part" of , that is, given a basis for , the set of constant matrices of various dimension, including possibly infinite dimensions. The induced 4-vector representation of above on this simplified has an invariant 4-dimensional subspace that is spanned by the four gamma matrices. (The metric convention is different in the linked article.) In a corresponding way, the complete Clifford algebra of spacetime, whose complexification is generated by the gamma matrices decomposes as a direct sum of representation spaces of a scalar irreducible representation (irrep), the , a pseudoscalar irrep, also the , but with parity inversion eigenvalue , see the next section below, the already mentioned vector irrep, , a pseudovector irrep, with parity inversion eigenvalue +1 (not −1), and a tensor irrep, . The dimensions add up to . In other words, where, as is customary, a representation is confused with its representation space. The spin representation The six-dimensional representation space of the tensor -representation inside has two roles. The where are the gamma matrices, the sigmas, only of which are non-zero due to antisymmetry of the bracket, span the tensor representation space. Moreover, they have the commutation relations of the Lorentz Lie algebra, and hence constitute a representation (in addition to spanning a representation space) sitting inside the spin representation. For details, see bispinor and Dirac algebra. The conclusion is that every element of the complexified in (i.e. every complex matrix) has well defined Lorentz transformation properties. In addition, it has a spin-representation of the Lorentz Lie algebra, which upon exponentiation becomes a spin representation of the group, acting on making it a space of bispinors. Reducible representations There is a multitude of other representations that can be deduced from the irreducible ones, such as those obtained by taking direct sums, tensor products, and quotients of the irreducible representations. Other methods of obtaining representations include the restriction of a representation of a larger group containing the Lorentz group, e.g. and the Poincaré group. These representations are in general not irreducible. The Lorentz group and its Lie algebra have the complete reducibility property. This means that every representation reduces to a direct sum of irreducible representations. The reducible representations will therefore not be discussed. Space inversion and time reversal The (possibly projective) representation is irreducible as a representation , the identity component of the Lorentz group, in physics terminology the proper orthochronous Lorentz group. 
If it can be extended to a representation of all of , the full Lorentz group, including space parity inversion and time reversal. The representations can be extended likewise. Space parity inversion For space parity inversion, the adjoint action of on is considered, where is the standard representative of space parity inversion, , given by It is these properties of and under that motivate the terms vector for and pseudovector or axial vector for . In a similar way, if is any representation of and is its associated group representation, then acts on the representation of by the adjoint action, for . If is to be included in , then consistency with requires that holds, where and are defined as in the first section. This can hold only if and have the same dimensions, i.e. only if . When then can be extended to an irreducible representation of , the orthochronous Lorentz group. The parity reversal representative does not come automatically with the general construction of the representations. It must be specified separately. The matrix (or a multiple of modulus −1 times it) may be used in the representation. If parity is included with a minus sign (the matrix ) in the representation, it is called a pseudoscalar representation. Time reversal Time reversal , acts similarly on by By explicitly including a representative for , as well as one for , a representation of the full Lorentz group is obtained. A subtle problem appears however in application to physics, in particular quantum mechanics. When considering the full Poincaré group, four more generators, the , in addition to the and generate the group. These are interpreted as generators of translations. The time-component is the Hamiltonian . The operator satisfies the relation in analogy to the relations above with replaced by the full Poincaré algebra. By just cancelling the 's, the result would imply that for every state with positive energy in a Hilbert space of quantum states with time-reversal invariance, there would be a state with negative energy . Such states do not exist. The operator is therefore chosen antilinear and antiunitary, so that it anticommutes with , resulting in , and its action on Hilbert space likewise becomes antilinear and antiunitary. It may be expressed as the composition of complex conjugation with multiplication by a unitary matrix. This is mathematically sound, see Wigner's theorem, but with very strict requirements on terminology, is not a representation. When constructing theories such as QED which is invariant under space parity and time reversal, Dirac spinors may be used, while theories that do not, such as the electroweak force, must be formulated in terms of Weyl spinors. The Dirac representation, , is usually taken to include both space parity and time inversions. Without space parity inversion, it is not an irreducible representation. The third discrete symmetry entering in the CPT theorem along with and , charge conjugation symmetry , has nothing directly to do with Lorentz invariance. Action on function spaces If is a vector space of functions of a finite number of variables , then the action on a scalar function given by produces another function . Here is an -dimensional representation, and is a possibly infinite-dimensional representation. A special case of this construction is when is a space of functions defined on the a linear group itself, viewed as a -dimensional manifold embedded in (with the dimension of the matrices). 
This is the setting in which the Peter–Weyl theorem and the Borel–Weil theorem are formulated. The former demonstrates the existence of a Fourier decomposition of functions on a compact group into characters of finite-dimensional representations. The latter theorem, providing more explicit representations, makes use of the unitarian trick to yield representations of complex non-compact groups, e.g. The following exemplifies action of the Lorentz group and the rotation subgroup on some function spaces. Euclidean rotations The subgroup of three-dimensional Euclidean rotations has an infinite-dimensional representation on the Hilbert space where are the spherical harmonics. An arbitrary square integrable function on the unit sphere can be expressed as where the are generalized Fourier coefficients. The Lorentz group action restricts to that of and is expressed as where the are obtained from the representatives of odd dimension of the generators of rotation. The Möbius group The identity component of the Lorentz group is isomorphic to the Möbius group . This group can be thought of as conformal mappings of either the complex plane or, via stereographic projection, the Riemann sphere. In this way, the Lorentz group itself can be thought of as acting conformally on the complex plane or on the Riemann sphere. In the plane, a Möbius transformation characterized by the complex numbers acts on the plane according to and can be represented by complex matrices since multiplication by a nonzero complex scalar does not change . These are elements of and are unique up to a sign (since give the same ), hence The Riemann P-functions The Riemann P-functions, solutions of Riemann's differential equation, are an example of a set of functions that transform among themselves under the action of the Lorentz group. The Riemann P-functions are expressed as where the are complex constants. The P-function on the right hand side can be expressed using standard hypergeometric functions. The connection is The set of constants in the upper row on the left hand side are the regular singular points of the Gauss' hypergeometric equation. Its exponents, i. e. solutions of the indicial equation, for expansion around the singular point are and ,corresponding to the two linearly independent solutions, and for expansion around the singular point they are and . Similarly, the exponents for are and for the two solutions. One has thus where the condition (sometimes called Riemann's identity) on the exponents of the solutions of Riemann's differential equation has been used to define . The first set of constants on the left hand side in , denotes the regular singular points of Riemann's differential equation. The second set, , are the corresponding exponents at for one of the two linearly independent solutions, and, accordingly, are exponents at for the second solution. Define an action of the Lorentz group on the set of all Riemann P-functions by first setting where are the entries in for a Lorentz transformation. Define where is a Riemann P-function. The resulting function is again a Riemann P-function. The effect of the Möbius transformation of the argument is that of shifting the poles to new locations, hence changing the critical points, but there is no change in the exponents of the differential equation the new function satisfies. 
The new function is expressed as where Infinite-dimensional unitary representations History The Lorentz group and its double cover also have infinite-dimensional unitary representations, studied independently by , and at the instigation of Paul Dirac. This trail of development began with , where he devised matrices and necessary for description of higher spin (compare Dirac matrices), elaborated upon by , see also , and proposed precursors of the Bargmann–Wigner equations. In he proposed a concrete infinite-dimensional representation space whose elements were called expansors as a generalization of tensors. These ideas were incorporated by Harish–Chandra and expanded with expinors as an infinite-dimensional generalization of spinors in his 1947 paper. The Plancherel formula for these groups was first obtained by Gelfand and Naimark through involved calculations. The treatment was subsequently considerably simplified by and , based on an analogue for of the integration formula of Hermann Weyl for compact Lie groups. Elementary accounts of this approach can be found in and . The theory of spherical functions for the Lorentz group, required for harmonic analysis on the hyperboloid model of 3-dimensional hyperbolic space sitting in Minkowski space, is considerably easier than the general theory. It only involves representations from the spherical principal series and can be treated directly, because in radial coordinates the Laplacian on the hyperboloid is equivalent to the Laplacian on . This theory is discussed in , , and the posthumous text of . Principal series for SL(2, C) The principal series, or unitary principal series, are the unitary representations induced from the one-dimensional representations of the lower triangular subgroup of . Since the one-dimensional representations of correspond to the representations of the diagonal matrices, with non-zero complex entries and , they thus have the form for an integer, real and with . The representations are irreducible; the only repetitions, i.e. isomorphisms of representations, occur when is replaced by . By definition the representations are realized on sections of line bundles on , which is isomorphic to the Riemann sphere. When , these representations constitute the so-called spherical principal series. The restriction of a principal series to the maximal compact subgroup of can also be realized as an induced representation of using the identification , where is the maximal torus in consisting of diagonal matrices with . It is the representation induced from the 1-dimensional representation , and is independent of . By Frobenius reciprocity, on they decompose as a direct sum of the irreducible representations of with dimensions with a non-negative integer. Using the identification between the Riemann sphere minus a point and , the principal series can be defined directly on by the formula Irreducibility can be checked in a variety of ways: The representation is already irreducible on . This can be seen directly, but is also a special case of general results on irreducibility of induced representations due to François Bruhat and George Mackey, relying on the Bruhat decomposition where is the Weyl group element . The action of the Lie algebra of on the algebraic direct sum of the irreducible subspaces of can be computed explicitly, and it can be verified directly that the lowest-dimensional subspace generates this direct sum as a -module.
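The explicit action of the principal series on functions on the complex plane can be written, in one common normalization found in the literature (a sketch; whether $g$ or $g^{-1}$ supplies the matrix entries, and the signs attached to $k$ and $\nu$, differ between references), as

\[
\left(\pi_{k,\nu}(g)f\right)(z) \;=\; |cz+d|^{-2-i\nu}\left(\frac{cz+d}{|cz+d|}\right)^{-k} f\!\left(\frac{az+b}{cz+d}\right),
\qquad
g^{-1} = \begin{pmatrix} a & b \\ c & d \end{pmatrix},
\]

which is unitary on $L^2(\mathbb{C})$ because the modulus squared of the factor $|cz+d|^{-2-i\nu}$ exactly compensates the Jacobian $|cz+d|^{-4}$ of the Möbius change of variables.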
Complementary series for For , the complementary series is defined on for the inner product with the action given by The representations in the complementary series are irreducible and pairwise non-isomorphic. As a representation of , each is isomorphic to the Hilbert space direct sum of all the odd-dimensional irreducible representations of . Irreducibility can be proved by analyzing the action of on the algebraic sum of these subspaces or directly without using the Lie algebra. Plancherel theorem for SL(2, C) The only irreducible unitary representations of are the principal series, the complementary series and the trivial representation. Since acts as on the principal series and trivially on the remainder, these will give all the irreducible unitary representations of the Lorentz group, provided is taken to be even. To decompose the left regular representation of on , only the principal series are required. This immediately yields the decomposition on the subrepresentations , the left regular representation of the Lorentz group, and the regular representation on 3-dimensional hyperbolic space. (The former only involves principal series representations with k even and the latter only those with .) The left and right regular representations and are defined on by Now if is an element of , the operator defined by is Hilbert–Schmidt. Define a Hilbert space by where and denotes the Hilbert space of Hilbert–Schmidt operators on . Then the map defined on by extends to a unitary of onto . The map satisfies the intertwining property If are in , then by unitarity . Thus, if denotes the convolution of and , and , then . The last two displayed formulas are usually referred to as the Plancherel formula and the Fourier inversion formula respectively. The Plancherel formula extends to all . By a theorem of Jacques Dixmier and Paul Malliavin, every smooth compactly supported function on is a finite sum of convolutions of similar functions, so the inversion formula holds for such . It can be extended to much wider classes of functions satisfying mild differentiability conditions. Classification of representations of The strategy followed in the classification of the irreducible infinite-dimensional representations is, in analogy to the finite-dimensional case, to assume they exist, and to investigate their properties. Thus first assume that an irreducible strongly continuous infinite-dimensional representation on a Hilbert space of is at hand. Since is a subgroup, is a representation of it as well. Each irreducible subrepresentation of is finite-dimensional, and the representation is reducible into a direct sum of irreducible finite-dimensional unitary representations of if is unitary. The steps are the following: Choose a suitable basis of common eigenvectors of and . Compute matrix elements of and . Enforce Lie algebra commutation relations. Require unitarity together with orthonormality of the basis. Step 1 One suitable choice of basis and labeling is given by If this were a finite-dimensional representation, then would correspond to the lowest occurring eigenvalue of in the representation, equal to , and would correspond to the highest occurring eigenvalue, equal to . In the infinite-dimensional case, retains this meaning, but does not. For simplicity, it is assumed that a given occurs at most once in a given representation (this is the case for finite-dimensional representations), and it can be shown that the assumption can be avoided (with a slightly more complicated calculation) with the same results.
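For reference in step 2 below, the standard rotation-group matrix elements in a basis labeled by spin $j$ and magnetic quantum number $m$ (Condon–Shortley phase convention; a sketch assuming this usual labeling) are

\[
J_3\,|j,m\rangle = m\,|j,m\rangle, \qquad
J_\pm\,|j,m\rangle = \sqrt{(j \mp m)(j \pm m + 1)}\;|j,m \pm 1\rangle ,
\]

and these are the known matrix elements that the calculation takes as input.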
Step 2 The next step is to compute the matrix elements of the operators and forming the basis of the Lie algebra of The matrix elements of and (the complexified Lie algebra is understood) are known from the representation theory of the rotation group, and are given by where the labels and have been dropped since they are the same for all basis vectors in the representation. Due to the commutation relations the triple is a vector operator and the Wigner–Eckart theorem applies for computation of matrix elements between the states represented by the chosen basis. The matrix elements of where the superscript signifies that the defined quantities are the components of a spherical tensor operator of rank (which explains the factor as well) and the subscripts are referred to as in formulas below, are given by Here the first factors on the right hand sides are Clebsch–Gordan coefficients for coupling with to get . The second factors are the reduced matrix elements. They do not depend on or , but depend on and, of course, . For a complete list of non-vanishing equations, see . Step 3 The next step is to demand that the Lie algebra relations hold, i.e. that This results in a set of equations for which the solutions are where Step 4 The imposition of the requirement of unitarity of the corresponding representation of the group restricts the possible values for the arbitrary complex numbers and . Unitarity of the group representation translates to the requirement of the Lie algebra representatives being Hermitian, meaning This translates to leading to where is the angle of on polar form. For follows and is chosen by convention. There are two possible cases: In this case , real, This is the principal series. Its elements are denoted It follows: Since , is real and positive for , leading to . This is complementary series. Its elements are denoted This shows that the representations of above are all infinite-dimensional irreducible unitary representations. Explicit formulas Conventions and Lie algebra bases The metric of choice is given by , and the physics convention for Lie algebras and the exponential mapping is used. These choices are arbitrary, but once they are made, fixed. One possible choice of basis for the Lie algebra is, in the 4-vector representation, given by: The commutation relations of the Lie algebra are: In three-dimensional notation, these are The choice of basis above satisfies the relations, but other choices are possible. The multiple use of the symbol above and in the sequel should be observed. For example, a typical boost and a typical rotation exponentiate as, symmetric and orthogonal, respectively. Weyl spinors and bispinors By taking, in turn, and and by setting in the general expression , and by using the trivial relations and , it follows These are the left-handed and right-handed Weyl spinor representations. They act by matrix multiplication on 2-dimensional complex vector spaces (with a choice of basis) and , whose elements and are called left- and right-handed Weyl spinors respectively. Given their direct sum as representations is formed, This is, up to a similarity transformation, the Dirac spinor representation of It acts on the 4-component elements of , called bispinors, by matrix multiplication. The representation may be obtained in a more general and basis independent way using Clifford algebras. 
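In a common physics convention (a sketch; which chirality is assigned which sign of the boost generators varies between references), the two Weyl representations and their direct sum take the form

\[
(\tfrac12,0):\;\; J_i = \tfrac12\sigma_i,\;\; K_i = -\tfrac{i}{2}\sigma_i,
\qquad
(0,\tfrac12):\;\; J_i = \tfrac12\sigma_i,\;\; K_i = +\tfrac{i}{2}\sigma_i,
\]
\[
(\tfrac12,0)\oplus(0,\tfrac12):\quad
J_i = \tfrac12\begin{pmatrix}\sigma_i & 0\\ 0 & \sigma_i\end{pmatrix},
\qquad
K_i = \tfrac{i}{2}\begin{pmatrix}-\sigma_i & 0\\ 0 & \sigma_i\end{pmatrix},
\]

with $\sigma_i$ the Pauli matrices; the block-diagonal direct sum is the Dirac spinor representation in the Weyl (chiral) basis, acting on bispinors.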
These expressions for bispinors and Weyl spinors all extend by linearity of Lie algebras and representations to all of . Expressions for the group representations are obtained by exponentiation. Open problems The classification and characterization of the representation theory of the Lorentz group were completed in 1947. But in association with the Bargmann–Wigner programme, there are as yet unresolved purely mathematical problems linked to the infinite-dimensional unitary representations. The irreducible infinite-dimensional unitary representations may have indirect relevance to physical reality in speculative modern theories since the (generalized) Lorentz group appears as the little group of the Poincaré group of spacelike vectors in higher spacetime dimension. The corresponding infinite-dimensional unitary representations of the (generalized) Poincaré group are the so-called tachyonic representations. Tachyons appear in the spectrum of bosonic strings and are associated with instability of the vacuum. Even though tachyons may not be realized in nature, these representations must be mathematically understood in order to understand string theory. This is so since tachyon states turn out to appear in superstring theories too in attempts to create realistic models. One open problem is the completion of the Bargmann–Wigner programme for the isometry group of the de Sitter spacetime . Ideally, the physical components of wave functions would be realized on the hyperboloid of radius embedded in , and the corresponding covariant wave equations of the infinite-dimensional unitary representation would be known. See also Bargmann–Wigner equations Dirac algebra Gamma matrices Lorentz group Möbius transformation Poincaré group Representation theory of the Poincaré group Symmetry in quantum mechanics Wigner's classification
Representation theory of the Lorentz group
[ "Physics" ]
11,399
[ "Special relativity", "Theoretical physics", "Quantum mechanics", "Theory of relativity" ]
2,252,775
https://en.wikipedia.org/wiki/Air-cooled%20engine
Air-cooled engines rely on the circulation of air directly over heat-dissipation fins or hot areas of the engine to keep the engine within operating temperatures. Air-cooled designs are far simpler than their liquid-cooled counterparts, which require a separate radiator, coolant reservoir, piping and pumps. Air-cooled engines are widely seen in applications where weight or simplicity is the primary goal. Their simplicity makes them well suited to small applications like chainsaws and lawn mowers, as well as small generators and similar roles. These qualities also make them highly suitable for aviation use, where they are widely used in general aviation aircraft and as auxiliary power units on larger aircraft. Their simplicity, in particular, also makes them common on motorcycles. Introduction Most modern internal combustion engines are cooled by a closed circuit carrying liquid coolant through channels in the engine block and cylinder head. The fluid in these channels absorbs heat and then flows to a heat exchanger or radiator, where the coolant releases heat into the air (or raw water, in the case of marine engines). Thus, while such engines are not ultimately cooled by the liquid, since the heat is finally exchanged with some other fluid such as air, they are known as liquid-cooled because of the liquid-coolant circuit. In contrast, heat generated by an air-cooled engine is released directly into the air. Typically this is facilitated with metal fins covering the outside of the cylinder head and cylinders, which increase the surface area that air can act on. Air may be force-fed with the use of a fan and shroud to achieve efficient cooling with high volumes of air, or cooling may rely simply on natural airflow over well-designed and angled fins. In all combustion engines, a large percentage of the heat generated, around 44%, escapes through the exhaust. Another 8% or so ends up in the oil, which itself has to be cooled in an oil cooler. This means less than half of the heat has to be removed through other systems. In an air-cooled engine, only about 12% of the heat flows out through the metal fins. Air-cooled engines usually run noisier, but their simplicity brings benefits for servicing and part replacement and usually makes them cheaper to maintain. Applications Road vehicles Many motorcycles use air cooling for the sake of reducing weight and complexity. Few current production automobiles have air-cooled engines (the Tatra 815 is one example), but historically they were common in many high-volume vehicles. The engines are commonly found either as single-cylinder units or with cylinders coupled in groups of two, and the cylinders are commonly oriented horizontally as a flat engine, although vertical straight-four engines have also been used.
Examples of past air-cooled road vehicles, in roughly chronological order, include: Franklin (1902-1934) New Way (1905) - limited production run out from the "CLARKMOBILE" Chevrolet Series M Copper-Cooled (1921-1923) (very few built) Tatra all-wheel-drive military trucks Tatra 11 (1923-1927) and subsequent models Tatra T77 (1934-1938) Tatra T87 (1936-1950) Tatra T97 (1936-1939) Tatra T600 Tatraplan (1946-1952) Tatra T603 (1955-1975) Tatra T613 (1974-1996) Tatra T700 (1996-1999) Crosley (1939-1945) The East German Trabant (1957-1991) Trabant 500 (1957-1962) Trabant 600 (1962-1965) Trabant 601 (1964-1989) ZAZ Zaporozhets (1958-1994) Fiat 500 (1957-1975) Fiat 126 (1972-1987) Porsche 356 (1948-1965) Porsche 911 (1964-1998) Porsche 912 (1965-1969, 1976) VW-Porsche 914 (1969-1976) The Volkswagen Beetle, Type 2, SP2, Karmann Ghia, and Type 3 all utilized the same air-cooled engine (1938-2013) with various displacements Volkswagen Type 2 (T3) (1979–1982) Volkswagen Type 4 (1968-1974) Volkswagen Gol (G1) (1980-1986) Toyota U engine (1961-1976) Chevrolet Corvair (1960-1969) Citroën 2CV (1948-1990) (Featured a high pressure oil cooling system, and used a fan bolted to the crankshaft end) Citroën GS and GSA Honda 1300 (1969-1973) NSU Prinz Royal Enfield Motorcycles (India): The 350cc and 500cc Twinspark motorcycle engines are air-cooled Oltcit Club (1981–1995)T13/653, G11/631 and VO36/630 Demak Dzm 200 2015 Aviation During the 1920s and 30s there was a great debate in the aviation industry about the merits of air-cooled vs. liquid-cooled designs. At the beginning of this period, the liquid used for cooling was water at ambient pressure. The amount of heat carried away by a fluid is a function of its capacity and the difference in input and output temperatures. As the boiling point of water is reduced with lower pressure, and the water could not be efficiently pumped as steam, radiators had to have enough cooling power to account for the loss in cooling power as the aircraft climbed. The resulting radiators were quite large and caused a significant amount of aerodynamic drag. This placed the two designs roughly equal in terms of power to drag, but the air-cooled designs were almost always lighter and simpler. In 1921, the US Navy, largely due to the efforts of Commander Bruce G. Leighton, decided that the simplicity of the air-cooled design would result in less maintenance workload, which was paramount given the limited working area of aircraft carriers. Leighton's efforts led to the Navy underwriting air-cooled engine development at Pratt & Whitney and Wright Aeronautical. Most other groups, especially in Europe where aircraft performance was rapidly improving, were more concerned with the issue of drag. While air-cooled designs were common on light aircraft and trainers, as well as some transport aircraft and bombers, liquid-cooled designs remained much more common for fighters and high-performance bombers. The drag issue was upset by the 1929 introduction of the NACA cowl, which greatly reduced the drag of air-cooled engines in spite of their larger frontal area, and the drag related to cooling was at this point largely even. 
In the late 1920s and into the 1930s, a number of European companies introduced cooling systems that kept the water under pressure, allowing it to reach much higher temperatures without boiling, carrying away more heat and thus reducing the volume of water required and the size of the radiator by as much as 30%. This opened the way to a new generation of high-powered, relatively low-drag liquid-cooled inline engines such as the Rolls-Royce Merlin and Daimler-Benz DB 601, which had an advantage over the unpressurized early versions of the Jumo 211. This also led to development work attempting to eliminate the radiator entirely by using evaporative cooling, allowing the water to turn to steam and running the steam through tubes located just under the skin of the wings and fuselage, where the fast-moving outside air condensed it back to water. While this concept was used on a number of record-setting aircraft in the late 1930s, it always proved impractical for production aircraft for a wide variety of reasons. In 1929, Curtiss began experiments replacing water with ethylene glycol in a Curtiss D-12 engine. Glycol could run up to 250 °C and reduced the radiator size by 50% compared to water-cooled designs. The experiments were extremely successful and by 1932 the company had switched all future designs to this coolant. At the time, Union Carbide held a monopoly on the industrial process to make glycol, so it was initially used only in the US, with Allison Engines picking it up soon after. It was not until the mid-1930s that Rolls-Royce adopted it as supplies improved, converting all of their engines to glycol. With the much smaller radiators and less fluid in the system, the weight and drag of these designs was well below contemporary air-cooled designs. On a weight basis, these liquid-cooled designs offered as much as 30% better performance. In the late- and post-war era, the high-performance field quickly moved to jet engines. This took away the primary market for late-model liquid-cooled engines. Those roles that remained with piston power were mostly slower designs and civilian aircraft. In these roles, the simplicity and reduction in servicing needs are far more important than drag, and from the end of the war on almost all piston aviation engines have been air-cooled, with few exceptions. Most of the engines manufactured by Lycoming and Continental are used by major manufacturers of light aircraft such as Cirrus and Cessna. Other engine manufacturers using air-cooled engine technology are ULPower and Jabiru, which are more active in the Light-Sport Aircraft (LSA) and ultralight aircraft market. Rotax uses a combination of air-cooled cylinders and liquid-cooled cylinder heads. Diesel engines Some small diesel engines, e.g. those made by Deutz AG and Lister Petter, are air-cooled. Probably the only large air-cooled Euro 5 truck engine (a V8 producing 320 kW and 2,100 N·m of torque) is produced by Tatra. BOMAG, part of the FAYAT group, also uses an air-cooled inline six-cylinder engine in many of its construction vehicles. Stationary or portable engines Stationary or portable engines were commercially introduced early in the 1900s. The first commercial production was by the New Way Motor Company of Lansing, Michigan, US. The company produced air-cooled engines in single- and twin-cylinder form in both horizontal and vertical cylinder format. Subsequent to their initial production, which was exported worldwide, other companies took up the advantages of this cooling method, especially in small portable engines. 
Applications include mowers, generators, outboard motors, pump sets, saw benches and auxiliary power plants and more. References Bibliography Cited sources Further reading P V Lamarque, "The design of cooling fins for Motor-Cycle Engines". Report of the Automobile Research Committee, Institution of Automobile Engineers Magazine, March 1943 issue, and also in "The Institution of Automobile Engineers. Proceedings XXXVII, Session 1942-1943, pp 99-134 and 309-312. Julius Mackerle, "Air-cooled Automotive Engines", Charles Griffin & Company Ltd., London 1972. Engines
Air-cooled engine
[ "Physics", "Technology" ]
2,210
[ "Physical systems", "Machines", "Engines" ]
2,252,898
https://en.wikipedia.org/wiki/Martinet%20dioxindole%20synthesis
The Martinet dioxindole synthesis was first reported in 1913 by J. Martinet. It is a chemical reaction in which a primary or secondary aniline or substituted aromatic amine is condensed with ethyl or methyl ester of mesoxalic acid to make a dioxindole in the absence of oxygen. Proposed mechanism In the first step, the amino group on the aniline (1) attacks the carbonyl of the ethyl oxomalonate (2). A proton from the nitrogen is extracted by the oxygen and an alcohol group forms (3). The carbonyl re-forms to make a keto group and an ethanol molecule leaves (4). Next, a ring closing reaction occurs by the bond from the aromatic benzene ring attacking the partially positive carbonyl to form a five-member ring (5). After a proton transfer (6), an isomerization or a [1,3] hydride shift occurs and aromaticity is restored to the six-membered ring (7). In the presence of base, the ester is hydrolyzed, ethanol is lost (8) and a decarboxylation occurs (9). The resulting product is the desired dioxindole (10). In the presence of oxygen, dioxindole converts to isatin through oxidation. Applications The Martinet dioxindole synthesis is utilized in the preparation of oxindole derivatives. Oxindole derivatives found in natural products are gaining popularity in research because of their structural diversity. 3-substituted-3-hydroxy-2-oxindole is the central structure of a wide variety of biologically important compounds found in natural products. The 3-substituted-3-hydroxy-2-oxindole structure holds anti-oxidant, anti-cancer, anti-HIV, and neuroprotective properties. The utilization of this core structure for drug synthesis and the relevant cellular pathways involved are being extensively studied. The enantio-selective addition of 3-substituted oxindole derivatives to different electrophiles gives access to chiral 3,3-disubstituted oxindole derivatives. The dioxindole is a strong nucleophile for the Michael addition of dioxindoles to nitroalkenes in order to obtain 3,3-disubstituted oxindole derivatives. Experimental examples The Martinet dioxindole synthesis proceeds with an alkoxyaniline, 3,4,5-trimethoxyaniline, which reacts with an oxomalonic ester in glacial acetic acid to synthesize 2-carbethoxy-4,5,6-trimethoxyindoxyl, 2-carbethoxy-3,4,5,6-tetramethoxyindole and 4,5,6-trimethoxy-3-hydroxy-3-carbethoxyindole. Dioxindole Dioxindole is a non-aromatic heterocyclic organic compound. It has a bicyclic structure consisting of a six-membered aromatic ring fused to a five-membered nitrogen containing ring. It is a hydroxy derivative of oxindole first prepared by reducing isatin with sodium amalgam in an alkaline solution. See also Indole Oxindole References Indole forming reactions Name reactions
Martinet dioxindole synthesis
[ "Chemistry" ]
702
[ "Name reactions" ]
2,253,446
https://en.wikipedia.org/wiki/Coking
Coking is the process of heating coal in the absence of oxygen to a temperature above to drive off the volatile components of the raw coal, leaving behind a hard, strong, porous material with a high carbon content called coke. Coke is predominantly carbon. Its porous structure provides a high surface area, allowing it to burn more rapidly, much like how a bundle of tinder burns faster than a solid wooden log. As such, when a kilogram of coke is burned, it releases more heat than a kilogram of the original coal. Application to smelting iron Coke is used as fuel in a blast furnace. In a continuous process, coke, iron ore, and limestone are mixed together and placed in the top of the blast furnace, and at the bottom liquid iron and waste slag are removed. The raw materials continuously move down the blast furnace. During this continuous process more raw materials are placed at the top, and as the coke moves down, it must withstand the ever-increasing weight of the materials above it. It is the ability to withstand this crushing force, in addition to its high energy content and rapid combustion, that makes coke ideal for use in blast furnaces. Petroleum coking "Coking is a refinery unit operation that upgrades material called bottoms from the atmospheric or vacuum distillation column into higher-value products and produces petroleum coke—a coal-like material". In heterogeneous catalysis, the process is undesirable because the clinker blocks the catalytic sites. Coking is characteristic of high temperature reactions involving hydrocarbon feedstocks. Typically coking is reversed by combustion, provided that the catalyst will tolerate such. A simplified equation for coking is shown in the case of ethylene: 3 C2H4 → 2 C ("coke") + 2 C2H6 A more realistic but complex view involves the alkylation of an aromatic ring of a coke nucleus. Acidic catalysts are thus especially prone to coking because they are effective at generating carbocations (i.e., alkylating agents). Coking is one of several mechanisms for the deactivation of a heterogeneous catalyst. Other mechanisms include sintering, poisoning, and solid-state transformation of the catalyst. See also Coking factory Gas to liquids References Coking works Catalysis
Coking
[ "Chemistry" ]
477
[ "Catalysis", "Chemical kinetics" ]
12,008,648
https://en.wikipedia.org/wiki/Phreatic%20zone
The phreatic zone, saturated zone, or zone of saturation, is the part of an aquifer, below the water table, in which essentially all pores and fractures are saturated with water. The part above the water table is the vadose zone (also called the unsaturated zone). The phreatic zone size, color, and depth may fluctuate with changes of season, and during wet and dry periods. Depending on the characteristics of soil particles, their packing and porosity, the boundary of a saturated zone can be stable or unstable, exhibiting fingering patterns known as the Saffman–Taylor instability. Predicting the onset of stable versus unstable drainage fronts is of some importance in modelling phreatic zone boundaries. See also Index: Aquifer articles References Aquifers Cave geology Hydrogeology Soil physics
Phreatic zone
[ "Physics", "Environmental_science" ]
244
[ "Hydrology", "Applied and interdisciplinary physics", "Soil physics", "Hydrology stubs", "Aquifers", "Hydrogeology" ]
12,008,694
https://en.wikipedia.org/wiki/Junction%20temperature
Junction temperature, short for transistor junction temperature, is the highest operating temperature of the actual semiconductor in an electronic device. In operation, it is higher than case temperature and the temperature of the part's exterior. The difference is equal to the amount of heat transferred from the junction to case multiplied by the junction-to-case thermal resistance. Microscopic effects Various physical properties of semiconductor materials are temperature dependent. These include the diffusion rate of dopant elements, carrier mobilities and the thermal production of charge carriers. At the low end, sensor diode noise can be reduced by cryogenic cooling. On the high end, the resulting increase in local power dissipation can lead to thermal runaway that may cause transient or permanent device failure. Maximum junction temperature calculation Maximum junction temperature (sometimes abbreviated TJMax) is specified in a part's datasheet and is used when calculating the necessary case-to-ambient thermal resistance for a given power dissipation. This in turn is used to select an appropriate heat sink if applicable. Other cooling methods include thermoelectric cooling and coolants. In modern processors from manufacturers such as Intel, AMD, and Qualcomm, the core temperature is measured by a network of sensors. Every time the temperature-sensing network determines that a rise above the specified junction temperature is imminent, measures such as clock gating, clock stretching, clock speed reduction and others (commonly referred to as thermal throttling) are applied to prevent the temperature from rising further. If the applied mechanisms do not compensate enough for the processor to stay below the specified junction temperature, the device may shut down to prevent permanent damage. An estimation of the chip-junction temperature can be obtained from the following equation: TJ = TA + (RθJA × PD), where: TA = ambient temperature for the package [°C], RθJA = junction-to-ambient thermal resistance [°C/W], PD = power dissipation in package [W] (a short numerical sketch is given below). Measuring junction temperature (TJ) Many semiconductors and their surrounding optics are small, making it difficult to measure junction temperature with direct methods such as thermocouples and infrared cameras. Junction temperature may be measured indirectly using the device's inherent voltage/temperature dependency characteristic. Combined with a Joint Electron Device Engineering Council (JEDEC) technique such as JESD 51-1 and JESD 51-51, this method will produce accurate measurements. However, this measurement technique is difficult to implement in multi-LED series circuits due to high common mode voltages and the need for fast, high duty cycle current pulses. This difficulty can be overcome by combining high-speed sampling digital multimeters and fast high-compliance pulsed current sources. Once junction temperature is known, another important parameter, thermal resistance (Rθ), may be calculated using the following equation: Rθ = (TJ − TA) / PD. Junction temperature of LEDs and laser diodes An LED or laser diode's junction temperature (Tj) is a primary determinant of long-term reliability; it also is a key factor for photometry. For example, a typical white LED output declines 20% for a 50 °C rise in junction temperature. Because of this temperature sensitivity, LED measurement standards, like IESNA's LM-85, require that the junction temperature is determined when making photometric measurements. 
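The estimate above lends itself to a quick back-of-the-envelope check. The following sketch applies TJ = TA + (RθJA × PD) and its rearrangement for the largest allowable thermal resistance; the numerical values (ambient temperature, power dissipation, TJMax) are invented for the example and do not describe any particular part.

```python
# Minimal sketch of the junction-temperature relations discussed above.
# The values below (TA, PD, R_JA, TJ_MAX) are illustrative placeholders,
# not data for any particular device.

def junction_temperature(t_ambient_c, r_theta_ja_c_per_w, power_w):
    """Estimate junction temperature: TJ = TA + R_thetaJA * PD."""
    return t_ambient_c + r_theta_ja_c_per_w * power_w

def max_thermal_resistance(tj_max_c, t_ambient_c, power_w):
    """Largest junction-to-ambient resistance that keeps TJ <= TJmax."""
    return (tj_max_c - t_ambient_c) / power_w

if __name__ == "__main__":
    TA = 40.0        # ambient temperature [degC]
    PD = 2.5         # power dissipated in the package [W]
    R_JA = 20.0      # junction-to-ambient thermal resistance [degC/W]
    TJ_MAX = 125.0   # datasheet maximum junction temperature [degC]

    tj = junction_temperature(TA, R_JA, PD)
    print(f"Estimated TJ: {tj:.1f} degC")                  # 40 + 20*2.5 = 90 degC

    r_needed = max_thermal_resistance(TJ_MAX, TA, PD)
    print(f"Required R_thetaJA <= {r_needed:.1f} degC/W")  # (125-40)/2.5 = 34 degC/W
```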
Junction heating can be minimized in these devices by using the Continuous Pulse Test Method specified in LM-85. An L-I sweep conducted with an Osram Yellow LED shows that Single Pulse Test Method measurements yield a 25% drop in luminous flux output and DC Test Method measurements yield a 70% drop. See also Safe operating area P-N Junction Metal Semiconductor Junction References Semiconductors
Junction temperature
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
740
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
12,009,413
https://en.wikipedia.org/wiki/Ames%20process
The Ames process is a process by which pure uranium metal is obtained. It can be achieved by mixing any of the uranium halides (commonly uranium tetrafluoride) with magnesium metal powder or aluminium metal powder. History The Ames process was used on August 3, 1942, by a group of chemists led by Frank Spedding and Harley Wilhelm at the Ames Laboratory as part of the Manhattan Project. It is a type of thermite-based purification, which was patented in 1895 by German chemist Hans Goldschmidt. Development of the Ames process came at a time of increased research into mass uranium-metal production. The desire for increased production was motivated by a fear of Nazi Germany's developing nuclear weapons before the Allies. The process originally involved mixing powdered uranium tetrafluoride and powdered magnesium together. This mixture was placed inside an iron pipe that was welded shut on one side and capped shut on another side. This container, called a "bomb" by Spedding, was placed into a furnace. When heated to a temperature of , the contents of the container reacted violently, leaving a 35-gram ingot of pure uranium metal. The process was quickly scaled up; by October 1942 the "Ames Project" was producing metal at a rate of per week. The uranium tetrafluoride and magnesium were sealed in a refractory-lined reactor vessel, still referred to as a "bomb". The thermite reaction was initiated by furnace heating the assembly to ; the large difference in density between slag and metal allowed complete separation in the liquid state, yielding slag-free metal. By July 1943, the production rate exceeded of uranium metal per month. Approximately 1000 tons of uranium ingots were produced at Ames before the process was transferred to industry. The Ames project received the Army-Navy "E" Award for Excellence in Production on October 12, 1945, signifying 2.5 years of excellence in industrial production of metallic uranium as a vital war material. Iowa State University is unique among educational institutions to have received this award for outstanding service, an honor normally given to industry. Ames process for rare-earth metals The metallothermic reduction of anhydrous rare-earth fluorides to rare-earth metals is also referred to as the Ames process. The study of rare earths was also advanced during World War II: synthetic plutonium was believed to be rare-earth-like, and it was assumed that knowledge of rare earths would assist in planning for and the study of transuranic elements; ion-exchange methods developed for actinide processing were forerunners to processing methods for rare-earth oxides; methods used for uranium were modified for plutonium, which were subsequently the basis for rare-earth metal preparation. References Notes External links Uranium Chemical processes Metallurgical processes Manhattan Project Iowa State University
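As a rough numerical illustration of the reduction chemistry described above, the following sketch works through the stoichiometry of the usual magnesium reduction, UF4 + 2 Mg → U + 2 MgF2, using standard atomic weights; the 1 kg charge of uranium tetrafluoride is an arbitrary example quantity, not a historical figure.

```python
# Back-of-the-envelope stoichiometry for the magnesium reduction used in the
# Ames process: UF4 + 2 Mg -> U + 2 MgF2.  Atomic weights are standard values;
# the charge mass is an arbitrary example.

M_U, M_F, M_MG = 238.03, 19.00, 24.31          # g/mol
M_UF4 = M_U + 4 * M_F                          # ~314 g/mol
M_MGF2 = M_MG + 2 * M_F

def magnesium_required(m_uf4_g):
    """Stoichiometric magnesium mass needed to reduce a given mass of UF4."""
    mol_uf4 = m_uf4_g / M_UF4
    return 2 * mol_uf4 * M_MG

def uranium_yield(m_uf4_g):
    """Uranium metal mass produced at 100 % conversion."""
    return (m_uf4_g / M_UF4) * M_U

charge = 1000.0  # g of UF4
print(f"Mg required: {magnesium_required(charge):.0f} g")   # ~155 g
print(f"U produced:  {uranium_yield(charge):.0f} g")        # ~758 g
print(f"Slag (MgF2): {(charge / M_UF4) * 2 * M_MGF2:.0f} g")
```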
Ames process
[ "Chemistry", "Materials_science" ]
580
[ "Metallurgical processes", "Metallurgy", "Chemical processes", "nan", "Chemical process engineering" ]
12,010,787
https://en.wikipedia.org/wiki/Magnetorotational%20instability
The magnetorotational instability (MRI) is a fluid instability that causes an accretion disk orbiting a massive central object to become turbulent. It arises when the angular velocity of a conducting fluid in a magnetic field decreases as the distance from the rotation center increases. It is also known as the Velikhov–Chandrasekhar instability or Balbus–Hawley instability in the literature, not to be confused with the electrothermal Velikhov instability. The MRI is of particular relevance in astrophysics where it is an important part of the dynamics in accretion disks. Gases or liquids containing mobile electrical charges are subject to the influence of a magnetic field. In addition to hydrodynamical forces such as pressure and gravity, an element of magnetized fluid also feels the Lorentz force where is the current density and is the magnetic field vector. If the fluid is in a state of differential rotation about a fixed origin, this Lorentz force can be surprisingly disruptive, even if the magnetic field is very weak. In particular, if the angular velocity of rotation decreases with radial distance the motion is unstable: a fluid element undergoing a small displacement from circular motion experiences a destabilizing force that increases at a rate which is itself proportional to the displacement. This process is known as the Magnetorotational Instability, or "MRI". In astrophysical settings, differentially rotating systems are very common and magnetic fields are ubiquitous. In particular, thin disks of gas are often found around forming stars or in binary star systems, where they are known as accretion disks. Accretion disks are also commonly present in the centre of galaxies, and in some cases can be extremely luminous: quasars, for example, are thought to originate from a gaseous disk surrounding a very massive black hole. Our modern understanding of the MRI arose from attempts to understand the behavior of accretion disks in the presence of magnetic fields; it is now understood that the MRI is likely to occur in a very wide variety of different systems. Discovery The MRI was first noticed in a non-astrophysical context by Evgeny Velikhov in 1959 when considering the stability of Couette flow of an ideal hydromagnetic fluid. His result was later generalized by Subrahmanyan Chandrasekhar in 1960. This mechanism was proposed by David Acheson and Raymond Hide (1973) to perhaps play a role in the context of the Earth's geodynamo problem. Although there was some follow-up work in later decades (Fricke, 1969; Acheson and Hide 1972; Acheson and Gibbons 1978), the generality and power of the instability were not fully appreciated until 1991, when Steven A. Balbus and John F. Hawley gave a relatively simple elucidation and physical explanation of this important process. Physical process In a magnetized, perfectly conducting fluid, the magnetic forces behave in some very important respects as though the elements of fluid were connected with elastic bands: trying to displace such an element perpendicular to a magnetic line of force causes an attractive force proportional to the displacement, like a spring under tension. Normally, such a force is restoring, a strongly stabilizing influence that would allow a type of magnetic wave to propagate. If the fluid medium is not stationary but rotating, however, attractive forces can actually be destabilizing. The MRI is a consequence of this surprising behavior. 
Consider, for example, two masses, mi ("inner") and mo ("outer") connected by a spring under tension, both masses in orbit around a central body, Mc. In such a system, the angular velocity of circular orbits near the center is greater than the angular velocity of orbits farther from the center, but the angular momentum of the inner orbits is smaller than that of the outer orbits. If mi is allowed to orbit a little bit closer to the center than mo, it will have a slightly higher angular velocity. The connecting spring will pull back on mi, and drag mo forward. This means that mi experiences a retarding torque, loses angular momentum, and must fall inward to an orbit of smaller radius, corresponding to a smaller angular momentum. mo, on the other hand, experiences a positive torque, acquires more angular momentum, and moves outward to a higher orbit. The spring stretches yet more, the torques become yet larger, and the motion is unstable! Because magnetic forces act like a spring under tension connecting fluid elements, the behavior of a magnetized fluid is almost exactly analogous to this simple mechanical system. This is the essence of the MRI . A more detailed explanation To see this unstable behavior more quantitatively, consider the equations of motion for a fluid element mass in circular motion with an angular velocity In general will be a function of the distance from the rotation axis and we assume that the orbital radius is The centripetal acceleration required to keep the mass in orbit is ; the minus sign indicates a direction toward the center. If this force is gravity from a point mass at the center, then the centripetal acceleration is simply where is the gravitational constant and is the central mass. Let us now consider small departures from the circular motion of the orbiting mass element caused by some perturbing force. We transform variables into a rotating frame moving with the orbiting mass element at angular velocity with origin located at the unperturbed, orbiting location of the mass element. As usual when working in a rotating frame, we need to add to the equations of motion a Coriolis force plus a centrifugal force The velocity is the velocity as measured in the rotating frame. Furthermore, we restrict our attention to a small neighborhood near say with much smaller than Then the sum of the centrifugal and centripetal forces is to linear order in With our axis pointing radial outward from the unperturbed location of the fluid element and our axis pointing in the direction of increasing azimuthal angle (the direction of the unperturbed orbit), the and equations of motion for a small departure from a circular orbit are: where and are the forces per unit mass in the and directions, and a dot indicates a time derivative (i.e., is the velocity, is the acceleration, etc.). Provided that and are either 0 or linear in x and y, this is a system of coupled second-order linear differential equations that can be solved analytically. In the absence of external forces, and , the equations of motion have solutions with the time dependence where the angular frequency satisfies the equation where is known as the epicyclic frequency. In our solar system, for example, deviations from a sun-centered circular orbit that are familiar ellipses when viewed by an external viewer at rest, appear instead as small radial and azimuthal oscillations of the orbiting element when viewed by an observer moving with the undisturbed circular motion. 
These oscillations trace out a small retrograde ellipse (i.e. rotating in the opposite sense of the large circular orbit), centered on the undisturbed orbital location of the mass element. The epicyclic frequency may equivalently be written which shows that it is proportional to the radial derivative of the angular momentum per unit mass, or specific angular momentum. The specific angular momentum must increase outward if stable epicyclic oscillations are to exist, otherwise displacements would grow exponentially, corresponding to instability. This is a very general result known as the Rayleigh criterion (Chandrasekhar 1961) for stability. For orbits around a point mass, the specific angular momentum is proportional to so the Rayleigh criterion is well satisfied. Consider next the solutions to the equations of motion if the mass element is subjected to an external restoring force, where is an arbitrary constant (the "spring constant"). If we now seek solutions for the modal displacements in and with time dependence we find a much more complex equation for Even though the spring exerts an attractive force, it may destabilize. For example, if the spring constant is sufficiently weak, the dominant balance will be between the final two terms on the left side of the equation. Then, a decreasing outward angular velocity profile will produce negative values for and both positive and negative imaginary values for The negative imaginary root results not in oscillations, but in exponential growth of very small displacements. A weak spring therefore causes the type of instability described qualitatively at the end of the previous section. A strong spring on the other hand, will produce oscillations, as one intuitively expects. The spring-like nature of magnetic fields The conditions inside a perfectly conducting fluid in motion is often a good approximation to astrophysical gases. In the presence of a magnetic field a moving conductor responds by trying to eliminate the Lorentz force on the free charges. The magnetic force acts in such a way as to locally rearrange these charges to produce an internal electric field of In this way, the direct Lorentz force on the charges vanishes. (Alternatively, the electric field in the local rest frame of the moving charges vanishes.) This induced electric field can now itself induce further changes in the magnetic field according to Faraday's law, Another way to write this equation is that if in time the fluid makes a displacement then the magnetic field changes by The equation of a magnetic field in a perfect conductor in motion has a special property: the combination of Faraday induction and zero Lorentz force makes the field lines behave as though they were painted, or "frozen," into the fluid. In particular, if is initially nearly constant and is a divergence-free displacement, then our equation reduces to because of the vector calculus identity Out of these 4 terms, is one of Maxwell's equations. By the divergence-free assumption, . because B is assumed to be nearly constant. Equation shows that changes only when there is a shearing displacement along the field line. To understand the MRI, it is sufficient to consider the case in which is uniform in vertical direction, and varies as Then where it is understood that the real part of this equation expresses its physical content. 
(If is proportional to for example, then is proportional to ) A magnetic field exerts a force per unit volume on an electrically neutral, conducting fluid equal to Ampere's circuital law gives because Maxwell's correction is neglected in the MHD approximation. The force per unit volume becomes where we have used the same vector calculus identity. This equation is fully general, and makes no assumptions about the strength or direction of the magnetic field. The first term on the right is analogous to a pressure gradient. In our problem it may be neglected because it exerts no force in the plane of the disk, perpendicular to The second term acts like a magnetic tension force, analogous to a taut string. For a small disturbance it exerts an acceleration given by force divided by mass, or equivalently, force per unit volume divided by mass per unit volume: Thus, a magnetic tension force gives rise to a return force which is directly proportional to the displacement. This means that the oscillation frequency for small displacements in the plane of rotation of a disk with a uniform magnetic field in the vertical direction satisfies an equation ("dispersion relation") exactly analogous to equation , with the "spring constant" As before, if there is an exponentially growing root of this equation for wavenumbers satisfying This corresponds to the MRI. Notice that the magnetic field appears in equation only as the product Thus, even if is very small, for very large wavenumbers this magnetic tension can be important. This is why the MRI is so sensitive to even very weak magnetic fields: their effect is amplified by multiplication by Moreover, it can be shown that MRI is present regardless of the magnetic field geometry, as long as the field is not too strong. In astrophysics, one is generally interested in the case for which the disk is supported by rotation against the gravitational attraction of a central mass. A balance between the Newtonian gravitational force and the radial centripetal force immediately gives where is the Newtonian gravitational constant, is the central mass, and is radial location in the disk. Since this so-called Keplerian disk is unstable to the MRI . Without a weak magnetic field, the flow would be stable. For a Keplerian disk, the maximum growth rate is which occurs at a wavenumber satisfying is very rapid, corresponding to an amplification factor of more than 100 per rotation period. The nonlinear development of the MRI into fully developed turbulence may be followed via large scale numerical computation. Applications and laboratory experiments Interest in the MRI is based on the fact that it appears to give an explanation for the origin of turbulent flow in astrophysical accretion disks (Balbus and Hawley, 1991). A promising model for the compact, intense X-ray sources discovered in the 1960s was that of a neutron star or black hole drawing in ("accreting") gas from its surroundings (Prendergast and Burbidge, 1968). Such gas always accretes with a finite amount of angular momentum relative to the central object, and so it must first form a rotating disk — it cannot accrete directly onto the object without first losing its angular momentum. But how an element of gaseous fluid managed to lose its angular momentum and spiral onto the central object was not at all obvious. One explanation involved shear-driven turbulence (Shakura and Sunyaev, 1973). 
There would be significant shear in an accretion disk (gas closer to the centre rotates more rapidly than outer disk regions), and shear layers often break down into turbulent flow. The presence of shear-generated turbulence, in turn, produces the powerful torques needed to transport angular momentum from one (inner) fluid element to another (farther out). The breakdown of shear layers into turbulence is routinely observed in flows with velocity gradients, but without systematic rotation. This is an important point, because rotation produces strongly stabilizing Coriolis forces, and this is precisely what occurs in accretion disks. As can be seen in the equation above, the K = 0 limit produces Coriolis-stabilized oscillations, not exponential growth. These oscillations are present under much more general conditions as well: a recent laboratory experiment (Ji et al., 2006) has shown stability of the flow profile expected in accretion disks under conditions in which otherwise troublesome dissipation effects are (by a standard measure known as the Reynolds number) well below one part in a million. All of this changes, however, when even a very weak magnetic field is present. The MRI produces torques that are not stabilized by Coriolis forces. Large-scale numerical simulations of the MRI indicate that the rotational disk flow breaks down into turbulence (Hawley et al., 1995), with strongly enhanced angular momentum transport properties. This is just what is required for the accretion disk model to work. The formation of stars (Stone et al., 2000), the production of X-rays in neutron star and black hole systems (Blaes, 2004), and the creation of active galactic nuclei (Krolik, 1999) and gamma-ray bursts (Wheeler, 2004) are all thought to involve the development of the MRI at some level. Thus far, we have focused rather exclusively on the dynamical breakdown of laminar flow into turbulence triggered by a weak magnetic field, but it is also the case that the resulting highly agitated flow can act back on this same magnetic field. Embedded magnetic field lines are stretched by the turbulent flow, and it is possible that systematic field amplification could result. The process by which fluid motions are converted to magnetic field energy is known as a dynamo (Moffatt, 1978); the two best-studied examples are the Earth's liquid outer core and the layers close to the surface of the Sun. Dynamo activity in these regions is thought to be responsible for maintaining the terrestrial and solar magnetic fields. In both of these cases thermal convection is likely to be the primary energy source, though in the case of the Sun differential rotation may also play an important role. Whether the MRI is an efficient dynamo process in accretion disks is currently an area of active research (Fromang and Papaloizou, 2007). There may also be applications of the MRI outside of the classical accretion disk venue. Internal rotation in stars (Ogilvie, 2007), and even planetary dynamos (Petitdemange et al., 2008) may, under some circumstances, be vulnerable to the MRI in combination with convective instabilities. These studies are also ongoing. Finally, the MRI can, in principle, be studied in the laboratory (Ji et al., 2001), though these experiments are very difficult to implement. A typical set-up involves either concentric spherical shells or coaxial cylindrical shells. Between (and confined by) the shells, there is a conducting liquid metal such as sodium or gallium. 
The inner and outer shells are set in rotation at different rates, and viscous torques compel the trapped liquid metal to differentially rotate. The experiment then investigates whether the differential rotation profile is stable or not in the presence of an applied magnetic field. A claimed detection of the MRI in a spherical shell experiment (Sisan et al., 2004), in which the underlying state was itself turbulent, awaits confirmation at the time of this writing (2009). A magnetic instability that bears some similarity to the MRI can be excited if both vertical and azimuthal magnetic fields are present in the undisturbed state (Hollerbach and Rüdiger, 2005). This is sometimes referred to as the helical-MRI, (Liu et al., 2006) though its precise relation to the MRI described above has yet to be fully elucidated. Because it is less sensitive to stabilizing ohmic resistance than is the classical MRI, this helical magnetic instability is easier to excite in the laboratory, and there are indications that it may have been found (Stefani et al., 2006). The detection of the classical MRI in a hydrodynamically quiescent background state has yet to be achieved in the laboratory, however. The spring-mass analogue of the standard MRI has been demonstrated in rotating Taylor–Couette / Keplerian-like flow (Hung et al. 2019). References Balbus, S. A., and Hawley, J. F. 1991, Astrophys. J., 376, 214 Blaes, O. M. 2004, in Physics Fundamentals of Luminous Accretion Disks Around Black Holes. Proc. LXXVIII of Les Houches Summer School, Chamonix, France, ed. F. Menard, G. Pelletier, V. Beskin, J. Dalibard, p. 137. Paris/Berlin: Springer Chandrasekhar, S. 1961, Hydrodynamic and Hydromagnetic Instability, Oxford: Clarendon Fricke, K. 1969, Astron. Astrophys., 1, 388 Fromang, S., and Papaloizou J. 2007, Astron. Astrophys., 476, 1113 Hawley, J. F., Gammie, C. F., and Balbus, S. A. 1995, Astrophys. J., 440, 742 Ji, H., Goodman, J., and Kageyama, A. 2001, MNRAS, 325, L1 Krolik, J. 1999, Active Galactic Nuclei, Princeton: Princeton Univ. Moffatt, H. K. 1978, Magnetic Field Generation in Electrically Conducting Fluids. Cambridge: Cambridge Univ Ogilvie G., 2007, in The Solar Tachocline. ed. D. Hughes, R. Rosner, N. Weiss, p. 299. Cambridge: Cambridge Univ. Prendergast, K., and Burbidge, G. R. 1968, Astrophys. J. Lett., 151, L83 Shakura, N., and Sunyaev, R. A. 1973, Astron. Astrophys., 24, 337 Stone, J. M., Gammie, C. F., Balbus, S. A., and Hawley, J. F. 2000, in Protostars and Planets IV, ed. V.Mannings, A.Boss, and S.Russell, Space Science Reviews, p. 589. Tucson: U. Arizona Velikhov, E. P. 1959, J. Exp. Theor. Phys. (USSR), 36, 1398 Wheeler, J. C. 2004, Advances in Space Research, 34, 12, 2744 Further reading Fluid dynamics Magnetohydrodynamics Plasma instabilities
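Because the inline mathematics has been lost from the text above, the dispersion relation itself is not reproduced there. The sketch below assumes the standard local (WKB) form of the incompressible MRI dispersion relation for a vertical field, as commonly written in the literature following Balbus and Hawley, and recovers the familiar Keplerian result that the fastest-growing mode has a growth rate of about 0.75 Ω.

```python
# Numerical sketch of the local MRI dispersion relation (assumed standard
# Balbus-Hawley form, not taken verbatim from the text above):
#   w^4 - w^2 * (kappa^2 + 2 q^2) + q^2 * (q^2 + dOmega^2/dlnR) = 0,
# where q = k * v_A.  For a Keplerian disk, kappa^2 = Omega^2 and
# dOmega^2/dlnR = -3 Omega^2.

import numpy as np

Omega = 1.0                      # orbital angular velocity (sets the time unit)
kappa2 = Omega**2                # epicyclic frequency squared (Keplerian)
dOmega2_dlnR = -3.0 * Omega**2   # shear term for a Keplerian profile

q = np.linspace(1e-3, 2.0, 4000)                 # q = k * v_A, in units of Omega
b = kappa2 + 2.0 * q**2
c = q**2 * (q**2 + dOmega2_dlnR)
w2_minus = 0.5 * (b - np.sqrt(b**2 - 4.0 * c))   # lower root of the quadratic in w^2

growth = np.sqrt(np.maximum(-w2_minus, 0.0))     # unstable where w^2 < 0

i = np.argmax(growth)
print(f"max growth rate = {growth[i]:.4f} Omega")          # ~0.75
print(f"at q = k v_A    = {q[i]:.4f} Omega")               # ~sqrt(15)/4 ~ 0.968
print(f"unstable for q < {q[growth > 0][-1]:.3f} Omega")   # ~sqrt(3)
```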
Magnetorotational instability
[ "Physics", "Chemistry", "Engineering" ]
4,296
[ "Physical phenomena", "Magnetohydrodynamics", "Chemical engineering", "Plasma phenomena", "Plasma instabilities", "Piping", "Fluid dynamics" ]
12,013,342
https://en.wikipedia.org/wiki/Chemical%20transport%20reaction
In chemistry, a chemical transport reaction describes a process for purification and crystallization of non-volatile solids. The process is also responsible for certain aspects of mineral growth from the effluent of volcanoes. The technique is distinct from chemical vapor deposition, which usually entails decomposition of molecular precursors and which gives conformal coatings. The technique, which was popularized by Harald Schäfer, entails the reversible conversion of nonvolatile elements and chemical compounds into volatile derivatives. The volatile derivative migrates throughout a sealed reactor, typically a sealed and evacuated glass tube heated in a tube furnace. Because the tube is under a temperature gradient, the volatile derivative reverts to the parent solid and the transport agent is released at the end opposite to which it originated (see next section). The transport agent is thus catalytic. The technique requires that the two ends of the tube (which contains the sample to be crystallized) be maintained at different temperatures. So-called two-zone tube furnaces are employed for this purpose. The method derives from the Van Arkel de Boer process, which was used for the purification of titanium and vanadium and uses iodine as the transport agent. Cases of the exothermic and endothermic reactions of the transporting agent Transport reactions are classified according to the thermodynamics of the reaction between the solid and the transporting agent. When the reaction is exothermic, then the solid of interest is transported from the cooler end (which can be quite hot) of the reactor to a hot end, where the equilibrium constant is less favorable and the crystals grow. The reaction of molybdenum dioxide with the transporting agent iodine is an exothermic process, thus the MoO2 migrates from the cooler end (700 °C) to the hotter end (900 °C): MoO2 + I2 ⇌ MoO2I2 ΔHrxn < 0 (exothermic) Using 10 milligrams of iodine for 4 grams of the solid, the process requires several days. Alternatively, when the reaction of the solid and the transport agent is endothermic, the solid is transported from a hot zone to a cooler one. For example: Fe2O3 + 6 HCl ⇌ Fe2Cl6 + 3 H2O ΔHrxn > 0 (endothermic) The sample of iron(III) oxide is maintained at 1000 °C, and the product is grown at 750 °C. HCl is the transport agent. Crystals of hematite are reportedly observed at the mouths of volcanoes because of chemical transport reactions whereby volcanic hydrogen chloride volatilizes iron(III) oxides. Halogen lamp A reaction similar to that of MoO2 is used in halogen lamps. Tungsten evaporates from the filament and is converted, with traces of oxygen and iodine, into WO2I2; at the high temperatures near the filament the compound decomposes back to tungsten, oxygen and iodine. WO2 + I2 ⇌ WO2I2, ΔHrxn < 0 (exothermic) References Inorganic chemistry Solid-state chemistry
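A small numerical sketch can make the direction of transport concrete. Assuming the equilibrium constant follows ΔG° = ΔH° − TΔS° with constant ΔH° and ΔS° (van 't Hoff behaviour), an exothermic transport reaction has a larger equilibrium constant in the cooler zone, so the volatile complex forms there and releases the solid again in the hotter zone. The enthalpy and entropy values below are invented for illustration and are not measured data for the MoO2/I2 system.

```python
# Illustration of why an exothermic transport reaction carries the solid from
# the cooler zone to the hotter zone.  DELTA_H and DELTA_S are made-up
# illustrative values, not measured data for MoO2 + I2 <=> MoO2I2.

import math

R = 8.314            # J/(mol K)
DELTA_H = -80e3      # J/mol, exothermic formation of the volatile complex
DELTA_S = -60.0      # J/(mol K)

def K_eq(T):
    """Equilibrium constant from dG = dH - T*dS and K = exp(-dG / (R*T))."""
    dG = DELTA_H - T * DELTA_S
    return math.exp(-dG / (R * T))

T_cold, T_hot = 700 + 273.15, 900 + 273.15   # the 700/900 degC zones from the text
K_cold, K_hot = K_eq(T_cold), K_eq(T_hot)

print(f"K at {T_cold:.0f} K = {K_cold:.3g}")
print(f"K at {T_hot:.0f} K = {K_hot:.3g}")
# K_cold > K_hot: the volatile complex is favoured in the cooler zone, so the
# solid volatilises there, migrates along the tube, and is deposited where the
# equilibrium shifts back -- the hotter zone.
```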
Chemical transport reaction
[ "Physics", "Chemistry", "Materials_science" ]
650
[ "Condensed matter physics", "nan", "Solid-state chemistry" ]
12,017,057
https://en.wikipedia.org/wiki/Self-focusing
Self-focusing is a non-linear optical process induced by the change in refractive index of materials exposed to intense electromagnetic radiation. A medium whose refractive index increases with the electric field intensity acts as a focusing lens for an electromagnetic wave characterized by an initial transverse intensity gradient, as in a laser beam. The peak intensity of the self-focused region keeps increasing as the wave travels through the medium, until defocusing effects or medium damage interrupt this process. Self-focusing of light was discovered by Gurgen Askaryan. Self-focusing is often observed when radiation generated by femtosecond lasers propagates through many solids, liquids and gases. Depending on the type of material and on the intensity of the radiation, several mechanisms produce variations in the refractive index which result in self-focusing: the main cases are Kerr-induced self-focusing and plasma self-focusing. Kerr-induced self-focusing Kerr-induced self-focusing was first predicted in the 1960s and experimentally verified by studying the interaction of ruby lasers with glasses and liquids. Its origin lies in the optical Kerr effect, a non-linear process which arises in media exposed to intense electromagnetic radiation, and which produces a variation of the refractive index as described by the formula , where n0 and n2 are the linear and non-linear components of the refractive index, and I is the intensity of the radiation. Since n2 is positive in most materials, the refractive index becomes larger in the areas where the intensity is higher, usually at the centre of a beam, creating a focusing density profile which potentially leads to the collapse of a beam on itself. Self-focusing beams have been found to naturally evolve into a Townes profile regardless of their initial shape. Self-focusing beyond a threshold of power can lead to laser collapse and damage to the medium, which occurs if the radiation power is greater than the critical power , where λ is the radiation wavelength in vacuum and α is a constant which depends on the initial spatial distribution of the beam. Although there is no general analytical expression for α, its value has been derived numerically for many beam profiles. The lower limit is α ≈ 1.86225, which corresponds to Townes beams, whereas for a Gaussian beam α ≈ 1.8962. For air, n0 ≈ 1, n2 ≈ 4×10−23 m2/W for λ = 800 nm, and the critical power is Pcr ≈ 2.4 GW, corresponding to an energy of about 0.3 mJ for a pulse duration of 100 fs. For silica, n0 ≈ 1.453, n2 ≈ 2.4×10−20 m2/W, and the critical power is Pcr ≈ 2.8 MW. Kerr-induced self-focusing is crucial for many applications in laser physics, both as a key ingredient and as a limiting factor. For example, the technique of chirped pulse amplification was developed to overcome the nonlinearities and damage of optical components that self-focusing would produce in the amplification of femtosecond laser pulses. On the other hand, self-focusing is a major mechanism behind Kerr-lens modelocking, laser filamentation in transparent media, self-compression of ultrashort laser pulses, parametric generation, and many areas of laser-matter interaction in general. Self-focusing and defocusing in gain medium Kelley predicted that homogeneously broadened two-level atoms may focus or defocus light when carrier frequency is detuned downward or upward the center of gain line . Laser pulse propagation with slowly varying envelope is governed in gain medium by the nonlinear Schrödinger-Frantz-Nodvik equation. 
When is detuned downward or upward from the refractive index is changed. "Red" detuning leads to an increased index of refraction during saturation of the resonant transition, i.e. to self-focusing, while for "blue" detuning the radiation is defocused during saturation: where is the stimulated emission cross section, is the population inversion density before pulse arrival, and are longitudinal and transverse lifetimes of two-level medium and is the propagation axis. Filamentation The laser beam with a smooth spatial profile is affected by modulational instability. The small perturbations caused by roughnesses and medium defects are amplified in propagation. This effect is referred to as Bespalov-Talanov instability. In a framework of nonlinear Schrödinger equation : . The rate of the perturbation growth or instability increment is linked with filament size via simple equation: . Generalization of this link between Bespalov-Talanov increments and filament size in gain medium as a function of linear gain and detuning had been realized in . Plasma self-focusing Advances in laser technology have recently enabled the observation of self-focusing in the interaction of intense laser pulses with plasmas. Self-focusing in plasma can occur through thermal, relativistic and ponderomotive effects. Thermal self-focusing is due to collisional heating of a plasma exposed to electromagnetic radiation: the rise in temperature induces a hydrodynamic expansion which leads to an increase of the index of refraction and further heating. Relativistic self-focusing is caused by the mass increase of electrons travelling at speed approaching the speed of light, which modifies the plasma refractive index nrel according to the equation , where ω is the radiation angular frequency and ωp the relativistically corrected plasma frequency . Ponderomotive self-focusing is caused by the ponderomotive force, which pushes electrons away from the region where the laser beam is more intense, therefore increasing the refractive index and inducing a focusing effect. The evaluation of the contribution and interplay of these processes is a complex task, but a reference threshold for plasma self-focusing is the relativistic critical power , where me is the electron mass, c the speed of light, ω the radiation angular frequency, e the electron charge and ωp the plasma frequency. For an electron density of 1019 cm−3 and radiation at the wavelength of 800 nm, the critical power is about 3 TW. Such values are realisable with modern lasers, which can exceed PW powers. For example, a laser delivering 50 fs pulses with an energy of 1 J has a peak power of 20 TW. Self-focusing in a plasma can balance the natural diffraction and channel a laser beam. Such effect is beneficial for many applications, since it helps increasing the length of the interaction between laser and medium. This is crucial, for example, in laser-driven particle acceleration, laser-fusion schemes and high harmonic generation. Accumulated self-focusing Self-focusing can be induced by a permanent refractive index change resulting from a multi-pulse exposure. This effect has been observed in glasses which increase the refractive index during an exposure to ultraviolet laser radiation. Accumulated self-focusing develops as a wave guiding, rather than a lensing effect. The scale of actively forming beam filaments is a function of the exposure dose. 
Evolution of each beam filament towards a singularity is limited by the maximum induced refractive index change or by laser damage resistance of the glass. Self-focusing in soft matter and polymer systems Self-focusing can also been observed in a number of soft matter systems, such as solutions of polymers and particles as well as photo-polymers. Self-focusing was observed in photo-polymer systems with microscale laser beams of either UV or visible light. The self-trapping of incoherent light was also later observed. Self-focusing can also be observed in wide-area beams, wherein the beam undergoes filamentation, or Modulation Instability, spontaneous dividing into a multitude of microscale self-focused beams, or filaments. The balance of self-focusing and natural beam divergence results in the beams propagating divergence-free. Self-focusing in photopolymerizable media is possible, owing to a photoreaction dependent refractive index, and the fact that refractive index in polymers is proportional to molecular weight and crosslinking degree which increases over the duration of photo-polymerization. See also Filament propagation References Bibliography Nonlinear optics Plasma phenomena Laser science
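The critical-power expressions referred to earlier in this article were lost with the inline mathematics. Assuming the forms commonly quoted in the literature, Pcr = α λ² / (4π n0 n2) for Kerr self-focusing and Pc ≈ 17.4 GW × (ω/ωp)² for relativistic self-focusing in a plasma, the following sketch reproduces the representative values given above (about 2.4 GW for air, 2.8 MW for fused silica, and 3 TW for a plasma of electron density 10^19 cm−3 at 800 nm).

```python
# Numerical check of the critical-power values quoted in the article, assuming
# the commonly used expressions (the inline formulas were lost from the text):
#   Kerr:          P_cr = alpha * lambda^2 / (4 * pi * n0 * n2)
#   relativistic:  P_c ~= 17.4 GW * (omega / omega_p)^2 = 17.4 GW * (n_crit / n_e)

import math

def kerr_critical_power(wavelength_m, n0, n2_m2_per_W, alpha=1.8962):
    return alpha * wavelength_m**2 / (4 * math.pi * n0 * n2_m2_per_W)

def relativistic_critical_power(wavelength_m, n_e_per_cm3):
    # critical (plasma) density for a given vacuum wavelength
    eps0, m_e, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8
    omega = 2 * math.pi * c / wavelength_m
    n_crit_m3 = eps0 * m_e * omega**2 / e**2
    return 17.4e9 * (n_crit_m3 / (n_e_per_cm3 * 1e6))   # watts

lam = 800e-9
print(f"air    : {kerr_critical_power(lam, 1.0,   4e-23):.2e} W")    # ~2.4 GW
print(f"silica : {kerr_critical_power(lam, 1.453, 2.4e-20):.2e} W")  # ~2.8 MW
print(f"plasma : {relativistic_critical_power(lam, 1e19):.2e} W")    # ~3 TW
```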
Self-focusing
[ "Physics" ]
1,701
[ "Plasma phenomena", "Physical phenomena", "Plasma physics" ]
8,865,790
https://en.wikipedia.org/wiki/International%20Congress%20of%20Quantum%20Chemistry
The International Congress of Quantum Chemistry (ICQC), is an international conference dedicated to the field of quantum chemistry. It is organized by the International Academy of Quantum Molecular Science. The first conference was held from July 4 to 10, 1973 in Menton, France. The first conference marked the "50th anniversary of the discovery of wave mechanics". Past meetings In chronological order: Menton, France July 4–10, 1973 New Orleans (1976) Kyoto (1979) Uppsala (1982) Montreal (1985) Jerusalem (1988) Menton (1991) Prague (1994) Atlanta (1997) Menton (2000) Bonn (2003) Kyoto (2006) Helsinki (2009) Boulder (2012) Beijing (2015) Menton June 18–23 (2018) Bratislava (2023) Auckland (2026) Papers from the Congresses have been published by the International Journal of Quantum Chemistry (IJQC). References Academic conferences
International Congress of Quantum Chemistry
[ "Physics", "Chemistry" ]
190
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
8,867,314
https://en.wikipedia.org/wiki/Stefan%20number
The Stefan number (St or Ste) is defined as the ratio of sensible heat to latent heat. It is given by the formula Ste = cp ∆T / L, where cp is the specific heat (the specific heat of the solid phase in a freezing problem, or of the liquid phase in a melting problem), ∆T is the temperature difference between phases, and L is the latent heat of melting. It is a dimensionless parameter that is useful in analyzing a Stefan problem. The parameter was developed from Josef Stefan's calculations of the rate of phase change of water into ice on the polar ice caps and was coined by G.S.H. Lock in 1969. The problem's origination is fully described by Vuik, and further commentary on its place in Josef Stefan's larger career can be found in the literature. An analogous number exists for the vapor-to-liquid phase change, the Jakob number (Nombre de Jakob). Notes Dimensionless numbers of thermodynamics
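A quick worked example of the formula above, using rounded textbook property values for a melting-ice problem (illustrative only):

```python
# Stefan number for a simple melting-ice example: Ste = cp * dT / L.
# Property values are rounded textbook figures, used purely for illustration.

def stefan_number(cp_J_per_kgK, delta_T_K, latent_heat_J_per_kg):
    return cp_J_per_kgK * delta_T_K / latent_heat_J_per_kg

cp_liquid_water = 4186.0    # J/(kg K), liquid phase (melting problem)
latent_heat_fusion = 334e3  # J/kg
delta_T = 10.0              # K above the melting point

ste = stefan_number(cp_liquid_water, delta_T, latent_heat_fusion)
print(f"Ste = {ste:.3f}")   # ~0.125: sensible heat is small compared with latent heat
```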
Stefan number
[ "Physics", "Chemistry" ]
185
[ "Thermodynamic properties", "Physical quantities", "Dimensionless numbers of thermodynamics" ]
8,868,378
https://en.wikipedia.org/wiki/Axenic
In biology, axenic (, ) describes the state of a culture in which only a single species, variety, or strain of organism is present and entirely free of all other contaminating organisms. The earliest axenic cultures were of bacteria or unicellular eukaryotes, but axenic cultures of many multicellular organisms are also possible. Axenic culture is an important tool for the study of symbiotic and parasitic organisms in a controlled environment. Preparation Axenic cultures of microorganisms are typically prepared by subculture of an existing mixed culture. This may involve use of a dilution series, in which a culture is successively diluted to the point where subsamples of it contain only a few individual organisms, ideally only a single individual (in the case of an asexual species). These subcultures are allowed to grow until the identity of their constituent organisms can be ascertained. Selection of those cultures consisting solely of the desired organism produces the axenic culture. Subculture selection may also involve manually sampling the target organism from an uncontaminated growth front in an otherwise mixed culture, and using this as an inoculum source for the subculture. Axenic cultures are usually checked routinely to ensure that they remain axenic. One standard approach with microorganisms is to spread a sample of the culture onto an agar plate, and to incubate this for a fixed period of time. The agar should be an enriched medium that will support the growth of common "contaminating" organisms. Such "contaminating" organisms will grow on the plate during this period, identifying cultures that are no longer axenic. Experimental use As axenic cultures are derived from very few organisms, or even a single individual, they are useful because the organisms present within them share a relatively narrow gene pool. In the case of an asexual species derived from a single individual, the resulting culture should consist of identical organisms (though processes such as mutation and horizontal gene transfer may introduce a degree of variability). Consequently, they will generally respond in a more uniform and reproducible fashion, simplifying the interpretation of experiments. Problems The axenic culture of some pathogens is complicated because they normally thrive within host tissues which exhibit properties that are difficult to replicate in vitro. This is especially true in the case of intracellular pathogens. However, careful replication of key features of the host environment can resolve these difficulties (e.g. host metabolites, dissolved oxygen), such as with the Q fever pathogen, Coxiella burnetii. See also Asepsis Gnotobiotic animal Germ-free animal Sterilization (microbiology) References Bacteria Bacteriology Biotechnology Cell biology Cell culture Microbiology techniques Microbiology terms
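The dilution-series step described above can be illustrated with a small calculation. If organisms are assumed to land in subsamples at random (a Poisson model, introduced here purely for illustration rather than as part of the standard protocol), the fraction of subsamples expected to contain exactly one organism peaks when the dilution leaves an average of about one organism per subsample.

```python
# Rough illustration of the dilution-series step described above, assuming
# cells are distributed among subsamples independently (Poisson statistics).

import math

def poisson_pmf(k, mean):
    return math.exp(-mean) * mean**k / math.factorial(k)

for mean_cells in (0.1, 0.5, 1.0, 2.0, 5.0):
    p_empty = poisson_pmf(0, mean_cells)
    p_single = poisson_pmf(1, mean_cells)
    print(f"mean {mean_cells:>4.1f} cells/subsample: "
          f"P(empty) = {p_empty:.2f}, P(exactly one) = {p_single:.2f}")
# P(exactly one) peaks at a mean of ~1 cell per subsample (~0.37), which is why
# a culture is diluted until subsamples are expected to contain roughly one cell.
```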
Axenic
[ "Chemistry", "Biology" ]
566
[ "Cell biology", "Prokaryotes", "Biotechnology", "Model organisms", "Microbiology techniques", "Bacteria", "nan", "Microbiology terms", "Cell culture", "Microorganisms" ]
8,874,934
https://en.wikipedia.org/wiki/CXCL13
Chemokine (C-X-C motif) ligand 13 (CXCL13), also known as B lymphocyte chemoattractant (BLC) or B cell-attracting chemokine 1 (BCA-1), is a protein ligand that in humans is encoded by the CXCL13 gene. Function CXCL13 is a small chemokine belonging to the CXC chemokine family. As its other names suggest, this chemokine is selectively chemotactic for B cells belonging to both the B-1 and B-2 subsets, and elicits its effects by interacting with chemokine receptor CXCR5. CXCL13 and its receptor CXCR5 control the organization of B cells within follicles of lymphoid tissues; CXCL13 is expressed highly in the liver, spleen, lymph nodes, and gut of humans. The gene for CXCL13 is located on human chromosome 4 in a cluster of other CXC chemokines. In T lymphocytes, CXCL13 expression is thought to reflect a germinal center origin of the T cell, particularly a subset of T cells called follicular B helper T cells (or TFH cells). Hence, expression of CXCL13 in T-cell lymphomas, such as angioimmunoblastic T-cell lymphoma, is thought to reflect a germinal center origin of the neoplastic T-cells. References External links Cytokines
CXCL13
[ "Chemistry" ]
336
[ "Cytokines", "Signal transduction" ]
16,280,475
https://en.wikipedia.org/wiki/Dyfi%20Furnace
Dyfi Furnace is a restored mid-18th-century charcoal-fired blast furnace used for smelting iron ore. It has given its name to the adjoining hamlet of Furnace. Location The Dyfi Furnace is in the village of Furnace, Ceredigion, Wales, adjoining the A487 trunk road from Machynlleth to Aberystwyth, near Eglwysfach. History The site for Dyfi Furnace was chosen downstream of the waterfall on the River Einion to take advantage of the water power from the river and charcoal produced from the local woodlands, with the iron ore being shipped in from Cumbria via the Afon Dyfi. The furnace, built around 1755, was only used for about fifty years to smelt iron ore. By 1810 it had been abandoned and the waterwheel removed. The etching by John George Wood to accompany his "The Principal Rivers of Wales", published 1813, shows the furnace in its transitional form with no waterwheel attached. Some time later a new waterwheel was installed - the one that has been renovated and is visible today - and the furnace became a sawmill. The furnace site was renovated around 1988. The furnace was built by Ralph Vernon and the brothers Edward Bridge and William Bridge. Vernon retired between 1765 and 1770, and the Bridges (who also owned Conwy Furnace) became bankrupt in 1773. It is likely that the furnace was then transferred to Kendall & Co. (Jonathan Kendall and his brother Henry), ironmasters from the West Midlands with extensive interests scattered across Staffordshire, Cheshire, the Lake District and Scotland. After the original lease expired in 1796, it appears the furnace was then owned by Bell and Gaskell, including Thomas Bell, who had managed it for the Kendalls, whose main activity by then was running the Beaufort Ironworks in Beaufort, Ebbw Vale, in the South Wales Valleys. The water wheel, shown in the photographs, provided power for a sawmill. The site was previously a Silver Mill of the Society of Mines Royal. See also Harrison Ainslie References James Dinn, 'Dyfi Furnace excavations 1982-87', Post-medieval Archaeology 22 (1988), 111-42. External links www.geograph.co.uk : photos of Dyfi Furnace and surrounding area Visitor information CADW page Industrial history of Wales Ironworks and steelworks in Wales History of Ceredigion Furnaces Grade II* listed buildings in Ceredigion
Dyfi Furnace
[ "Engineering" ]
498
[ "Furnaces", "Combustion engineering" ]
16,285,128
https://en.wikipedia.org/wiki/Metal%E2%80%93ligand%20multiple%20bond
In organometallic chemistry, a metal–ligand multiple bond describes the interaction of certain ligands with a metal with a bond order greater than one. Coordination complexes featuring multiply bonded ligands are of both scholarly and practical interest. Transition metal carbene complexes catalyze the olefin metathesis reaction. Metal oxo intermediates are pervasive in oxidation catalysis. As a cautionary note, the classification of a metal–ligand bond as having "multiple" bond order is ambiguous and even arbitrary because bond order is a formalism. Furthermore, the usage of multiple bonding is not uniform. Symmetry arguments suggest that most ligands engage metals via multiple bonds. The term "metal–ligand multiple bond" is often reserved for ligands of the type and (n = 0, 1, 2) and (n = 0, 1) where R is H or an organic substituent, or pseudohalide. Historically, and are not included in this classification, nor are halides. Pi-donor ligands In coordination chemistry, a pi-donor ligand is a kind of ligand endowed with filled non-bonding orbitals that overlap with metal-based orbitals. Their interaction is complementary to the behavior of pi-acceptor ligands. The existence of terminal oxo ligands for the early transition metals is one consequence of this kind of bonding. Classic pi-donor ligands are oxide (O2−), nitride (N3−), imide (RN2−), alkoxide (RO−), amide (R2N−), and fluoride. For late transition metals, strong pi-donors form anti-bonding interactions with the filled d-levels, with consequences for spin state, redox potentials, and ligand exchange rates. Pi-donor ligands are low in the spectrochemical series. Multiple bond stabilization Metals bound to so-called triply bonded carbyne, imide, nitride (nitrido), and oxide (oxo) ligands are generally assigned to high oxidation states with low d electron counts. The high oxidation state stabilizes the highly reduced ligands. The low d electron count allows for many bonds between ligands and the metal center. A d0 metal center can accommodate up to 9 bonds without violating the 18 electron rule, whereas a d6 species can only accommodate 6 bonds. Reactivity explained through ligand hybridization A ligand described in ionic terms can bond to a metal through however many lone pairs it has available. For example, many alkoxides use one of their three lone pairs to make a single bond to a metal center. In this situation the oxygen is sp3 hybridized according to valence bond theory. Increasing the bond order to two by involving another lone pair changes the hybridization at the oxygen to an sp2 center, with an expected expansion in the M-O-R bond angle and contraction in the M-O bond length. If all three lone pairs are included for a bond order of three, then the M-O bond distance contracts further, and since the oxygen is an sp center the M-O-R bond angle is 180° or linear. Similarly, imidos are commonly referred to as either bent (sp2) or linear (sp). Even the oxo can be sp2 or sp hybridized. The triply bonded oxo, similar to carbon monoxide, is partially positive at the oxygen atom and unreactive toward Brønsted acids at the oxygen atom. When such a complex is reduced, the triple bond can be converted to a double bond, at which point the oxygen no longer bears a partial positive charge and is reactive toward acid. Conventions Bonding representations Imido ligands, also known as imides or nitrenes, most commonly form "linear six electron bonds" with metal centers. 
Bent imidos are a rarity limited by a complex's electron count, orbital bonding availability, or some similar phenomenon. It is common to draw only two lines of bonding for all imidos, including the most common linear imidos with a six electron bonding interaction to the metal center. Similarly, amido complexes are usually drawn with a single line even though most amido bonds involve four electrons. Alkoxides are generally drawn with a single bond although both two and four electron bonds are common. Oxo can be drawn with two lines regardless of whether four electrons or six are involved in the bond, although it is not uncommon to see six electron oxo bonds represented with three lines. Representing oxidation states There are two motifs to indicate a metal oxidation state based around the actual charge separation of the metal center. Oxidation states up to +3 are believed to be an accurate representation of the charge separation experienced by the metal center. For oxidation states of +4 and larger, the oxidation state becomes more of a formalism with much of the positive charge distributed between the ligands. This distinction can be expressed by using a Roman numeral for the lower oxidation states in the upper right of the metal atomic symbol and an Arabic number with a plus sign for the higher oxidation states (see the example below). This formalism is not rigorously followed and the use of Roman numerals to represent higher oxidation states is common. [MIIILn]3+ vs. [O=M5+Ln]3+ References Further reading (specialized literature) Heidt, L.J.; Koster, G.F.; Johnson, A.M. "Experimental and Crystal Field Study of the Absorption Spectrum at 2000 to 8000 A of Manganous Perchlorate in Aqueous Perchloric Acid" J. Am. Chem. Soc. 1959, 80, 6471–6477. Rohde, J.; In, J.; Lim, M.H.; Brennessel, W.W.; Bukowski, M.R.; Stubna, A.; Münck, E.; Nam, W.; Que, L. "Crystallographic and Spectroscopic Characterization of a Nonheme Fe(IV)O Complex" Science, 299, 1037–1039. Decker, A.; Rohde, J.; Que, L.; Solomon, E.I. "Spectroscopic and Quantum Chemical Characterization of the Electronic Structure and Bonding in a Non-Heme FeIVO Complex" J. Am. Chem. Soc. 2004, 126, 5378–5379. Aliaga-Alcalde, N.; George, S.D.; Mienert, B.; Bill, E.; Wieghardt, K.; Neese, F. "The Geometric and Electronic Structure of [(cyclam-acetato)Fe(N)]+: A Genuine Iron(V) Species with a Ground-State Spin S=1/2" Angew. Chem. Int. Ed. 2005, 44, 2908–2912. Chemical bonding Coordination chemistry
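The d0 versus d6 comparison in the Multiple bond stabilization section above follows from simple electron bookkeeping. The lines below restate that counting in equation form; this is a sketch of the 18-electron formalism only, not a full molecular-orbital treatment:

```latex
% Each metal-ligand bond contributes two electrons to the valence count:
\[
  N_{\text{valence}} \;=\; n_d \;+\; 2\,N_{\text{bonds}} \;\le\; 18
\]
\[
  d^{0}:\quad 0 + 2 \times 9 = 18 \;\Rightarrow\; \text{up to 9 two-electron bonds}
\]
\[
  d^{6}:\quad 6 + 2 \times 6 = 18 \;\Rightarrow\; \text{up to 6 two-electron bonds}
\]
```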
Metal–ligand multiple bond
[ "Physics", "Chemistry", "Materials_science" ]
1,435
[ "Chemical bonding", "Coordination chemistry", "Condensed matter physics", "nan" ]
7,373,540
https://en.wikipedia.org/wiki/Remote%20manipulator
A remote manipulator, also known as a telefactor, telemanipulator, or waldo (after the 1942 short story "Waldo" by Robert A. Heinlein which features a man who invents and uses such devices), is a device which, through electronic, hydraulic, or mechanical linkages, allows a hand-like mechanism to be controlled by a human operator. The purpose of such a device is usually to move or manipulate hazardous materials for reasons of safety, similar to the operation and play of a claw crane game. History In 1945, the company Central Research Laboratories was given the contract to develop a remote manipulator for the Argonne National Laboratory. The intent was to replace devices which manipulated highly radioactive materials from above a sealed chamber or hot cell, with a mechanism which operated through the side wall of the chamber, allowing a researcher to stand normally while working. The result was the Master-Slave Manipulator Mk. 8, or MSM-8, which became the iconic remote manipulator seen in newsreels and movies, such as The Andromeda Strain or THX 1138. Robert A. Heinlein claimed a much earlier origin for remote manipulators. He wrote that he got the idea for "waldos" after reading a 1918 article in Popular Mechanics about "a poor fellow afflicted with myasthenia gravis ... [who] devised complicated lever arrangements to enable him to use what little strength he had." An article in Science Robotics on robots, science fiction, and nuclear accidents discusses how the science fiction waldos are now a major type of real-world robots used in the nuclear industry. See also Glovebox Dextre Doctor Octopus Teleoperation Telerobotics Master/slave (technology) Avatar (computing) Pantograph Man-Machine References External links Central Research Laboratories web site A video of a Remote Manipulator being used to make an origami crane Master-slave manipulator at Argonne National Laboratory Nuclear technology
Remote manipulator
[ "Physics" ]
422
[ "Nuclear technology", "Nuclear physics" ]
7,376,733
https://en.wikipedia.org/wiki/Electron%20pair
In chemistry, an electron pair or Lewis pair consists of two electrons that occupy the same molecular orbital but have opposite spins. Gilbert N. Lewis introduced the concepts of both the electron pair and the covalent bond in a landmark paper he published in 1916. Because electrons are fermions, the Pauli exclusion principle forbids these particles from having all the same quantum numbers. Therefore, for two electrons to occupy the same orbital, and thereby have the same orbital quantum number, they must have different spin quantum numbers. This also limits the number of electrons in the same orbital to two. The pairing of spins is often energetically favorable, and electron pairs therefore play a large role in chemistry. They can form a chemical bond between two atoms, or they can occur as a lone pair of valence electrons. They also fill the core levels of an atom. Because the spins are paired, the magnetic moments of the electrons cancel one another, and the pair's contribution to magnetic properties is generally diamagnetic. Although a strong tendency to pair off electrons can be observed in chemistry, it is also possible for electrons to occur as unpaired electrons. In the case of metallic bonding, the magnetic moments also compensate to a large extent, but the bonding is more communal, so that individual pairs of electrons cannot be distinguished and it is better to consider the electrons as a collective 'sea'. See also Electron pair production Frustrated Lewis pair Jemmis mno rules Lewis acids and bases Nucleophile Polyhedral skeletal electron pair theory References Quantum chemistry Chemical bonding Molecular physics
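The Pauli-exclusion argument above can be written out explicitly in terms of quantum numbers; the notation below is the standard textbook labeling and is included here only as an illustration:

```latex
% Two electrons sharing one orbital have identical n, l, m_l and differ
% only in the spin quantum number, so at most two can occupy the orbital:
\[
  e_1:\;(n,\;\ell,\;m_\ell,\;m_s=+\tfrac{1}{2})
  \qquad
  e_2:\;(n,\;\ell,\;m_\ell,\;m_s=-\tfrac{1}{2})
\]
```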
Electron pair
[ "Physics", "Chemistry", "Materials_science" ]
315
[ "Quantum chemistry", "Molecular physics", "Quantum mechanics", "Theoretical chemistry", " molecular", "Condensed matter physics", "nan", "Atomic", "Chemical bonding", " and optical physics" ]
7,378,151
https://en.wikipedia.org/wiki/World%20Ocean%20Circulation%20Experiment
The World Ocean Circulation Experiment (WOCE) was a component of the international World Climate Research Program, and aimed to establish the role of the World Ocean in the Earth's climate system. WOCE's field phase ran between 1990 and 1998, and was followed by an analysis and modeling phase that ran until 2002. When the WOCE was conceived, there were three main motivations for its creation. The first of these was the inadequate coverage of the World Ocean, specifically in the Southern Hemisphere. Data were also much sparser during the winter months than the summer months, and there was—and still is to some extent—a critical need for data covering all seasons. Secondly, the data that did exist were not initially collected for studying ocean circulation and were not well suited for model comparison. Lastly, there were concerns involving the accuracy and reliability of some measurements. The WOCE was meant to address these problems by providing new data collected in ways designed to "meet the needs of global circulation models for climate prediction." Goals Two major goals were set for the campaign. 1. Develop ocean models that can be used in climate models and collect the data necessary for testing them Specifically, understand: Large scale fluxes of heat and fresh water Dynamical balance of World Ocean circulation Components of ocean variability on time scales of months to years The rates and nature of formation, ventilation and circulation of water masses that influence the climate system on time scales from ten to one hundred years In order to achieve Goal 1, the WCRP outlined and established Core Projects that would receive priority. The first of these was the "Global Description" project, which was meant to obtain data on the circulation of heat, fresh water and chemicals, as well as the statistics of eddies. The second project—"Southern Ocean"—placed particular emphasis on studying the Antarctic Circumpolar Current and the Southern Ocean's interaction with the World Ocean. The third and final Core Project serving goal one was the "Gyre Dynamics Experiment." The second and third of these focuses were designed specifically to address the ocean's role in decadal climate changes. Initial planning of the WOCE states that achievement of Goal 1 would involve "strong interaction between modeling and field activities," which are described further below. 2. Find the representativeness of the dataset for long-term behavior and find methods for determining long-term changes in ocean currents Specifically: Determine the representativeness of specific WOCE data sets Identify those oceanographic parameters, indices and fields that are essential for continuing measurements in a climate observing system on decadal time scales Develop cost-effective techniques suitable for deployment in an ongoing climate observing system Modeling Models in WOCE were used for both experimental design and data analysis. Models constrained by data can incorporate various properties, including thermal wind balance, maintenance of the barotropic vorticity budget, and conservation of heat, fresh water, or mass. Measurements useful for these parameters are heat, fresh water or tracer concentration; currents; surface fluxes of heat and fresh water; and sea surface elevation. Both inverse modeling and data assimilation were employed during WOCE. Inverse modeling is the fitting of data using a numerical least squares or maximum likelihood fitting procedure. The data assimilation technique requires data to be compared with an initial integration of a model. 
The model is then progressed in time using new data and repeating the process. The success of these methods requires sufficient data to fully constrain the model, hence the need for a comprehensive field program. Field Program Goals for the WOCE Field Program were as follows. The experiment will be global in nature and the major observational components will be deployed in all oceans. The requirement of simultaneity of measurements will be imposed only where essential. The flexibility inherent in the existing arrangements for cooperative research in the worldwide oceanographic (and meteorological) community will be exploited as far as possible. Major elements of the WOCE Field Program Satellite Altimetry plans built around the availability of ERS–1 and ERS–2 (European), TOPEX/POSEIDON (US/French) to study fields of surface forcing and oceanic surface topography Hydrography high quality conductivity-temperature-pressure profilers as well as free-fall instruments to provide a climatological temperature-salinity database Geochemical Tracers using chemical information (such as radioactive decay and atmospheric history) of passive compounds to study the formation rates and transport of water masses on climatological timescales Ocean Surface Fluxes using in-situ and satellite measurements to quantify fluxes of heat, water and momentum (necessary for modeling thermohaline and wind-driven circulation) Satellite Winds using surface buoys, Voluntary Observing Ships (VOS) and satellite microwave scatterometer systems to measure the surface wind field Surface Meteorological Observations from VOS improvement of sampling and accuracy in surface meteorological measurements, as well increasing area coverage Upper Ocean Observations from Merchant Ships-of-Opportunity expendable bathythermograph (XBT) sampling lines to study changes in heat content of the upper ocean In-Situ Sea Level Measurements upgrading and installing new sea-level gauges to calibrate altimetry measurements Drifting Buoys and Floats surface drifting buoys provide measurements such as sea level pressure, sea-surface temperature, humidity, precipitation, surface salinity, and near-surface and mid-depth currents Moored instrumentation provides detailed temporal information at a number of sites and depths Resulting Conclusions This list, though not comprehensive, outlines a sampling of the most highly cited articles and books resulting from the WOCE. Ocean Circulation and Climate, Observing and Modelling the Global Ocean, 1st Edition, Eds. Gerold Siedler, John Gould & John Church, Academic Press, 736pp. (International Geophysics Series 77) 2001 Revisiting the South Pacific subtropical circulation: A synthesis of World Ocean Circulation Experiment observations along 32°S, S. E. Wijffels, J. M. Toole, R. Davis, Journal of Geophysical Research, September 2012 See also Geochemical Ocean Sections Study (GEOSECS) Global Ocean Data Analysis Project (GLODAP) World Ocean Atlas (WOA) External links Access page to the WOCE data legacy at the National Oceanographic Data Center (US) Electronic Atlas of WOCE Data at the Alfred Wegener Institute for Polar and Marine Research, Bremerhaven at the National Oceanography Centre, Southampton (UK) Searchable set of WOCE data, archived in the information system PANGAEA WOCE observations 1990–1998; a summary of the WOCE global data resource, WOCE International Project Office, WOCE Report No. 179/02., Southampton, UK. (pdf 18.9 MB) References Oceanography Physical oceanography World Ocean
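The least-squares inverse modeling mentioned in the Modeling section can be illustrated with a tiny numerical sketch. Everything below (the linear operator G, the parameter values, the noise level) is a synthetic assumption made up for demonstration; it is not WOCE data or a WOCE model:

```python
import numpy as np

# Inverse modeling in its simplest linear form: estimate unknown parameters x
# from observations y, assuming y = G x + noise, by minimizing ||G x - y||^2.
rng = np.random.default_rng(0)
n_obs, n_params = 50, 3

G = rng.normal(size=(n_obs, n_params))           # linearized model operator (assumed)
x_true = np.array([1.5, -0.7, 0.3])              # "true" parameters (synthetic)
y = G @ x_true + 0.05 * rng.normal(size=n_obs)   # observations with noise

# Least-squares estimate of the parameters
x_est, residuals, rank, _ = np.linalg.lstsq(G, y, rcond=None)
print("estimated parameters:", x_est)
```

The point of the sketch is the one emphasized in the text: the estimate is only well constrained when there are enough independent observations relative to the number of unknowns, which is why a comprehensive field program was needed.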
World Ocean Circulation Experiment
[ "Physics", "Environmental_science" ]
1,366
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "Physical oceanography" ]
11,036,055
https://en.wikipedia.org/wiki/Grid-tied%20electrical%20system
A grid-tied electrical system, also called a tied-to-grid or grid-tie system, is a semi-autonomous electrical generation or grid energy storage system which links to the mains to feed excess capacity back to the local mains electrical grid. When insufficient electricity is available, electricity drawn from the mains grid can make up the shortfall. Conversely, when excess electricity is available, it is sent to the main grid. When the utility or network operator restricts the amount of energy that goes into the grid, it is possible to prevent any input into the grid by installing export-limiting devices. When batteries are used for storage, the system is called battery-to-grid (B2G), which includes vehicle-to-grid (V2G). How it works Direct current (DC) electricity from sources such as hydro, wind or solar is passed to an inverter which is grid tied. The inverter monitors the alternating current mains supply frequency and generates electricity that is phase matched to the mains. When the grid goes down during a blackout, grid-tie inverters are normally required to shut down automatically (anti-islanding) for the safety of line workers; only systems with islanding capability or battery backup can continue to supply local loads. Battery-to-grid A key concept of this system is the possibility of creating an electrical micro-system that is not dependent on the grid-tie to provide a high quality of service. If the mains supply of the region is unreliable, the local generation system can be used to power important equipment. Battery-to-grid can also spare the use of fossil fuel power plants to supply energy during peak loads on the public electric grid. Regions that charge based on time-of-use metering may benefit by using stored battery power during prime time. Environmentally friendly Local generation can be from an environmentally friendly source such as pico hydro, solar panels or a wind turbine. Individuals can choose to install their own system if an environmentally friendly mains provider is not available in their location. Small scale start A micro generation facility can be started with a very small system such as a home wind power generator, photovoltaic (solar cell) array, or micro combined heat and power (micro-CHP) system. Sell to and buy from mains Excess electricity can be sold to the mains. An electrical shortfall can be bought from the mains. List of countries or regions that legally allow grid-tied electrical systems Armenia Australia Bangladesh Bosnia and Herzegovina Brazil Canada Chile Dominican Republic El Salvador European Union Guatemala India Iran Israel Japan Jordan Mexico New Zealand Pakistan Panama Philippines (via Meralco) Russia (from Dec 2019) Singapore South Africa (only by arrangement with municipality) Sri Lanka United States of America Venezuela (no legal restrictions) See also Cost of electricity by source Distributed network Electric power transmission Electranet Photovoltaic system Grid tie inverter Inverter Deep cycle battery Power outage V2G Grid-connected photovoltaic power system Off the grid - direct current buildings References External links Grid Tied Solar explained Distributed generation Battery (electricity) Electric power Low-carbon economy
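The sell/buy balance described under "Sell to and buy from mains" amounts to simple bookkeeping over time. The sketch below uses made-up hourly generation and load figures (assumptions, not data from the article) to show how exported and imported energy are tallied over a day:

```python
# Hourly generation and load for one day, in kilowatt-hours (illustrative values only).
generation_kwh = [0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 5, 5, 5, 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0]
load_kwh       = [1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 4, 3, 2, 2, 1, 1]

exported = imported = 0.0
for gen, load in zip(generation_kwh, load_kwh):
    net = gen - load
    if net > 0:
        exported += net      # excess capacity fed back to the grid
    else:
        imported += -net     # shortfall drawn from the grid

print(f"exported to grid: {exported:.1f} kWh, imported from grid: {imported:.1f} kWh")
```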
Grid-tied electrical system
[ "Physics", "Engineering" ]
608
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
11,038,447
https://en.wikipedia.org/wiki/Neural%20tissue%20engineering
Neural tissue engineering is a specific sub-field of tissue engineering. Neural tissue engineering is primarily a search for strategies to eliminate inflammation and fibrosis upon implantation of foreign substances. Often foreign substances in the form of grafts and scaffolds are implanted to promote nerve regeneration and to repair nerves of both the central nervous system (CNS) and peripheral nervous system (PNS) due to injury. Introduction There are two parts of the nervous system: the central nervous system (CNS) and the peripheral nervous system (PNS). General body functions are supervised by the central nervous system (CNS), which includes the brain and spinal cord. The PNS delivers motor signals from the CNS to control body activities and relays sensory data from the body back to the CNS. The PNS is made up of nerve fibers arranged into nerves. The PNS includes the autonomic nervous system (ANS), whose sympathetic and parasympathetic branches preserve homeostasis and regulate involuntary physiological functions. The "fight-or-flight" reaction is triggered by the sympathetic nervous system (SNS), which is derived from the thoracic and upper lumbar spinal cord. It readies the body for quick reactions under pressure. The parasympathetic nervous system (PSNS), on the other hand, is derived from the brainstem and sacral spinal cord and facilitates normal physiological processes by encouraging rest and energy conservation. One of the main nerves in the PSNS, the vagus nerve, originates in the brainstem and travels throughout the body, affecting different organs. It has sensory and motor fibers. Sensory messages tell the brain what the body is doing, allowing it to maintain homeostasis and control activities. Additionally, the vagus nerve influences emotions and memory through connections to several brain regions. Neuroimmune Interactions The immune system's role is to identify and protect the body against external chemicals and infections. It is separated into innate and adaptive immunity and consists of immune organs, cells, and active ingredients. Remarkably, under certain circumstances, a variety of non-immune cells can display immunological properties. The immune system and the neurological system, which control body processes, are interdependent. By controlling humoral chemicals on a systemic level, the CNS affects the immune system. Sleep and other psychosocial variables can affect immunological responses. Obesity and sleep deprivation, for example, can impair immunity, and long-term stress can erode immunological responses, making people more vulnerable to infections like COVID-19. In diseases like asthma that are made worse by psychological stress or depression, neuroimmune interactions are clearly seen. The immune response can impact brain activity, and neuroendocrine hormones control the release of cytokines. Fever symptoms like drowsiness and decreased appetite are caused by proinflammatory mediators. Immune system organs get autonomic innervation from the peripheral nervous system (PNS), which facilitates specialized communication between the two systems. Comprehensive information on bidirectional crosstalk pathways is frequently lacking, despite evidence of functional links between the neurological and immune systems already in place. Lymph nodes are essential components of the immune system because they serve as both collecting places for various immune cells and filters for dangerous chemicals. 
Their well-structured composition promotes efficient immune responses, protecting the body against external chemicals, infections, and malignancies. Regional innervation of lymph nodes involves complex participation from the sympathetic and parasympathetic branches of the autonomic nervous system (ANS). Furthermore, there is afferent innervation, which is in charge of immune responses in particular areas. Through the use of neuropeptides, nociceptors—specialized nerve endings that sense pain—help regulate the immune system. Distinct nerve fibers inside lymph nodes are identified by several markers, such as TH, anti-β2-AR, ChAT, and VAChT. Studies have shown that nerve fibers originate from the hilum, travel along blood vessels, cross medullary areas, and form subscapular plexuses. Some limitations do, however, remain. These include the sparse identification of neurons and nerve fibers, the lack of a thorough examination of fine nerve fibers, the incomplete knowledge of innervation in particular regions, and the inadequate documentation in certain studies of close interactions between immune and non-immune cells and nerve fibers. Possible therapeutic approaches based on neuroimmune interplay Novel approaches focusing on neuroimmune interactions may alter the course of a disease or reduce symptoms. Targeting neuroimmune pathways is a holistic approach that seeks to affect both immune responses and brain functioning. The term "acupuncture" refers to the ancient Chinese medical technique of gently stimulating nociceptors and receptors with tiny needles inserted into certain body sites in order to treat various ailments, including pain and inflammation. The FDA-approved therapy for depression and epilepsy, vagus nerve stimulation (VNS), may also be beneficial for non-neurological conditions such as rheumatoid arthritis and inflammatory bowel disease. Chemical therapies, such as peripheral nervous system (PNS) modulation, are being investigated for the treatment of infectious and inflammatory disorders, such as rheumatoid arthritis and issues associated with diabetes. Targeting tumor innervation is being explored as a potential new treatment approach. Intratumoral innervation, which involves nerves inside or around tumors, influences the biology of cancer. Peripheral neuropathy is one of the PNS-associated disorders that can be treated with immunotherapy manipulation. According to many experimental researchers, extensive clinical studies are necessary to confirm the safety, effectiveness, and regulatory approval of these experimental techniques before they can become established therapies. Tissue Engineering The need for neural tissue engineering arises from the difficulty nerve cells and neural tissues have regenerating on their own after neural damage has occurred. The PNS has some, but limited, regeneration of neural cells. Adult stem cell neurogenesis in the CNS has been found to occur in the hippocampus, the subventricular zone (SVZ), and the spinal cord. CNS injuries can be caused by stroke, neurodegenerative disorders, trauma, or encephalopathy. A few methods currently being investigated to treat CNS injuries are: implanting stem cells directly into the injury site, delivering morphogens to the injury site, or growing neural tissue in vitro with neural stem or progenitor cells in a 3D scaffold. Proposed use of electrospun polymeric fibrous scaffolds for neural repair substrates dates back to at least 1986 in an NIH SBIR application from Simon. 
For the PNS, a severed nerve can be reconnected and reinnervated using grafts or guidance of the existing nerve through a channel. Recent research into creating miniature cortices, known as corticopoiesis, and brain models, known as cerebral organoids, has produced techniques that could further the field of neural tissue regeneration. The native cortical progenitors in corticopoiesis are neural tissues that could be effectively embedded into the brain. Cerebral organoids are 3D human pluripotent stem cells developed into sections of the brain cortex, showing that there is a potential to isolate and develop certain neural tissues using neural progenitors. Another situation that calls for implanting of foreign tissue is the use of recording electrodes. Chronic electrode implants are a tool being used in research applications to record signals from regions of the cerebral cortex. Research into the stimulation of PNS neurons in patients with paralysis and prosthetics could further the knowledge of reinnervation of neural tissue in both the PNS and the CNS. This research is capable of making one difficult aspect of neural tissue engineering, functional innervation of neural tissue, more manageable. CNS Causes of CNS injury There are four main causes of CNS injury: stroke, traumatic brain injury (TBI), brain tumors, and developmental complications. Strokes are classified as either hemorrhagic (when a vessel is damaged to the point of bleeding into the brain) or ischemic (when a clot blocks the blood flow through the vessel in the brain). When a hemorrhage occurs, blood seeps into the surrounding tissue, resulting in tissue death, while ischemic strokes result in a lack of blood flow to certain tissues. Traumatic brain injury is caused by external forces impacting the cranium or the spinal cord. Problems with CNS development result in abnormal tissue growth during development, thus decreasing the function of the CNS. CNS treatments and research Implantation of stem cells to the injury site One method to treat CNS injury involves culturing stem cells in vitro and implanting the non-directed stem cells into the brain injury site. Implanting stem cells directly into the injury site prevents glial scar formation and promotes neurogenesis originating from the patient, but also runs the risk of tumor development, inflammation, and migration of the stem cells out of the injury location. Tumorigenesis can occur due to the uncontrolled nature of the stem cell differentiation, inflammation can occur due to rejection of the implanted cells by the host cells, and the highly migratory nature of stem cells results in the cells moving away from the injury site, thus not having the desired effect on the injury site. Other concerns of neural tissue engineering include establishing safe sources of stem cells and getting reproducible results from treatment to treatment. Alternatively, these stem cells can act as carriers for other therapies, though the positive effects of using stem cells as a delivery mechanism have not been confirmed. Direct stem cell delivery has an increased beneficial effect if the cells are directed to a neuronal fate in vitro. This way, the risks associated with undirected stem cells are decreased; additionally, injuries that do not have a specific boundary could be treated efficiently. 
Delivery of molecules to the injury site Molecules that promote the regeneration of neural tissue, including pharmaceutical drugs, growth factors known as morphogens, and miRNA, can also be directly introduced to the injury site of the damaged CNS tissue. Neurogenesis has been seen in animals treated with psychotropic drugs that inhibit serotonin reuptake, which induces neurogenesis in the brain. When stem cells are differentiating, the cells secrete morphogens such as growth factors to promote healthy development. These morphogens help maintain homeostasis and neural signaling pathways, and they can be delivered into the injury site to promote the growth of the injured tissues. Currently, morphogen delivery has minimal benefits because of the interactions the morphogens have with the injured tissue. Morphogens that are not innate in the body have a limited effect on the injured tissue due to their physical size and their limited mobility within CNS tissue. To be an effective treatment, the morphogens must be present at the injury site at a specific and constant concentration. miRNA has also been shown to affect neurogenesis by directing the differentiation of undifferentiated neural cells. Implantation of neural tissue developed in vitro A third method for treating CNS injuries is to artificially create tissue outside of the body to implant into the injury site. This method could treat injuries that consist of large cavities, where larger amounts of neural tissue need to be replaced and regenerated. Neural tissue is grown in vitro with neural stem or progenitor cells in a 3D scaffold, forming embryoid bodies (EBs). These EBs consist of a sphere of stem cells, where the inner cells are undifferentiated neural cells, and the surrounding cells are increasingly more differentiated. 3D scaffolds are used to transplant tissue to the injury site and to make the appropriate interface between the artificial tissue and the brain tissue. The scaffolds must be biocompatible and biodegradable, fit the injury site, match existing tissue in elasticity and stiffness, and support the growing cells and tissues. The combination of using directed stem cells and scaffolds to support the neural cells and tissues increases the survival of the stem cells in the injury site, increasing the efficacy of the treatment. There are six different types of scaffolds that are being researched for use in this method of treating neural tissue injury: Liquid hydrogels are cross-linked hydrophilic polymer chains, and the neural stem cells are either grown on the surface of the gel or integrated into the gel during cross-linking of the polymer chains. The major drawback of liquid hydrogels is that they provide limited protection for the transplanted cells. Supportive scaffolds are made from solid bead-shaped or microporous structures, and can act as carriers for the transplanted cells or for the growth factors that the stem cells secrete when they are differentiating. The cells adhere to the surface of the matrix in 2D layers. The supportive scaffolds are easily transplanted into the brain injury site because of the scaffold size. They provide a matrix promoting cell adhesion and aggregation, thus promoting healthy cell culture. Aligning scaffolds can be silk-based, polysaccharide-based, or based on other materials such as a collagen-rich hydrogel. These gels are now enhanced with micro-patterns on the surface for the promotion of neuronal outgrowths. 
These scaffolds are primarily used for regeneration that needs to occur in a specific orientation, such as in spinal cord injuries. Integrative scaffolds are mainly used to protect the transplanted cells from the mechanical forces that they are exposed to in the process of implantation into the site of the injury. These scaffolds also decrease the likelihood of having the inflammatory cells located at the site of the injury migrate into the scaffold with the stem cells. Blood vessels have been observed to grow through the scaffold, thus the scaffold and cells are being integrated into the host tissue. A combination of engineered scaffolds presents an option for a 3D scaffold that can have both the necessary patterns for cell adhesion and the flexibility to adapt to the ever-changing environment at the injury site. Decellularized ECM scaffolds are another option because they more closely mimic the native tissue, but these scaffolds can currently only be harvested from amputations and cadavers. These 3D scaffolds can be fabricated using particulate leaching, gas foaming, fiber bonding, solvent casting, or electrospinning techniques; each technique creates a scaffold with different properties than the other techniques. Incorporation success of 3D scaffolds into the CNS has been shown to depend on the stage at which the cells have differentiated. Later stages provide a more efficient implantation, while earlier-stage cells need to be exposed to factors that coerce the cells to differentiate and thus respond appropriately to the signals the cells will receive at the CNS injury site. Brain-derived neurotrophic factor is a potential co-factor to promote functional activation of ES cell-derived neurons in CNS injury sites. PNS Causes of PNS injury Trauma to the PNS can cause damage as severe as a severance of the nerve, splitting the nerve into proximal and distal sections. The distal nerve degenerates over time due to inactivity, while the proximal end swells over time. The distal end does not degenerate right away, and the swelling of the proximal end does not render it nonfunctional, so methods to reestablish the connection between the two ends of the nerve are being investigated. PNS treatments and research Surgical reconnection One method to treat PNS injury is surgical reconnection of the severed nerve by taking the two ends of the nerve and suturing them together. When suturing the nerves together, the fascicles of the nerve are each reconnected, bridging the nerve back together. Though this method works for severances that create a small gap between the proximal and distal nerve ends, this method does not work over gaps of greater distances due to the tension that must be put on the nerve endings. This tension results in nerve degeneration, and therefore the nerve cannot regenerate and form a functional neural connection. Tissue grafts Tissue grafts utilize nerves or other materials to bridge the two ends of the severed nerve. There are three categories of tissue grafts: autologous tissue grafts, nonautologous tissue grafts, and acellular grafts. Autologous tissue grafts transplant nerves from a different part of the body of the patient to fill the gap between either end of the injured nerve. These nerves are typically cutaneous nerves, but other nerves have been researched as well with encouraging results. 
These autologous nerve grafts are the current gold standard for PNS nerve grafting because of the highly biocompatible nature of the autologous nerve graft, but there are issues concerning harvesting the nerve from the patients themselves and being able to store a large number of autologous grafts for future use. Nonautologous and acellular grafts (including ECM-based materials) are tissues that do not come from the patient, but instead can be harvested from cadavers (known as allogenic tissue) or animals (known as xenogeneic tissue). While these tissues have an advantage over autologous tissue grafts because the tissue does not need to be taken from the patient, difficulty arises with the potential for disease transmission and thus immunogenic problems. Methods of eliminating the immunogenic cells, thus leaving behind only the ECM components of the tissue, are currently being investigated to increase the efficacy of nonautologous tissue grafts. Guidance Guidance methods of PNS regeneration use nerve guide channels to help axons regrow along the correct path, and may direct growth factors secreted by both ends of the nerve to promote growth and reconnection. Guidance methods reduce scarring of the nerves, increasing the ability of the nerves to transmit action potentials after reconnection. Two types of materials are used in guidance methods of PNS regeneration: natural-based materials and synthetic materials. Natural-based materials are modified scaffolds stemming from ECM components and glycosaminoglycans. Laminin, collagen, and fibronectin, which are all ECM components, guide axonal development and promote neural stimulation and activity. Other molecules that have the potential to promote nerve repair are hyaluronic acid, fibrinogen, fibrin gels, self-assembling peptide scaffolds, alginate, agarose, and chitosan. Synthetic materials also provide another method for tissue regeneration in which the graft's chemical and physical properties can be controlled. Since the properties of a material may be specified for the situation in which it is being used, synthetic materials are an attractive option for PNS regeneration. The use of synthetic materials comes with certain requirements: the graft material must be easy to form into the necessary dimensions, biodegradable, sterilizable, tear resistant, and easy to handle, with a low risk of infection and a low inflammatory response. The material must also maintain the channel during the nerve regeneration. Currently, the materials most commonly researched are mainly polyesters, but biodegradable polyurethane, other polymers, and biodegradable glass are also being investigated. Other possibilities for synthetic materials are conducting polymers and polymers biologically modified to promote axon growth and maintain the axon channel. Neuroimmune Enhancement Through EVs Extracellular vesicles (EVs) are bilayer-bound lipid particles that participate in intercellular communication by releasing a variety of substances, including nucleic acids, lipids, and proteins. Exosomes, microvesicles, and apoptotic bodies are the three primary forms; each has unique properties. EVs have the potential to be used as therapeutic delivery vehicles and diagnostic biomarkers and play roles in immunological responses, cancer, tissue regeneration, and neurological diseases. Damaged neurons generate neuron-derived exosomes (NDEs), which can influence target cells by transferring a variety of cargos, including the Zika virus. 
Neurodegenerative illnesses are linked to NDEs. Immune cell exosomes (IEEs) have the potential to be used in immunotherapy and vaccine development since they influence immune responses and interact with other cells. Immune cells such as DCs, macrophages, B cells, and T cells produce IEEs. EVs have been shown to promote neuroimmune crosstalk, allowing for both local and distant tissue and cell communication. Difficulty of research Because there are so many factors that contribute to the success or failure of neural tissue engineering, there are many difficulties that arise in using neural tissue engineering to treat CNS and PNS injuries. First, the therapy needs to be delivered to the site of the injury. This means that the injury site needs to be accessed by surgery or drug delivery. Both of these methods have inherent risks and difficulties in themselves, compounding the problems associated with the treatments. A second concern is keeping the therapy at the site of the injury. Stem cells have a tendency to migrate out of the injury site to other sections of the brain, thus the therapy is not as effective as it would be if the cells stayed at the injury site. Additionally, the delivery of stem cells and other morphogens to the site of injury can cause more harm than good if they induce tumorigenesis, inflammation, or other unforeseen effects. Finally, the findings in laboratories may not translate to practical clinical treatments. Treatments that are successful in a lab, or even in an animal model of the injury, may not be effective in a human patient. Related research Modeling brain tissue development in vitro Two models for brain tissue development are cerebral organoids and corticopoiesis. These models provide an "in vitro" model for normal brain development, but they can be manipulated to represent neural defects. Therefore, the mechanisms behind healthy and malfunctioning development can be studied by researchers using these models. These tissues can be made with either mouse embryonic stem cells (ESCs) or human ESCs. Mouse ESCs are cultured with an inhibitor of the protein Sonic Hedgehog to promote the development of dorsal forebrain tissue and to study cortical fate. This method has been shown to produce axonal layers that mimic a broad range of cortical layers. Human ESC-derived tissues use pluripotent stem cells to form tissues on a scaffold, forming human EBs. These human ESC-derived tissues are formed by culturing human pluripotent EBs in a spinning bioreactor. Targeted reinnervation Targeted reinnervation is a method to reinnervate the neural connections in the CNS and PNS, specifically in paralyzed patients and amputees using prosthetic limbs. Currently, devices are being investigated that take in and record the electrical signals that are propagated through neurons in response to a person's intent to move. This research could shed light on how to reinnervate the neural connections between severed PNS nerves and the connections between transplanted 3D scaffolds and the CNS. References Biological engineering Neurology Nervous system Articles containing video clips
Neural tissue engineering
[ "Engineering", "Biology" ]
4,866
[ "Organ systems", "Biological engineering", "Nervous system" ]
11,041,514
https://en.wikipedia.org/wiki/Mitochondrial%20shuttle
The mitochondrial shuttles are biochemical transport systems used to transport reducing agents across the inner mitochondrial membrane. Neither NADH nor NAD+ can cross the membrane, but NADH can reduce another molecule, such as FAD or [QH2], that can cross the membrane, so that its electrons can reach the electron transport chain. The two main systems in humans are the glycerol phosphate shuttle and the malate-aspartate shuttle. The malate/α-ketoglutarate antiporter moves electrons while the aspartate/glutamate antiporter moves amino groups. This allows the mitochondria to receive the substrates that they need for their functionality in an efficient manner. Shuttles In humans, the glycerol phosphate shuttle is primarily found in brown adipose tissue, as the conversion is less efficient, thus generating heat, which is one of the main purposes of brown fat. It is primarily found in babies, though it is present in small amounts in adults around the kidneys and on the back of the neck. The malate-aspartate shuttle is found in much of the rest of the body. The shuttles contain a system of mechanisms used to transport metabolites that lack a protein transporter in the membrane, such as oxaloacetate. Malate shuttle The malate shuttle allows the mitochondria to move electrons from NADH without the consumption of metabolites, and it uses two antiporters to transport metabolites and keep balance within the mitochondrial matrix and cytoplasm. On the cytoplasmic side a transaminase enzyme is used to remove an amino group from aspartate, which is converted into oxaloacetate; then the malate dehydrogenase enzyme uses an NADH cofactor to reduce oxaloacetate to malate, which can be transported across the membrane because of the presence of a transporter. Once the malate is inside the matrix it is converted back to oxaloacetate, which is converted to aspartate and can be transported back outside the mitochondria to allow the cycle to continue. The movement of malate across the membrane transports electrons and is known as the outer ring. The inner ring's primary function is not to move electrons but to regenerate the metabolites. Glycerol phosphate shuttle The transamination of oxaloacetate to aspartate is achieved through the use of glutamate. Glutamate is transported with aspartate via an antiporter, thus as one aspartate leaves the cell, a glutamate enters. Glutamate in the matrix is converted into α-ketoglutarate, which is transported in an antiporter with malate. On the cytoplasmic side α-ketoglutarate is converted back into glutamate when aspartate is converted back to oxaloacetate. Use against cancer Most cancer cells carry mutations that alter metabolic activity and increase glucose metabolism in order to proliferate rapidly. Mutated genes that increase the cell's metabolic activity and turn a normal cell into a tumor cell are called oncogenes. Cancer cells are unlike many other cells. They have few vulnerabilities, but in experiments the inhibition of transamination in the malate shuttle slowed proliferation because glucose metabolism was being slowed. See also Mitochondrial carrier Notes and references Cellular respiration
Mitochondrial shuttle
[ "Chemistry", "Biology" ]
702
[ "Biochemistry", "Cellular respiration", "Metabolism" ]
11,042,546
https://en.wikipedia.org/wiki/Oxygen%20enhancement%20ratio
The oxygen enhancement ratio (OER) or oxygen enhancement effect in radiobiology refers to the enhancement of therapeutic or detrimental effect of ionizing radiation due to the presence of oxygen. This so-called oxygen effect is most notable when cells are exposed to an ionizing radiation dose. The OER is traditionally defined as the ratio of the radiation dose required under hypoxic conditions to the dose required under well-oxygenated conditions to produce the same biological effect. This may give varying numerical values depending on the chosen biological effect. Additionally, OER may be presented in terms of hyperoxic environments and/or with altered oxygen baseline, complicating the significance of this value. The maximum OER depends mainly on the ionizing density or LET of the radiation. Radiation with higher LET and higher relative biological effectiveness (RBE) has a lower OER in mammalian cell tissues. The value of the maximum OER varies from about 1–4. The maximum OER ranges from about 2–4 for low-LET radiations such as X-rays, beta particles and gamma rays, whereas the OER is unity for high-LET radiations such as low energy alpha particles. Uses in medicine The effect is used in medical physics to increase the effect of radiation therapy in oncology treatments. Additional oxygen abundance creates additional free radicals and increases the damage to the target tissue. In solid tumors the inner parts become less oxygenated than normal tissue and up to three times higher dose is needed to achieve the same tumor control probability as in tissue with normal oxygenation. Explanation of the Oxygen Effect The best known explanation of the oxygen effect is the oxygen fixation hypothesis, which postulates that oxygen fixes radical-induced DNA damage so that it becomes permanent. Recently, it has been posited that the oxygen effect involves radiation exposures of cells causing their mitochondria to produce greater amounts of reactive oxygen species. See also Radiation therapy Radiobiology Health physics Hypoxia Oxygen effect References Eric J. Hall and Amato J. Giaccia: Radiobiology for the radiologist, Lippincott Williams & Wilkins, 6th Ed., 2006 Radiation therapy Nuclear medicine Radiobiology
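A worked example of the ratio as defined above, using illustrative dose figures consistent with the "up to three times higher dose" statement but chosen here purely for demonstration:

```latex
% If a hypoxic tumour region needs 6 Gy and well-oxygenated tissue needs
% 2 Gy to produce the same biological effect (assumed values), then
\[
  \mathrm{OER} \;=\; \frac{D_{\text{hypoxic}}}{D_{\text{oxygenated}}}
             \;=\; \frac{6\ \mathrm{Gy}}{2\ \mathrm{Gy}} \;=\; 3
\]
```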
Oxygen enhancement ratio
[ "Chemistry", "Biology" ]
429
[ "Radiobiology", "Radioactivity" ]
11,044,649
https://en.wikipedia.org/wiki/Scoring%20functions%20for%20docking
In the fields of computational chemistry and molecular modelling, scoring functions are mathematical functions used to approximately predict the binding affinity between two molecules after they have been docked. Most commonly one of the molecules is a small organic compound such as a drug and the second is the drug's biological target such as a protein receptor. Scoring functions have also been developed to predict the strength of intermolecular interactions between two proteins or between protein and DNA. Utility Scoring functions are widely used in drug discovery and other molecular modelling applications. These include: Virtual screening of small molecule databases of candidate ligands to identify novel small molecules that bind to a protein target of interest and therefore are useful starting points for drug discovery De novo design (design "from scratch") of novel small molecules that bind to a protein target Lead optimization of screening hits to optimize their affinity and selectivity A potentially more reliable but much more computationally demanding alternative to scoring functions are free energy perturbation calculations. Prerequisites Scoring functions are normally parameterized (or trained) against a data set consisting of experimentally determined binding affinities between molecular species similar to the species that one wishes to predict. For currently used methods aiming to predict affinities of ligands for proteins the following must first be known or predicted: Protein tertiary structure – arrangement of the protein atoms in three-dimensional space. Protein structures may be determined by experimental techniques such as X-ray crystallography or solution phase NMR methods or predicted by homology modelling. Ligand active conformation – three-dimensional shape of the ligand when bound to the protein Binding-mode – orientation of the two binding partners relative to each other in the complex The above information yields the three-dimensional structure of the complex. Based on this structure, the scoring function can then estimate the strength of the association between the two molecules in the complex using one of the methods outlined below. Finally the scoring function itself may be used to help predict both the binding mode and the active conformation of the small molecule in the complex, or alternatively a simpler and computationally faster function may be utilized within the docking run. Classes There are four general classes of scoring functions: Force field – affinities are estimated by summing the strength of intermolecular van der Waals and electrostatic interactions between all atoms of the two molecules in the complex using a force field. The intramolecular energies (also referred to as strain energy) of the two binding partners are also frequently included. Finally since the binding normally takes place in the presence of water, the desolvation energies of the ligand and of the protein are sometimes taken into account using implicit solvation methods such as GBSA or PBSA. Empirical – based on counting the number of various types of interactions between the two binding partners. Counting may be based on the number of ligand and receptor atoms in contact with each other or by calculating the change in solvent accessible surface area (ΔSASA) in the complex compared to the uncomplexed ligand and protein. The coefficients of the scoring function are usually fit using multiple linear regression methods. 
These interaction terms of the function may include, for example: hydrophobic — hydrophobic contacts (favorable), hydrophobic — hydrophilic contacts (unfavorable) (accounts for unmet hydrogen bonds, which are an important enthalpic contribution to binding. One lost hydrogen bond can account for 1–2 orders of magnitude in binding affinity.), number of hydrogen bonds (favorable contribution to affinity, especially if shielded from solvent; if solvent exposed, no contribution), number of rotatable bonds immobilized in complex formation (unfavorable conformational entropy contribution). Knowledge-based – based on statistical observations of intermolecular close contacts in large 3D databases (such as the Cambridge Structural Database or Protein Data Bank) which are used to derive statistical "potentials of mean force". This method is founded on the assumption that close intermolecular interactions between certain types of atoms or functional groups that occur more frequently than one would expect by a random distribution are likely to be energetically favorable and therefore contribute favorably to binding affinity. Machine-learning – Unlike these classical scoring functions, machine-learning scoring functions are characterized by not assuming a predetermined functional form for the relationship between binding affinity and the structural features describing the protein-ligand complex. In this way, the functional form is inferred directly from the data. Machine-learning scoring functions have consistently been found to outperform classical scoring functions at binding affinity prediction of diverse protein-ligand complexes. This has also been the case for target-specific complexes, although the advantage is target-dependent and mainly depends on the volume of relevant data available. When appropriate care is taken, machine-learning scoring functions tend to strongly outperform classical scoring functions at the related problem of structure-based virtual screening. Furthermore, if data specific for the target is available, this performance gap widens. These reviews provide a broader overview on machine-learning scoring functions for structure-based drug design. The choice of decoys for a given target is one of the most important factors for training and testing any scoring function. The first three types, force-field, empirical and knowledge-based, are commonly referred to as classical scoring functions and are characterized by assuming their contributions to binding are linearly combined. Due to this constraint, classical scoring functions are unable to take advantage of large amounts of training data. Refinement Since different scoring functions are relatively co-linear, consensus scoring functions may not improve accuracy significantly. This claim went somewhat against the prevailing view in the field, since previous studies had suggested that consensus scoring was beneficial. A perfect scoring function would be able to predict the binding free energy between the ligand and its target. But in reality, both the computational methods and the computational resources place constraints on this goal. So most often methods are selected that minimize the number of false positive and false negative ligands. In cases where an experimental training set of binding constants and structures is available, a simple method has been developed to refine the scoring function used in molecular docking. References Docking Computational chemistry Cheminformatics Protein structure Bioinformatics
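The empirical class described above lends itself to a compact illustration. The sketch below fits the coefficients of a toy empirical scoring function by multiple linear regression, as outlined in the text; the chosen interaction terms, the tiny training set and all numerical values are hypothetical and only show the shape of the approach, not any published scoring function.

```python
# Minimal sketch of an empirical scoring function, fitted by multiple linear
# regression. The feature choices and the tiny data set are hypothetical; a
# real scoring function is trained on thousands of complexes with measured
# binding affinities.
import numpy as np

# Each row describes one protein-ligand complex:
# [hydrogen bonds, buried hydrophobic SASA (A^2), rotatable bonds immobilized]
features = np.array([
    [3, 120.0, 4],
    [1,  60.0, 2],
    [5, 200.0, 7],
    [2,  90.0, 3],
], dtype=float)

# Experimental binding free energies in kcal/mol (hypothetical values)
dG_exp = np.array([-9.1, -5.4, -11.8, -7.0])

# Append a constant column so the regression also fits an intercept
X = np.hstack([features, np.ones((features.shape[0], 1))])

# Least-squares fit of one coefficient per interaction term plus the intercept
coeffs, *_ = np.linalg.lstsq(X, dG_exp, rcond=None)

def score(hbonds, hydrophobic_sasa, frozen_rotors):
    """Predict a binding free energy from the fitted linear model."""
    x = np.array([hbonds, hydrophobic_sasa, frozen_rotors, 1.0])
    return float(x @ coeffs)

print("fitted coefficients:", coeffs)
print("predicted dG for a new pose:", score(4, 150.0, 5))
```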
Scoring functions for docking
[ "Chemistry", "Engineering", "Biology" ]
1,249
[ "Biological engineering", "Molecular physics", "Bioinformatics", "Theoretical chemistry", "Computational chemistry", "Molecular modelling", "Cheminformatics", "Structural biology", "nan", "Protein structure" ]
11,044,843
https://en.wikipedia.org/wiki/Hayashi%20limit
The Hayashi limit is a theoretical constraint upon the maximum radius of a star for a given mass. When a star is fully within hydrostatic equilibrium—a condition where the inward force of gravity is matched by the outward pressure of the gas—the star cannot exceed the radius defined by the Hayashi limit. This has important implications for the evolution of a star, both during the formative contraction period and later when the star has consumed most of its hydrogen supply through nuclear fusion. A Hertzsprung–Russell diagram displays a plot of a star's surface temperature against the luminosity. On this diagram, the Hayashi limit forms a nearly vertical line at about 3,500 K. The outer layers of low temperature stars are always convective, and models of stellar structure for fully convective stars do not provide a solution to the right of this line. Thus in theory, stars are constrained to remain to the left of this limit during all periods when they are in hydrostatic equilibrium, and the region to the right of the line forms a type of "forbidden zone". Note, however, that there are exceptions to the Hayashi limit. These include collapsing protostars, as well as stars with magnetic fields that interfere with the internal transport of energy through convection. Red giants are stars that have expanded their outer envelope in order to support the nuclear fusion of helium. This moves them up and to the right on the H-R diagram. However, they are constrained by the Hayashi limit not to expand beyond a certain radius. Stars that find themselves across the Hayashi limit have large convection currents in their interior driven by massive temperature gradients. Additionally, such states are unstable, so the stars rapidly adjust, moving in the Hertzsprung–Russell diagram until they reach the Hayashi limit. When lower mass stars on the main sequence start expanding and become red giants, they revisit the Hayashi track. The Hayashi limit constrains the asymptotic giant branch evolution of stars, which is important in the late evolution of stars and can be observed, for example, in the ascending branches of the Hertzsprung–Russell diagrams of globular clusters, which have stars of approximately the same age and composition. The Hayashi limit is named after Chūshirō Hayashi, a Japanese astrophysicist. Despite its importance to protostars and late stage main sequence stars, the Hayashi limit was only recognized in Hayashi's paper in 1961. This late recognition may be because the properties of the Hayashi track required numerical calculations that were not fully developed before. Derivation of the limit We can derive the relation between the luminosity, temperature and pressure for a simple model of a fully convective star, and from the form of this relation we can infer the Hayashi limit. This is an extremely crude model of what occurs in convective stars, but it has good qualitative agreement with the full model with fewer complications. We follow the derivation in Kippenhahn, Weigert, and Weiss in Stellar Structure and Evolution. Nearly all of the interior part of convective stars has an adiabatic stratification (corrections to this are small for fully convective regions), such that ∇ = ∇ad = (d ln T / d ln P)ad, which holds for an adiabatic expansion of an ideal gas. We assume that this relation holds from the interior to the surface of the star—the surface is called the photosphere. We assume ∇ad to be constant throughout the interior of the star, with value 0.4.
However, we obtain the correct distinctive behavior. For the interior we consider a simple polytropic relation between P and T: P = C·T^(1+n), with the index n = 3/2 (so that d ln T / d ln P = 1/(1+n) = 0.4, matching the adiabatic stratification above). We assume the relation above to hold until the photosphere, where we assume a simple power-law absorption (opacity) law of the form κ = κ0·P^a·T^b. Then, we use the hydrostatic equilibrium equation and integrate it with respect to the radius, which gives the photospheric pressure P0 ≈ (2/3)·GM/(κ·R^2). For the solution in the interior we set P = P0 and T = Teff at the photosphere in the P–T relation and then eliminate the pressure from this equation. Luminosity is given by the Stefan–Boltzmann law applied to a perfect black body: L = 4πR^2·σ·Teff^4. Thus, any value of R corresponds to a certain point in the Hertzsprung–Russell diagram. Finally, after some algebra this yields the equation for the Hayashi limit in the Hertzsprung–Russell diagram: log Teff = A·log L + B·log M + constant, with coefficients A and B set by the opacity exponents a and b and the polytropic index n. Takeaways from plugging in values of a and b for a cool, hydrogen-ion-dominated atmosphere opacity model: The Hayashi limit must be far to the right in the Hertzsprung–Russell diagram, which means temperatures have to be low. The Hayashi limit must be very steep; the gradient of luminosity with respect to temperature has to be large. The Hayashi limit shifts slightly to the left in the Hertzsprung–Russell diagram for increasing M. These predictions are supported by numerical simulations of stars. What happens when stars cross the limit Until now we have made no claims on the stability of locations to the left of, to the right of, or at the Hayashi limit in the Hertzsprung–Russell diagram. To the left of the Hayashi limit, we have ∇ < ∇ad in some region and some part of the model is radiative. The model is fully convective at the Hayashi limit, with ∇ = ∇ad. Models to the right of the Hayashi limit would have ∇ > ∇ad. If a star is formed such that some region in its deep interior has ∇ > ∇ad, it develops large convective fluxes with large velocities. The convective fluxes of energy cool down the interior rapidly until ∇ = ∇ad and the star has moved to the Hayashi limit. In fact, it can be shown from the mixing length model that even a small excess ∇ − ∇ad can transport energy from the deep interior to the surface by convective fluxes. This will happen within the short timescale for the adjustment of convection, which is still larger than timescales for non-equilibrium processes in the star such as the hydrodynamic adjustment associated with the thermal time scale. Hence, the limit between an "allowed" stable region (left) and a "forbidden" unstable region (right) for stars of given M and composition that are in hydrostatic equilibrium and have fully adjusted convection is the Hayashi limit. See also Eddington limit References Concepts in astrophysics Stellar evolution
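As a quick numerical companion to the Stefan–Boltzmann step above, the sketch below evaluates L = 4πR²σTeff⁴ for a star held at an effective temperature near the Hayashi line; the temperature is the approximate 3,500 K quoted in the text and the radii are purely illustrative.

```python
# Minimal numeric sketch: the Stefan-Boltzmann relation L = 4*pi*R^2*sigma*Teff^4
# links luminosity, radius and effective temperature. For a star pinned near the
# Hayashi limit (Teff ~ 3500 K, as stated in the text), a larger radius simply
# moves the star vertically in the HR diagram. The radii below are illustrative.
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
R_SUN = 6.957e8          # solar radius, m

def luminosity(radius_m, t_eff):
    """Luminosity of a black body of the given radius and effective temperature."""
    return 4.0 * math.pi * radius_m**2 * SIGMA * t_eff**4

t_hayashi = 3500.0       # K, approximate location of the Hayashi line
for r_solar in (1, 10, 100):
    L = luminosity(r_solar * R_SUN, t_hayashi)
    print(f"R = {r_solar:>3} R_sun  ->  L = {L / L_SUN:8.1f} L_sun")
```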
Hayashi limit
[ "Physics" ]
1,276
[ "Concepts in astrophysics", "Astrophysics", "Stellar evolution" ]
3,100,521
https://en.wikipedia.org/wiki/Radiation%20chemistry
Radiation chemistry is a subdivision of nuclear chemistry which studies the chemical effects of ionizing radiation on matter. This is quite different from radiochemistry, as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide. Radiation interactions with matter As ionizing radiation moves through matter its energy is deposited through interactions with the electrons of the absorber. The result of an interaction between the radiation and the absorbing species is removal of an electron from an atom or molecular bond to form radicals and excited species. The radical species then proceed to react with each other or with other molecules in their vicinity. It is the reactions of the radical species that are responsible for the changes observed following irradiation of a chemical system. Charged radiation species (α and β particles) interact through Coulombic forces between the charges of the electrons in the absorbing medium and the charged radiation particle. These interactions occur continuously along the path of the incident particle until the kinetic energy of the particle is sufficiently depleted. Uncharged species (γ photons, x-rays) undergo a single event per photon, totally consuming the energy of the photon and leading to the ejection of an electron from a single atom. Electrons with sufficient energy proceed to interact with the absorbing medium identically to β radiation. An important factor that distinguishes different radiation types from one another is the linear energy transfer (LET), which is the rate at which the radiation loses energy with distance traveled through the absorber. Low LET species are usually low mass, either photons or electron mass species (β particles, positrons) and interact sparsely along their path through the absorber, leading to isolated regions of reactive radical species. High LET species are usually greater in mass than one electron, for example α particles, and lose energy rapidly resulting in a cluster of ionization events in close proximity to one another. Consequently, the heavy particle travels a relatively short distance from its origin. Areas containing a high concentration of reactive species following absorption of energy from radiation are referred to as spurs. In a medium irradiated with low LET radiation, the spurs are sparsely distributed across the track and are unable to interact. For high LET radiation, the spurs can overlap, allowing for inter-spur reactions, leading to different yields of products when compared to the same medium irradiated with the same energy of low LET radiation. Reduction of organics by solvated electrons A recent area of work has been the destruction of toxic organic compounds by irradiation; after irradiation, "dioxins" (polychlorodibenzo-p-dioxins) are dechlorinated in the same way as PCBs can be converted to biphenyl and inorganic chloride. This is because the solvated electrons react with the organic compound to form a radical anion, which decomposes by the loss of a chloride anion. If a deoxygenated mixture of PCBs in isopropanol or mineral oil is irradiated with gamma rays, then the PCBs will be dechlorinated to form inorganic chloride and biphenyl. The reaction works best in isopropanol if potassium hydroxide (caustic potash) is added. 
The base deprotonates the hydroxydimethylmethyl radical, which is converted into acetone and a solvated electron; as a result, the G value (yield for a given energy of radiation deposited in the system) of chloride can be increased because the radiation now starts a chain reaction: each solvated electron formed by the action of the gamma rays can now convert more than one PCB molecule. If oxygen, acetone, nitrous oxide, sulfur hexafluoride or nitrobenzene is present in the mixture, then the reaction rate is reduced. This work has been done recently in the US, often with used nuclear fuel as the radiation source. In addition to the work on the destruction of aryl chlorides, it has been shown that aliphatic chlorine and bromine compounds such as perchloroethylene, Freon (1,1,2-trichloro-1,2,2-trifluoroethane) and halon-2402 (1,2-dibromo-1,1,2,2-tetrafluoroethane) can be dehalogenated by the action of radiation on alkaline isopropanol solutions. Again a chain reaction has been reported. In addition to the work on the reduction of organic compounds by irradiation, some work on the radiation-induced oxidation of organic compounds has been reported. For instance, the use of radiogenic hydrogen peroxide (formed by irradiation) to remove sulfur from coal has been reported. In this study it was found that the addition of manganese dioxide to the coal increased the rate of sulfur removal. The degradation of nitrobenzene under both reducing and oxidizing conditions in water has been reported. Reduction of metal compounds In addition to the reduction of organic compounds by solvated electrons, it has been reported that upon irradiation a pertechnetate solution at pH 4.1 is converted to a colloid of technetium dioxide. Irradiation of a solution at pH 1.8 forms soluble Tc(IV) complexes. Irradiation of a solution at pH 2.7 forms a mixture of the colloid and the soluble Tc(IV) compounds. Gamma irradiation has been used in the synthesis of nanoparticles of gold on iron oxide (Fe2O3). It has been shown that the irradiation of aqueous solutions of lead compounds leads to the formation of elemental lead. When an inorganic solid such as bentonite and sodium formate are present, the lead is removed from the aqueous solution. Polymer modification Another key area uses radiation chemistry to modify polymers. Using radiation, it is possible to convert monomers to polymers, to crosslink polymers, and to break polymer chains. Both man-made and natural polymers (such as carbohydrates) can be processed in this way. Water chemistry Both the harmful effects of radiation upon biological systems (induction of cancer and acute radiation injuries) and the useful effects of radiotherapy involve the radiation chemistry of water. The vast majority of biological molecules are present in an aqueous medium; when water is exposed to radiation, the water absorbs energy, and as a result forms chemically reactive species that can interact with dissolved substances (solutes). Water is ionized to form a solvated electron and H2O+; the H2O+ cation can react with water to form a hydrated proton (H3O+) and a hydroxyl radical (HO.). Furthermore, the solvated electron can recombine with the H2O+ cation to form an excited state of the water. This excited state then decomposes to species such as hydroxyl radicals (HO.), hydrogen atoms (H.) and oxygen atoms (O.). Finally, the solvated electron can react with solutes such as solvated protons or oxygen molecules to form hydrogen atoms and dioxygen radical anions, respectively.
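The radiolytic yields of species like those above are normally quoted as G values, introduced earlier as the yield per unit of energy deposited. The sketch below shows the bookkeeping for converting a G value and an absorbed dose into a product concentration; the G value, the dose and the assumption of unit density are illustrative, not measured values for any system in the text.

```python
# Minimal sketch converting a radiolytic G value (molecules formed per 100 eV of
# absorbed energy) into a product concentration for a given dose. The numbers
# used in the example call are illustrative.

EV_PER_JOULE = 6.241509e18   # eV per joule
AVOGADRO = 6.02214076e23     # molecules per mole

def concentration_from_g(g_per_100ev, dose_gy, density_kg_per_l=1.0):
    """Return product concentration in mol/L for a given G value and dose.

    dose_gy: absorbed dose in gray (J per kg of absorber).
    density_kg_per_l: density of the absorbing medium (1.0 for dilute aqueous systems).
    """
    molecules_per_joule = g_per_100ev * EV_PER_JOULE / 100.0
    mol_per_joule = molecules_per_joule / AVOGADRO
    # dose [J/kg] * density [kg/L] = energy absorbed per litre of medium
    return mol_per_joule * dose_gy * density_kg_per_l

# Example: G = 3 molecules per 100 eV and a 1 kGy absorbed dose in water
print(concentration_from_g(3.0, 1000.0))   # ~3.1e-4 mol/L
```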
The fact that oxygen changes the radiation chemistry might be one reason why oxygenated tissues are more sensitive to irradiation than the deoxygenated tissue at the center of a tumor. The free radicals, such as the hydroxyl radical, chemically modify biomolecules such as DNA, leading to damage such as breaks in the DNA strands. Some substances can protect against radiation-induced damage by reacting with the reactive species generated by the irradiation of the water. It is important to note that the reactive species generated by the radiation can take part in subsequent reactions; this is similar to the idea of the non-electrochemical reactions which follow the electrochemical event which is observed in cyclic voltammetry when a non-reversible event occurs. For example, the SF5 radical formed by the reaction of solvated electrons and SF6 undergoes further reactions which lead to the formation of hydrogen fluoride and sulfuric acid. In water, the dimerization reaction of hydroxyl radicals can form hydrogen peroxide, while in saline systems the reaction of the hydroxyl radicals with chloride anions forms hypochlorite anions. The action of radiation upon underground water is responsible for the formation of hydrogen which is converted by bacteria into methane. Equipment Radiation chemistry applied in industrial processing equipment To process materials, either a gamma source or an electron beam can be used. The international type IV (wet storage) irradiator is a common design, of which the JS6300 and JS6500 gamma sterilizers (made by 'Nordion International', which used to trade as 'Atomic Energy of Canada Ltd') are typical examples. In these irradiation plants, the source is stored in a deep well filled with water when not in use. When the source is required, it is moved by a steel wire to the irradiation room where the products which are to be treated are present; these objects are placed inside boxes which are moved through the room by an automatic mechanism. By moving the boxes from one point to another, the contents are given a uniform dose. After treatment, the product is moved by the automatic mechanism out of the room. The irradiation room has very thick concrete walls (about 3 m thick) to prevent gamma rays from escaping. The source consists of 60Co rods sealed within two layers of stainless steel. The rods are combined with inert dummy rods to form a rack with a total activity of about 12.6 PBq (340 kCi). Research equipment While it is possible to do some types of research using an irradiator much like that used for gamma sterilization, it is common in some areas of science to use a time-resolved experiment where a material is subjected to a pulse of radiation (normally electrons from a LINAC). After the pulse of radiation, the concentrations of different substances within the material are measured by emission spectroscopy or absorption spectroscopy, hence the rates of reactions can be determined. This allows the relative abilities of substances to react with the reactive species generated by the action of radiation on the solvent (commonly water) to be measured. This experiment is known as pulse radiolysis, which is closely related to flash photolysis. In the latter experiment the sample is excited by a pulse of light to examine the decay of the excited states by spectroscopy; sometimes the formation of new compounds can be investigated. Flash photolysis experiments have led to a better understanding of the effects of halogen-containing compounds upon the ozone layer.
Chemosensor The SAW chemosensor is nonionic and nonspecific. It directly measures the total mass of each chemical compound as it exits the gas chromatography column and condenses on the crystal surface, thus causing a change in the fundamental acoustic frequency of the crystal. Odor concentration is directly measured with this integrating type of detector. Column flux is obtained from a microprocessor that continuously calculates the derivative of the SAW frequency. See also Radiolysis Milton Burton References Nuclear chemistry
Radiation chemistry
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,324
[ "Physical phenomena", "Nuclear chemistry", "Materials science", "Radiation", "Condensed matter physics", "nan", "Nuclear physics", "Radiation effects" ]
3,101,703
https://en.wikipedia.org/wiki/Diethyl%20sulfoxide
Diethyl sulfoxide, C4H10OS, is a sulfur-containing organic compound. References Sulfoxides
Diethyl sulfoxide
[ "Chemistry" ]
27
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
3,104,065
https://en.wikipedia.org/wiki/Ring-closing%20metathesis
Ring-closing metathesis (RCM) is a widely used variation of olefin metathesis in organic chemistry for the synthesis of various unsaturated rings via the intramolecular metathesis of two terminal alkenes, which forms the cycloalkene as the E- or Z- isomers and volatile ethylene. The most commonly synthesized ring sizes are between 5-7 atoms; however, reported syntheses include 45- up to 90- membered macroheterocycles. These reactions are metal-catalyzed and proceed through a metallacyclobutane intermediate. It was first published by Didier Villemin in 1980 describing the synthesis of an Exaltolide precursor, and later became popularized by Robert H. Grubbs and Richard R. Schrock, who shared the Nobel Prize in Chemistry, along with Yves Chauvin, in 2005 for their combined work in olefin metathesis. RCM is a favorite among organic chemists due to its synthetic utility in the formation of rings, which were previously difficult to access efficiently, and broad substrate scope. Since the only major by-product is ethylene, these reactions may also be considered atom economic, an increasingly important concern in the development of green chemistry. There are several reviews published on ring-closing metathesis. History The first example of ring-closing metathesis was reported by Didier Villemin in 1980 when he synthesized an Exaltolide precursor using a WCl6/Me4Sn catalyzed metathesis cyclization in 60-65% yield depending on ring size (A). In the following months, Jiro Tsuji reported a similar metathesis reaction describing the preparation of a macrolide catalyzed by WCl6 and dimethyltitanocene (Cp2TiMe2) in a modest 17.9% yield (B). Tsuji describes the olefin metathesis reaction as “…potentially useful in organic synthesis” and addresses the need for the development of a more versatile catalyst to tolerate various functional groups. In 1987, Siegfried Warwel and Hans Kaitker published a synthesis of symmetric macrocycles through a cross-metathesis dimerization of starting cycloolefins to afford C14, C18, and C20 dienes in 58-74% yield, as well as C16 in 30% yield, using Re2O7 on Al2O3 and Me4Sn for catalyst activation. A decade after its initial discovery, Grubbs and Fu published two influential reports in 1992 detailing the synthesis of O- and N- heterocycles via RCM utilizing Schrock’s molybdenum alkylidene catalysts, which had proven more robust and functional group tolerant than the tungsten chloride catalysts. The synthetic route allowed access to dihydropyrans in high yield (89-93%) from readily available starting materials. In addition, the synthesis of substituted pyrrolines, tetrahydropyridines, and amides was illustrated in modest to high yield (73-89%). The driving force for the cyclization reaction was attributed to entropic favorability by forming two molecules per one molecule of starting material. The loss of the second molecule, ethylene, a highly volatile gas, drives the reaction in the forward direction according to Le Châtelier's principle. In 1993, Grubbs and others not only published a report on carbocycle synthesis using a molybdenum catalyst, but also detailed the initial use of a novel ruthenium carbene complex for metathesis reactions, which later became a popular catalyst due to its extraordinary utility. The ruthenium catalysts are not sensitive to air and moisture, unlike the molybdenum catalysts.
The ruthenium catalysts, better known as the Grubbs catalysts, as well as molybdenum catalysts, or Schrock's catalysts, are still used today for many metathesis reactions, including RCM. Overall, it was shown that metal-catalyzed RCM reactions were very effective in C-C bond forming reactions, and would prove of great importance in organic synthesis, chemical biology, materials science, and various other fields to access a wide variety of unsaturated and highly functionalized cyclic analogues. Mechanism General mechanism The mechanism for transition metal-catalyzed olefin metathesis has been widely researched over the past forty years. RCM proceeds through a similar mechanistic pathway to other olefin metathesis reactions, such as cross metathesis (CM), ring-opening metathesis polymerization (ROMP), and acyclic diene metathesis (ADMET). Since all steps in the catalytic cycle are considered reversible, it is possible for some of these other pathways to intersect with RCM depending on the reaction conditions and substrates. In 1971, Chauvin proposed the formation of a metallacyclobutane intermediate through a [2+2] cycloaddition which then cycloeliminates to either yield the same alkene and catalytic species (a nonproductive pathway), or produce a new catalytic species and an alkylidene (a productive pathway). This mechanism has become widely accepted among chemists and serves as the model for the RCM mechanism. Initiation occurs through substitution of the catalyst’s alkene ligand with substrate. This process occurs via formation of a new alkylidene through one round of [2+2] cycloaddition and cycloelimination. Association and dissociation of a phosphine ligand also occurs in the case of Grubbs catalysts. In an RCM reaction, the alkylidene undergoes an intramolecular [2+2] cycloaddition with the second reactive terminal alkene on the same molecule, rather than an intermolecular addition of a second molecule of starting material, a common competing side reaction which may lead to polymerization. Cycloelimination of the metallacyclobutane intermediate forms the desired RCM product along with a [M]=CH2, or alkylidene, species which reenters the catalytic cycle. While the loss of volatile ethylene is a driving force for RCM, it is also generated by competing metathesis reactions and therefore cannot be considered the only driving force of the reaction. Thermodynamics The reaction can be under kinetic or thermodynamic control depending on the exact reaction conditions, catalyst, and substrate. Common rings, 5- through 7-membered cycloalkenes, have a high tendency for formation and are often under greater thermodynamic control due to the enthalpic favorability of the cyclic products, as shown by Illuminati and Mandolini on the formation of lactone rings. Smaller rings, between 5 and 8 atoms, are more thermodynamically favored over medium to large rings due to lower ring strain. Ring strain arises from abnormal bond angles resulting in a higher heat of combustion relative to the linear counterpart. If the RCM product contains a strained olefin, polymerization becomes preferable through ring-opening metathesis polymerization of the newly formed olefin. Medium rings in particular have greater ring strain, in part due to greater transannular interactions from opposing sides of the ring, but also to the inability to orient the molecule in such a way as to prevent penalizing gauche interactions.
RCM may be considered to have a kinetic bias if the products cannot reenter the catalytic cycle or interconvert through an equilibrium. A kinetic product distribution could lead to mostly RCM products or may lead to oligomers and polymers, which are most often disfavored. Equilibrium With the advent of more reactive catalysts, equilibrium RCM is observed quite often which may lead to a greater product distribution. The mechanism can be expanded to include the various competing equilibrium reactions as well as indicate where various side-products are formed along the reaction pathway, such as oligomers. Although the reaction is still under thermodynamic control, an initial kinetic product, which may be dimerization or oligomerization of the starting material, is formed at the onset of the reaction as a result of higher catalyst reactivity. Increased catalyst activity also allows for the olefin products to reenter the catalytic cycle via non-terminal alkene addition onto the catalyst. Due to additional reactivity in strained olefins, an equilibrium distribution of products is observed; however, this equilibrium can be perturbed through a variety of techniques to overturn the product ratios in favor of the desired RCM product. Since the probability for reactive groups on the same molecule to encounter each other is inversely proportional to the ring size, the necessary intramolecular cycloaddition becomes increasingly difficult as ring size increases. This relationship means that the RCM of large rings is often performed under high dilution (0.05 - 100 mM) (A) to reduce intermolecular reactions; while the RCM of common rings can be performed at greater concentrations, even neat in rare cases. The equilibrium reaction can be driven to the desired thermodynamic products by increasing temperature (B), to decrease viscosity of the reaction mixture and therefore increase thermal motion, as well as increasing or decreasing reaction time (C). Catalyst choice (D) has also been shown to be critical in controlling product formation. A few of the catalysts commonly used in ring-closing metathesis are shown below. Reaction scope Alkene substrate Ring-closing Metathesis has shown utility in the synthesis of 5-30 membered rings, polycycles, and heterocycles containing atoms such as N, O, S, P, and even Si. Due to the functional group tolerance of modern RCM reactions, the synthesis of structurally complex compounds containing a range of functional groups such as epoxides, ketones, alcohols, ethers, amines, amides, and many others can be achieved more easily than previous methods. Oxygen and nitrogen heterocycles dominate due to their abundance in natural products and pharmaceuticals. Some examples are shown below (the red alkene indicates C-C bond formed through RCM). In addition to terminal alkenes, tri- and tetrasubstituted alkenes have been used in RCM reactions to afford substituted cyclic olefin products. Ring-closing metathesis has also been used to cyclize rings containing an alkyne to produce a new terminal alkene, or even undergo a second cyclization to form bicycles. This type of reaction is more formally known as enyne ring-closing metathesis. E/Z selectivity In RCM reactions, two possible geometric isomers, either E- or Z-isomer, may be formed. Stereoselectivity is dependent on the catalyst, ring strain, and starting diene. In smaller rings, Z-isomers predominate as the more stable product reflecting ring-strain minimization. 
In macrocycles, the E-isomer is often obtained as a result of the thermodynamic bias in RCM reactions as E-isomers are more stable compared to Z-isomers. As a general trend, ruthenium NHC (N-heterocyclic carbene) catalysts favor E selectivity to form the trans isomer. This is in part due to the steric clash between the substituents, which adopt a trans configuration as the most stable conformation in the metallacyclobutane intermediate, to form the E-isomer. The synthesis of stereopure Z-isomers was previously achieved via ring-closing alkyne metathesis. However, in 2013 Grubbs reported the use of a chelating ruthenium catalyst to afford Z macrocycles with high selectivity. The selectivity is attributed to the increased steric clash between the catalyst ligands and the metallacyclobutane intermediate that is formed. The increased steric interactions in the transition state lead to the Z olefin rather than the E olefin, because the transition state required to form the E-isomer is highly disfavored. Cocatalyst Additives are also used to overturn conformational preferences, increase reaction concentration, and chelate highly polar groups, such as esters or amides, which can bind to the catalyst. Titanium isopropoxide (Ti(OiPr)4) is commonly used to chelate polar groups to prevent catalyst poisoning; in the case of an ester, the titanium Lewis acid binds the carbonyl oxygen. Once the oxygen is chelated with the titanium it can no longer bind to the ruthenium metal of the catalyst, which would result in catalyst deactivation. This also allows the reaction to be run at a higher effective concentration without dimerization of starting material. Another classic example is the use of a bulky Lewis acid to form the E-isomer of an ester over the preferred Z-isomer for cyclolactonization of medium rings. In one study, aluminum tris(2,6-diphenylphenoxide) (ATPH) was added to form a 7-membered lactone. The aluminum binds with the carbonyl oxygen forcing the bulky diphenylphenoxide groups in close proximity to the ester compound. As a result, the ester adopts the E-isomer to minimize penalizing steric interactions. Without the Lewis acid, only the 14-membered dimer ring was observed. By orienting the molecule in such a way that the two reactive alkenes are in close proximity, the risk of intermolecular cross-metathesis is minimized. Limitations Many metathesis reactions with ruthenium catalysts are hampered by unwanted isomerization of the newly formed double bond, and it is believed that ruthenium hydrides that form as a side reaction are responsible. In one study it was found that isomerization is suppressed in the RCM reaction of diallyl ether with specific additives capable of removing these hydrides. Without an additive, the reaction product is 2,3-dihydrofuran (2,3-DHF) and not the expected 2,5-dihydrofuran (2,5-DHF) together with the formation of ethylene gas. Radical scavengers, such as TEMPO or phenol, do not suppress isomerization; however, additives such as 1,4-benzoquinone or acetic acid successfully prevent unwanted isomerization. Both additives are able to oxidize the ruthenium hydrides, which may explain their behavior. Another common problem associated with RCM is the risk of catalyst degradation due to the high dilution required for some cyclizations. High dilution is also a limiting factor in industrial applications due to the large amount of waste generated from large-scale reactions at a low concentration.
Efforts have been made to increase reaction concentration without compromising selectivity. Synthetic applications Ring-closing metathesis has been used historically in numerous organic syntheses and continues to be used today in the synthesis of a variety of compounds. The following examples are only representative of the broad utility of RCM, as there are numerous possibilities. For additional examples see the many review articles. Ring-closing metathesis is important in total synthesis. One example is its use in the formation of the 12-membered ring in the synthesis of the naturally occurring cyclophane floresolide. Floresolide B was isolated from an ascidian of the genus Aplidium and showed cytotoxicity against KB tumor cells. In 2005, K. C. Nicolaou and others completed a synthesis of both isomers through late-stage ring-closing metathesis using the 2nd Generation Grubbs catalyst to afford a mixture of E- and Z- isomers (1:3 E/Z) in 89% yield. Although one prochiral center is present, the product is racemic. Floresolide is an atropisomer as the new ring forms (due to steric constraints in the transition state) passing through the front of the carbonyl group and not the back. The carbonyl group then locks the ring permanently in place. The E/Z isomers were then separated and the phenol nitrobenzoate protective group was removed in the final step by potassium carbonate to yield the final product and the unnatural Z-isomer. In 1995, Robert Grubbs and others highlighted the stereoselectivity possible with RCM. The group synthesized a diene with an internal hydrogen bond forming a β-turn. The hydrogen bond stabilized the macrocycle precursor, placing both dienes in close proximity, primed for metathesis. After subjecting a mixture of diastereomers to the reaction conditions, only one diastereomer of the olefin β-turn was obtained. The experiment was then repeated with (S,S,S) and (R,S,R) peptides. Only the (S,S,S) diastereomer was reactive, illustrating the configuration needed for ring-closing to be possible. The olefin product’s absolute configuration mimics that of Balaram’s disulfide peptide. The ring strain in 8-11 atom rings has proven to be challenging for RCM; however, there are many cases where these cyclic systems have been synthesized. In 1997, Fürstner reported a facile synthesis to access jasmine ketolactone (E/Z) through a final RCM step. At the time, no previous 10-membered ring had been formed through RCM, and previous syntheses were often lengthy, involving a macrolactonization to form the decanolide. By adding the diene and catalyst over a 12-hour period to refluxing toluene, Fürstner was able to avoid oligomerization and obtain both E/Z isomers in 88% yield. CH2Cl2 favored the formation of the Z-isomer in a 1:2.5 (E/Z) ratio, whereas toluene only afforded a 1:1.4 (E/Z) mixture. In 2000, Alois Fürstner reported an eight-step synthesis to access (−)-balanol using RCM to form a 7-membered heterocycle intermediate. Balanol is a metabolite isolated from Verticillium balanoides and shows inhibitory action towards protein kinase C (PKC). In the ring-closing metathesis step, a ruthenium indenylidene complex was used as the precatalyst to afford the desired 7-membered ring in 87% yield. In 2002, Stephen F. Martin and others reported the 24-step synthesis of manzamine A with two ring-closing metathesis steps to access the polycyclic alkaloid. The natural product was isolated from marine sponges off the coast of Okinawa.
Manzamine is a good target due to its potential as an antitumor compound. The first RCM step was to form the 13-membered D ring as solely the Z-isomer in 67% yield, a unique contrast to the usually favored E-isomer of metathesis. After further transformations, the second RCM was used to form the 8-membered E ring in 26% yield using stoichiometric 1st Generation Grubbs catalyst. The synthesis highlights the functional group tolerance of metathesis reactions as well as the ability to access complex molecules of varying ring sizes. In 2003, Danishefsky and others reported the total synthesis of (+)-migrastatin, a macrolide isolated from Streptomyces which inhibited tumor cell migration. The macrolide contains a 14-membered heterocycle that was formed through RCM. The metathesis reaction yielded the protected migrastatin in 70% yield as only the (E,E,Z) isomer. It is reported that this selectivity arises from the preference for the ruthenium catalyst to add to the less hindered olefin first and then cyclize to the most accessible olefin. The final deprotection of the silyl ether yielded (+)-migrastatin. Overall, ring-closing metathesis is a highly useful reaction to readily obtain cyclic compounds of varying size and chemical makeup; however, it does have some limitations such as high dilution, selectivity, and unwanted isomerization. See also Olefin Metathesis Ring-opening metathesis polymerization Alkane metathesis Alkyne metathesis Enyne metathesis References External links Ring-Closing Metathesis at organic-chemistry.org Sigma-Aldrich Ring-Closing Metathesis at sigmaaldrich.com The Olefin Metathesis Reaction Andrew Myers’ Group Notes Rearrangement reactions Organometallic chemistry Carbon-carbon bond forming reactions Homogeneous catalysis
Ring-closing metathesis
[ "Chemistry" ]
4,335
[ "Catalysis", "Carbon-carbon bond forming reactions", "Rearrangement reactions", "Organic reactions", "Homogeneous catalysis", "Organometallic chemistry", "Ring forming reactions" ]
3,104,166
https://en.wikipedia.org/wiki/3-Phosphoglyceric%20acid
3-Phosphoglyceric acid (3PG, 3-PGA, or PGA) is the conjugate acid of 3-phosphoglycerate or glycerate 3-phosphate (GP or G3P). This glycerate is a biochemically significant metabolic intermediate in both glycolysis and the Calvin-Benson cycle. The anion is often termed PGA when referring to the Calvin-Benson cycle. In the Calvin-Benson cycle, 3-phosphoglycerate is typically the product of the spontaneous scission of an unstable 6-carbon intermediate formed upon CO2 fixation. Thus, two equivalents of 3-phosphoglycerate are produced for each molecule of CO2 that is fixed. In glycolysis, 3-phosphoglycerate is an intermediate following the dephosphorylation of 1,3-bisphosphoglycerate. Glycolysis In the glycolytic pathway, 1,3-bisphosphoglycerate is dephosphorylated to form 3-phosphoglyceric acid in a coupled reaction producing two ATP via substrate-level phosphorylation. The single phosphate group left on the 3-PGA molecule then moves from an end carbon to a central carbon, producing 2-phosphoglycerate. This phosphate group relocation is catalyzed by phosphoglycerate mutase, an enzyme that also catalyzes the reverse reaction. Calvin-Benson cycle In the light-independent reactions (also known as the Calvin-Benson cycle), two 3-phosphoglycerate molecules are synthesized. RuBP, a 5-carbon sugar, undergoes carbon fixation, catalyzed by the rubisco enzyme, to become an unstable 6-carbon intermediate. This intermediate is then cleaved into two separate 3-carbon molecules of 3-PGA. One of the resultant 3-PGA molecules continues through the Calvin-Benson cycle to be regenerated into RuBP while the other is reduced to form one molecule of glyceraldehyde 3-phosphate (G3P) in two steps: the phosphorylation of 3-PGA into 1,3-bisphosphoglyceric acid via the enzyme phosphoglycerate kinase (the reverse of the reaction seen in glycolysis) and the subsequent catalysis by glyceraldehyde 3-phosphate dehydrogenase into G3P. G3P eventually reacts to form sugars such as glucose or fructose or more complex starches. Amino acid synthesis Glycerate 3-phosphate (formed from 3-phosphoglycerate) is also a precursor for serine, which, in turn, can create cysteine and glycine through the homocysteine cycle. Measurement 3-phosphoglycerate can be separated and measured using paper chromatography as well as with column chromatography and other chromatographic separation methods. It can be identified using both gas-chromatography and liquid-chromatography mass spectrometry and has been optimized for evaluation using tandem MS techniques. See also 2-Phosphoglyceric acid Calvin-Benson cycle Photosynthesis Ribulose 1,5-bisphosphate References Carboxylate anions Organophosphates Photosynthesis Glycolysis Metabolic intermediates Biomolecules
3-Phosphoglyceric acid
[ "Chemistry", "Biology" ]
751
[ "Carbohydrate metabolism", "Natural products", "Biochemistry", "Glycolysis", "Photosynthesis", "Organic compounds", "Metabolic intermediates", "Biomolecules", "Molecular biology", "Structural biology", "Metabolism" ]
3,104,807
https://en.wikipedia.org/wiki/Temporal%20mean
The temporal mean is the arithmetic mean of a series of values over a time period. Assuming equidistant measuring or sampling times, it can be computed as the sum of the values over a period divided by the number of values. A simple moving average can be considered to be a sequence of temporal means over periods of equal duration. (If the time variable is continuous, the average value during the time period is the integral over the period divided by the length of the duration of the period.) See also Moving average References Means
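A short sketch of the two quantities described above, the temporal mean and the simple moving average built from it; the sample series is made up for illustration.

```python
# Minimal sketch of the temporal mean and a simple moving average for an
# equidistantly sampled series, as described above. The sample data is made up.

def temporal_mean(values):
    """Arithmetic mean of a series of equally spaced samples."""
    return sum(values) / len(values)

def moving_average(values, window):
    """Sequence of temporal means over consecutive periods of equal duration."""
    return [temporal_mean(values[i:i + window])
            for i in range(len(values) - window + 1)]

samples = [2.0, 4.0, 6.0, 8.0, 10.0, 8.0]
print(temporal_mean(samples))        # 6.33...
print(moving_average(samples, 3))    # [4.0, 6.0, 8.0, 8.67]
```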
Temporal mean
[ "Physics", "Mathematics" ]
109
[ "Means", "Mathematical analysis", "Point (geometry)", "Geometric centers", "Symmetry" ]
3,105,253
https://en.wikipedia.org/wiki/Water%20transportation
Water transportation is the intentional movement of water over large distances. Methods of transportation fall into three categories: Aqueducts, which include pipelines, canals, tunnels and bridges Container shipment, which includes transport by tank truck, tank car, and tank ship. Towing, where a tugboat is used to pull an iceberg or a large water bag along behind it. Due to its weight, the transportation of water is very energy-intensive. Unless it has the assistance of gravity, a canal or long-distance pipeline will need pumping stations at regular intervals. In this regard, the lower friction levels of the canal make it a more economical solution than the pipeline. Water transportation is also very common in rivers and oceans. Major water transportation projects The Grand Canal of China, completed in the 7th century AD and measuring . The California Aqueduct, near Sacramento, is long. The Great Manmade River is a vast underground network of pipes in the Sahara desert, transporting water from an immense aquifer to the largest cities in the region. The Keita Integrated Development Project used specially created plows called the donaldo and Scarabeo to build water catchments. In these catchments, trees were planted which grow on the water flowing through the ditches. The Kimberley Water Source Project is currently under way in Australia to determine the best method of transporting water from the Fitzroy River to the city of Perth. Options being considered include a 3,700-kilometre canal, a pipeline of at least 1,800 kilometres, tankers of 300,000 to 500,000 tonnes, and water bags each carrying between 0.5 and 1.5 gigalitres. The Goldfields Pipeline, built in Western Australia in 1903, was the longest pipeline of its day, at 597 kilometres. It supplies water from Perth to the gold mining centre of Kalgoorlie. Manual water transportation Historically, water was transported by hand in dry countries, by traditional waterers such as the Sakkas of Arabia and Bhishti of India. Africa is another area where water is often transported by hand, especially in rural areas. See also Pipeline#Water Water export Water management Water supply References Water supply
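The claim above that moving water is energy-intensive without the help of gravity can be made concrete with the hydraulic lifting energy, E = ρgh per unit volume, before any friction losses. The sketch below uses an illustrative lift and an assumed pump efficiency; the numbers do not refer to any project named in the text.

```python
# Rough sketch of why pumping water is energy-intensive: the minimum energy to
# lift water is E = rho * g * h per unit volume, before friction losses.
# The total lift and the pump efficiency below are illustrative assumptions.

RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def pumping_energy_kwh_per_m3(total_lift_m, pump_efficiency=0.75):
    """Energy in kWh needed to raise one cubic metre of water by total_lift_m."""
    joules = RHO * G * total_lift_m / pump_efficiency
    return joules / 3.6e6   # convert J to kWh

# Lifting water a cumulative 400 m along a long pipeline route:
print(pumping_energy_kwh_per_m3(400.0))   # ~1.45 kWh per cubic metre
```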
Water transportation
[ "Chemistry", "Engineering", "Environmental_science" ]
434
[ "Hydrology", "Water supply", "Environmental engineering" ]
3,105,510
https://en.wikipedia.org/wiki/Piezochromism
Piezochromism, from the Greek piezô "to squeeze, to press" and chromos "color", describes the tendency of certain materials to change color with the application of pressure. This effect is closely related to the electronic band gap change, which can be found in plastics, semiconductors (e.g. hybrid perovskites) and hydrocarbons. One simple molecule displaying this property is 5-methyl-2-[(2-nitrophenyl)amino]-3-thiophenecarbonitrile, also known as ROY owing to its red, orange and yellow crystalline forms. Individual yellow and pale orange versions transform reversibly to red at high pressure. References External links Piezochromism Chromism
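Since the color change is tied to a pressure-induced change of the electronic band gap, the relation λ = hc/E (about 1240 nm·eV divided by the gap) links a narrowing gap to a longer-wavelength absorption edge and hence a different perceived color. The sketch below only illustrates that conversion; the band-gap values under pressure are invented for illustration and are not measurements for any material named above.

```python
# Minimal sketch relating a band-gap change to the absorption-edge wavelength,
# lambda = h*c / E (about 1240 nm*eV / E). The gap-versus-pressure values are
# illustrative assumptions, not data for any specific piezochromic material.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def absorption_edge_nm(band_gap_ev):
    """Wavelength of the absorption onset for a given band gap."""
    return HC_EV_NM / band_gap_ev

for pressure_gpa, gap_ev in [(0.0, 2.3), (2.0, 2.1), (5.0, 1.9)]:
    print(f"{pressure_gpa:>4.1f} GPa: gap {gap_ev:.1f} eV -> "
          f"edge at {absorption_edge_nm(gap_ev):.0f} nm")
```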
Piezochromism
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering" ]
159
[ "Spectroscopy stubs", "Materials science stubs", "Spectrum (physical sciences)", "Chromism", "Astronomy stubs", "Materials science", "Smart materials", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
1,601,998
https://en.wikipedia.org/wiki/Organic%20field-effect%20transistor
An organic field-effect transistor (OFET) is a field-effect transistor using an organic semiconductor in its channel. OFETs can be prepared either by vacuum evaporation of small molecules, by solution-casting of polymers or small molecules, or by mechanical transfer of a peeled single-crystalline organic layer onto a substrate. These devices have been developed to realize low-cost, large-area electronic products and biodegradable electronics. OFETs have been fabricated with various device geometries. The most commonly used device geometry is bottom gate with top drain and source electrodes, because this geometry is similar to the thin-film silicon transistor (TFT) using thermally grown SiO2 as gate dielectric. Organic polymers, such as poly(methyl-methacrylate) (PMMA), can also be used as dielectric. One of the benefits of OFETs, especially compared with inorganic TFTs, is their unprecedented physical flexibility, which leads to biocompatible applications, for instance in the future health care industry of personalized biomedicines and bioelectronics. In May 2007, Sony reported the first full-color, video-rate, flexible, all plastic display, in which both the thin-film transistors and the light-emitting pixels were made of organic materials. History The concept of a field-effect transistor (FET) was first proposed by Julius Edgar Lilienfeld, who received a patent for his idea in 1930. He proposed that a field-effect transistor behaves as a capacitor with a conducting channel between a source and a drain electrode. Applied voltage on the gate electrode controls the amount of charge carriers flowing through the system. The first insulated-gate field-effect transistor was designed and prepared by Frosch and Derrick in 1957; using masking and predeposition, they were able to manufacture silicon dioxide transistors and showed that silicon dioxide insulated and protected silicon wafers and prevented dopants from diffusing into the wafer. Later, following this research, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. Their team included E. E. LaBate and E. I. Povilonis who fabricated the device; M. O. Thurston, L. A. D’Asaro, and J. R. Ligenza who developed the diffusion processes, and H. K. Gummel and R. Lindner who characterized the device. Also known as the MOS transistor, the MOSFET is the most widely manufactured device in the world. The concept of a thin-film transistor (TFT) was first proposed by John Wallmark, who in 1957 filed a patent for a thin film MOSFET in which germanium monoxide was used as a gate dielectric. The thin-film transistor was developed in 1962 by Paul K. Weimer, who implemented Wallmark's ideas. The TFT is a special type of MOSFET. Rising costs of materials and manufacturing, as well as public interest in more environmentally friendly electronics materials, have supported development of organic-based electronics in more recent years. In 1986, Mitsubishi Electric researchers H. Koezuka, A. Tsumura and Tsuneya Ando reported the first organic field-effect transistor, based on a polymer of thiophene molecules. The thiophene polymer is a type of conjugated polymer that is able to conduct charge, eliminating the need to use expensive metal oxide semiconductors. Additionally, other conjugated polymers have been shown to have semiconducting properties. OFET design has also improved in the past few decades.
Many OFETs are now designed based on the thin-film transistor (TFT) model, which allows the devices to use less conductive materials in their design. Improvements on these models in the past few years have been made to field-effect mobility and on–off current ratios. Materials One common feature of OFET materials is the inclusion of an aromatic or otherwise conjugated π-electron system, facilitating the delocalization of orbital wavefunctions. Electron withdrawing groups or donating groups can be attached that facilitate hole or electron transport. OFETs employing many aromatic and conjugated materials as the active semiconducting layer have been reported, including small molecules such as rubrene, tetracene, pentacene, diindenoperylene, perylenediimides, tetracyanoquinodimethane (TCNQ), and polymers such as polythiophenes (especially poly(3-hexylthiophene) (P3HT)), polyfluorene, polydiacetylene, poly(2,5-thienylene vinylene), poly(p-phenylene vinylene) (PPV). The field is very active, with newly synthesized and tested compounds reported weekly in prominent research journals. Many review articles exist documenting the development of these materials. Rubrene-based OFETs show the highest carrier mobility, 20–40 cm2/(V·s). Another popular OFET material is pentacene, which has been used since the 1980s, but with mobilities 10 to 100 times lower (depending on the substrate) than rubrene. The major problem with pentacene, as well as many other organic conductors, is its rapid oxidation in air to form pentacene-quinone. However, if the pentacene is preoxidized, and the thus formed pentacene-quinone is used as the gate insulator, then the mobility can approach the rubrene values. This pentacene oxidation technique is akin to the silicon oxidation used in silicon electronics. Polycrystalline tetrathiafulvalene and its analogues result in mobilities in the range 0.1–1.4 cm2/(V·s). However, the mobility exceeds 10 cm2/(V·s) in solution-grown or vapor-transport-grown single crystalline hexamethylene-tetrathiafulvalene (HMTTF). The ON/OFF voltage is different for devices grown by those two techniques, presumably due to the higher processing temperatures used in the vapor-transport growth. All the above-mentioned devices are based on p-type conductivity. N-type OFETs are still poorly developed. They are usually based on perylenediimides or fullerenes or their derivatives, and show electron mobilities below 2 cm2/(V·s). Device design of organic field-effect transistors Three essential components of field-effect transistors are the source, the drain and the gate. Field-effect transistors usually operate as a capacitor. They are composed of two plates. One plate works as a conducting channel between two ohmic contacts, which are called the source and the drain contacts. The other plate works to control the charge induced into the channel, and it is called the gate. The direction of the movement of the carriers in the channel is from the source to the drain. Hence the relationship between these three components is that the gate controls the carrier movement from the source to the drain. When this capacitor concept is applied to the device design, various devices can be built up based on the difference in the controller – i.e. the gate.
This can be the gate material, the location of the gate with respect to the channel, how the gate is isolated from the channel, and what type of carrier is induced by the gate voltage into the channel (such as electrons in an n-channel device, holes in a p-channel device, and both electrons and holes in a double injection device). Classified by the properties of the carrier, three types of FETs are shown schematically in Figure 1. They are MOSFET (metal–oxide–semiconductor field-effect transistor), MESFET (metal–semiconductor field-effect transistor) and TFT (thin-film transistor). MOSFET The most prominent and widely used FET in modern microelectronics is the MOSFET (metal–oxide–semiconductor FET). There are different kinds in this category, such as MISFET (metal–insulator–semiconductor field-effect transistor), and IGFET (insulated-gate FET). A schematic of a MISFET is shown in Figure 1a. The source and the drain are connected by a semiconductor and the gate is separated from the channel by a layer of insulator. If there is no bias (potential difference) applied on the gate, band bending is induced due to the energy difference between the metal conduction band and the semiconductor Fermi level. Therefore, a higher concentration of holes is formed at the interface of the semiconductor and the insulator. When a sufficiently positive bias is applied on the gate contact, the bent band becomes flat. If a larger positive bias is applied, band bending in the opposite direction occurs and the region close to the insulator-semiconductor interface becomes depleted of holes. Then the depletion region is formed. At an even larger positive bias, the band bending becomes so large that the Fermi level at the interface of the semiconductor and the insulator becomes closer to the bottom of the conduction band than to the top of the valence band; therefore, it forms an inversion layer of electrons, providing the conducting channel. Finally, it turns the device on. MESFET The second type of device is described in Figure 1b. The only difference of this one from the MISFET is that the n-type source and drain are connected by an n-type region. In this case, the depletion region extends all over the n-type channel at zero gate voltage in a normally “off” device (it is similar to the larger positive bias in the MISFET case). In the normally “on” device, a portion of the channel is not depleted, and thus leads to passage of a current at zero gate voltage. TFT A thin-film transistor (TFT) is illustrated in Figure 1c. Here the source and drain electrodes are directly deposited onto the conducting channel (a thin layer of semiconductor); then a thin film of insulator is deposited between the semiconductor and the metal gate contact. This structure suggests that there is no depletion region to separate the device from the substrate. If there is zero bias, the electrons are expelled from the surface due to the Fermi-level energy difference of the semiconductor and the metal. This leads to band bending of the semiconductor. In this case, there is no carrier movement between the source and drain. When a positive bias is applied, the accumulation of electrons at the interface leads to bending of the semiconductor bands in the opposite way and to the lowering of the conduction band with respect to the Fermi level of the semiconductor. Then a highly conductive channel forms at the interface (shown in Figure 2). OFET OFETs adopt the architecture of the TFT.
With the development of conducting polymers, the semiconducting properties of small conjugated molecules were recognized as well. Interest in OFETs has grown enormously in the past ten years, for several reasons. The performance of OFETs, which can compete with that of amorphous silicon (a-Si) TFTs with field-effect mobilities of 0.5–1 cm2 V−1 s−1 and ON/OFF current ratios (which indicate the ability of the device to shut down) of 10^6–10^8, has improved significantly. Currently, thin-film OFET mobility values of 5 cm2 V−1 s−1 for vacuum-deposited small molecules and 0.6 cm2 V−1 s−1 for solution-processed polymers have been reported. As a result, there is now greater industrial interest in using OFETs for applications that are currently incompatible with the use of a-Si or other inorganic transistor technologies. One of their main technological attractions is that all the layers of an OFET can be deposited and patterned at room temperature by a combination of low-cost solution processing and direct-write printing, which makes them ideally suited to the realization of low-cost, large-area electronic functions on flexible substrates.

Device preparation

Thermally oxidized silicon is a traditional substrate for OFETs, with the silicon dioxide serving as the gate insulator. The active FET layer is usually deposited onto this substrate by (i) thermal evaporation, (ii) coating from organic solution, or (iii) electrostatic lamination. The first two techniques result in polycrystalline active layers; they are much easier to produce but give relatively poor transistor performance. Numerous variations of the solution-coating technique (ii) are known, including dip coating, spin coating, inkjet printing and screen printing. The electrostatic lamination technique is based on manually peeling a thin layer off a single organic crystal; it results in a superior single-crystalline active layer, yet it is more tedious. The thickness of the gate oxide and the active layer is below one micrometer.

Carrier transport

Carrier transport in OFETs is characterized by two-dimensional (2D) carrier propagation through the device. Various experimental techniques have been used to study it, such as the Haynes–Shockley experiment on the transit times of injected carriers, the time-of-flight (TOF) experiment for determining carrier mobility, the pressure-wave-propagation experiment for probing the electric-field distribution in insulators, the organic-monolayer experiment for probing orientational dipolar changes, and optical time-resolved second-harmonic generation (TRM-SHG). Whereas carriers propagate through polycrystalline OFETs in a diffusion-like (trap-limited) manner, they move through the conduction band in the best single-crystalline OFETs. The most important parameter of OFET carrier transport is the carrier mobility. Its evolution over the years of OFET research is shown in the graph for polycrystalline and single-crystalline OFETs; the horizontal lines indicate comparison guides to the main OFET competitors, amorphous (a-Si) and polycrystalline silicon. The graph reveals that the mobility in polycrystalline OFETs is comparable to that of a-Si, whereas the mobility in rubrene-based OFETs (20–40 cm2/(V·s)) approaches that of the best poly-silicon devices. Development of accurate models of charge-carrier mobility in OFETs is an active field of research.
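Mobility values such as those quoted above are most often extracted from saturation-regime transfer characteristics, where the square root of the drain current varies linearly with gate voltage and the slope gives the field-effect mobility once the device geometry and gate capacitance are known. The sketch below demonstrates this common extraction procedure on synthetic data generated from an assumed mobility; the channel geometry and gate capacitance are likewise assumed, not taken from the article.

```python
# Sketch of field-effect mobility extraction from a saturation-regime transfer
# curve: sqrt(I_D) is linear in V_G, and its slope yields the mobility.
# The data below are synthetic, generated from an assumed mobility.

import numpy as np

W, L = 1000e-4, 50e-4          # channel width and length, cm (assumed geometry)
C_I = 1.15e-8                  # gate capacitance per unit area, F/cm^2 (~300 nm SiO2)

def mobility_from_saturation(v_g, i_d):
    """Fit sqrt(I_D) vs V_G and return (mobility in cm^2/(V*s), threshold voltage in V)."""
    slope, intercept = np.polyfit(v_g, np.sqrt(i_d), 1)
    mu = 2.0 * L * slope**2 / (W * C_I)    # from I_D = (W / 2L) * mu * C_i * (V_G - V_T)^2
    v_t = -intercept / slope               # sqrt(I_D) crosses zero at V_G = V_T
    return mu, v_t

if __name__ == "__main__":
    v_g = np.linspace(5.0, 20.0, 16)
    mu_true, v_t_true = 0.8, 3.0           # assumed "true" values for the synthetic device
    i_d = (W / (2 * L)) * mu_true * C_I * (v_g - v_t_true) ** 2
    mu_fit, v_t_fit = mobility_from_saturation(v_g, i_d)
    print(f"extracted mobility = {mu_fit:.2f} cm^2/(V s), V_T = {v_t_fit:.2f} V")
```

On real measurements the same fit is applied to the linear portion of the sqrt(I_D) curve only, since contact resistance and gate-voltage-dependent mobility can distort the extracted values.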
Fishchuk et al. have developed an analytical model of carrier mobility in OFETs that accounts for carrier density and the polaron effect. While the average carrier density is typically calculated as a function of gate voltage when used as an input for carrier-mobility models, modulated amplitude reflectance spectroscopy (MARS) has been shown to provide a spatial map of the carrier density across an OFET channel.

Light-emitting OFETs

Because an electric current flows through such a transistor, it can be used as a light-emitting device, thus integrating current modulation and light emission. In 2003, a German group reported the first organic light-emitting field-effect transistor (OLET). The device structure comprises interdigitated gold source and drain electrodes and a polycrystalline tetracene thin film. Both positive charges (holes) and negative charges (electrons) are injected from the gold contacts into this layer, leading to electroluminescence from the tetracene.

See also

Charge modulation spectroscopy
OLED
Organic electronics
Oxide thin-film transistor
Thin film transistor