of P. vivax transmission, there is evidence for the transmission of P. vivax among Duffy-negative populations in Western Kenya, the Brazilian Amazon region, and Madagascar. The Malagasy people of Madagascar have an admixture of Duffy-positive and Duffy-negative people of diverse ethnic backgrounds. Some 72% of the island population were found to be Duffy-negative. P. vivax positivity was found in 8.8% of 476 asymptomatic Duffy-negative people, and clinical P. vivax malaria was found in 17 such persons. Genotyping indicated that multiple P. vivax strains were invading the red cells of Duffy-negative people. The authors suggest that among Malagasy populations there are enough Duffy-positive people to maintain mosquito transmission and liver infection. More recently, Duffy-negative individuals infected with two different strains of P. vivax were found in Angola and Equatorial Guinea; further, P. vivax infections were found in both humans and mosquitoes, which means that active transmission is occurring. The frequency of such transmission is still unknown. Because of these several reports from different parts of the world, it is clear that some variants of P. vivax are being transmitted to humans who do not express DARC on their red cells. The same phenomenon has been observed in New World monkeys. However, DARC still appears to be a major receptor for human transmission of P. vivax. The distribution of Duffy negativity in Africa does not correlate precisely with that of P. vivax transmission. Frequencies of Duffy negativity are as high in East Africa (above 80%), where the parasite is transmitted, as they are in West Africa, where it is not. The potency of P. vivax as an agent of natural selection is unknown and may vary from location to location. DARC negativity remains a good example of innate resistance to an infection, but it produces a relative and not an absolute resistance
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
to P. vivax transmission. ==== Gerbich antigen receptor negativity ==== The Gerbich antigen system is an integral membrane protein of the erythrocyte and plays a functionally important role in maintaining erythrocyte shape. It also acts as the receptor for the P. falciparum erythrocyte binding protein. There are four alleles of the gene which encodes the antigen, Ge-1 to Ge-4. Three types of Ge antigen negativity are known: Ge-1,-2,-3, Ge-2,-3 and Ge-2,+3. Persons with the relatively rare phenotype Ge-1,-2,-3 are less susceptible (~60% of the control rate) to invasion by P. falciparum. Such individuals have a subtype of a condition called hereditary elliptocytosis, characterized by oval or elliptically shaped erythrocytes. ==== Other rare erythrocyte mutations ==== Rare mutations of the glycophorin A and B proteins are also known to mediate resistance to P. falciparum. === Human leucocyte antigen polymorphisms === Human leucocyte antigen (HLA) polymorphisms common in West Africans but rare in other racial groups are associated with protection from severe malaria. This group of genes encodes cell-surface antigen-presenting proteins and has many other functions. In West Africa, they account for as great a reduction in disease incidence as the sickle-cell hemoglobin variant. These studies suggest that the unusual polymorphism of major histocompatibility complex genes has evolved primarily through natural selection by infectious pathogens. Polymorphisms at the HLA loci, which encode proteins that participate in antigen presentation, influence the course of malaria. In West Africa, an HLA class I antigen (HLA Bw53) and an HLA class II haplotype (DRB1*1302-DQB1*0501) are independently associated with protection against severe malaria. However, HLA correlations vary, depending on the genetic constitution of the polymorphic malaria parasite, which differs in different geographic locations. === Hereditary persistence of fetal hemoglobin === Some studies suggest that high levels of fetal hemoglobin (HbF) confer some protection against falciparum malaria in adults
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
with hereditary persistence of fetal hemoglobin. == Validating the malaria hypothesis == Evolutionary biologist J.B.S. Haldane was the first to propose a hypothesis on the relationship between malaria and genetic disease. He first delivered his hypothesis at the Eighth International Congress of Genetics, held in 1948 in Stockholm, in a talk on "The Rate of Mutation of Human Genes". He formalised it in a technical paper published in 1949, in which he made a prophetic statement: "The corpuscles of the anaemic heterozygotes are smaller than normal, and more resistant to hypotonic solutions. It is at least conceivable that they are also more resistant to attacks by the sporozoa which cause malaria." This became known as 'Haldane's malaria hypothesis', or concisely, the 'malaria hypothesis'. A detailed study of a cohort of 1022 Kenyan children living near Lake Victoria, published in 2002, confirmed this prediction. Many SS children still died before they attained one year of age. Between the ages of 2 and 16 months, the mortality in AS children was found to be significantly lower than that in AA children. This well-controlled investigation shows the ongoing action of natural selection through disease in a human population. Genome-wide association (GWA) analysis and fine-resolution association mapping are powerful methods for establishing the inheritance of resistance to infections and other diseases. Two independent preliminary GWA analyses of severe falciparum malaria in Africans have been carried out, one by the MalariaGEN Consortium in a Gambian population and the other by Rolf Horstmann (Bernhard Nocht Institute for Tropical Medicine, Hamburg) and his colleagues in a Ghanaian population. In both cases the only signal of association reaching genome-wide significance was with the HBB locus encoding the β-chain of hemoglobin, which is abnormal in HbS. This does not imply that HbS is the only gene conferring innate resistance
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
to falciparum malaria; there could be many such genes exerting more modest effects that are challenging to detect by GWA because of the low levels of linkage disequilibrium in African populations. However, the same GWA association in two populations is powerful evidence that the single gene conferring strongest innate resistance to falciparum malaria is that encoding HbS. === Fitnesses of different genotypes === The fitnesses of different genotypes in an African region where there is intense malarial selection were estimated by Anthony Allison in 1954. In the Baamba population living in the Semliki Forest region of Western Uganda the sickle-cell heterozygote (AS) frequency is 40%, which means that the frequency of the sickle-cell gene is 0.255 and 6.5% of children born are SS homozygotes. It is a reasonable assumption that, until modern treatment was available, three-quarters of the SS homozygotes failed to reproduce. To balance this loss of sickle-cell genes, a mutation rate of 1:10.2 per gene per generation would be necessary. This is about 1000 times greater than mutation rates measured in Drosophila and other organisms, and much higher than recorded for the sickle-cell locus in Africans. To balance the polymorphism, Anthony Allison estimated that the fitness of the AS heterozygote would have to be 1.26 times that of the normal homozygote. Later analyses of survival figures have given similar results, with some differences from site to site. In Gambians, it was estimated that AS heterozygotes have 90% protection against P. falciparum-associated severe anemia and cerebral malaria, whereas in the Luo population of Kenya it was estimated that AS heterozygotes have 60% protection against severe malarial anemia. These differences reflect the intensity of transmission of P. falciparum malaria from locality to locality and season to season, so fitness calculations will also vary. In many African populations the AS frequency is about 20%, and a fitness superiority over those with normal hemoglobin of the order of 10% is sufficient to produce a stable polymorphism.
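These figures can be checked against the standard one-locus balancing-selection model: writing the fitnesses of the AA, AS and SS genotypes as 1 - s1, 1 and 1 - s2, the equilibrium frequency of the sickle allele is q = s1/(s1 + s2). The minimal Python sketch below uses illustrative values taken from the text; it is a simplified reconstruction of the reasoning, not Allison's original calculation.

```python
# Toy illustration of a balanced polymorphism at the sickle-cell locus.
# Standard overdominance model: fitness(AA) = 1 - s1, fitness(AS) = 1,
# fitness(SS) = 1 - s2; at equilibrium q = s1 / (s1 + s2).
# Input numbers are illustrative values from the text, not Allison's
# original 1954 computation.

def equilibrium_q(s1: float, s2: float) -> float:
    """Equilibrium frequency of the sickle allele under overdominance."""
    return s1 / (s1 + s2)

s2 = 0.75              # ~three-quarters of SS homozygotes fail to reproduce
fitness_ratio = 1.26   # AS fitness relative to AA (Allison's estimate)
s1 = 1 - 1 / fitness_ratio   # selection against AA when AS fitness is 1

q = equilibrium_q(s1, s2)
p = 1 - q
print(f"s1 = {s1:.3f}, equilibrium q = {q:.3f}")      # q ~ 0.22
print(f"expected AS frequency 2pq = {2 * p * q:.3f}")  # ~0.34, near the
print(f"expected SS frequency q^2 = {q * q:.4f}")      # observed ~40% AS
```

The simple model reproduces the order of magnitude of the observed Baamba figures; exact agreement is not expected, since real fitnesses vary with transmission intensity.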
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
== Glossary ==
actin, ankyrin, spectrin – proteins that are the major components of the cytoskeleton scaffolding within a cell's cytoplasm
aerobic – uses oxygen for the production of energy (contrast anaerobic)
allele – one of two or more alternative forms of a gene that arise by mutation
α-chain / β-chain (hemoglobin) – subcomponents of the hemoglobin molecule; two α-chains and two β-chains make up normal hemoglobin (HbA)
alveolar – pertaining to the alveoli, the tiny air sacs in the lungs
amino acid – any of twenty organic compounds that are subunits of protein in the human body
anabolic – of or relating to the synthesis of complex molecules in living organisms from simpler ones, together with the storage of energy; constructive metabolism (contrast catabolic)
anaerobic – refers to a process or reaction which does not require oxygen, but produces energy by other means (contrast aerobic)
anion transporter (organic) – molecules that play an essential role in the distribution and excretion of numerous endogenous metabolic products and exogenous organic anions
antigen – any substance (as an immunogen or a hapten) foreign to the body that evokes an immune response either alone or after forming a complex with a larger molecule (as a protein) and that is capable of binding with a component (as an antibody or T cell) of the immune system
ATP (adenosine triphosphate) – an organic molecule containing high-energy phosphate bonds used to transport energy within a cell
catabolic – of or relating to the breakdown of complex molecules in living organisms to form simpler ones, together with the release of energy; destructive metabolism (contrast anabolic)
chemokine – a family of small cytokines, or signaling proteins secreted by cells
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
codon – a sequence of three nucleotides which specifies which amino acid will be added next during protein synthesis
corpuscle – obsolete name for red blood cell
cytoadherence – the adherence of infected red blood cells to blood vessel walls and to uninfected red blood cells
cytoplasm – clear jelly-like substance, mostly water, inside a cell
diathesis – a tendency to suffer from a particular medical condition
DNA – deoxyribonucleic acid, the hereditary material of the genome
Drosophila – a kind of fruit fly used for genetic experimentation because of the ease of reproduction and manipulation of its genome
endocytic – the transport of solid matter or liquid into a cell by means of a coated vacuole or vesicle
endogamy – the custom of marrying only within the limits of a local community, clan, or tribe
endothelial – of or referring to the thin inner surface of blood vessels
enzyme – a protein that promotes a cellular process, much like a catalyst in an ordinary chemical reaction
epidemiology – the study of the spread of disease within a population
erythrocyte – red blood cell, which with the leucocytes makes up the cellular content of the blood (contrast leucocyte)
erythroid – of or referring to erythrocytes, red blood cells
fitness (genetic) – loosely, reproductive success that tends to propagate a trait or traits (see natural selection)
genome – (abstractly) all the inheritable traits of an organism; represented by its chromosomes
genotype – the genetic makeup of a cell, an organism, or an individual, usually with reference to a specific trait
glycolysis – the breakdown of glucose by enzymes, releasing energy
glycophorin – transmembrane proteins of red blood cells
haplotype – a set of DNA variations, or polymorphisms, that tend to be inherited together
Hb (HbC, HbE, HbS, etc.) – hemoglobin (hemoglobin polymorphisms: hemoglobin type C, hemoglobin type E, hemoglobin type S)
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
hematopoietic (stem cell) – the blood stem cells that give rise to all other blood cells
heme oxygenase-1 (HO-1) – an enzyme that breaks down heme, the iron-containing non-protein part of hemoglobin
hemoglobin – iron-based organic molecule in red blood cells that transports oxygen and gives blood its red color
hemolysis – the rupturing of red blood cells and the release of their contents (cytoplasm) into surrounding fluid (e.g., blood plasma)
heterozygous – possessing only one copy of a gene for a particular trait
homozygous – possessing two identical copies of a gene for a particular trait, one from each parent
hypotonic – denotes a solution of lower osmotic pressure than another solution with which it is in contact, so that certain molecules will migrate from the region of higher osmotic pressure to the region of lower osmotic pressure, until the pressures are equalized
in vitro – in a test tube or other laboratory vessel; usually used in regard to a testing protocol
in vivo – in a live human (or animal); usually used in regard to a testing protocol
leucocyte – white blood cell, part of the immune system, which together with red blood cells comprises the cellular component of the blood (contrast erythrocyte)
ligand – an extracellular signal molecule which, when it binds to a cellular receptor, causes a response by the cell
locus (gene or chromosome) – the specific location of a gene or DNA sequence or position on a chromosome
macrophage – a large white blood cell, part of the immune system, that ingests foreign particles and infectious microorganisms
major histocompatibility complex (MHC) – proteins found on the surfaces of cells that help the immune system recognize foreign substances; also called the human leucocyte antigen (HLA) system
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
micro-RNA – a cellular RNA fragment that prevents the production of a particular protein by binding to and destroying the messenger RNA that would have produced the protein
microvasculature – very small blood vessels
mitochondria – energy-producing organelles of a cell
mutation – a spontaneous change to a gene, arising from an error in replication of DNA; usually mutations are referred to in the context of inherited mutations, i.e. changes to the gametes
natural selection – the gradual process by which biological traits become either more or less common in a population as a function of the effect of inherited traits on the differential reproductive success of organisms interacting with their environment (closely related to fitness)
nucleotide – organic molecules that are subunits of nucleic acids like DNA and RNA
nucleic acid – a complex organic molecule present in living cells, esp. DNA or RNA, consisting of many nucleotides linked in a long chain
oxygen radical – a highly reactive ion containing oxygen, capable of damaging microorganisms and normal tissues
pathogenesis – the manner of development of a disease
PCR (polymerase chain reaction) – an enzymatic reaction by which DNA is replicated in a test tube for subsequent testing or analysis
phenotype – the composite of an organism's observable characteristics or traits, such as its morphology
Plasmodium – the genus of protozoan microorganisms to which the malaria parasites belong, though only a few species of the genus cause malaria
polymerize – to combine replicated subunits into a longer molecule (usually referring to synthetic materials, but also organic molecules)
polymorphism – the occurrence of something in several different forms, as for example hemoglobin (HbA, HbC, etc.)
polypeptide – a chain of amino acids forming part of a protein molecule
receptor (cellular surface) – specialized integral membrane proteins that take part in communication between the cell and the outside world; receptors are responsive to specific ligands that attach to them
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
reducing environment (cellular) – an environment in which oxidation is prevented by removal of oxygen and other oxidising gases or vapours, and which may contain actively reducing gases such as hydrogen, carbon monoxide and gases that would oxidize in the presence of oxygen, such as hydrogen sulfide
RNA – ribonucleic acid, a nucleic acid present in all living cells; its principal role is to act as a messenger carrying instructions from DNA for controlling the synthesis of proteins
sequestration (biology) – the process by which an organism accumulates a compound or tissue (as red blood cells) from the environment
sex-linked – a trait associated with a gene carried on a sex chromosome (contrast with autosomal)
Sporozoa – a large class of strictly parasitic nonmotile protozoans, including the Plasmodia which cause malaria
TCA (tricarboxylic acid) cycle – a series of enzyme-catalyzed chemical reactions that form a key part of aerobic respiration in cells
translocation (cellular biology) – movement of molecules from outside to inside (or vice versa) of a cell
transmembrane – existing or occurring across a cell membrane
venous – of or referring to the veins
vesicle – a small organelle within a cell, consisting of fluid enclosed by a fatty membrane
virulence factors – factors that enable an infectious agent to replicate and disseminate within a host, in part by subverting or eluding host defenses
== See also ==
Adaptive immunity
Malaria vaccine
== Further reading ==
Dronamraju KR, Arese P (2006). Malaria: Genetic and Evolutionary Aspects. Springer, Berlin. ISBN 978-0-387-28294-7
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
Faye FBK (2009). Malaria Resistance or Susceptibility in Red Cells Disorders. Nova Science Publishers, New York. ISBN 978-1-60692-943-8
== External links ==
Favism
Hemoglobinopathies
Malaria and the Red Cell
{ "page_id": 24973826, "source": null, "title": "Human genetic resistance to malaria" }
Aptamers are oligomers of artificial ssDNA, RNA, XNA, or peptide that bind a specific target molecule, or family of target molecules. They exhibit a range of affinities (KD in the pM to μM range) with variable levels of off-target binding, and are sometimes classified as chemical antibodies. Aptamers and antibodies can be used in many of the same applications, but the nucleic acid-based structure of aptamers, which are mostly oligonucleotides, is very different from the amino acid-based structure of antibodies, which are proteins. This difference can make aptamers a better choice than antibodies for some purposes (see antibody replacement). Aptamers are used in biological lab research and medical tests. If multiple aptamers are combined into a single assay, they can measure large numbers of different proteins in a sample. They can be used to identify molecular markers of disease, or can function as drugs, drug delivery systems and controlled drug release systems. They also find use in other molecular engineering tasks. Most aptamers originate from SELEX, a family of test-tube experiments for finding useful aptamers in a massive pool of different DNA sequences. This process is much like natural selection, directed evolution or artificial selection. In SELEX, the researcher repeatedly selects for the best aptamers from a starting DNA library made of about a quadrillion different randomly generated pieces of DNA or RNA. After SELEX, the researcher might mutate or change the chemistry of the aptamers and do another selection, or might use rational design processes to engineer improvements. Non-SELEX methods for discovering aptamers also exist.
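To make the enrichment logic of repeated selection concrete, here is a toy simulation of SELEX-style rounds in Python. The retention probabilities and pool size are invented for illustration; real SELEX physically partitions bound from unbound sequences and amplifies survivors by PCR, rather than scoring a list.

```python
import random

# Toy SELEX-like enrichment (hypothetical parameters, illustration only).
# Each round: "binders" are retained with high probability, non-binders
# are mostly washed away, and survivors are amplified back to pool size.
random.seed(0)

POOL_SIZE = 100_000
BINDER_FRACTION = 1e-4                    # rare binders in the naive library
P_KEEP_BINDER, P_KEEP_OTHER = 0.8, 0.01   # capture vs. wash-through

pool = ['binder' if random.random() < BINDER_FRACTION else 'other'
        for _ in range(POOL_SIZE)]

for rnd in range(1, 6):
    survivors = [s for s in pool
                 if random.random() < (P_KEEP_BINDER if s == 'binder'
                                       else P_KEEP_OTHER)]
    # "PCR": resample survivors back up to the original pool size
    pool = random.choices(survivors, k=POOL_SIZE)
    print(f"round {rnd}: binder fraction = {pool.count('binder') / POOL_SIZE:.4f}")
```

Even with binders starting at one in ten thousand, the binder fraction grows roughly exponentially and dominates the pool within a few rounds, which is the essential point of the method.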
{ "page_id": 1970691, "source": null, "title": "Aptamer" }
Researchers optimize aptamers to achieve a variety of beneficial features. The most important feature is specific and sensitive binding to the chosen target. When aptamers are exposed to bodily fluids, as in serum tests or aptamer therapeutics, it is often important for them to resist digestion by DNA- and RNA-destroying enzymes. Therapeutic aptamers often must be modified to clear slowly from the body. Aptamers that change their shape dramatically when they bind their target are useful as molecular switches to turn a sensor on and off. Some aptamers are engineered to fit into a biosensor or a test of a biological sample. It can be useful in some cases for the aptamer to accomplish a pre-defined level or speed of binding. As the yield of the synthesis used to produce known aptamers shrinks quickly for longer sequences, researchers often truncate aptamers to the minimal binding sequence to reduce the production cost.
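The cost pressure behind truncation follows from the geometry of stepwise chemical synthesis: if each base couples with efficiency e, the full-length yield of an n-mer is roughly e**n. A quick illustration (99% is an assumed, illustrative coupling efficiency, not a vendor specification):

```python
# Full-length yield of stepwise oligonucleotide synthesis falls off
# geometrically with length: yield ~ eff ** n for per-base efficiency eff.
# 0.99 is an illustrative figure, not a vendor specification.
coupling_efficiency = 0.99
for n in (20, 40, 80, 100):
    print(f"{n}-mer: full-length yield ~ {coupling_efficiency ** n:.1%}")
# 20-mer ~81.8%, 100-mer ~36.6%: trimming a long aptamer down to its
# minimal binding core can more than double the usable yield.
```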
{ "page_id": 1970691, "source": null, "title": "Aptamer" }
== Etymology == The word "aptamer" is a neologism coined by Andrew D. Ellington and Jack Szostak in their first publication on the topic. They did not provide a precise definition, stating "We have termed these individual RNA sequences 'aptamers', from the Latin 'aptus', to fit." The word itself, however, derives from the Greek word ἅπτω, to connect or fit (as used by Homer (c. 8th century BC)) and μέρος, a component of something larger. == Classification == A typical aptamer is a synthetically generated ligand exploiting the combinatorial diversity of DNA, RNA, XNA, or peptide to achieve strong, specific binding for a particular target molecule or family of target molecules. Aptamers are occasionally classified as "chemical antibodies" or "antibody mimics". However, most aptamers are small, with a molecular weight of 6-30 kDa, in contrast to the 150 kDa size of antibodies, and contain one binding site rather than the two matching antigen binding regions of a typical antibody. == History == Since its first application in 1967, directed evolution methodologies have been used to develop biomolecules with new properties and functions. Early examples include the modification of the bacteriophage Qbeta replication system and the generation of ribozymes with modified cleavage activity. In 1990, two teams independently developed and published SELEX (Systematic Evolution of Ligands by EXponential enrichment) methods and generated RNA aptamers: the lab of Larry Gold, which used the term SELEX for its process of selecting RNA ligands against T4 DNA polymerase, and the lab of Jack Szostak, which selected RNA ligands against various organic dyes. Two years later, the Szostak lab and Gilead Sciences, acting independently of one another, used in vitro selection schemes to generate DNA aptamers for organic dyes and human thrombin, respectively. In 2001, SELEX was automated by J. Colin Cox in the Ellington lab, reducing the duration of a weeks-long selection experiment to just three days. In 2002, two groups led by Ronald Breaker and Evgeny Nudler published the first definitive evidence for a riboswitch, a nucleic acid-based genetic regulatory element whose existence had previously been suspected. Riboswitches possess similar molecular recognition properties to aptamers. This discovery added support to the RNA world hypothesis, a postulated stage in the origin of life on Earth. == Properties == === Structure === Most aptamers are based on a specific oligomer sequence of 20-100 bases and a mass of 3-20 kDa. Some have chemical modifications for functional enhancements or compatibility with larger engineered molecular systems. DNA, RNA, XNA, and peptide aptamer chemistries can each offer distinct profiles in terms of shelf stability, durability in serum or in vivo, specificity and sensitivity, cost, ease of generation, amplification, and characterization, and familiarity to users. Typically, DNA- and RNA-based aptamers exhibit low immunogenicity, are amplifiable via the polymerase chain reaction (PCR), and have complex secondary structure and tertiary structure. DNA- and XNA-based aptamers exhibit superior shelf stability. XNA-based aptamers can introduce additional chemical diversity to increase binding affinity or greater durability in serum
{ "page_id": 1970691, "source": null, "title": "Aptamer" }
or in vivo. As 22 genetically encoded and over 500 naturally occurring amino acids exist, peptide aptamers, as well as antibodies, have much greater potential combinatorial diversity per unit length relative to the four nucleotide bases of DNA or RNA. Chemical modifications of nucleic acid bases or backbones increase the chemical diversity of standard nucleic acid bases. Split aptamers are composed of two or more DNA strands that are pieces of a larger parent aptamer that has been broken in two by a molecular nick. The ability of each component strand to bind targets will depend on the location of the nick, as well as the secondary structures of the daughter strands. The presence of a target molecule supports the joining of the DNA fragments. This can be used as the basis for biosensors. Once assembled, the two separate DNA strands can be ligated into a single strand. Unmodified aptamers are cleared rapidly from the bloodstream, with a half-life of seconds to hours. This is mainly due to nuclease degradation, which physically destroys the aptamers, as well as clearance by the kidneys, a result of the aptamer's low molecular weight and size. Several modifications, such as 2'-fluorine-substituted pyrimidines and polyethylene glycol (PEG) linkage, permit a serum half-life of days to weeks. PEGylation can add sufficient mass and size to prevent clearance by the kidneys in vivo. Unmodified aptamers can treat coagulation disorders. The problem of clearance and nuclease digestion is diminished when they are applied to the eye, where there is a lower concentration of nucleases and the rate of clearance is lower. Rapid clearance from serum can also be useful in some applications, such as in vivo diagnostic imaging. In a study on aptamers designed to bind proteins associated with Ebola virus infection, a comparison was made among three aptamers isolated for
{ "page_id": 1970691, "source": null, "title": "Aptamer" }
their ability to bind the target protein EBOV sGP. Although these aptamers vary in both sequence and structure, they exhibit remarkably similar relative affinities for sGP from EBOV and SUDV, as well as EBOV GP1.2. Notably, these aptamers demonstrated a high degree of specificity for the GP gene products. One aptamer, in particular, proved effective as a recognition element in an electrochemical sensor, enabling the detection of sGP and GP1.2 in solution, as well as GP1.2 within a membrane context. The results of this research point to the intriguing possibility that certain regions on protein surfaces may possess aptatropic qualities. Identifying the key features of such sites, in conjunction with improved 3-D structural predictions for aptamers, holds the potential to enhance the accuracy of predicting aptamer interaction sites on proteins. This, in turn, may help identify aptamers with a heightened likelihood of binding proteins with high affinity, as well as shed light on protein mutations that could significantly impact aptamer binding. This comprehensive understanding of the structure-based interactions between aptamers and proteins is vital for refining the computational predictability of aptamer-protein binding. Moreover, it has the potential to eventually eliminate the need for the experimental SELEX protocol. === Targets === Aptamer targets can include small molecules and heavy metal ions, larger ligands such as proteins, and even whole cells. These targets include lysozyme, thrombin, the human immunodeficiency virus trans-acting responsive element (HIV TAR), hemin, interferon γ, vascular endothelial growth factor (VEGF), prostate specific antigen (PSA), dopamine, and the non-classical oncogene heat shock factor 1 (HSF1). Aptamers have been generated against cancer cells, prions, bacteria, and viruses. Viral targets of aptamers include influenza A and B viruses, respiratory syncytial virus (RSV), SARS coronavirus (SARS-CoV) and SARS-CoV-2. Aptamers may be particularly useful for environmental science proteomics. Antibodies, like other proteins, are more difficult to sequence
{ "page_id": 1970691, "source": null, "title": "Aptamer" }
than nucleic acids. They are also costly to maintain and produce, and are at constant risk of contamination, as they are produced via cell culture or are harvested from animal serum. For this reason, researchers interested in little-studied proteins and species may find that companies will not produce, maintain, or adequately validate the quality of antibodies against their target of interest. By contrast, aptamers are simple to sequence and cost nothing to maintain, as their exact structure can be stored digitally and synthesized on demand. This may make them more economically feasible as research tools for underfunded biological research subjects. Aptamers exist for plant compounds, such as theophylline (found in tea) and abscisic acid (a plant immune hormone). An aptamer against α-amanitin (the toxin that causes lethal Amanita poisoning) has been developed, an example of an aptamer against a mushroom target. Aptamer applications can be roughly grouped into sensing, therapeutic, reagent production, and engineering categories. Sensing applications are important in environmental, biomedical, epidemiological, biosecurity, and basic research applications, where aptamers act as probes in assays, imaging methods, diagnostic assays, and biosensors. In therapeutic applications and precision medicine, aptamers can function as drugs, as targeted drug delivery vehicles, as controlled release mechanisms, and as reagents for drug discovery via high-throughput screening for small molecules and proteins. Aptamers have applications in protein production monitoring, quality control, and purification. They can function in molecular engineering applications as a way to modify proteins, such as enhancing DNA polymerase to make PCR more reliable. Because the affinity of the aptamer also affects its dynamic range and limit of detection, aptamers with a lower affinity may be desirable when assaying high concentrations of a target molecule. Affinity chromatography also depends on the ability of the affinity reagent, such as an aptamer, to bind and release its target, and lower affinities may aid in the release of the target molecule. Hence, specific applications determine the useful range for aptamer affinity.
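This trade-off can be read off the simple 1:1 equilibrium binding model, in which the fraction of aptamer bound is [T]/(KD + [T]): the response is steepest near the KD and saturates well above it. A minimal sketch with invented KD values:

```python
# Fraction of a 1:1 aptamer-target complex formed at equilibrium:
# bound = [T] / (Kd + [T]). Kd values below are invented for illustration;
# the point is that a sensor responds most sharply near its Kd and
# saturates far above it.
def fraction_bound(target_nM: float, kd_nM: float) -> float:
    return target_nM / (kd_nM + target_nM)

for kd in (1.0, 100.0):   # "tight" vs. "weak" binder, in nM
    readings = ", ".join(f"{t:g} nM -> {fraction_bound(t, kd):.2f}"
                         for t in (0.1, 1, 10, 100, 1000))
    print(f"Kd = {kd:g} nM: {readings}")
# The weak (high-Kd) binder still distinguishes 100 nM from 1000 nM,
# where the tight binder is already saturated near 1.0.
```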
{ "page_id": 1970691, "source": null, "title": "Aptamer" }
=== Antibody replacement === Aptamers can replace antibodies in many biotechnology applications. In laboratory research and clinical diagnostics, they can be used in aptamer-based versions of immunoassays including the enzyme-linked immunosorbent assay (ELISA), western blot, immunohistochemistry (IHC), and flow cytometry. As therapeutics, they can function as agonists or antagonists of their ligand. While antibodies are a familiar technology with a well-developed market, aptamers are a relatively new technology to most researchers, and aptamers have been generated against only a fraction of important research targets. Compared with antibodies, unmodified aptamers are more susceptible to nuclease digestion in serum and to renal clearance in vivo. Aptamers are much smaller in size and mass than antibodies, which could be a relevant factor in choosing which is best suited for a given application. When aptamers are available for a particular application, their advantages over antibodies include potentially lower immunogenicity, greater replicability and lower cost, a greater level of control due to the in vitro selection conditions, and the capacity to be efficiently engineered for durability, specificity, and sensitivity. In addition, aptamers contribute to reducing the use of research animals. While antibodies often rely on animals for initial discovery, as well as for production in the case of polyclonal antibodies, both the selection and production of aptamers are typically animal-free. However, phage display methods also allow for selection of antibodies in vitro, followed by production from a monoclonal cell line, avoiding the use of animals entirely. === Controlled release of therapeutics === The ability of aptamers to reversibly bind molecules such as proteins has generated increasing interest in using them to facilitate controlled release of therapeutic biomolecules, such as growth factors. This can be accomplished by tuning the
{ "page_id": 1970691, "source": null, "title": "Aptamer" }
binding strength to passively release the growth factors, along with active release via mechanisms such as hybridization of the aptamer with complementary oligonucleotides or unfolding of the aptamer due to cellular traction forces. === Tissue Engineering Application === Aptamers, known for their ability to bind specific molecules reversibly, have been used in 3D-bioprinted tissues to precisely deliver growth factors to promote vascularization. This controlled delivery allows growth factors to be released at the right place and time, encouraging the formation of localized and complex vascular networks. Additionally, the properties of these networks can be fine-tuned by adjusting how growth factors are released over time, making this approach a powerful tool for creating vascularized engineered tissues. === AptaBiD === AptaBiD (Aptamer-Facilitated Biomarker Discovery) is an aptamer-based method for biomarker discovery. == Peptide Aptamers == While most aptamers are based on DNA, RNA, or XNA, peptide aptamers are artificial proteins selected or engineered to bind specific target molecules. === Structure === Peptide aptamers consist of one or more peptide loops of variable sequence displayed by a protein scaffold. Derivatives known as tadpoles, in which peptide aptamer "heads" are covalently linked to unique-sequence double-stranded DNA "tails", allow quantification of scarce target molecules in mixtures by PCR (using, for example, the quantitative real-time polymerase chain reaction) of their DNA tails. The peptides that form the aptamer variable regions are synthesized as part of the same polypeptide chain as the scaffold and are constrained at their N and C termini by linkage to it. This double structural constraint decreases the diversity of the 3D structures that the variable regions can adopt, and this reduction in structural diversity lowers the entropic cost of molecular binding when interaction with the target causes the variable regions to adopt a uniform structure. === Selection === The most
{ "page_id": 1970691, "source": null, "title": "Aptamer" }
common peptide aptamer selection system is the yeast two-hybrid system. Peptide aptamers can also be selected from combinatorial peptide libraries constructed by phage display and other surface display technologies such as mRNA display, ribosome display, bacterial display and yeast display. These experimental procedures are also known as biopanning. All the peptides panned from combinatorial peptide libraries have been stored in the MimoDB database. === Applications === Libraries of peptide aptamers have been used as "mutagens", in studies in which an investigator introduces a library that expresses different peptide aptamers into a cell population, selects for a desired phenotype, and identifies those aptamers that cause the phenotype. The investigator then uses those aptamers as baits, for example in yeast two-hybrid screens, to identify the cellular proteins targeted by those aptamers. Such experiments identify particular proteins bound by the aptamers, and protein interactions that the aptamers disrupt, to cause the phenotype. In addition, peptide aptamers derivatized with appropriate functional moieties can cause specific post-translational modification of their target proteins, or change the subcellular localization of the targets. == Industry and Research Community == Commercial products and companies based on aptamers include the drug Macugen (pegaptanib) and the clinical diagnostic company SomaLogic. The International Society on Aptamers (INSOAP), a professional society for the aptamer research community, publishes a journal devoted to the topic, Aptamers. Apta-index is a current database cataloging and simplifying the ordering process for over 700 aptamers.
{ "page_id": 1970691, "source": null, "title": "Aptamer" }
The molecular formula C22H28O4 may refer to:
Gestadienol acetate, an orally active progestin
Vamorolone, a synthetic steroid
Estradiol diacetate, an estrogen ester
{ "page_id": 55251458, "source": null, "title": "C22H28O4" }
In genetics and cell biology, repression is a mechanism often used to decrease or inhibit the expression of a gene. Removal of repression is called derepression. This mechanism may occur at different stages in the expression of a gene, all resulting in an increase in the overall RNA or protein product. Dysregulation of derepression mechanisms can result in altered gene expression patterns, which may lead to negative phenotypic consequences, such as disease. == Derepression of Transcription == Transcription can be repressed in a variety of ways, and can therefore also be derepressed in different ways. A common mechanism is allosteric regulation, in which a substrate binds a repressor protein and causes it to undergo a conformational change. If the repressor is bound upstream of a gene, for example at an operator sequence, it represses the gene's expression. The conformational change takes away the repressor's ability to bind DNA, thus removing its repressive effect on transcription. Another form of transcriptional derepression uses chromatin remodeling complexes. For transcription to occur, RNA polymerase needs to have access to the promoter sequence of the gene or it cannot bind the DNA. Sometimes these sequences are wrapped around nucleosomes or lie in condensed heterochromatin regions, and are therefore inaccessible. Through different chromatin remodeling mechanisms, these promoter sequences can become accessible to the RNA polymerase, and transcription becomes derepressed. Transcriptional derepression may also occur at the level of transcription factor activation. Certain families of transcription factors are non-functional on their own because their active domains are blocked by another part of the protein. A substrate binding to this second, regulatory domain causes a conformational change in the protein that allows access to the active domain. This lets the transcription factor bind to DNA and serve its function, thus derepressing the transcription factor.
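The allosteric case above amounts to a small piece of boolean logic: the gene is transcribed only while the operator is free, and the operator is freed when an inducer binds the repressor. The following sketch is loosely modelled on lac-operon-style induction and is purely illustrative, not a model of any specific system.

```python
# Minimal state model of allosteric derepression: a repressor occupies the
# operator and blocks transcription until an inducer binds the repressor
# and changes its conformation. Loosely lac-operon-style; illustrative only.
from dataclasses import dataclass

@dataclass
class Repressor:
    inducer_bound: bool = False

    def can_bind_operator(self) -> bool:
        # The conformational change on inducer binding abolishes DNA binding.
        return not self.inducer_bound

def gene_expressed(repressor: Repressor) -> bool:
    operator_occupied = repressor.can_bind_operator()
    return not operator_occupied   # RNA polymerase needs a free operator

rep = Repressor()
print("without inducer:", gene_expressed(rep))   # False -> repressed
rep.inducer_bound = True                         # substrate binds repressor
print("with inducer:   ", gene_expressed(rep))   # True  -> derepressed
```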
{ "page_id": 8786437, "source": null, "title": "Derepression" }
== Derepression of Translation == Derepression of translation increases protein production without altering the levels of mRNA in the cell. miRNAs are a common mechanism of translational repression, binding to mRNAs through complementary base pairing to silence them. Certain RNA-binding proteins have been shown to target untranslated regions of mRNAs and upregulate translation initiation rates by alleviating the repressive miRNA effects. == Example of Derepression == === Auxin Signalling === An example is the auxin-mediated derepression of the auxin response factor family of transcription factors in plants. These auxin response factors are repressed by Aux/IAA repressors. In the presence of auxin, the Aux/IAA proteins undergo ubiquitination and are then degraded. This derepresses the auxin response factors so they may carry out their functions in the cell. == Altered Derepression Causing Diseases == === Familial Alzheimer's Disease === Alzheimer's is a neurodegenerative disease involving progressive memory loss and other declines in brain function. One common cause of familial Alzheimer's is mutation in the PSEN1 gene. This gene encodes a protein that cleaves certain intracellular peptides which, once free in the cytoplasm, promote CBP degradation. Mutations in PSEN1 decrease its production or its ability to cleave proteins. This derepresses the CBP proteins and allows them to perform their function of upregulating transcription of their target genes. === Rett Syndrome === Rett syndrome is a neurodevelopmental disorder involving deterioration of learned language and motor skills, autism, and seizures starting in infancy. Many cases of Rett syndrome are associated with mutations in MECP2, a gene encoding a transcriptional repressor. Mutations in this gene decrease the levels of MeCP2 binding to different promoter sequences, resulting in their overall derepression. The increased expression of these MeCP2-regulated genes in neurons contributes to the Rett syndrome phenotype. === Beckwith-Wiedemann Syndrome === This syndrome is associated with increased
{ "page_id": 8786437, "source": null, "title": "Derepression" }
susceptibility to tumors and growth abnormalities in children. A common cause of this syndrome is a mutation in an imprint control region near the Igf2 gene. This imprint control region is normally bound by an insulator on the maternal allele, which prevents an enhancer from acting on the Igf2 gene. The insulator is absent on the paternal allele, allowing the enhancer access to the gene. Mutations in this imprint control region prevent the insulator from binding, which derepresses enhancer activity on the maternal Igf2 gene. This abnormal derepression and increase in gene expression can result in Beckwith-Wiedemann syndrome.
{ "page_id": 8786437, "source": null, "title": "Derepression" }
Before the fertilized ovum reaches the uterus, the mucous membrane of the body of the uterus undergoes important changes and is then known as the decidua. The thickness and vascularity of the mucous membrane are greatly increased; its glands are elongated and open on its free surface by funnel-shaped orifices, while their deeper portions are tortuous and dilated into irregular spaces. The interglandular tissue is also increased in quantity, and is crowded with large round, oval, or polygonal cells, termed decidual cells. Their enlargement is due to glycogen and lipid accumulation in the cytoplasm, allowing these cells to provide a rich source of nutrition for the developing embryo. Decidual cells are also thought to control the invasion of the endometrium by trophoblast cells. Experimentally, human endometrial stromal cells can be decidualized in culture by using analogs of cAMP and progesterone. The cells will exhibit a decidualized phenotype and display upregulation of common decidualization markers such as prolactin and IGFBP1. == References == This article incorporates text in the public domain from page 59 of the 20th edition of Gray's Anatomy (1918) == External links == Histology image: 19905loa – Histology Learning System at Boston University - "Female Reproductive System: placenta, decidual cells" UIUC Histology Subject 1107
{ "page_id": 8196616, "source": null, "title": "Decidual cells" }
Liver cytology is the branch of cytology that studies liver cells and their functions. The liver is a vital organ, in charge of almost all of the body's metabolism. The main liver cells are hepatocytes, Kupffer cells, and hepatic stellate cells, each with a specific function. == Definitions == Cytology is the branch of biology that deals with the formation, structure and functionality of cells. Liver cytology specializes in the study of liver cells. The main liver cells are called hepatocytes; however, other cells can be observed in a liver sample, such as Kupffer cells (macrophages). The liver is the largest gland of the body. It has a wide variety of functions that range from the destruction of old blood cells to the control of the whole metabolism of macromolecules. In the fetus, the liver works as a principal center for hematopoiesis, a function that is later taken over by the bone marrow. This hematopoietic function is not normally seen after birth; however, in certain pathological conditions it may reappear. It is important to note that the liver is an essential organ and the only one in the body with the ability to regenerate itself after surgery or damage. == Obtaining samples == Since cytology deals with tissues, which are composed of cells, samples of tissue must be obtained in order to analyze the cells. There are several ways of obtaining a sample. The first is dissection of a corpse, with a sample of tissue taken during an autopsy. The second is performing an aspirate (bone marrow, cerebrospinal fluid, etc.): to perform an aspirate of liquid tissues, a needle is inserted into the body and a sample is extracted. Another common method is surgery, with a piece being
{ "page_id": 39326219, "source": null, "title": "Liver cytology" }
removed during the procedure for later analysis. Finally, another common method is biopsy. In a biopsy, a needle is inserted into the skin and a solid sample of tissue is obtained. After the sample is obtained, it has to be processed by different methods depending on its nature. Liquid samples, such as blood, are extracted and dried out, while solid samples must be dehydrated using a series of alcohol solutions. The tissue must also be stained, usually with haematoxylin and eosin, a pair of stains that reveal the acidic and basic components of the cells. After this treatment the samples are analyzed under a microscope, which can be optical or electron, to determine whether the sample is normal or pathological. == Hepatocytes == The hepatocytes are the parenchymal cells of the liver, which form the lobules. They are intimately associated with the sinusoids, which are a network of capillaries. Since they are metabolically active cells, their cytoplasm has many organelles. Hepatocytes are the main cells of the liver. They are large polyhedral cells with six surfaces, three of which have a relevant function. The three relevant types of surface are sinusoidal, canalicular and intercellular. These surfaces are involved in the exchange of substances between the hepatocyte, the vessels and the biliary canaliculi. The sinusoidal surfaces are separated from the sinusoids by the perisinusoidal space. They represent 70% of the total hepatocyte surface. They are covered by microvilli which project into the perisinusoidal space. These surfaces are the place where the exchange of substances between the hepatocytes and the sinusoids occurs. The canalicular surfaces are the ones through which bile drains from the hepatocytes to the canaliculi. They represent 15% of the surface of the cell. The cytoplasm of the hepatocyte near canaliculi is rich in actin filaments,
{ "page_id": 39326219, "source": null, "title": "Liver cytology" }
and they are probably capable of modifying the canaliculi's diameter, thus influencing the flow; however, this is not yet proven. The intercellular surfaces are the ones between two adjacent hepatocytes that are not in contact with sinusoids or canaliculi. These are simple surfaces specialized in cellular adherence and in communication between hepatocytes through gap junctions. Hepatocytes measure between 20 and 30 μm in each dimension. They are in charge of carrying out all the functions of the liver, such as the metabolism of lipids, carbohydrates and proteins, as well as the processing of hormones and drugs. Hepatocytes constitute about 80% of the cell population of the liver, with the other 20% consisting of Kupffer cells, hepatic stellate cells, endothelial cells and mesothelial cells, which are not exactly characteristic of the liver, but are present in liver samples. Histologically speaking, hepatocytes have specific characteristics. Their nuclei are large and spheroidal, occupying the center of the cell. There is at least one nucleolus in each nucleus. In the adult liver, most of the cells are binucleated, and most of the hepatocytes are tetraploid, which means that they have four times the normal amount of DNA. Their average lifespan is approximately five months, and hepatocytes have a significant regeneration capacity after parenchymal loss caused by toxic processes, diseases or surgeries. Their cytoplasm is mostly acidophilic. Basophilic regions correspond to the RER and free ribosomes. Mitochondria are abundant in hepatocytes, from 800 to 1000 per cell. They can be detected using Janus green B or enzyme histochemistry. Hepatocytes possess multiple Golgi complexes, and have large numbers of peroxisomes, which can be detected with immunohistochemistry. Smooth endoplasmic reticulum can be extensive and may contain enzymes involved in degradation and conjugation of toxins and drugs, and other enzymes involved in the
{ "page_id": 39326219, "source": null, "title": "Liver cytology" }
synthesis of cholesterol and lipoproteins. == Kupffer cells == Liver sinusoids are different from the rest of the body's sinusoids in that they have macrophage cells intercalated between their endothelial cells. Kupffer cells have a different embryological origin, coming from the myeloid line in the reticuloendothelial system (also called the mononuclear phagocyte system), and are related to the immune system. They first develop in the bone marrow and then migrate to the liver, where they differentiate into Kupffer cells. In fact, they are the macrophages of the liver and are located in the sinusoids. Sinusoids are vascular channels that receive blood from terminal branches of the hepatic artery and portal vein and make it flow to the central veins. The space between the endothelium and the hepatocytes is known as the space of Disse. Histologically speaking, Kupffer cells are difficult to identify; however, they are easily seen if they have phagocytosed stained particles. The main function of the Kupffer cells is the destruction of old blood cells that pass through the liver. == Hepatic stellate cells == In the perisinusoidal space, a different type of cell can be found. These cells are characteristic of the liver, since they are not found in any other tissue. These hepatic stellate cells, also named lipocytes, have lipid droplets in their cytosol. It is thought that these droplets store a fraction of the body's vitamin A supply. Hepatic stellate cells rest on the trabeculae of Remak and extend processes toward the sinusoids.
{ "page_id": 39326219, "source": null, "title": "Liver cytology" }
A nuclear emulsion plate is a type of particle detector first used in nuclear and particle physics experiments in the early decades of the 20th century. It is a modified form of photographic plate that can be used to record and investigate fast charged particles like alpha-particles, nucleons, leptons or mesons. After exposing and developing the emulsion, single particle tracks can be observed and measured using a microscope. == Description == The nuclear emulsion plate is a modified form of photographic plate, coated with a thicker photographic emulsion of gelatine containing a higher concentration of very fine silver halide grains, the exact composition of the emulsion being optimised for particle detection. It has the primary advantage of extremely high spatial precision and resolution, limited only by the size of the silver halide grains (sub-micron); precision and resolution that surpass even the best of modern particle detectors. A stack of emulsion plates, effectively forming a block of emulsion, can record and preserve the interactions of particles so that their trajectories are recorded in 3-dimensional space as a trail of silver-halide grains, which can be viewed from any aspect on a microscopic scale. In addition, the emulsion plate is an integrating device that can be exposed or irradiated until the desired amount of data has been accumulated. It is compact, with no associated read-out cables or electronics, allowing the plates to be installed in very confined spaces and, compared to other detector technologies, is significantly less expensive to manufacture, operate and maintain. These features were decisive in enabling the high-altitude, mountain and balloon based studies of cosmic rays that led to the discovery of the pi-meson and parity-violating charged K-meson decays, shedding light on the true nature and extent of
{ "page_id": 9769489, "source": null, "title": "Nuclear emulsion" }
the subnuclear "particle zoo", defining a milestone in the development of modern experimental particle physics. The chief disadvantage of nuclear emulsion is that it is a dense and complex material (silver, bromine, carbon, nitrogen, oxygen) which potentially impedes the flight of particles to other detector components through multiple scattering and ionising energy loss. Finally, the development and scanning of large volumes of emulsion, to obtain useful, 3-dimensional digitised data, has in the past been a slow and labour intensive process. However, recent developments in automation of the process may overcome that drawback. These disadvantages, coupled with the emergence of new particle detector and particle accelerator technologies, led to a decline in use of nuclear emulsion plates in particle physics towards the end of the 20th century. However there remains a continuing use of the method in the study of rare processes and in other branches of science, such as autoradiography in medicine and biology. For a comprehensive and technically detailed account of the subject refer to the books by Powell, Fowler and Perkins and by Barkas. For an extensive review of the history and wider scientific context of the nuclear emulsion method, refer to the book by Galison. == History == Following the 1896 discovery of radioactivity by Henri Becquerel using photographic emulsion, Ernest Rutherford, working first at McGill University in Canada, then at the University of Manchester in England, was one of the first physicists to use that method to study in detail the radiation emitted by radioactive materials. In 1905 he was using commercially available photographic plates to continue his research into the properties of the recently discovered alpha rays produced in the radioactive decay of some atomic nuclei. This involved analysing the darkening of photographic plates caused by irradiation with the alpha rays. This darkening was enabled
{ "page_id": 9769489, "source": null, "title": "Nuclear emulsion" }
by the interaction of the many charged alpha particles making up the rays with silver halide grains in the photographic emulsion, which were made visible by photographic development. Rutherford encouraged his research colleague at Manchester, Kinoshita Suekiti, to investigate in more detail the photographic action of the alpha-particles. Kinoshita included in his objectives "to see whether a single 𝛂-particle produced a detectable photographic event". His method was to expose the emulsion to radiation from a well-measured radioactive source, for which the emission rate of 𝛂-particles was known. He used that knowledge and the relative proximity of the plate to the source to compute the number of 𝛂-particles expected to traverse the plate. He compared that number with the number of developed halide grains he counted in the emulsion, taking careful account of 'background radiation' that produced additional 'non-alpha' grains in the exposure. He completed this research project in 1909, showing that it was possible "by preparing an emulsion film of very fine silver halide grains, and by using a microscope of high magnification, that the photographic method can be applied for counting 𝛂-particles with considerable accuracy". This was the first time that the observation of individual charged particles by means of a photographic emulsion had been achieved. However, that was the detection of individual particle impacts, not the observation of a particle's extended trajectory.
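Kinoshita's expectation rests on simple counting geometry: an isotropic source of known activity, exposed for a known time, should produce a number of traversals equal to the emission rate times the exposure time times the fraction of the full sphere subtended by the plate. A back-of-envelope sketch in Python with invented numbers (not Kinoshita's actual source strength or geometry):

```python
import math

# Back-of-envelope version of Kinoshita's counting argument: an isotropic
# source of known activity and a small plate at distance d, so the expected
# number of alpha traversals is rate * time * (subtended solid angle / 4*pi).
# All numbers below are invented for illustration, not Kinoshita's setup.
emission_rate = 50.0    # alpha-particles emitted per second
exposure_time = 600.0   # seconds
plate_area_cm2 = 1.0    # sensitive emulsion area (small relative to d)
distance_cm = 10.0      # source-to-plate distance

solid_angle_fraction = plate_area_cm2 / (4 * math.pi * distance_cm ** 2)
expected_hits = emission_rate * exposure_time * solid_angle_fraction
print(f"expected alpha traversals ~ {expected_hits:.0f}")   # ~24 here
# The count of developed grains, minus the measured background rate,
# is then compared with this expectation to validate the method.
```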
{ "page_id": 9769489, "source": null, "title": "Nuclear emulsion" }
Soon after that, in 1911, Max Reinganum showed that the passage of an 𝛂-particle at glancing incidence through a photographic emulsion produced, when the emulsion was developed, a row of silver halide grains outlining the trajectory of the 𝛂-particle; the first recorded observation of an extended particle track in an emulsion. The next steps would naturally have been to apply this technique to the detection and research of other particle types, including the cosmic rays newly discovered by Victor Hess in 1912. However, progress was halted by the onset of World War I in 1914. The outstanding issue of improving the particle detection performance of standard photographic emulsions, in order to detect other types of particle - protons, for example, produce about one quarter of the ionisation caused by an 𝛂-particle - was taken up again by various physical research laboratories in the 1920s. In particular Marietta Blau, working at the Institute for Radium Research, Vienna in Austria, began in 1923 to investigate alternative types of photographic emulsion plates for the detection of protons, known as "H-rays" at that time. She used a radioactive source of 𝛂-particles to irradiate paraffin wax, which has a high content of hydrogen. An 𝛂-particle may collide with a hydrogen nucleus (proton), knocking that proton out of the wax and into the photographic emulsion, where it produces a visible track of silver halide grains. After many trials, using different plates and careful shielding of the emulsion from unwanted radiation, she succeeded in making the first ever observation of proton tracks in a nuclear emulsion. By an ingenious example of lateral thinking, she applied a similar method to make the first ever observation of the impact of neutrons in nuclear emulsion. Being electrically neutral the neutron cannot, of course, be directly detected in a photographic emulsion, but if it strikes a proton in the emulsion, that recoiling proton can be detected. She used this method to determine the energy spectrum of neutrons resulting from specific nuclear reaction processes. She developed a method to determine proton energies by measuring the exposed grain density along their tracks (fast minimum-ionising particles interact with fewer grains than slow particles). To record the long tracks of fast protons more accurately, she enlisted British film manufacturer Ilford (now
{ "page_id": 9769489, "source": null, "title": "Nuclear emulsion" }
Ilford Photo) to thicken the emulsion on its commercial plates, and she experimented with other emulsion parameters — grain size, latent image retention, development conditions — to improve the visibility of alpha-particle and fast-proton tracks. In 1937, Marietta Blau and her former student Hertha Wambacher discovered nuclear disintegration stars (Zertrümmerungssterne) due to spallation in nuclear emulsions that had been exposed to cosmic radiation at a height of 2,300 m on the Hafelekarspitze above Innsbruck. This discovery caused a sensation in the world of nuclear and cosmic ray physics, and brought the nuclear emulsion method to the attention of a wider audience. But the onset of political unrest in Austria and Germany, leading to World War II, brought a sudden halt to progress in that field of research for Marietta Blau. In 1938, the German physicist Walter Heitler, who had escaped Germany as a scientific refugee to live and work in England, was at Bristol University researching a number of theoretical topics, including the formation of cosmic ray showers. He mentioned to Cecil Powell, at that time considering the use of cloud chambers for cosmic ray detection, that in 1937 the two Viennese physicists, Blau and Wambacher, had exposed photographic emulsions in the Austrian Alps and had seen the tracks of low energy protons as well as 'stars' or nuclear disintegrations caused by cosmic rays. This intrigued Powell, who convinced Heitler to travel to Switzerland with a batch of Ilford half-tone emulsions and expose them on the Jungfraujoch at 3,500 m. In a letter to 'Nature' in August 1939, they were able to confirm the observations of Blau and Wambacher. Although war brought a decisive halt to cosmic ray research in Europe between 1939 and 1945, in India Debendra Mohan Bose and Bibha Chowdhuri, working at the Bose Institute, Kolkata, undertook a
{ "page_id": 9769489, "source": null, "title": "Nuclear emulsion" }
series of high altitude mountain-top experiments using photographic emulsion to detect and analyse cosmic rays. These measurements were notable for the first ever detection of muons by the photographic method: Chowdhuri's painstaking analysis of the observed tracks' properties, including exposed halide grain densities with range and multiple-scattering correlations, revealed the detected particles to have a mass about 200 times that of the electron - the same ‘mesotron’ (later ‘mu-meson’, now muon) discovered in 1936 by Anderson and Neddermeyer using a cloud chamber. Distance and circumstances denied Bose and Chowdhuri the relatively easy access to manufacturers of photographic plates available to Blau and later, to Heitler, Powell et al. It meant that Bose and Chowdhuri had to use standard commercial half-tone emulsions, rather than nuclear emulsions specifically designed for particle detection, which makes the quality of their work even more remarkable. Following on from those developments, after World War II, Powell and his research group at Bristol University collaborated with Ilford to further optimise emulsions for the detection of cosmic ray particles. Ilford produced a concentrated ‘nuclear-research’ emulsion containing eight times the normal amount of silver bromide per unit volume (see External Link to 'Nuclear emulsions by Ilford'). Powell's group first calibrated the new ‘nuclear-research’ emulsions using the University of Cambridge Cockcroft-Walton generator/accelerator, which provided artificial disintegration particles as probes to measure the required range-energy relations for charged particles in the new emulsion. They subsequently used these emulsions to make two of the most significant discoveries in physics of the 20th century. First, in 1947, Cecil Powell, César Lattes, Giuseppe Occhialini and Hugh Muirhead (University of Bristol), using plates exposed to cosmic rays at the Pic du Midi Observatory in the Pyrenees and scanned by Irene Roberts and Marietta Kurz, discovered the charged Pi-meson. Second, two years later,
{ "page_id": 9769489, "source": null, "title": "Nuclear emulsion" }
in 1949, analysing plates exposed at the Sphinx Observatory on the Jungfraujoch in Switzerland, the first precise observations of the positive K-meson and its ‘strange’ decays were made by Rosemary Brown (now Rosemary Fowler), a research student in Cecil Powell's group at Bristol. The particle was then known as the ‘Tau meson’; precise measurement of these K-meson decay modes, central to the Tau-theta puzzle, led to the introduction of the quantum concept of strangeness and to the discovery of parity violation in the weak interaction. Rosemary Brown called the striking four-track emulsion image, of one ‘Tau’ decaying to three charged pions, her "K track", thus effectively naming the newly discovered ‘strange’ K-meson. Cecil Powell was awarded the 1950 Nobel Prize in Physics "for his development of the photographic method of studying nuclear processes and his discoveries regarding mesons made with this method". The emergence of new particle detector and particle accelerator technologies, coupled with the disadvantages noted in the introduction, led to a decline in the use of nuclear emulsion plates in particle physics towards the end of the 20th century. However, there remained a continuing use of the method in the study of rare interactions and decay processes. More recently, searches for "physics beyond the Standard Model", in particular the study of neutrinos and dark matter in their exceedingly rare interactions with normal matter, have led to a revival of the technique, including automation of emulsion image processing. Examples are the OPERA experiment, studying neutrino oscillations at the Gran Sasso Laboratory in Italy, and the FASER experiment at the CERN LHC, which will search for new, light and weakly interacting particles, including dark photons. == Other applications == There exist a number of scientific and technical fields where the ability of nuclear emulsion to accurately record the position, direction and energy of electrically charged particles,
{ "page_id": 9769489, "source": null, "title": "Nuclear emulsion" }
or to integrate their effect, has found application. These applications in most cases involve the tracing of implanted radioactive markers by autoradiography. Examples are: medical research, biological research, metallurgy, reactive surface chemistry, radiation protection, muon tomography (muography), and archaeology. == References & Footnotes == == External links == Nuclear emulsions by Ilford
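A minimal Python sketch of Kinoshita's counting estimate described earlier, for illustration only; every numerical value below is an invented assumption, not data from his experiments.

```python
import math

# All numbers are hypothetical, chosen only to illustrate the bookkeeping.
activity_bq = 50.0        # alpha emissions per second from the source (assumed)
exposure_s = 600.0        # exposure time in seconds (assumed)
distance_cm = 2.0         # source-to-plate distance (assumed)
examined_area_cm2 = 0.5   # emulsion area examined under the microscope (assumed)

# For an isotropic source, the fraction of particles hitting a small,
# distant patch of area A at distance d is approximately A / (4*pi*d^2).
solid_angle_fraction = examined_area_cm2 / (4.0 * math.pi * distance_cm**2)
expected_hits = activity_bq * exposure_s * solid_angle_fraction

counted_grains = 310      # developed grains counted in the emulsion (assumed)
background_grains = 15    # grains attributed to background radiation (assumed)
observed_alphas = counted_grains - background_grains

print(f"expected ~{expected_hits:.0f} alpha traversals, observed {observed_alphas}")
```

Agreement between the expected and background-corrected counts is what established the method's accuracy.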
{ "page_id": 9769489, "source": null, "title": "Nuclear emulsion" }
A tertiary carbon atom is a carbon atom bound to three other carbon atoms. For this reason, tertiary carbon atoms are found only in hydrocarbons containing at least four carbon atoms. In saturated hydrocarbons, which contain only carbon-carbon single bonds, tertiary carbons have sp3 hybridization. Tertiary carbon atoms can occur, for example, in branched alkanes, but not in linear alkanes. == Nomenclature == In the general structure, R denotes the functional group attached to the tertiary carbon. If the functional group were an OH group, the compound would commonly be called tert-butanol or t-butanol. When a functional group is attached to a tertiary carbon, the prefix tert- (t-) is used in the common name for the compound. == Significance == === Carbocation Stability === Tertiary carbons form the most stable carbocations due to a combination of factors. The three alkyl groups on the tertiary carbon contribute to a strong inductive effect: each alkyl group shares its electron density with the central carbocation, stabilizing it. Additionally, the surrounding sp3 hybridized carbons can stabilize the carbocation through hyperconjugation. This occurs when adjacent sp3 orbitals have a weak overlap with the vacant p orbital; since there are three surrounding carbons with sp3 hybridization, there are more opportunities for overlap, which increases carbocation stability. === Reaction Mechanisms === A tertiary carbocation maximizes the rate of an SN1 reaction by providing a stable carbocation. This is because the rate-determining step of an SN1 reaction is the formation of the carbocation. The rate of the reaction therefore depends on the stability of the carbocation: a more stable carbocation implies a lower-energy transition state and hence a lower activation energy. Tertiary
{ "page_id": 16912915, "source": null, "title": "Tertiary carbon" }
carbons are similarly preferred in E1 reactions, which likewise proceed through a carbocation intermediate. E1 and E2 reactions follow Zaitsev's rule, which states that the most substituted product of an elimination reaction will be the major product because it is favored for its stability. This leads to tertiary carbons being preferred for their stability in elimination reactions. In general, SN2 reactions do not occur at tertiary carbons because of the steric hindrance produced by the substituent groups. However, recent research has demonstrated, for the first time, that a bimolecular nucleophilic substitution (SN2) reaction can occur at a tertiary carbon. == References ==
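The link between carbocation stability and SN1 rate can be made quantitative with the Arrhenius equation, k = A·exp(-Ea/RT): a more stable carbocation lowers the activation energy Ea and so raises the rate constant. A hedged Python sketch follows; the activation energies are invented placeholders chosen only to show the trend, not measured values.

```python
import math

R = 8.314       # gas constant, J/(mol*K)
T = 298.15      # temperature, K

def arrhenius(ea_kj_per_mol, a=1e13):
    """Rate constant from k = A * exp(-Ea / (R*T)); A is a generic pre-factor."""
    return a * math.exp(-ea_kj_per_mol * 1000.0 / (R * T))

# Hypothetical barriers, chosen only to show the trend; not measured values.
barriers_kj = {"primary": 120.0, "secondary": 105.0, "tertiary": 90.0}

for name, ea in barriers_kj.items():
    print(f"{name:9s} Ea = {ea:5.1f} kJ/mol -> k ~ {arrhenius(ea):.2e} s^-1")
```

Even a modest drop in the barrier changes the rate constant by several orders of magnitude, which is why SN1 reactivity strongly favors tertiary substrates.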
{ "page_id": 16912915, "source": null, "title": "Tertiary carbon" }
In physical chemistry, there are numerous quantities associated with chemical compounds and reactions; notably in terms of amounts of substance, activity or concentration of a substance, and the rate of reaction. This article uses SI units. == Introduction == Theoretical chemistry requires quantities from core physics, such as time, volume, temperature, and pressure. But the highly quantitative nature of physical chemistry, in a more specialized way than core physics, uses molar amounts of substance rather than simply counting numbers; this leads to the specialized definitions in this article. Core physics itself rarely uses the mole, except in areas overlapping thermodynamics and chemistry. == Notes on nomenclature == Entity refers to the type of particle/s in question, such as atoms, molecules, complexes, radicals, ions, electrons etc. Conventionally for concentrations and activities, square brackets [ ] are used around the chemical molecular formula. For an arbitrary atom, generic letters in upright non-bold typeface such as A, B, R, X or Y etc. are often used. No standard symbols are used for the following quantities, as specifically applied to a substance:
the mass of a substance m,
the number of moles of the substance n,
partial pressure of a gas in a gaseous mixture p (or P),
some form of energy of a substance (for chemistry enthalpy H is common),
entropy of a substance S,
the electronegativity of an atom or chemical bond χ.
Usually the symbol for the quantity with a subscript of some reference to the quantity is used, or the quantity is written with the reference to the chemical in round brackets. For example, the mass of water might be written in subscripts as mH2O, mwater, maq, mw (if clear from context) etc., or simply as m(H2O). Another example could be the electronegativity of the fluorine-fluorine covalent bond, which might
{ "page_id": 33034771, "source": null, "title": "Defining equation (physical chemistry)" }
be written with subscripts χF-F, χFF or χF-F etc., or brackets χ(F-F), χ(FF) etc. Neither is standard. For the purpose of this article, the nomenclature is as follows, closely (but not exactly) matching standard use. For general equations with no specific reference to an entity, quantities are written as their symbols with an index to label the component of the mixture - i.e. qi. The labeling is arbitrary in initial choice, but once chosen fixed for the calculation. If any reference to an actual entity (say hydrogen ions H+) or any entity at all (say X) is made, the quantity symbol q is followed by curved ( ) brackets enclosing the molecular formula of X, i.e. q(X), or for a component i of a mixture q(Xi). No confusion should arise with the notation for a mathematical function. == Quantification == === General basic quantities === === General derived quantities === == Kinetics and equilibria == The defining formulae for the equilibrium constants Kc (all reactions) and Kp (gaseous reactions) apply to the general chemical reaction: $\nu_1 \mathrm{X}_1 + \nu_2 \mathrm{X}_2 + \cdots + \nu_r \mathrm{X}_r \rightleftharpoons \eta_1 \mathrm{Y}_1 + \eta_2 \mathrm{Y}_2 + \cdots + \eta_p \mathrm{Y}_p$ and the defining equation for the rate constant k applies to the simpler synthesis reaction (one product only): $\nu_1 \mathrm{X}_1 + \nu_2 \mathrm{X}_2 + \cdots + \nu_r \mathrm{X}_r \longrightarrow \eta \mathrm{Y}$ where: i = dummy index labelling component i of the reactant mixture, j = dummy index labelling component j of the product mixture, Xi = component i of the
{ "page_id": 33034771, "source": null, "title": "Defining equation (physical chemistry)" }
reactant mixture, Yj = component j of the product mixture, r (as an index) = number of reactant components, p (as an index) = number of product components, νi = stoichiometry number for component i in the reactant mixture, ηj = stoichiometry number for component j in the product mixture, σi = order of reaction for component i in the reactant mixture. The dummy indices on the substances X and Y label the components (arbitrary but fixed for calculation); they are not the numbers of molecules of each component as in usual chemistry notation. The units for the chemical constants are unusual since they can vary depending on the stoichiometry of the reaction, and the number of reactant and product components. The general units for equilibrium constants can be determined by the usual methods of dimensional analysis (a short numerical illustration is given at the end of this article). For the generality of the kinetics and equilibria units below, let the indices for the units be: $S_1 = \sum_{j=1}^{p} \eta_j - \sum_{i=1}^{r} \nu_i\,, \quad S_2 = 1 - \sum_{i=1}^{r} \sigma_i\,.$ == Electrochemistry == Notation for half-reaction standard electrode potentials is as follows. The redox reaction $\mathrm{A} + \mathrm{BX} \rightleftharpoons \mathrm{B} + \mathrm{AX}$ is split into: a reduction reaction: $\mathrm{B}^+ + \mathrm{e}^- \rightleftharpoons \mathrm{B}$ and an oxidation reaction: $\mathrm{A}^+ + \mathrm{e}^- \rightleftharpoons \mathrm{A}$ (written this way by convention). The electrode potentials for the half reactions are written as $E^\ominus(\mathrm{A}^+ | \mathrm{A})$ and $E^\ominus(\mathrm{B}^+ | \mathrm{B})$
{ "page_id": 33034771, "source": null, "title": "Defining equation (physical chemistry)" }
respectively. For the case of a metal-metal half electrode, letting M represent the metal and z be its valency, the half reaction takes the form of a reduction reaction: $\mathrm{M}^{z+} + z\mathrm{e}^- \rightleftharpoons \mathrm{M}$ == Quantum chemistry == == References == == Sources == Physical chemistry, P.W. Atkins, Oxford University Press, 1978, ISBN 0-19-855148-7 Chemistry, Matter and the Universe, R.E. Dickerson, I. Geis, W.A. Benjamin Inc. (USA), 1976, ISBN 0-8053-2369-4 Chemical thermodynamics, D.J.G. Ives, University Chemistry Series, Macdonald Technical and Scientific co. ISBN 0-356-03736-3. Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974, ISBN 0-201-05229-6 Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, ISBN 978-0-471-91533-1 == Further reading == Quanta: A handbook of concepts, P.W. Atkins, Oxford University Press, 1974, ISBN 0-19-855493-1 Molecular Quantum Mechanics Parts I and II: An Introduction to QUANTUM CHEMISTRY (Volume 1), P.W. Atkins, Oxford University Press, 1977, ISBN 0-19-855129-0 Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009, ISBN 978-1-4200-7368-3 Properties of matter, B.H. Flowers, E. Mendoza, Manchester Physics Series, J. Wiley and Sons, 1970, ISBN 978-0-471-26498-9
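As a worked illustration of the unit indices S1 and S2 defined above, the following Python snippet evaluates them for a hypothetical example reaction, N2 + 3 H2 ⇌ 2 NH3, assuming (purely for illustration) that each reaction order σi equals the stoichiometry number νi:

```python
# Unit bookkeeping for Kc and k via the indices S1 and S2 defined above.
# Hypothetical example reaction: N2 + 3 H2 <=> 2 NH3.
nu = [1, 3]      # reactant stoichiometry numbers nu_i (example values)
eta = [2]        # product stoichiometry numbers eta_j (example values)
sigma = [1, 3]   # reaction orders sigma_i; equal to nu_i here by assumption

S1 = sum(eta) - sum(nu)   # Kc carries units of (mol dm^-3)^S1
S2 = 1 - sum(sigma)       # k carries units of (mol dm^-3)^S2 s^-1

print(f"Kc has units (mol dm^-3)^{S1}")        # (mol dm^-3)^-2 for this reaction
print(f"k has units (mol dm^-3)^{S2} s^-1")    # (mol dm^-3)^-3 s^-1 for this reaction
```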
{ "page_id": 33034771, "source": null, "title": "Defining equation (physical chemistry)" }
Johan Peter Holtsmark (13 February 1894 – 10 December 1975) was a Norwegian physicist, who studied spectral line broadening and electron scattering. In 1929, while at the Norwegian Institute of Technology, Holtsmark established acoustics research laboratories, focusing on architectural acoustics and sound insulation. Holtsmark was also a consultant for the Norwegian Broadcasting Corporation (NRK) throughout the 1930s. Together with the Swedish physicist Hilding Faxén, Holtsmark published a work in 1927 on the scattering of electrons in gases. In it they introduced a new mathematical method based upon partial waves; this partial-wave analysis is now standard and is described in almost every modern book on quantum mechanics. Between 1934 and 1937 he led the construction of a Van de Graaff accelerator at the Norwegian Institute of Technology, which became the first particle accelerator to go into operation in Scandinavia. Holtsmark was one of the founding fathers of CERN and represented Norway on the European Council for Nuclear Research, which later led to the establishment of the organization itself. He was awarded the Fridtjof Nansen Excellent Research Award in 1969, was a fellow of the Norwegian Academy of Science and Letters from 1925 and of the Royal Norwegian Society of Sciences and Letters from 1926. == References ==
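In the partial-wave method that Faxén and Holtsmark introduced, the total elastic cross section is built from phase shifts δl as σ = (4π/k²) Σl (2l+1) sin²δl. A minimal Python sketch follows; the wavenumber and phase shifts are arbitrary made-up values, used purely to show how the sum is assembled:

```python
import math

# Hypothetical inputs: an incident wavenumber and low-order phase shifts.
k = 1.2                                # incident wavenumber (arbitrary units, assumed)
phase_shifts = [0.9, 0.4, 0.1, 0.02]   # delta_l for l = 0..3 (invented values)

# Partial-wave expansion of the total elastic cross section:
# sigma = (4*pi/k^2) * sum_l (2l+1) * sin^2(delta_l)
sigma = (4.0 * math.pi / k**2) * sum(
    (2 * l + 1) * math.sin(delta) ** 2 for l, delta in enumerate(phase_shifts)
)
print(f"total cross section ~ {sigma:.3f} (arbitrary units)")
```

At low energies only a few phase shifts contribute appreciably, which is what makes the expansion so practical.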
{ "page_id": 20976147, "source": null, "title": "Johan Peter Holtsmark" }
Since the first printing of Carl Linnaeus's Species Plantarum in 1753, plants have been assigned one epithet or name for their species and one name for their genus, a grouping of related species. Thousands of plants have been named for people, including botanists and their colleagues, plant collectors, horticulturists, explorers, rulers, politicians, clerics, doctors, philosophers and scientists. Even before Linnaeus, botanists such as Joseph Pitton de Tournefort, Charles Plumier and Pier Antonio Micheli were naming plants for people, sometimes in gratitude for the financial support of their patrons. Early works researching the naming of plant genera include an 1810 glossary by Alexandre de Théis and an etymological dictionary in two editions (1853 and 1856) by Georg Christian Wittstein. Modern works include The Gardener's Botanical by Ross Bayton, Index of Eponymic Plant Names and Encyclopedia of Eponymic Plant Names by Lotte Burkhardt, Plants of the World by Maarten J. M. Christenhusz (lead author), Michael F. Fay and Mark W. Chase, The A to Z of Plant Names by Allen J. Coombes, the four-volume CRC World Dictionary of Plant Names by Umberto Quattrocchi, and Stearn's Dictionary of Plant Names for Gardeners by William T. Stearn; these supply the seed-bearing genera listed in the first column below. Excluded from this list are genus names not accepted (as of January 2021) at Plants of the World Online, which includes updates to Plants of the World (2017). == Key ==
Ba = listed in Bayton's The Gardener's Botanical
Bt = listed in Burkhardt's Encyclopedia of Eponymic Plant Names
Bu = listed in Burkhardt's Index of Eponymic Plant Names
Ch = listed in Christenhusz's Plants of the World
Co = listed in Coombes's The A to Z of Plant Names
Qu = listed in Quattrocchi's CRC World Dictionary of Plant Names
St = listed in Stearn's
{ "page_id": 66261527, "source": null, "title": "List of plant genera named for people (D–J)" }
Dictionary of Plant Names for Gardeners
In addition, Burkhardt's Index is used as a reference for every row in the table, except as noted. == Genera == == See also == List of plant genus names with etymologies: A–C, D–K, L–P, Q–Z List of plant family names with etymologies == Notes == == Citations == == References == Bayton, Ross (2020). The Gardener's Botanical: An Encyclopedia of Latin Plant Names. Princeton, New Jersey: Princeton University Press. ISBN 978-0-691-20017-0. Burkhardt, Lotte (2018). Verzeichnis eponymischer Pflanzennamen – Erweiterte Edition [Index of Eponymic Plant Names – Extended Edition] (pdf) (in German). Berlin: Botanic Garden and Botanical Museum, Freie Universität Berlin. doi:10.3372/epolist2018. ISBN 978-3-946292-26-5. S2CID 187926901. Retrieved January 1, 2021. See http://creativecommons.org/licenses/by/4.0/ for license. Burkhardt, Lotte (2022). Eine Enzyklopädie zu eponymischen Pflanzennamen [Encyclopedia of eponymic plant names] (pdf) (in German). Berlin: Botanic Garden and Botanical Museum, Freie Universität Berlin. doi:10.3372/epolist2022. ISBN 978-3-946292-41-8. S2CID 246307410. Retrieved January 27, 2022. See http://creativecommons.org/licenses/by/4.0/ for license. Christenhusz, Maarten; Fay, Michael Francis; Chase, Mark Wayne (2017). Plants of the World: An Illustrated Encyclopedia of Vascular Plants. Chicago, Illinois: Kew Publishing and The University of Chicago Press. ISBN 978-0-226-52292-0. Coombes, Allen (2012). The A to Z of Plant Names: A Quick Reference Guide to 4000 Garden Plants. Portland, Oregon: Timber Press. ISBN 978-1-60469-196-2. Cullen, Katherine E. (2006). Biology: The People Behind the Science. New York, New York: Infobase Publishing. ISBN 978-0-8160-7221-7. POWO (2019). "Plants of the World Online". London: Royal Botanic Gardens, Kew. Archived from the original on March 22, 2017. Retrieved January 1, 2021. See http://www.plantsoftheworldonline.org/terms-and-conditions Archived 2021-04-23 at the Wayback Machine for license. Quattrocchi, Umberto (2000). CRC World Dictionary of Plant Names, Volume II, D–L. Boca Raton, Florida: CRC Press. ISBN 978-0-8493-2676-9. Stearn, William (2002). Stearn's Dictionary of Plant Names for Gardeners. London: Cassell. ISBN 978-0-304-36469-5. == Further
{ "page_id": 66261527, "source": null, "title": "List of plant genera named for people (D–J)" }
reading == Gledhill, David (2008). The Names of Plants. New York, New York: Cambridge University Press. ISBN 978-0-521-86645-3.
{ "page_id": 66261527, "source": null, "title": "List of plant genera named for people (D–J)" }
The Twister supersonic separator is a compact tubular device used for removing water from natural gas and/or for hydrocarbon dewpointing. The principle of operation is similar to the near-isentropic Brayton cycle of a turboexpander. The gas is accelerated to supersonic velocities within the tube using a De Laval nozzle, while inlet guide vanes spin the gas around an inner body; this creates the "ballerina effect" and centrifugally separates the water and liquids in the tube. Hydrates do not form in the Twister tube due to the very short residence time of the gas in the tube (around 2 milliseconds). A secondary separator treats the liquids and slip gas and also acts as a hydrate control vessel. Twister is able to dehydrate to typical pipeline dewpoint specifications and relies on a pressure drop from the inlet of about 25%, dependent on the performance required. The fundamental mathematics behind supersonic separation can be found in the Society of Petroleum Engineers paper (number 100442) entitled "Selective Removal of Water from Supercritical Natural Gas". The closed Twister system enables gas treatment subsea. It is a product of Twister BV, a Dutch firm acquired by WAEP Coöperatief U.A. == References == == External links == Company website Offshore Engineer Annular Twister takes subsea turn Commercial Supersonic Separator Starts Up In Nigeria Supersonic Separator Gains Market Acceptance Supersonic Separation in onshore natural gas dew point plant - May 2012
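The condensation that makes the device work can be rationalised with the ideal-gas isentropic relation T2/T1 = (p2/p1)^((γ-1)/γ): a steep pressure drop in the nozzle cools the gas far below its water dewpoint. The Python sketch below applies that relation; the heat-capacity ratio, inlet temperature, and in-nozzle pressure ratio are illustrative assumptions, not Twister design data:

```python
# Ideal-gas isentropic cooling estimate; all values are illustrative assumptions.
gamma = 1.3        # heat-capacity ratio typical of natural gas (assumed)
T1 = 300.0         # inlet temperature, K (assumed)
p_ratio = 0.30     # static pressure in the supersonic section relative to inlet (assumed)

# T2/T1 = (p2/p1)^((gamma - 1)/gamma) for a reversible adiabatic expansion.
T2 = T1 * p_ratio ** ((gamma - 1.0) / gamma)
print(f"gas cools from {T1:.0f} K to about {T2:.0f} K in the nozzle")
```

A drop of several tens of kelvin in milliseconds is what forces water and heavy hydrocarbons to condense into droplets that the swirl can then fling outward.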
{ "page_id": 7737880, "source": null, "title": "Twister supersonic separator" }
Recycling antimatter pertains to recycling antiprotons and antihydrogen atoms. == References == == Further reading == Surko, C. M.; Greaves, R. G. (2004). "Emerging science and technology of antimatter plasmas and trap-based beams" (PDF). Physics of Plasmas. 11 (5): 2333. Bibcode:2004PhPl...11.2333S. doi:10.1063/1.1651487. Archived from the original (PDF) on 7 April 2014. Retrieved 5 April 2014.
{ "page_id": 42406424, "source": null, "title": "Recycling antimatter" }
The isoenthalpic-isobaric ensemble (constant enthalpy and constant pressure ensemble) is a statistical mechanical ensemble that maintains constant enthalpy H and constant applied pressure P. It is also called the NPH-ensemble, where the number of particles N is also kept constant. It was developed by physicist H. C. Andersen in 1980. The ensemble adds another degree of freedom, which represents the variable volume V of a system to which the coordinates of all particles are relative. The volume V becomes a dynamical variable, with an associated kinetic energy and a potential energy given by PV. The enthalpy H = E + PV is a conserved quantity. Using the isoenthalpic-isobaric ensemble of the Lennard-Jones fluid, it was shown that the Joule–Thomson coefficient and inversion curve can be computed directly from a single molecular dynamics simulation. A complete vapor-compression refrigeration cycle and a vapor–liquid coexistence curve, as well as a reasonable estimate of the supercritical point, can also be simulated with this approach. NPH simulation can be carried out using GROMACS and LAMMPS.
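Since each NPH run conserves H, a series of runs at different pressures traces out an isenthalp, and the Joule–Thomson coefficient μJT = (∂T/∂P)H can be estimated by finite differences. A hedged Python sketch follows; the (P, T) pairs are invented placeholders standing in for averaged simulation output:

```python
import numpy as np

# Invented (P, T) pairs standing in for time-averaged output of NPH runs
# that all share the same enthalpy H (reduced Lennard-Jones-style units).
pressures = np.array([1.0, 2.0, 3.0, 4.0])          # assumed run pressures
temperatures = np.array([1.32, 1.29, 1.27, 1.26])   # assumed average temperatures

# Finite-difference estimate of mu_JT = (dT/dP)_H along the isenthalp.
mu_jt = np.gradient(temperatures, pressures)
print("mu_JT estimates:", mu_jt)
# A sign change in mu_JT as conditions vary would locate the inversion curve.
```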
{ "page_id": 14422554, "source": null, "title": "Isoenthalpic–isobaric ensemble" }
Localized molecular orbitals are molecular orbitals which are concentrated in a limited spatial region of a molecule, such as a specific bond or lone pair on a specific atom. They can be used to relate molecular orbital calculations to simple bonding theories, and also to speed up post-Hartree–Fock electronic structure calculations by taking advantage of the local nature of electron correlation. Localized orbitals in systems with periodic boundary conditions are known as Wannier functions. Standard ab initio quantum chemistry methods lead to delocalized orbitals that, in general, extend over an entire molecule and have the symmetry of the molecule. Localized orbitals may then be found as linear combinations of the delocalized orbitals, given by an appropriate unitary transformation. In the water molecule, for example, ab initio calculations show bonding character primarily in two molecular orbitals, each with electron density equally distributed between the two O-H bonds. The localized orbital corresponding to one O-H bond is the sum of these two delocalized orbitals, and the localized orbital for the other O-H bond is their difference, as in valence bond theory. For multiple bonds and lone pairs, different localization procedures give different orbitals. The Boys and Edmiston-Ruedenberg localization methods mix these orbitals to give equivalent bent bonds in ethylene and rabbit ear lone pairs in water, while the Pipek-Mezey method preserves their respective σ and π symmetry. == Equivalence of localized and delocalized orbital descriptions == For molecules with a closed electron shell, in which each molecular orbital is doubly occupied, the localized and delocalized orbital descriptions are in fact equivalent and represent the same physical state. It might seem, again using the example of water, that placing two electrons in the first bond and two other electrons in the second bond is not the same as having four electrons free to
{ "page_id": 9769502, "source": null, "title": "Localized molecular orbitals" }
move over both bonds. However, in quantum mechanics all electrons are identical and cannot be distinguished as same or other. The total wavefunction must have a form which satisfies the Pauli exclusion principle, such as a Slater determinant (or linear combination of Slater determinants), and it can be shown that such a function is unchanged by any unitary transformation of the doubly occupied orbitals. For molecules with an open electron shell, in which some molecular orbitals are singly occupied, the electrons of alpha and beta spin must be localized separately. This applies to radical species such as nitric oxide and dioxygen. Again, in this case the localized and delocalized orbital descriptions are equivalent and represent the same physical state. == Computation methods == Localized molecular orbitals (LMO) are obtained by unitary transformation upon a set of canonical molecular orbitals (CMO). The transformation usually involves the optimization (either minimization or maximization) of the expectation value of a specific operator. The generic form of the localization potential is: $\langle \hat{L} \rangle = \sum_{i=1}^{n} \langle \phi_i \phi_i | \hat{L} | \phi_i \phi_i \rangle$, where $\hat{L}$ is the localization operator and $\phi_i$ is a molecular spatial orbital. Many methodologies have been developed during the past decades, differing in the form of $\hat{L}$. The optimization of the objective function is usually performed using pairwise Jacobi rotations (a toy implementation is sketched at the end of this article). However, this approach is prone to saddle point convergence (if it even converges), and thus other approaches have also been developed, from simple conjugate gradient methods with exact line searches, to Newton-Raphson and trust-region methods. === Foster-Boys
{ "page_id": 9769502, "source": null, "title": "Localized molecular orbitals" }
=== The Foster-Boys (also known as Boys) localization method minimizes the spatial extent of the orbitals by minimizing $\langle \hat{L} \rangle$, where $\hat{L} = |\vec{r}_1 - \vec{r}_2|^2$. This turns out to be equivalent to the easier task of maximizing $\sum_i^n [\langle \phi_i | \vec{r} | \phi_i \rangle]^2$. In one dimension, the Foster-Boys (FB) objective function can also be written as $\langle \hat{L}_{\text{FB}} \rangle = \sum_i \langle \phi_i | (\hat{x} - \langle i | \hat{x} | i \rangle)^2 | \phi_i \rangle$. === Fourth moment === The fourth moment (FM) procedure is analogous to the Foster-Boys scheme, but the orbital fourth moment is used instead of the orbital second moment. The objective function to be minimized is $\langle \hat{L}_{\text{FM}} \rangle = \sum_i \langle \phi_i | (\hat{x} - \langle \phi_i | \hat{x} | \phi_i \rangle)^4 | \phi_i \rangle$. The fourth moment method produces more localized virtual orbitals than the Foster-Boys method, since it implies a larger penalty on the delocalized tails. For graphene (a delocalized system), the fourth moment method produces more localized occupied orbitals than the Foster-Boys and Pipek-Mezey schemes. === Edmiston-Ruedenberg === Edmiston-Ruedenberg localization maximizes the electronic self-repulsion energy by maximizing $\langle \hat{L}_{\text{ER}} \rangle$, where $\hat{L} = |\vec{r}_1 - \vec{r}_2|^{-1}$.
{ "page_id": 9769502, "source": null, "title": "Localized molecular orbitals" }
=== Pipek-Mezey === Pipek-Mezey localization takes a slightly different approach, maximizing the sum of orbital-dependent partial charges on the nuclei: $\langle \hat{L} \rangle_{\text{PM}} = \sum_A^{\text{atoms}} \sum_i^{\text{orbitals}} |q_i^A|^2$. Pipek and Mezey originally used Mulliken charges, which are mathematically ill defined. Recently, Pipek-Mezey style schemes based on a variety of mathematically well-defined partial charge estimates have been discussed. Some notable choices are Voronoi charges, Becke charges, Hirshfeld or Stockholder charges, intrinsic atomic orbital charges (see intrinsic bond orbitals), Bader charges, or "fuzzy atom" charges. Rather surprisingly, despite the wide variation in the (total) partial charges reproduced by the different estimates, analysis of the resulting Pipek-Mezey orbitals has shown that the localized orbitals are rather insensitive to the partial charge estimation scheme used in the localization process. However, due to the ill-defined mathematical nature of Mulliken charges (and Löwdin charges, which have also been used in some works), and as better alternatives are nowadays available, it is advisable to use them in favor of the original version. The most important quality of the Pipek-Mezey scheme is that it preserves σ-π separation in planar systems, which sets it apart from the Foster-Boys and Edmiston-Ruedenberg schemes that mix σ and π bonds. This property holds independent of the partial charge estimate used. While the usual formulation of the Pipek-Mezey method invokes an iterative procedure to localize the orbitals, a non-iterative method has also been recently suggested. == In organic chemistry == Organic chemistry is often discussed in terms of localized molecular orbitals in a qualitative and informal sense. Historically, much of classical organic chemistry was built on the older valence bond / orbital
{ "page_id": 9769502, "source": null, "title": "Localized molecular orbitals" }
hybridization models of bonding. To account for phenomena like aromaticity, this simple model of bonding is supplemented by semi-quantitative results from Hückel molecular orbital theory. However, the understanding of stereoelectronic effects requires the analysis of interactions between donor and acceptor orbitals between two molecules or different regions within the same molecule, and molecular orbitals must be considered. Because proper (symmetry-adapted) molecular orbitals are fully delocalized and do not admit a ready correspondence with the "bonds" of the molecule, as visualized by the practicing chemist, the most common approach is to instead consider the interaction between filled and unfilled localized molecular orbitals that correspond to σ bonds, π bonds, lone pairs, and their unoccupied counterparts. These orbitals are typically given the notation σ (sigma bonding), π (pi bonding), n (occupied nonbonding orbital, "lone pair"), p (unoccupied nonbonding orbital, "empty p orbital"; the symbol n* for unoccupied nonbonding orbital is seldom used), π* (pi antibonding), and σ* (sigma antibonding). (Woodward and Hoffmann use ω for nonbonding orbitals in general, occupied or unoccupied.) When comparing localized molecular orbitals derived from the same atomic orbitals, these classes generally follow the order σ < π < n < p (n*) < π* < σ* when ranked by increasing energy. The localized molecular orbitals that organic chemists often depict can be thought of as qualitative renderings of orbitals generated by the computational methods described above. However, they do not map onto any single approach, nor are they used consistently. For instance, the lone pairs of water are usually treated as two equivalent spx hybrid orbitals, while the corresponding "nonbonding" orbitals of carbenes are generally treated as a filled σ(out) orbital and an unfilled pure p orbital, even though the lone pairs of water could be described analogously by filled σ(out) and p orbitals (for further discussion,
{ "page_id": 9769502, "source": null, "title": "Localized molecular orbitals" }
see the article on lone pair and the discussion above on sigma-pi and equivalent-orbital models). In other words, the type of localized orbital invoked depends on context and considerations of convenience and utility. == References ==
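As referenced in the computation methods section, a toy Python sketch of Foster-Boys localization by pairwise Jacobi rotations is given below. The dipole matrix elements are random symmetric placeholders standing in for real integrals from an electronic structure code, and the optimal rotation angle per pair is found by a coarse scan rather than the analytic solution used in production codes; only the structure of the algorithm is meant to be illustrative.

```python
import numpy as np

# Toy Foster-Boys localization by pairwise Jacobi (Givens) rotations.
# r[c] holds the matrix <phi_i| x_c |phi_j> for component c in {x, y, z};
# random symmetric matrices stand in for real dipole integrals here.
rng = np.random.default_rng(0)
n = 4                                   # number of occupied orbitals (toy size)
r = rng.normal(size=(3, n, n))
r = (r + r.transpose(0, 2, 1)) / 2      # dipole matrices are symmetric for real orbitals

def objective(r):
    """Boys objective: sum over orbitals of |<phi_i| r |phi_i>|^2 (to be maximized)."""
    centroids = np.einsum("cii->ic", r)  # orbital centroids, shape (n, 3)
    return float(np.sum(centroids**2))

def rotate(r, i, j, theta):
    """Rotate orbitals i and j by angle theta in every dipole component."""
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[i, i] = g[j, j] = c
    g[i, j], g[j, i] = s, -s
    return np.einsum("pi,cpq,qj->cij", g, r, g)  # G^T r_c G for each component c

# Sweep over orbital pairs; the best angle is found by a coarse scan here,
# whereas production codes solve for it analytically.
angles = np.linspace(-np.pi / 4, np.pi / 4, 181)
for _ in range(20):                     # fixed number of sweeps; no convergence test
    for i in range(n):
        for j in range(i + 1, n):
            scores = [objective(rotate(r, i, j, a)) for a in angles]
            r = rotate(r, i, j, angles[int(np.argmax(scores))])

print("final Boys objective:", objective(r))
```

Because the scan always includes a zero angle, the objective never decreases, mirroring the monotone behaviour of the exact Jacobi update.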
{ "page_id": 9769502, "source": null, "title": "Localized molecular orbitals" }
The prey naïveté hypothesis is a theory that suggests that native prey often struggle to recognize or avoid an introduced predator because they lack a coevolutionary history with it. Prey naïveté is believed to intensify the effects of non-native predators, which can contribute significantly to the risks of extinction and endangerment of prey populations. == Overview == The prey naïveté hypothesis suggests that ineffective antipredator defenses result from a lack of evolutionary exposure to specific predators. This naïveté towards non-native predators is likely influenced by eco-evolutionary factors such as biogeographic isolation and prey adaptation. A prey species' ability to detect and evade predators can be shaped by the life history, ecology, and evolutionary context of both predator and prey. While some predator-prey systems display species-specific avoidance behaviors, many taxa require learned olfactory recognition of predators. Certain antipredator behaviors that develop in response to coevolved predators may persist over time, even in their absence, particularly when other predators are present, as suggested by the "multipredator hypothesis." For instance, rats introduced to oceanic islands have been implicated in the extinction of many mammals, birds, and reptiles that lack evolutionary experience with generalist mammalian nest predators. However, the negative effects of rats are lessened on islands with native rats or functionally similar land crabs, as the fauna on these islands appear to be less naïve to the threats posed by introduced omnivores. Prey are generally naïve towards non-native predators in marine and freshwater environments, but not in terrestrial ones. The naïveté was most significant towards non-native predators lacking native relatives in the community. Time since introduction plays a role, with prey naïveté diminishing over generations; approximately 200 generations may be needed for prey to sufficiently develop antipredator behaviors towards these non-native threats. == Driving factors == The occurrence and intensity of prey naïveté are
{ "page_id": 77992485, "source": null, "title": "Prey naiveté" }
hypothesized to arise from several interrelated factors, categorized into four themes:
Biogeographic isolation: Prey naïveté is thought to be exacerbated by evolutionary isolation between predator and prey, particularly in freshwater environments. Island ecosystems may also experience heightened naïveté due to lack of eco-evolutionary experience with both non-native and native predators.
Adaptation over time: Prey may acquire effective antipredator responses over generations following the introduction of a predator, with naïveté diminishing as prey adapt.
Latitude and biodiversity: The latitude of predator introduction may influence prey recognition, with lower latitudes possibly exhibiting higher recognition rates due to greater predation pressure and biodiversity.
Taxonomic specificity: Recognition of introduced predators may vary by taxonomic group, suggesting that certain prey species are better equipped to recognize specific predators.
== Levels of prey naïveté == Prey naïveté was initially conceptualized as a straightforward phenomenon in which native fauna become vulnerable to non-native predators due to naive behavioral responses. It is now understood to be a multifaceted issue and is classified into four distinct levels. In addition to behavioral inadequacies, prey species lacking evolutionary exposure to non-native predation may possess morphological or physiological traits that render them more susceptible to such threats, including insufficient defensive structures, flightlessness, conspicuous odors, or inadequate camouflage. Although prey naïveté is widely recognized in ecological studies, its variability under the influence of eco-evolutionary factors is not yet fully quantified. == Impact == Prey naïveté contributes significantly to the extinction and endangerment of prey species globally, as well as to the failure of wildlife reintroductions. === Mitigation === While excluding novel predators from conservation areas has had mixed results, the absence of any predators can worsen prey naïveté. Reintroducing native predators has been proposed as a potential solution to enhance prey behavioral responses. A study published in 2024 assessed the behavioral reactions of
{ "page_id": 77992485, "source": null, "title": "Prey naiveté" }
two prey species—the burrowing bettong (Bettongia lesueur) and spinifex hopping mouse (Notomys alexis)—to the reintroduction of a native predator, the western quoll (Dasyurus geoffroii), and its impact on their responses to feral cats (Felis catus). Results indicated that quoll-exposed bettongs engaged in less inattentive foraging compared to controls but did not differentiate between predator and non-predator cues. In contrast, quoll-exposed hopping mice adjusted their foraging behaviors in open areas and increased their wariness in response to quoll stimuli, while cat-exposed hopping mice only heightened their caution in the presence of cat stimuli. Although reintroducing native predators improved general antipredator responses among naïve prey populations, evidence for enhanced discrimination towards introduced predators was limited; the findings nonetheless suggest that exposure to native predators may better prepare naïve prey for environments where novel predators are present. A 2019 study explored whether exposing predator-naïve prey, specifically the greater bilby (Macrotis lagotis), to controlled numbers of introduced predators (feral cats, Felis catus) can enhance their survival upon reintroduction. Over two years, bilbies were exposed to feral cats in a fenced area, and their behaviors were assessed. Results showed that predator-exposed bilbies exhibited increased wariness—spending less time moving and more time in cover—compared to naïve bilbies. Following translocation, the predator-exposed group had higher survival rates and was less likely to be preyed upon than their naïve counterparts. The study suggests that training naïve prey in the presence of predators may improve their survival in reintroduction efforts. == References ==
{ "page_id": 77992485, "source": null, "title": "Prey naiveté" }
Iron(II) gluconate, or ferrous gluconate, is a black compound often used as an iron supplement. It is the iron(II) salt of gluconic acid. It is marketed under brand names such as Fergon, Ferralet and Simron. == Uses == === Medical === Ferrous gluconate is used effectively in the treatment of hypochromic anemia. The use of this compound, compared with other iron preparations, results in satisfactory reticulocyte responses, a high percentage utilization of iron, and a daily increase in hemoglobin such that a normal level is reached in a reasonably short time. === Food additive === Ferrous gluconate is also used as a food additive when processing black olives. It is represented by the food labeling E number E579 in Europe. It imparts a uniform jet black color to the olives. == Toxicity == Ferrous gluconate may be toxic in case of overdose. Children may show signs of toxicity with ingestions of 10–20 mg/kg of elemental iron. Serious toxicity may result from ingestions of more than 60 mg/kg. Iron exerts both local and systemic effects: it is corrosive to the gastrointestinal mucosa, and it can have a negative impact on the heart and blood (dehydration, low blood pressure, fast and weak pulse, shock), lungs, liver, gastrointestinal system (diarrhea, nausea, vomiting blood), nervous system (chills, dizziness, coma, convulsions, headache), and skin (flushing, loss of color, bluish-colored lips and fingernails). The symptoms may disappear in a few hours, but then emerge again after 1 or more days. == See also == Acceptable daily intake Iron poisoning == References ==
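The mg/kg thresholds above can be turned into a back-of-envelope screening calculation. In the Python sketch below, the tablet strength and the elemental-iron fraction of ferrous gluconate (roughly 12%) are typical published figures used here as assumptions, and the ingestion scenario is entirely hypothetical:

```python
# Back-of-envelope check against the mg/kg thresholds quoted above.
TABLET_MG = 325.0        # ferrous gluconate per tablet, a common strength (assumed)
IRON_FRACTION = 0.12     # elemental iron fraction of ferrous gluconate, ~12% (assumed)

def elemental_iron_dose(tablets, body_weight_kg):
    """Elemental iron ingested, in mg per kg of body weight."""
    return tablets * TABLET_MG * IRON_FRACTION / body_weight_kg

dose = elemental_iron_dose(tablets=10, body_weight_kg=15.0)  # hypothetical scenario
print(f"{dose:.1f} mg/kg elemental iron")
if dose > 60:
    print("above the ~60 mg/kg level associated with serious toxicity")
elif dose > 10:
    print("within the 10-20+ mg/kg range where children may show toxicity")
```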
{ "page_id": 4198953, "source": null, "title": "Iron(II) gluconate" }
The molecular formula C15H24O may refer to:
Butylated hydroxytoluene, a food additive
Khusimol
Nonylphenol
1-Nonyl-4-phenol
α-Santalol
β-Santalol
Spathulenol
{ "page_id": 7475753, "source": null, "title": "C15H24O" }
Effervescence is the escape of gas from an aqueous solution and the foaming or fizzing that results from that release. The word effervescence is derived from the Latin verb fervere (to boil), preceded by the prefix ex-. It has the same linguistic root as the word fermentation. Effervescence can also be observed when opening a bottle of champagne, beer or a carbonated beverage such as a soft drink. The visible bubbles are produced by the escape from solution of the dissolved gas (which itself is not visible while dissolved in the liquid). == In beverages == Although CO2 is the most common gas for beverages, nitrogen gas is sometimes deliberately added to certain beers. The smaller bubble size creates a smoother beer head. Due to the poor solubility of nitrogen in beer, kegs or widgets are used for this. == Chemistry == In the laboratory, a common example of effervescence is seen if hydrochloric acid is added to a block of limestone. If a few pieces of marble or an antacid tablet are put in hydrochloric acid in a test tube fitted with a bung, effervescence of carbon dioxide can be witnessed. CaCO3 + 2 HCl → CaCl2 + H2O + CO2↑ This process is generally represented by the following reaction, where a pressurized dilute solution of carbonic acid in water releases gaseous carbon dioxide at decompression: H2CO3 → H2O + CO2↑ In simple terms, it is the result of the chemical reaction occurring in the liquid which produces a gaseous product. == See also == Cavitation Carbonation Effervescent tablet Precipitation (chemistry), the "down-arrow" == References ==
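The marble-and-acid demonstration lends itself to a worked stoichiometry example. The Python sketch below estimates the CO2 volume released; the mass of marble is a hypothetical choice, and 24 dm³/mol is the approximate molar gas volume near room temperature:

```python
# Stoichiometry for CaCO3 + 2 HCl -> CaCl2 + H2O + CO2 (one mole of gas per
# mole of carbonate); quantities are assumed for illustration.
M_CACO3 = 100.09       # molar mass of CaCO3, g/mol
MOLAR_VOLUME = 24.0    # approximate molar gas volume near room temperature, dm^3/mol

mass_caco3_g = 5.0     # hypothetical amount of marble chips
moles_co2 = mass_caco3_g / M_CACO3   # 1:1 mole ratio of CaCO3 to CO2
volume_dm3 = moles_co2 * MOLAR_VOLUME

print(f"{mass_caco3_g} g CaCO3 releases about {volume_dm3:.2f} dm^3 of CO2")
```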
{ "page_id": 856618, "source": null, "title": "Effervescence" }
Bromochlorofluoroiodomethane is a hypothetical haloalkane bearing each of the four stable halogens as a substituent. == Overview == This compound can be seen as a methane molecule whose four hydrogen atoms are each replaced with a different halogen atom. As the mirror images of this molecule are not superimposable, the molecule has two enantiomers. As one of the simplest such molecules, it is often cited as the prototypical chiral compound. However, since there is no synthetic route known to produce bromochlorofluoroiodomethane, the related simple chiral compound bromochlorofluoromethane is used instead when such a compound is required for research. == References ==
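The single stereocentre responsible for the two enantiomers can be verified programmatically. The Python sketch below uses the third-party RDKit toolkit (assumed to be installed; it is not part of the standard library) and the SMILES string FC(Cl)(Br)I for the molecule:

```python
# Requires RDKit (third-party cheminformatics toolkit; assumed installed).
from rdkit import Chem

mol = Chem.MolFromSmiles("FC(Cl)(Br)I")  # the fully halogen-substituted methane
centers = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
print(centers)  # expect one carbon flagged as an unassigned stereocentre, e.g. [(1, '?')]
```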
{ "page_id": 23990825, "source": null, "title": "Bromochlorofluoroiodomethane" }
Methoxy arachidonyl fluorophosphonate, commonly referred to as MAFP, is an irreversible active site-directed enzyme inhibitor that inhibits nearly all serine hydrolases and serine proteases. It inhibits phospholipase A2 and fatty acid amide hydrolase with special potency, displaying IC50 values in the low-nanomolar range. In addition, it binds to the CB1 receptor in rat brain membrane preparations (IC50 = 20 nM), but does not appear to agonize or antagonize the receptor, though some related derivatives do show cannabinoid-like properties. == See also == DIFP – diisopropyl fluorophosphate, a related inhibitor IDFP – isopropyl dodecylfluorophosphonate, another related inhibitor with selectivity for FAAH and MAGL Activity-based probes == References ==
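To illustrate what an IC50 of 20 nM means in practice, the sketch below evaluates the standard one-site inhibition model, fraction inhibited = [I]/([I] + IC50); the tested concentrations are arbitrary illustration points, and the simple model is an assumption rather than assay methodology from the literature:

```python
# One-site inhibition curve around the IC50 quoted above for the CB1 binding assay.
ic50_nm = 20.0   # IC50 from the text, in nM

def fraction_inhibited(conc_nm, ic50=ic50_nm):
    """Fractional inhibition from a simple one-site model: [I] / ([I] + IC50)."""
    return conc_nm / (conc_nm + ic50)

for c in (2.0, 20.0, 200.0):   # arbitrary illustration concentrations, nM
    print(f"[MAFP] = {c:6.1f} nM -> {100 * fraction_inhibited(c):5.1f}% inhibition")
```

By construction, the model gives exactly 50% inhibition at the IC50 concentration.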
{ "page_id": 12063277, "source": null, "title": "Methoxy arachidonyl fluorophosphonate" }
Australasian Agribusiness Review (ISSN 1442-6951) is a peer-reviewed academic journal in the field of agribusiness. One of the initial co-editors was Bill Schroder. == References == == External links == Official website ISSN 1320-0348
{ "page_id": 25170477, "source": null, "title": "Australasian Agribusiness Review" }
A black hole is a massive, compact astronomical object so dense that its gravity prevents anything from escaping, even light. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. A black hole has a great effect on the fate and circumstances of an object crossing it, but has no locally detectable features according to general relativity. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first published the interpretation of "black hole" as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars by Jocelyn Bell Burnell in 1967 sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. The first known black hole was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole
{ "page_id": 4650, "source": null, "title": "Black hole" }
has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates quasars, some of the brightest objects in the universe. Stars passing too close to a supermassive black hole can be shredded into streamers that shine very brightly before being "swallowed." If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. == History == The idea of a body so big that even light could not escape was briefly proposed by English astronomical pioneer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars rather than the modern model of stars with extraordinary density. Michell's idea, in a short part of a letter published in 1784, calculated that a star with the same density but 500 times the radius of the sun would not let any emitted light escape; the surface escape velocity would exceed the
{ "page_id": 4650, "source": null, "title": "Black hole" }
speed of light. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. Scholars of the time were initially excited by the proposal that giant but invisible 'dark stars' might be hiding in plain view, but enthusiasm dampened when the wavelike nature of light became apparent in the early nineteenth century; since light was understood as a wave rather than a particle, it was unclear what influence, if any, gravity would have on escaping light waves. === General relativity === In 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations that describes the gravitational field of a point mass and a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass and wrote more extensively about its properties. This solution had a peculiar behaviour at what is now called the Schwarzschild radius, where it became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this surface was not quite understood at the time. In 1924, Arthur Eddington showed that the singularity disappeared after a change of coordinates. In 1933, Georges Lemaître realised that this meant the singularity at the Schwarzschild radius was a non-physical coordinate singularity. Arthur Eddington commented on the possibility of
{ "page_id": 4650, "source": null, "title": "Black hole" }
a star with mass compressed to the Schwarzschild radius in a 1926 book, noting that Einstein's theory allows us to rule out overly large densities for visible stars like Betelgeuse because "a star of 250 million km radius could not possibly have so high a density as the Sun. Firstly, the force of gravitation would be so great that light would be unable to escape from it, the rays falling back to the star like a stone to the earth. Secondly, the red shift of the spectral lines would be so great that the spectrum would be shifted out of existence. Thirdly, the mass would produce so much curvature of the spacetime metric that space would close up around the star, leaving us outside (i.e., nowhere)." In 1931, Subrahmanyan Chandrasekhar calculated, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit at 1.4 M☉) has no stable solutions. His arguments were opposed by many of his contemporaries like Eddington and Lev Landau, who argued that some yet unknown mechanism would stop the collapse. They were partly correct: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star, which is itself stable. In 1939, Robert Oppenheimer and others predicted that neutron stars above another limit, the Tolman–Oppenheimer–Volkoff limit, would collapse further for the reasons presented by Chandrasekhar, and concluded that no law of physics was likely to intervene and stop at least some stars from collapsing to black holes. Their original calculations, based on the Pauli exclusion principle, gave it as 0.7 M☉. Subsequent consideration of neutron-neutron repulsion mediated by the strong force raised the estimate to approximately 1.5 M☉ to 3.0 M☉. Observations of the neutron star merger GW170817, which is thought to have
{ "page_id": 4650, "source": null, "title": "Black hole" }
generated a black hole shortly afterward, have refined the TOV limit estimate to ~2.17 M☉. Oppenheimer and his co-authors interpreted the singularity at the boundary of the Schwarzschild radius as indicating that this was the boundary of a bubble in which time stopped. This is a valid point of view for external observers, but not for infalling observers. The hypothetical collapsed stars were called "frozen stars", because an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it to the Schwarzschild radius. Also in 1939, Einstein attempted to prove that black holes were impossible in his publication "On a Stationary System with Spherical Symmetry Consisting of Many Gravitating Masses", using his theory of general relativity to defend his argument. Months later, Oppenheimer and his student Hartland Snyder provided the Oppenheimer–Snyder model in their paper "On Continued Gravitational Contraction", which predicted the existence of black holes. In the paper, which made no reference to Einstein's recent publication, Oppenheimer and Snyder used Einstein's own theory of general relativity to show, for the first time in contemporary physics, the conditions under which a black hole could develop. ==== Golden age ==== In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, "a perfect unidirectional membrane: causal influences can cross it in only one direction". This did not strictly contradict Oppenheimer's results, but extended them to include the point of view of infalling observers. Finkelstein's solution extended the Schwarzschild solution for the future of observers falling into a black hole. A complete extension had already been found by Martin Kruskal, who was urged to publish it. These results came at the beginning of the golden age of general relativity, which was marked by general relativity and black holes becoming mainstream subjects of
{ "page_id": 4650, "source": null, "title": "Black hole" }
research. This process was helped by the discovery of pulsars by Jocelyn Bell Burnell in 1967, which, by 1969, were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. In this period more general black hole solutions were found. In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. Through the work of Werner Israel, Brandon Carter, and David Robinson the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange features of the black hole solutions were pathological artefacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. This view was held in particular by Vladimir Belinsky, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions. However, in the late 1960s Roger Penrose and Stephen Hawking used global techniques to prove that singularities appear generically. For this work, Penrose received half of the 2020 Nobel Prize in Physics, Hawking having died in 2018. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe
{ "page_id": 4650, "source": null, "title": "Black hole" }
the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. === Observation === On 11 February 2016, the LIGO Scientific Collaboration and the Virgo collaboration announced the first direct detection of gravitational waves, representing the first observation of a black hole merger. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. Gaia mission observations have found evidence of a Sun-like star orbiting a black hole named Gaia BH1 around 1,560 light-years (480 parsecs) away; evidence suggests a red giant star orbits Gaia BH2. Though only a couple dozen black holes have been found so far in the Milky Way, there are thought to be hundreds of millions, most of which are solitary and do not cause emission of radiation. Therefore, they would only be detectable by gravitational lensing. === Etymology === Science writer Marcia Bartusiak traces the term "black hole" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term "black hole" was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting
{ "page_id": 4650, "source": null, "title": "Black hole" }
of the American Association for the Advancement of Science held in Cleveland, Ohio. In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and it quickly caught on, leading some to credit Wheeler with coining the phrase. == Properties and structure == The escape velocity from a black hole exceeds the speed of light. The formula for escape velocity is $V = \sqrt{2MG/R}$ for an object at radius R from a spherical mass M, with G being the gravitational constant. When the velocity is the speed of light, c, the radius $R_s = 2MG/c^2$ is called the Schwarzschild radius. A technical definition of a black hole is any object whose mass is contained in a radius smaller than its Schwarzschild radius, a limit derived from one solution to the equations of general relativity. The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes under the laws of modern physics is currently an unsolved problem. These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analogue of Gauss's law (through the ADM mass), far away from the black hole. Likewise, the angular momentum (or spin) can be measured from far away using frame dragging by the gravitomagnetic field, through, for example, the Lense–Thirring effect.
{ "page_id": 4650, "source": null, "title": "Black hole" }
When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. In this situation the horizon behaves as a dissipative system closely analogous to a conductive stretchy membrane with friction and electrical resistance—the membrane paradigm. This is different from other field theories such as electromagnetism, which do not have any friction or resistivity at the microscopic level, because they are time-reversible. Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including approximately conserved quantum numbers such as the total baryon number and lepton number. This behaviour is so puzzling that it has been called the black hole information loss paradox. === Physical properties === The simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild, who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same mass.
{ "page_id": 4650, "source": null, "title": "Black hole" }
The popular notion of a black hole "sucking in everything" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality $\frac{Q^2}{4\pi\epsilon_0} + \frac{c^2 J^2}{GM^2} \leq GM^2$ for a black hole of mass M. Black holes with the minimum possible mass satisfying this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed unphysical. The cosmic censorship hypothesis rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. This is supported by numerical simulations. Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a universal feature of compact astrophysical objects. The black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value. That uncharged limit is $J \leq \frac{GM^2}{c}$, allowing definition of a dimensionless spin parameter such that $0 \leq \frac{cJ}{GM^2} \leq 1$.
{ "page_id": 4650, "source": null, "title": "Black hole" }
Black holes are commonly classified according to their mass, independent of angular momentum, J. The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through $r_s = \frac{2GM}{c^2} \approx 2.95\,\frac{M}{M_\odot}~\mathrm{km}$, where $r_s$ is the Schwarzschild radius and $M_\odot$ is the mass of the Sun. For a black hole with nonzero spin or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to $r_+ = \frac{GM}{c^2}$. === Event horizon === The defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can pass only inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine whether such an event occurred. As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass. At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole. In a thought experiment, a distant observer can imagine clocks near a black hole which would appear to tick more slowly than those farther away from the black hole. This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approaches the event horizon, taking an infinite amount of time to reach it. All processes on this object would appear to slow down, from the viewpoint of a fixed outside observer, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. Eventually, the falling object fades away until it can no longer be seen. Typically this process happens very rapidly, with an object disappearing from view within less than a second.
{ "page_id": 4650, "source": null, "title": "Black hole" }
On the other hand, imaginary, indestructible observers falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally; they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle. The topology of the event horizon of a black hole at equilibrium is always spherical. For non-rotating (static) black holes the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate. === Singularity === At the centre of the unrealistically simple Schwarzschild model of a black hole is a gravitational singularity, a region where the spacetime curvature becomes infinite. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation. In both cases, the singular region
{ "page_id": 4650, "source": null, "title": "Black hole" }
has zero volume. It can also be shown that the singular region contains all the mass of the black hole solution. The singular region can thus be thought of as having infinite density. Even though the Schwarzschild model is not valid at the singularity, observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. They can prolong the experience by accelerating away to slow their descent, but only up to a limit. When they reach the singularity, they are crushed to infinite density and their mass is added to the total of the black hole. Before that happens, they will have been torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the "noodle effect". In the case of a charged (Reissner–Nordström) or rotating (Kerr) black hole, it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole. The possibility of travelling to another universe is, however, only theoretical, since any perturbation would destroy this possibility. It also appears to be possible to follow closed timelike curves (returning to one's own past) around the Kerr singularity, which leads to problems with causality like the grandfather paradox. It is expected that none of these peculiar effects would survive in a proper quantum treatment of rotating and charged black holes. The appearance of singularities in general relativity signals the breakdown of the theory. This breakdown occurs in a regime where quantum effects should become important, owing to the extremely high density and the resulting particle interactions. To date, it has not been possible to combine quantum and gravitational effects into
{ "page_id": 4650, "source": null, "title": "Black hole" }
a single theory, although there exist attempts to formulate such a theory of quantum gravity. It is generally expected that such a theory will not feature singularities. === Photon sphere === The photon sphere is a spherical boundary where photons that move on tangents to that sphere would be trapped in an unstable circular orbit around the black hole. For non-rotating black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius. Their orbits would be dynamically unstable, hence any small perturbation, such as a particle of infalling matter, would cause an instability that would grow over time, either setting the photon on an outward trajectory, causing it to escape the black hole, or on an inward spiral, where it would eventually cross the event horizon. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. For a Kerr black hole the radius of the photon sphere depends on the spin parameter and on the details of the photon orbit, which can be prograde (the photon rotates in the same sense as the black hole's spin) or retrograde. === Ergosphere === Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly "drag" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect is
{ "page_id": 4650, "source": null, "title": "Black hole" }
so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but is at a much greater distance around the equator. Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole; the rotation of the black hole thereby slows down. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. === Innermost stable circular orbit (ISCO) === In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists an innermost stable circular orbit (often called the ISCO), for which any infinitesimal inward perturbations to a circular orbit will lead to spiraling into the black hole, and any outward perturbations will, depending on the energy, result in spiraling in, stably orbiting between apastron and periastron, or escaping to infinity. The location of the ISCO depends on the spin of the black hole; in the case of a Schwarzschild black hole (spin zero) it is $r_{\rm ISCO} = 3\,r_s = \frac{6GM}{c^2}$, and it decreases with increasing black hole spin for particles orbiting in the same direction as the spin.
{ "page_id": 4650, "source": null, "title": "Black hole" }
=== Plunging region === The final observable region of spacetime around a black hole is called the plunging region. In this area it is no longer possible for matter to follow circular orbits or to stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light. == Formation and evolution == Given the bizarre character of black holes, it was long questioned whether such objects could actually exist in nature or whether they were merely pathological solutions to Einstein's equations. Einstein himself wrongly thought black holes would not form, because he held that the angular momentum of collapsing particles would stabilise their motion at some radius. This led the general relativity community to dismiss all results to the contrary for many years. However, a minority of relativists continued to contend that black holes were physical objects, and by the end of the 1960s, they had persuaded the majority of researchers in the field that there is no obstacle to the formation of an event horizon. Penrose demonstrated that once an event horizon forms, general relativity without quantum mechanics requires that a singularity will form within. Shortly afterwards, Hawking showed that many cosmological solutions that describe the Big Bang have singularities without scalar fields or other exotic matter. The Kerr solution, the no-hair theorem, and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research. Conventional black holes are formed by gravitational collapse of heavy objects such as stars, but they can also in theory be formed by other processes. === Gravitational collapse === Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little "fuel" left to maintain its temperature
{ "page_id": 4650, "source": null, "title": "Black hole" }
through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight. The collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. The result is one of the various types of compact star. Which type forms depends on the mass of the remnant of the original star left after the outer layers have been blown away (for example, in a Type II supernova). The mass of the remnant, the collapsed object that survives the explosion, can be substantially less than that of the original star. Remnants exceeding 5 M☉ are produced by stars that were over 20 M☉ before the collapse. If the mass of the remnant exceeds about 3–4 M☉ (the Tolman–Oppenheimer–Volkoff limit), either because the original star was very heavy or because the remnant collected additional mass through accretion of matter, even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole. The gravitational collapse of heavy stars is assumed to be responsible for the formation of stellar mass black holes. Star formation in the early universe may have resulted in very massive stars, which upon their collapse would have produced black holes of up to 10³ M☉. These black holes could be the seeds of the supermassive black holes found in the centres of most galaxies. It has further been suggested that massive black holes with typical masses of ~10⁵ M☉ could have formed
{ "page_id": 4650, "source": null, "title": "Black hole" }
from the direct collapse of gas clouds in the young universe. These massive objects have been proposed as the seeds that eventually formed the earliest quasars observed already at redshift $z \sim 7$. Some candidates for such objects have been found in observations of the young universe. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the light emitted just before the event horizon forms delayed an infinite amount of time. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. ==== Primordial black holes and the Big Bang ==== Gravitational collapse requires great density. In the current epoch of the universe these high densities are found only in stars, but in the early universe shortly after the Big Bang densities were much greater, possibly allowing for the creation of black holes. High density alone is not enough to allow black hole formation since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to have formed in such a dense medium, there must have been initial density perturbations that could then grow under their own gravity. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. Various models predict the creation of primordial black holes ranging in size from a Planck mass ($m_P = \sqrt{\hbar c/G}$ ≈ 1.2×10¹⁹ GeV/c² ≈ 2.2×10⁻⁸ kg) to hundreds of thousands of solar masses.
{ "page_id": 4650, "source": null, "title": "Black hole" }
Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the expansion rate was greater than the attraction. According to inflation theory, there was a net repulsive gravitation in the beginning until the end of inflation. Since then the Hubble flow was slowed by the energy density of the universe. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. === High-energy collisions === Gravitational collapse is not the only process that could create black holes. In principle, black holes could be formed in high-energy collisions that achieve sufficient density. As of 2002, no such events have been detected, either directly or indirectly as a deficiency of the mass balance in particle accelerator experiments. This suggests that there must be a lower limit for the mass of black holes. Theoretically, this boundary is expected to lie around the Planck mass, where quantum effects are expected to invalidate the predictions of general relativity. This would put the creation of black holes firmly out of reach of any high-energy process occurring on or near the Earth. However, certain developments in quantum gravity suggest that the minimum black hole mass could be much lower: some braneworld scenarios, for example, put the boundary as low as 1 TeV/c². This would make it conceivable for micro black holes to be created in the high-energy collisions that occur when cosmic rays hit the Earth's atmosphere, or possibly in the Large Hadron Collider at CERN. These
{ "page_id": 4650, "source": null, "title": "Black hole" }
theories are very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists. Even if micro black holes could be formed, it is expected that they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. === Growth === Once a black hole has formed, it can continue to grow by absorbing additional matter. Any black hole will continually absorb gas and interstellar dust from its surroundings. This growth process is one possible way through which some supermassive black holes may have been formed, although the formation of supermassive black holes is still an open field of research. A similar process has been suggested for the formation of intermediate-mass black holes found in globular clusters. Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. === Evaporation === In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature $\hbar c^3/(8\pi G M k_B)$; this effect has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.
{ "page_id": 4650, "source": null, "title": "Black hole" }
A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. If a black hole is very small, the radiation effects are expected to become very strong. A black hole with the mass of a car would have a diameter of about 10⁻²⁴ m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c² would take less than 10⁻⁸⁸ seconds to evaporate completely. For such a small black hole, quantum gravity effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception, however, is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent
{ "page_id": 4650, "source": null, "title": "Black hole" }
limits on the possible existence of low-mass primordial black holes. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, continues the search for these flashes. If black holes evaporate via Hawking radiation, a solar mass black hole will evaporate (beginning once the temperature of the cosmic microwave background drops below that of the black hole) over a period of 10⁶⁴ years. A supermassive black hole with a mass of 10¹¹ M☉ will evaporate in around 2×10¹⁰⁰ years. During the collapse of a supercluster of galaxies, supermassive black holes are predicted to grow to perhaps 10¹⁴ M☉. Even these would evaporate over a timescale of up to 10¹⁰⁶ years. == Observational evidence == By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. For example, a black hole's existence can sometimes be inferred by observing its gravitational influence on its surroundings. === Direct interferometry === The Event Horizon Telescope (EHT) is an active program that directly observes the immediate environment of black holes' event horizons, such as the black hole at the centre of the Milky Way. In April 2017, EHT began observing the black hole at the centre of Messier 87. "In all, eight radio observatories on six mountains and four continents observed the galaxy in Virgo on and off for 10 days in April 2017" to provide the data yielding the image in April 2019. After two years of data processing, EHT released its first image of a black hole, at the centre of the Messier 87 galaxy. What is visible is not the black hole itself, which shows as black because of the loss of all light within this dark region. Instead, it is the gases at the edge
{ "page_id": 4650, "source": null, "title": "Black hole" }
of the event horizon, displayed as orange or red, that define the black hole. On 12 May 2022, the EHT released the first image of Sagittarius A*, the supermassive black hole at the centre of the Milky Way galaxy. The published image displayed the same ring-like structure and "shadow" seen in the M87* black hole. The boundary of the shadow or area of less brightness matches the predicted gravitationally lensed photon orbits. The image was created using the same techniques as for the M87 black hole. The imaging process for Sagittarius A*, which is more than a thousand times smaller and less massive than M87*, was significantly more complex because of the instability of its surroundings. The image of Sagittarius A* was partially blurred by turbulent plasma on the way to the galactic centre, an effect which prevents resolution of the image at longer wavelengths. The brightening of this material in the 'bottom' half of the processed EHT image is thought to be caused by Doppler beaming, whereby material approaching the viewer at relativistic speeds is perceived as brighter than material moving away. In the case of a black hole, this phenomenon implies that the visible material is rotating at relativistic speeds (>1,000 km/s [2,200,000 mph]), the only speeds at which it is possible to centrifugally balance the immense gravitational attraction of the singularity, and thereby remain in orbit above the event horizon. This configuration of bright material implies that the EHT observed M87* from a perspective catching the black hole's accretion disc nearly edge-on, as the whole system rotated clockwise. The extreme gravitational lensing associated with black holes produces the illusion of a perspective that sees the accretion disc from above. In reality, most of the ring in the EHT image was created when the light emitted by the
{ "page_id": 4650, "source": null, "title": "Black hole" }
far side of the accretion disc bent around the black hole's gravity well and escaped, meaning that most of the possible perspectives on M87* can see the entire disc, even that directly behind the "shadow". In 2015, the EHT detected magnetic fields just outside the event horizon of Sagittarius A* and even discerned some of their properties. The field lines that pass through the accretion disc were a complex mixture of ordered and tangled. Theoretical studies of black holes had predicted the existence of magnetic fields. In April 2023, an image of the shadow of the Messier 87 black hole and the related high-energy jet, viewed together for the first time, was presented. === Detection of gravitational waves from merging black holes === On 14 September 2015, the LIGO gravitational wave observatory made the first-ever successful direct observation of gravitational waves. The signal was consistent with theoretical predictions for the gravitational waves produced by the merger of two black holes: one with about 36 solar masses, and the other around 29 solar masses. This observation provides the most concrete evidence for the existence of black holes to date. For instance, the gravitational wave signal suggests that the separation of the two objects before the merger was just 350 km, or roughly four times the Schwarzschild radius corresponding to the inferred masses. The objects must therefore have been extremely compact, leaving black holes as the most plausible interpretation.
{ "page_id": 4650, "source": null, "title": "Black hole" }
More importantly, the signal observed by LIGO also included the start of the post-merger ringdown, the signal produced as the newly formed compact object settles down to a stationary state. Arguably, the ringdown is the most direct way of observing a black hole. From the LIGO signal, it is possible to extract the frequency and damping time of the dominant mode of the ringdown. From these, it is possible to infer the mass and angular momentum of the final object, which match independent predictions from numerical simulations of the merger. The frequency and decay time of the dominant mode are determined by the geometry of the photon sphere. Hence, observation of this mode confirms the presence of a photon sphere; however, it cannot exclude possible exotic alternatives to black holes that are compact enough to have a photon sphere. The observation also provides the first observational evidence for the existence of stellar-mass black hole binaries. Furthermore, it is the first observational evidence of stellar-mass black holes weighing 25 solar masses or more. Since then, many more gravitational wave events have been observed. === Stars orbiting Sagittarius A* === The proper motions of stars near the centre of our own Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. By fitting their motions to Keplerian orbits, the astronomers were able to infer, in 1998, that a 2.6×10⁶ M☉ object must be contained in a volume with a radius of 0.02 light-years to cause the motions of those stars. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass to 4.3×10⁶ M☉ and a radius of less than 0.002 light-years for the object causing the orbital motion of those stars. The upper limit on the object's size is still too large to test whether it is smaller than its Schwarzschild radius. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole as there are no other plausible scenarios for confining so much invisible mass into such a small volume.
{ "page_id": 4650, "source": null, "title": "Black hole" }
Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. === Accretion of matter === Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object. Artists' impressions commonly depict the black hole as if it were a flat-space body hiding the part of the disk just behind it, but in reality gravitational lensing would greatly distort the image of the accretion disk. Within such a disk, friction would cause angular momentum to be transported outward, allowing matter to fall farther inward, thus releasing potential energy and increasing the temperature of the gas. When the accreting object is a neutron star or a black hole, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the compact object. The resulting friction is so significant that it heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays). These bright X-ray sources may be detected by telescopes. This process of accretion is one of the most efficient energy-producing processes known. Up to 40% of the rest mass of the accreted material can be emitted as radiation. In nuclear fusion only about 0.7% of the rest mass will be emitted as energy. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. As such, many of the universe's more energetic phenomena
{ "page_id": 4650, "source": null, "title": "Black hole" }
have been attributed to the accretion of matter on black holes. In particular, active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. Similarly, X-ray binaries are generally accepted to be binary star systems in which one of the two stars is a compact object accreting matter from its companion. It has also been suggested that some ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. In November 2011 the first direct observation of a quasar accretion disk around a supermassive black hole was reported. ==== X-ray binaries ==== X-ray binaries are binary star systems that emit a majority of their radiation in the X-ray part of the spectrum. These X-ray emissions are generally thought to result when one of the stars (compact object) accretes matter from another (regular) star. The presence of an ordinary star in such a system provides an opportunity to study the central object and to determine whether it might be a black hole. If such a system emits signals that can be directly traced back to the compact object, it cannot be a black hole. The absence of such a signal does not, however, exclude the possibility that the compact object is a neutron star. By studying the companion star it is often possible to obtain the orbital parameters of the system and to obtain an estimate for the mass of the compact object. If this is much larger than the Tolman–Oppenheimer–Volkoff limit
{ "page_id": 4650, "source": null, "title": "Black hole" }
(the maximum mass a neutron star can have without collapsing) then the object cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Some doubt remained, due to the uncertainties that result from the companion star being much heavier than the candidate black hole. Currently, better candidates for black holes are found in a class of X-ray binaries called soft X-ray transients. In this class of system, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star during this period. One of the best such candidates is V404 Cygni. ===== Quasi-periodic oscillations ===== The X-ray emissions from accretion disks sometimes flicker at certain frequencies. These signals are called quasi-periodic oscillations and are thought to be caused by material moving along the inner edge of the accretion disk (the innermost stable circular orbit). As such, their frequency is linked to the mass of the compact object. They can thus be used as an alternative way to determine the mass of candidate black holes. ==== Galactic nuclei ==== Astronomers use the term "active galaxy" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the activity in these active galactic nuclei (AGN) may be explained by the presence of supermassive black holes, which can be millions of times more massive than stellar ones. The models of
{ "page_id": 4650, "source": null, "title": "Black hole" }
these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of interstellar gas and dust called an accretion disk; and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, NGC 4889, NGC 1277, OJ 287, APM 08279+5255 and the Sombrero Galaxy. It is now widely accepted that the centre of nearly every galaxy, not just active ones, contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself. === Microlensing === Another way the black hole nature of an object may be tested is through observation of effects caused by a strong gravitational field in its vicinity. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, much as light passing through an optical lens. Observations have been made of weak gravitational lensing, in which light rays are deflected by only a few arcseconds. Microlensing occurs when the sources are unresolved and the observer sees a small brightening. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black
{ "page_id": 4650, "source": null, "title": "Black hole" }
hole. Another possibility for observing gravitational lensing by a black hole would be to observe stars orbiting the black hole. There are several candidates for such an observation in orbit around Sagittarius A*. == Alternatives == The evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The size of this limit heavily depends on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound. A phase of free quarks at high density might allow the existence of dense quark stars, and some supersymmetric models predict the existence of Q stars. Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars. These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from arguments in general relativity that any such object will have a maximum mass. Since the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass, supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water.
{ "page_id": 4650, "source": null, "title": "Black hole" }
Consequently, the physics of matter forming a supermassive black hole is much better understood and the possible alternative explanations for supermassive black hole observations are much more mundane. For example, a supermassive black hole could be modelled by a large cluster of very dark objects. However, such alternatives are typically not stable enough to explain the supermassive black hole candidates. The evidence for the existence of stellar and supermassive black holes implies that in order for black holes not to form, general relativity must fail as a theory of gravity, perhaps due to the onset of quantum mechanical corrections. A much anticipated feature of a theory of quantum gravity is that it will not feature singularities or event horizons and thus black holes would not be real artefacts. For example, in the fuzzball model based on string theory, the individual states of a black hole solution do not generally have an event horizon or singularity, but for a classical/semiclassical observer the statistical average of such states appears just as an ordinary black hole as deduced from general relativity. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. These include the gravastar, the black star, the related nestar, and the dark-energy star. == Open questions == === Entropy and thermodynamics === In 1971, Hawking showed under general conditions that the total area of the event horizons of any collection of classical black holes can never decrease, even if they collide and merge. This result, now known as the second law of black hole mechanics, is remarkably similar to the second law of thermodynamics, which states that the total entropy of an isolated system can never decrease. As with classical objects at absolute zero temperature, it was assumed that black holes had zero entropy. If this were the case, the second law of thermodynamics would be violated by entropy-laden matter entering a black hole, resulting in a decrease in the total entropy of the universe. Therefore, Bekenstein proposed that a black hole should have an entropy, and that it should be proportional to its horizon area. The link with the laws of thermodynamics was further strengthened by Hawking's discovery in 1974 that quantum field theory predicts that a black hole
{ "page_id": 4650, "source": null, "title": "Black hole" }
radiates blackbody radiation at a constant temperature. This seemingly causes a violation of the second law of black hole mechanics, since the radiation will carry away energy from the black hole, causing it to shrink. The radiation also carries away entropy, and it can be proven under general assumptions that the sum of the entropy of the matter surrounding a black hole and one quarter of the area of the horizon as measured in Planck units is in fact always increasing. This allows the formulation of the first law of black hole mechanics as an analogue of the first law of thermodynamics, with the mass acting as energy, the surface gravity as temperature and the area as entropy. One puzzling feature is that the entropy of a black hole scales with its area rather than with its volume, since entropy is normally an extensive quantity that scales linearly with the volume of the system. This odd property led Gerard 't Hooft and Leonard Susskind to propose the holographic principle, which suggests that anything that happens in a volume of spacetime can be described by data on the boundary of that volume. Although general relativity can be used to perform a semiclassical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities, such as mass, charge, pressure, etc. Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein–Hawking entropy. Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity.
{ "page_id": 4650, "source": null, "title": "Black hole" }
=== Information loss paradox === Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum is conserved. As long as black holes were thought to persist forever, this information loss was not that problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principle. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever. The question of whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Over recent years, evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem. One attempt to resolve the black hole information paradox is known as black hole complementarity. In 2012, the "firewall paradox" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and
{ "page_id": 4650, "source": null, "title": "Black hole" }
is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will emit only a finite amount of information encoded within its Hawking radiation. According to research by physicists like Don Page and Leonard Susskind, there will eventually be a time by which an outgoing particle must be entangled with all the Hawking radiation the black hole has previously emitted. This seemingly creates a paradox: a principle called "monogamy of entanglement" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two other systems at the same time; yet here the outgoing particle appears to be entangled both with the infalling particle and, independently, with past Hawking radiation. In order to resolve this contradiction, physicists may eventually be forced to give up one of three time-tested principles: Einstein's equivalence principle, unitarity, or local quantum field theory. One possible solution, which violates the equivalence principle, is that a "firewall" destroys incoming particles at the event horizon. In general, which—if any—of these assumptions should be abandoned remains a topic of debate. == In science fiction == Christopher Nolan's 2014 science fiction epic Interstellar features a black hole known as Gargantua, which is the central object of a planetary system in a distant galaxy. In the film, humanity accesses this system via a wormhole in the outer Solar System, near Saturn. == Sources == Carroll, Sean M. (2004). Spacetime and Geometry. Addison Wesley. ISBN 978-0-8053-8732-2; the lecture notes on which the book was based are available for free from Sean Carroll's website. Hawking,
{ "page_id": 4650, "source": null, "title": "Black hole" }
S. W.; Ellis, G. F. R. (1973). The Large Scale Structure of Space-Time. Cambridge University Press. ISBN 978-0-521-09906-6. Misner, Charles; Thorne, Kip S.; Wheeler, John (1973). Gravitation. W. H. Freeman and Company. ISBN 978-0-7167-0344-0. Thorne, Kip S. (1994). Black Holes and Time Warps. W. W. Norton & Company. ISBN 978-0-393-31276-8. Wald, Robert M. (1984). General Relativity. University of Chicago Press. ISBN 978-0-226-87033-5. Wheeler, J. Craig (2007). Cosmic Catastrophes (2nd ed.). Cambridge University Press. ISBN 978-0-521-85714-7. == External links == Black Holes on In Our Time at the BBC Stanford Encyclopedia of Philosophy: "Singularities and Black Holes" by Erik Curiel and Peter Bokulich. Black Holes: Gravity's Relentless Pull – Interactive multimedia Web site about the physics and astronomy of black holes from the Space Telescope Science Institute (HubbleSite) ESA's Black Hole Visualization Frequently Asked Questions (FAQs) on Black Holes Schwarzschild Geometry Black holes - basic (NYT; April 2021) === Videos === 16-year-long study tracks stars orbiting Sagittarius A* Movie of Black Hole Candidate from Max Planck Institute Cowen, Ron (20 April 2015). "3D simulations of colliding black holes hailed as most realistic yet". Nature. doi:10.1038/nature.2015.17360. Computer visualisation of the signal detected by LIGO Two Black Holes Merge into One (based upon the signal GW150914)
{ "page_id": 4650, "source": null, "title": "Black hole" }