The Grainger Challenge was a scientific competition to find an economical way to remove arsenic from arsenic-contaminated groundwater. The competition was funded by the United States National Academy of Engineering and the Grainger Foundation and was intended to help provide safe drinking water to countries such as Bangladesh, India, and Cambodia.
In 2007, the Gold Award ($1,000,000) went to Abul Hussam for his invention of the Sono arsenic filter. The Silver Award ($200,000) was awarded to Arup K. SenGupta for his invention and implementation of the ArsenX np hybrid anion exchange (HAIX) resin. [ 1 ] The Children's Safe Drinking Water Program at Procter & Gamble (P&G), Cincinnati, received the Bronze Award of US$100,000 for the PUR™ Purifier of Water coagulation and flocculation water treatment system.
| https://en.wikipedia.org/wiki/Grainger_challenge |
In chemistry , the molar mass ( M ) (sometimes called molecular weight or formula weight , but see related quantities for usage) of a chemical compound is defined as the ratio between the mass and the amount of substance (measured in moles ) of any sample of the compound. [ 1 ] The molar mass is a bulk, not molecular, property of a substance. The molar mass is an average of many instances of the compound, which often vary in mass due to the presence of isotopes . Most commonly, the molar mass is computed from the standard atomic weights and is thus a terrestrial average and a function of the relative abundance of the isotopes of the constituent atoms on Earth.
For a sample of a substance X, the molar mass, M (X), is appropriate for converting between the mass of the substance, m (X), and the amount of the substance, n (X), for bulk quantities: M (X) = m (X)/ n (X). If N (X) is the number of entities in the sample, m (X) = N (X) m a (X) and n (X) = N (X)/ N A = N (X) ent, where ent is an atomic-scale unit of amount equal to one entity. So M (X) = m a (X)/ent, the atomic-scale entity mass per entity, which is self evident. Since m a (X) = A r (X) Da, molar mass can be written in units of dalton per entity as M (X) = A r (X) Da/ent. One mole is an aggregate of an Avogadro number of entities, and (for all practical purposes) the Avogadro number is g/Da. So (for all practical purposes) Da/ent = g/mol. And the molar mass can be calculated from M (X) = A r (X) Da/ent = A r (X) g/mol = A r (X) kg/kmol.
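The relation M (X) = m (X)/ n (X) is what makes molar mass useful in everyday stoichiometry: dividing a measured mass by the molar mass gives the amount of substance, and multiplying an amount by the molar mass gives the mass. A minimal Python sketch of these conversions (the function names are illustrative, not from any standard library; the molar mass of water is the value quoted later in this article):

```python
def amount_of_substance(mass_g, molar_mass_g_per_mol):
    """n(X) = m(X) / M(X), returned in moles."""
    return mass_g / molar_mass_g_per_mol

def mass_of_substance(amount_mol, molar_mass_g_per_mol):
    """m(X) = n(X) * M(X), returned in grams."""
    return amount_mol * molar_mass_g_per_mol

M_water = 18.0153  # g/mol, from the article

print(amount_of_substance(90.0, M_water))   # ~5.0 mol of water in 90 g
print(mass_of_substance(2.0, M_water))      # ~36.03 g in 2 mol of water
```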
The molecular mass (for molecular compounds) and formula mass (for non-molecular compounds, such as ionic salts ) are commonly used as synonyms of molar mass, differing only in units ( dalton vs Da/ent or g/mol); however, the most authoritative sources define it differently. The difference is that molecular mass is the mass of one specific particle or molecule, while the molar mass is an average over many particles or molecules.
The molar mass is an intensive property of the substance, that does not depend on the size of the sample. In the International System of Units (SI), the coherent unit of molar mass is kg/mol. However, for historical reasons, molar masses are almost always expressed in g/mol.
The mole was defined in such a way that the numerical value of the molar mass of a compound in g/mol, i.e. M (X)/(g/mol), was equal to the numerical value of the average atomic-scale mass of one entity (atom, molecule, formula unit, . . .) in Da, i.e. m a (X)/Da = A r (X). Specifically: M (X) = A r (X) g/mol. It was exactly equal before the redefinition of the mole in 2019 , and is now only approximately equal, but the difference is negligible for all practical purposes. Thus, for example, the average mass of a molecule of water is about 18.0153 Da, and the molar mass of water is about 18.0153 g/mol.
For chemical elements without isolated molecules, such as carbon and metals, the molar mass is still computed using M (X) = A r (X) g/mol. Thus, for example, the molar mass of iron is about 55.845 g/mol.
Since 1971, SI defined the "amount of substance" as a separate dimension of measurement . Until 2019, the mole was defined as the amount of substance that has as many constituent particles as there are atoms in 12 grams of carbon-12 . That meant that, during that period, the molar mass of carbon-12 was thus exactly 12 g/mol, by definition: M ( 12 C) = 12 g/mol (exactly). Inverting this gives an expression for the (original) definition of the mole in terms of the international prototype of the kilogram (IPK) and the molar mass of carbon-12: 1 mol = (0.012 IPK)/ M ( 12 C). Because the dalton was (and still is) defined as 1 Da = m a ( 12 C)/12 and M ( 12 C) = m a ( 12 C) N A , the original mole definition can be written as 1 mol = (g/Da)(1/ N A ), where (g/Da) is the (1971 definition of the) Avogadro number—the number of carbon-12 atoms in 12 grams of carbon-12—and (1/ N A ) is an amount of one entity. Since 2019, a mole of any substance has been redefined in the SI as the amount of that substance containing an exactly defined number of entities: 1 mol = 6.022 140 76 × 10 23 (1/ N A ). This is still in the same form as the previous definition, one mole = (Avogadro number)(amount of one entity), but because the dalton is still defined in terms of the (now inexactly known) mass of the carbon-12 atom, the Avogadro number is no longer exactly equal to (g/Da). The numerical value of the molar mass of a substance expressed in g/mol thus is (for all practical purposes) still equal to the numerical value of the mass of this number of entities (i.e. an amount of one mole) of the substance expressed in grams—(the relative discrepancy is only of order 10 –9 ).
The molar mass of atoms of an element is given by the relative atomic mass of the element multiplied by the molar mass constant, M u ≈ 1.000 000 × 10⁻³ kg/mol ≈ 1 g/mol. For normal samples from Earth with typical isotope composition, the atomic weight can be approximated by the standard atomic weight [ 2 ] or the conventional atomic weight.
Multiplying by the molar mass constant ensures that the calculation is dimensionally correct: standard relative atomic masses are dimensionless quantities (i.e., pure numbers) whereas molar masses have units (in this case, grams per mole).
Some elements are usually encountered as molecules , e.g. hydrogen ( H₂ ), sulfur ( S₈ ), chlorine ( Cl₂ ). The molar mass of molecules of these elements is the molar mass of the atoms multiplied by the number of atoms in each molecule; for example, M(H₂) = 2 × 1.008 g/mol ≈ 2.016 g/mol.
The molar mass of a compound is given by the sum of the relative atomic mass A r of the atoms which form the compound multiplied by the molar mass constant M u ≈ 1 g/mol {\displaystyle M_{u}\approx 1{\text{ g/mol}}} :
Here, M r is the relative molar mass, also called formula weight. For normal samples from earth with typical isotope composition, the standard atomic weight or the conventional atomic weight can be used as an approximation of the relative atomic mass of the sample. Examples are: M ( NaCl ) = [ 22.98976928 ( 2 ) + 35.453 ( 2 ) ] × 1 g/mol = 58.443 ( 2 ) g/mol M ( C 12 H 22 O 11 ) = [ 12 × 12.0107 ( 8 ) + 22 × 1.00794 ( 7 ) + 11 × 15.9994 ( 3 ) ] × 1 g/mol = 342.297 ( 14 ) g/mol {\displaystyle {\begin{array}{ll}M({\ce {NaCl}})&={\bigl [}22.98976928(2)+35.453(2){\bigr ]}\times 1{\text{ g/mol}}\\&=58.443(2){\text{ g/mol}}\\[4pt]M({\ce {C12H22O11}})&={\bigl [}12\times 12.0107(8)+22\times 1.00794(7)+11\times 15.9994(3){\bigr ]}\times 1{\text{ g/mol}}\\&=342.297(14){\text{ g/mol}}\end{array}}}
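The same calculation is easy to carry out programmatically. A minimal Python sketch that sums atomic weights over a formula (the small lookup table reuses the atomic-weight values quoted above, with uncertainties omitted; it is illustrative, not an authoritative data set):

```python
# Relative atomic masses multiplied by M_u = 1 g/mol, values as quoted in the text.
ATOMIC_WEIGHT = {
    "Na": 22.98976928,
    "Cl": 35.453,
    "C": 12.0107,
    "H": 1.00794,
    "O": 15.9994,
}

def molar_mass(composition):
    """Molar mass in g/mol for a formula given as {element: count}."""
    return sum(count * ATOMIC_WEIGHT[element] for element, count in composition.items())

print(molar_mass({"Na": 1, "Cl": 1}))           # ~58.443 g/mol (NaCl)
print(molar_mass({"C": 12, "H": 22, "O": 11}))  # ~342.297 g/mol (sucrose)
```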
An average molar mass may be defined for mixtures of compounds. [ 1 ] This is particularly important in polymer science , where there is usually a molar mass distribution of non-uniform polymers so that different polymer molecules contain different numbers of monomer units. [ 3 ] [ 4 ]
The average molar mass of mixtures M ¯ {\displaystyle {\overline {M}}} can be calculated from the mole fractions x i of the components and their molar masses M i : M ¯ = ∑ i x i M i {\displaystyle {\overline {M}}=\sum _{i}x_{i}M_{i}}
It can also be calculated from the mass fractions w i of the components: 1 M ¯ = ∑ i w i M i {\displaystyle {\frac {1}{\overline {M}}}=\sum _{i}{\frac {w_{i}}{M_{i}}}}
As an example, the average molar mass of dry air is 28.96 g/mol. [ 5 ]
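A minimal sketch of this calculation for dry air using the mole-fraction formula above. The composition figures are approximate values assumed for illustration, not taken from the text:

```python
# Assumed approximate composition of dry air: (mole fraction, molar mass in g/mol).
components = {
    "N2":  (0.78084, 28.0134),
    "O2":  (0.20946, 31.9988),
    "Ar":  (0.00934, 39.948),
    "CO2": (0.00036, 44.009),
}

# Average molar mass: M_bar = sum(x_i * M_i)
M_bar = sum(x * M for x, M in components.values())
print(M_bar)   # ~28.96-28.97 g/mol, consistent with the dry-air value quoted above
```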
Molar mass is closely related to the relative molar mass ( M r ) of a compound and to the standard atomic weights of its constituent elements. However, it should be distinguished from the molecular mass (which is confusingly also sometimes known as molecular weight), which is the mass of one molecule (of any single isotopic composition), and to the atomic mass , which is the mass of one atom (of any single isotope). The dalton , symbol Da, is also sometimes used as a unit of molar mass, especially in biochemistry , with the definition 1 Da = 1 g/mol, despite the fact that it is strictly a unit of mass (1 Da = 1 u = 1.660 539 068 92 (52) × 10 −27 kg , as of 2022 CODATA recommended values). [ 6 ]
Obsolete terms for molar mass include gram atomic mass for the mass, in grams, of one mole of atoms of an element, and gram molecular mass for the mass, in grams, of one mole of molecules of a compound. The gram-atom is a former term for a mole of atoms, and gram-molecule for a mole of molecules. [ 7 ]
Molecular weight (M.W.) (for molecular compounds) and formula weight (F.W.) (for non-molecular compounds), are older terms for what is now more correctly called the relative molar mass ( M r ). [ 8 ] This is a dimensionless quantity (i.e., a pure number, without units) equal to the molar mass divided by the molar mass constant . [ notes 1 ]
The molecular mass ( m ) is the mass of a given molecule: it is usually measured in daltons (Da or u). [ 7 ] Different molecules of the same compound may have different molecular masses because they contain different isotopes of an element. This is distinct but related to the molar mass, which is a measure of the average molecular mass of all the molecules in a sample and is usually the more appropriate measure when dealing with macroscopic (weigh-able) quantities of a substance.
Molecular masses are calculated from the atomic masses of each nuclide , while molar masses are calculated from the standard atomic weights [ 9 ] of each element . The standard atomic weight takes into account the isotopic distribution of the element in a given sample (usually assumed to be "normal"). For example, water has a molar mass of 18.0153(3) g/mol , but individual water molecules have molecular masses which range between 18.010 564 6863 (15) Da ( 1 H 2 16 O ) and 22.027 7364 (9) Da ( 2 H 2 18 O ).
The distinction between molar mass and molecular mass is important because relative molecular masses can be measured directly by mass spectrometry , often to a precision of a few parts per million . This is accurate enough to directly determine the chemical formula of a molecule. [ 10 ]
The term formula weight has a specific meaning when used in the context of DNA synthesis: whereas an individual phosphoramidite nucleobase to be added to a DNA polymer has protecting groups and has its molecular weight quoted including these groups, the amount of molecular weight that is ultimately added by this nucleobase to a DNA polymer is referred to as the nucleobase's formula weight (i.e., the molecular weight of this nucleobase within the DNA polymer, minus protecting groups). [ citation needed ]
The precision to which a molar mass is known depends on the precision of the atomic masses from which it was calculated (and very slightly on the value of the molar mass constant , which depends on the measured value of the dalton ). Most atomic masses are known to a precision of at least one part in ten-thousand, often much better [ 2 ] (the atomic mass of lithium is a notable, and serious, [ 11 ] exception). This is adequate for almost all normal uses in chemistry: it is more precise than most chemical analyses , and exceeds the purity of most laboratory reagents.
The precision of atomic masses, and hence of molar masses, is limited by the knowledge of the isotopic distribution of the element. If a more accurate value of the molar mass is required, it is necessary to determine the isotopic distribution of the sample in question, which may be different from the standard distribution used to calculate the standard atomic mass. The isotopic distributions of the different elements in a sample are not necessarily independent of one another: for example, a sample which has been distilled will be enriched in the lighter isotopes of all the elements present. This complicates the calculation of the standard uncertainty in the molar mass.
A useful convention for normal laboratory work is to quote molar masses to two decimal places for all calculations. This is more accurate than is usually required, but avoids rounding errors during calculations. When the molar mass is greater than 1000 g/mol, it is rarely appropriate to use more than one decimal place. These conventions are followed in most tabulated values of molar masses. [ 12 ] [ 13 ]
Molar masses are almost never measured directly. They may be calculated from standard atomic masses, and are often listed in chemical catalogues and on safety data sheets (SDS). Molar masses typically range from a few grams per mole for the lightest atoms and small molecules to millions of grams per mole for polymers, proteins and other macromolecules.
While molar masses are almost always, in practice, calculated from atomic weights, they can also be measured in certain cases. Such measurements are much less precise than modern mass spectrometric measurements of atomic weights and molecular masses, and are of mostly historical interest. All of the procedures rely on colligative properties , and any dissociation of the compound must be taken into account.
The measurement of molar mass by vapour density relies on the principle, first enunciated by Amedeo Avogadro , that equal volumes of gases under identical conditions contain equal numbers of particles. This principle is included in the ideal gas equation: p V = n R T {\displaystyle pV=nRT}
where n is the amount of substance . The vapour density ( ρ ) is given by ρ = m V = n M V {\displaystyle \rho ={\frac {m}{V}}={\frac {nM}{V}}}
Combining these two equations gives an expression for the molar mass in terms of the vapour density for conditions of known pressure and temperature : M = ρ R T p {\displaystyle M={\frac {\rho RT}{p}}}
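A short Python sketch of this vapour-density relation, assuming ideal-gas behaviour; the example gas density (close to that of nitrogen at 0 °C and 1 atm) is an illustrative value, not from the text:

```python
R = 8.314462618  # molar gas constant, J/(mol*K)

def molar_mass_from_vapour_density(rho_kg_per_m3, T_K, p_Pa):
    """Return the molar mass in g/mol of an ideal gas with density rho at (T, p)."""
    return R * T_K * rho_kg_per_m3 / p_Pa * 1000.0  # convert kg/mol -> g/mol

# A gas with density 1.250 kg/m^3 at 273.15 K and 101325 Pa:
print(molar_mass_from_vapour_density(1.250, 273.15, 101325))  # ~28.0 g/mol
```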
The freezing point of a solution is lower than that of the pure solvent , and the freezing-point depression ( Δ T ) is directly proportional to the amount concentration for dilute solutions. When the composition is expressed as a molality , the proportionality constant is known as the cryoscopic constant ( K f ) and is characteristic for each solvent. If w represents the mass fraction of the solute in solution, and assuming no dissociation of the solute, the molar mass is given by M = w K f ( 1 − w ) Δ T {\displaystyle M={\frac {wK_{f}}{(1-w)\Delta T}}}
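A brief sketch of the corresponding calculation. The solute (sucrose in water), the cryoscopic constant of water, and the temperature depression below are assumed illustrative values, not data from the text:

```python
def molar_mass_cryoscopy(w_solute, K_f, dT):
    """Molar mass in kg/mol from mass fraction w, cryoscopic constant K_f (K*kg/mol),
    and freezing-point depression dT (K), assuming a dilute, non-dissociating solute."""
    return w_solute * K_f / ((1.0 - w_solute) * dT)

# 10.0 g sucrose in 90.0 g water (w = 0.10), K_f(water) ~ 1.853 K*kg/mol,
# observed depression ~0.60 K:
M = molar_mass_cryoscopy(0.10, 1.853, 0.60)
print(M * 1000)   # ~343 g/mol, close to sucrose's 342.3 g/mol
```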
The boiling point of a solution of an involatile solute is higher than that of the pure solvent , and the boiling-point elevation ( Δ T ) is directly proportional to the amount concentration for dilute solutions. When the composition is expressed as a molality , the proportionality constant is known as the ebullioscopic constant ( K b ) and is characteristic for each solvent. If w represents the mass fraction of the solute in solution, and assuming no dissociation of the solute, the molar mass is given by M = w K b ( 1 − w ) Δ T {\displaystyle M={\frac {wK_{b}}{(1-w)\Delta T}}} | https://en.wikipedia.org/wiki/Gram_atomic_mass |
The gram per cubic centimetre is a unit of density in the International System of Units (SI), and is commonly used in chemistry . Its official SI symbols are g/cm³, g·cm⁻³, or g cm⁻³. It is equal to the units gram per millilitre (g/mL) and kilogram per litre (kg/L). It is defined by dividing the gram , a unit of mass , by the cubic centimetre , a unit of volume . It is a coherent unit in the CGS system , but is not a coherent unit of the SI.
The density of water is approximately 1 g/cm³, since the gram was originally defined as the mass of one cubic centimetre of water at its maximum density at approximately 4 °C (39 °F). [ 1 ] | https://en.wikipedia.org/wiki/Gram_per_cubic_centimetre |
Gram stain ( Gram staining or Gram's method ) is a method of staining used to classify bacterial species into two large groups: gram-positive bacteria and gram-negative bacteria . It may also be used to diagnose a fungal infection . [ 1 ] The name comes from the Danish bacteriologist Hans Christian Gram , who developed the technique in 1884. [ 2 ]
Gram staining differentiates bacteria by the chemical and physical properties of their cell walls . Gram-positive cells have a thick layer of peptidoglycan in the cell wall that retains the primary stain, crystal violet . Gram-negative cells have a thinner peptidoglycan layer that allows the crystal violet to wash out on addition of ethanol . They are stained pink or red by the counterstain , [ 3 ] commonly safranin or fuchsine . Lugol's iodine solution is always added after addition of crystal violet to form a stable complex with crystal violet that strengthens the bonds of the stain with the cell wall . [ 4 ]
Gram staining is almost always the first step in the identification of a bacterial group. While Gram staining is a valuable diagnostic tool in both clinical and research settings, not all bacteria can be definitively classified by this technique. This gives rise to gram-variable and gram-indeterminate groups.
The method is named after its inventor, the Danish scientist Hans Christian Gram (1853–1938), who developed the technique while working with Carl Friedländer in the morgue of the city hospital in Berlin in 1884. Gram devised his technique not for the purpose of distinguishing one type of bacterium from another but to make bacteria more visible in stained sections of lung tissue. [ 5 ] Gram noticed that some bacterial cells possessed noticeable resistance to decolorization. Based on these observations, Gram developed the initial gram staining procedure, initially making use of Ehrlich's aniline-gentian violet, Lugol's iodine, absolute alcohol for decolorization, and Bismarck brown for counterstain. [ 6 ] He published his method in 1884, and included in his short report the observation that the typhus bacillus did not retain the stain. [ 7 ] Gram did not initially make the distinction between Gram-negative and Gram-positive bacteria using his procedure. [ 6 ]
Gram staining is a bacteriological laboratory technique [ 8 ] used to differentiate bacterial species into two large groups ( gram-positive and gram-negative ) based on the physical properties of their cell walls . [ 9 ] [ page needed ] Gram staining can also be used to diagnose a fungal infection . [ 1 ] Gram staining is not used to classify archaea , since these microorganisms yield widely varying responses that do not follow their phylogenetic groups . [ 10 ]
Gram stains are performed on body fluid or biopsy when infection is suspected. Gram stains yield results much more quickly than culturing , and are especially important when infection would make an important difference in the patient's treatment and prognosis; examples are cerebrospinal fluid for meningitis and synovial fluid for septic arthritis . [ 11 ] [ 12 ]
Gram-positive bacteria have a thick mesh-like cell wall made of peptidoglycan (50–90% of cell envelope), and as a result are stained purple by crystal violet, whereas gram-negative bacteria have a thinner layer (10% of cell envelope), so do not retain the purple stain and are counter-stained pink by safranin. There are four basic steps of the Gram stain: applying the primary stain (crystal violet), adding Gram's iodine as a mordant (trapping agent), rapidly decolorizing with ethanol or acetone, and counterstaining with safranin or fuchsine.
Crystal violet (CV) dissociates in aqueous solutions into CV + and chloride ( Cl − ) ions. These ions penetrate the cell wall of both gram-positive and gram-negative cells. The CV + ion interacts with negatively charged components of bacterial cells and stains the cells purple. [ 15 ]
Iodide ( I − or I − 3 ) interacts with CV + and forms large complexes of crystal violet and iodine (CV–I) within the inner and outer layers of the cell. Iodine is often referred to as a mordant , but is a trapping agent that prevents the removal of the CV–I complex and, therefore, colors the cell. [ 16 ]
When a decolorizer such as alcohol or acetone is added, it interacts with the lipids of the cell membrane. [ 17 ] A gram-negative cell loses its outer lipopolysaccharide membrane, and the inner peptidoglycan layer is left exposed. The CV–I complexes are washed from the gram-negative cell along with the outer membrane. [ 18 ] In contrast, a gram-positive cell becomes dehydrated from an ethanol treatment. The large CV–I complexes become trapped within the gram-positive cell due to the multilayered nature of its peptidoglycan. [ 18 ] The decolorization step is critical and must be timed correctly; the crystal violet stain is removed from both gram-positive and negative cells if the decolorizing agent is left on too long (a matter of seconds). [ 19 ]
After decolorization, the gram-positive cell remains purple and the gram-negative cell loses its purple color. [ 19 ] Counterstain, which is usually positively charged safranin or basic fuchsine, is applied last to give decolorized gram-negative bacteria a pink or red color. [ 3 ] [ 20 ] Both gram-positive bacteria and gram-negative bacteria pick up the counterstain. The counterstain, however, is unseen on gram-positive bacteria because of the darker crystal violet stain. [ citation needed ]
Gram-positive bacteria generally have a single membrane ( monoderm ) surrounded by a thick peptidoglycan.
This rule is followed by two phyla: Bacillota (except for the classes Mollicutes and Negativicutes ) and the Actinomycetota . [ 9 ] [ 21 ] In contrast, members of the Chloroflexota (green non-sulfur bacteria) are monoderms but possess a thin or absent (class Dehalococcoidetes ) peptidoglycan and can stain negative, positive or indeterminate; members of the Deinococcota stain positive but are diderms with a thick peptidoglycan. [ 9 ] [ page needed ] [ 21 ]
The cell wall's strength is enhanced by teichoic acids, glycopolymeric substances embedded within the peptidoglycan. Teichoic acids play multiple roles, such as generating the cell's net negative charge, contributing to cell wall rigidity and shape maintenance, and aiding in cell division and resistance to various stressors, including heat and salt. Despite the density of the peptidoglycan layer, it remains relatively porous, allowing most substances to permeate. For larger nutrients, Gram-positive bacteria utilize exoenzymes, secreted extracellularly to break down macromolecules outside the cell. [ 22 ]
Historically , the gram-positive forms made up the phylum Firmicutes , a name now used for the largest group. It includes many well-known genera such as Lactobacillus, Bacillus , Listeria , Staphylococcus , Streptococcus , Enterococcus , and Clostridium . [ 23 ] It has also been expanded to include the Mollicutes, bacteria such as Mycoplasma and Thermoplasma that lack cell walls and so cannot be Gram-stained, but are derived from such forms. [ 24 ]
Some bacteria have cell walls which are particularly adept at retaining stains. These will appear positive by Gram stain even though they are not closely related to other gram-positive bacteria. These are called acid-fast bacteria , and can only be differentiated from other gram-positive bacteria by special staining procedures . [ 25 ]
Gram-negative bacteria generally possess a thin layer of peptidoglycan between two membranes ( diderm ). [ 26 ] Lipopolysaccharide (LPS) is the most abundant antigen on the cell surface of most gram-negative bacteria, contributing up to 80% of the outer membrane of E. coli and Salmonella . [ 27 ] These LPS molecules, consisting of the O-antigen or O-polysaccharide, core polysaccharide, and lipid A, serve multiple functions including contributing to the cell's negative charge and protecting against certain chemicals. LPS's role is critical in host-pathogen interactions, with the O-antigen eliciting an immune response and lipid A acting as an endotoxin. [ 22 ]
Additionally, the outer membrane acts as a selective barrier, regulated by porins, transmembrane proteins forming pores that allow specific molecules to pass. The space between the cell membrane and the outer membrane, known as the periplasm, contains periplasmic enzymes for nutrient processing. A significant structural component linking the peptidoglycan layer and the outer membrane is Braun's lipoprotein, which provides additional stability and strength to the bacterial cell wall. [ 22 ]
Most bacterial phyla are gram-negative, including the cyanobacteria , green sulfur bacteria , and most Pseudomonadota (exceptions being some members of the Rickettsiales and the insect-endosymbionts of the Enterobacteriales ). [ 9 ] [ page needed ] [ 21 ]
Some bacteria, after staining with the Gram stain, yield a gram-variable pattern: a mix of pink and purple cells are seen. [ 18 ] [ 28 ] In cultures of Bacillus, Butyrivibrio , and Clostridium , a decrease in peptidoglycan thickness during growth coincides with an increase in the number of cells that stain gram-negative. [ 28 ] In addition, in all bacteria stained using the Gram stain, the age of the culture may influence the results of the stain. [ 28 ]
Gram-indeterminate bacteria do not respond predictably to Gram staining and, therefore, cannot be determined as either gram-positive or gram-negative. Examples include many species of Mycobacterium , including Mycobacterium bovis , Mycobacterium leprae and Mycobacterium tuberculosis , the latter two of which are the causative agents of leprosy and tuberculosis, respectively. [ 29 ] [ 30 ] Bacteria of the genus Mycoplasma lack a cell wall around their cell membranes , [ 11 ] which means they do not stain by Gram's method and are resistant to the antibiotics that target cell wall synthesis. [ 31 ] [ 32 ]
The term Gram staining is derived from the surname of Hans Christian Gram ; the eponym (Gram) is therefore capitalized but not the common noun (stain) as is usual for scientific terms. [ 33 ] The initial letters of gram-positive and gram-negative , which are eponymous adjectives , can be either capital G or lowercase g , depending on what style guide (if any) governs the document being written. Lowercase style is used by the US Centers for Disease Control and Prevention and other style regimens such as the AMA style . [ 34 ] Dictionaries may use lowercase, [ 35 ] [ 36 ] uppercase, [ 37 ] [ 38 ] [ 39 ] [ 40 ] or both. [ 41 ] [ 42 ] Uppercase Gram-positive or Gram-negative usage is also common in many scientific journal articles and publications. [ 42 ] [ 43 ] [ 44 ] When articles are submitted to journals, each journal may or may not apply house style to the postprint version. Preprint versions contain whichever style the author happened to use. Even style regimens that use lowercase for the adjectives gram-positive and gram-negative still typically use capital for Gram stain . [ citation needed ] | https://en.wikipedia.org/wiki/Gram_stain |
Grammatical Man: Information, Entropy, Language, and Life is a 1982 book written by Jeremy Campbell, then Washington correspondent for the Evening Standard . [ 1 ] The book examines the topics of probability , information theory , cybernetics , genetics , and linguistics .
Information processes are used to frame and examine all of existence, from the Big Bang to DNA to human communication to artificial intelligence.
For Laplace's "intelligence," as for the God of Plato, Galileo and Einstein, the past and future coexist on equal terms, like the two rays into which an arbitrarily chosen point divides a straight line. If the theories I have presented are correct, however, not even the ultimate computer --the universe itself-- ever contains enough information to specify completely its own future states. The present moment always contains an element of genuine novelty and the future is never wholly predictable. Because biological processes also generate information and because consciousness enables us to experience those processes directly, the intuitive perception of the world as unfolding in time captures one of the most deepseated properties of the universe.
To understand complex systems, such as a large computer or a living organism, we cannot use ordinary, formal logic, which deals with events that definitely will happen or definitely will not happen. A probabilistic logic is needed, one that makes statements about how likely or unlikely it is that various events will happen. | https://en.wikipedia.org/wiki/Grammatical_Man |
Grammatik was the first grammar-checking program developed for home computer systems. Aspen Software of Albuquerque, NM , released the earliest version of this diction and style checker for personal computers. [ 1 ] It was first released no later than 1981, [ 2 ] and was inspired by the Writer's Workbench . [ 1 ]
Grammatik was first available for the Radio Shack TRS-80 , and soon had versions for CP/M and the IBM PC . Reference Software International of San Francisco , California, acquired Grammatik in 1985. Development of Grammatik continued, and it became an actual grammar checker that could detect writing errors beyond simple style checking. [ 3 ]
Subsequent versions were released for the MS-DOS , Windows , Macintosh and Unix platforms. Grammatik was ultimately acquired by WordPerfect Corporation and is integrated into the WordPerfect word processor .
| https://en.wikipedia.org/wiki/Grammatik |
The Grammy Award for Best Engineered Recording, Classical has been awarded since 1959. The award has undergone several minor name changes over the years.
This award is presented alongside the Grammy Award for Best Engineered Album, Non-Classical . From 1960 to 1965 a further award was presented for Best Engineered Recording - Special or Novel Effects .
Years reflect the year in which the Grammy Awards were presented, for works released in the previous year.
The award is presented to engineers (and, if applicable, mastering engineers), not to artists, orchestras, conductors or other performers on the winning works, except if the engineer is also a credited performer. | https://en.wikipedia.org/wiki/Grammy_Award_for_Best_Engineered_Album,_Classical |
The Grammy Award for Best Engineered Album, Non-Classical has been awarded since 1959. The award has undergone several minor name changes over the years.
This award is presented alongside the Grammy Award for Best Engineered Album, Classical . From 1960 to 1965 a further award was presented for Best Engineered Recording – Special or Novel Effects .
Years reflect the year in which the Grammy Awards were presented, for works released in the previous year. The award is presented to the audio engineer (s) (and, since 2012, also to the mastering engineer[s]) on the winning work, not to the artist or performer, except if the artist is also a credited engineer. | https://en.wikipedia.org/wiki/Grammy_Award_for_Best_Engineered_Album,_Non-Classical |
In geometry , the Gram–Euler theorem , [ 1 ] Gram-Sommerville, Brianchon-Gram or Gram relation [ 2 ] (named after Jørgen Pedersen Gram , Leonhard Euler , Duncan Sommerville and Charles Julien Brianchon ) is a generalization of the internal angle sum formula of polygons to higher-dimensional polytopes . The equation constrains the sums of the interior angles of a polytope in a manner analogous to the Euler relation on the number of d-dimensional faces .
Let P {\displaystyle P} be an n {\displaystyle n} -dimensional convex polytope . For each k - face F {\displaystyle F} , with k = dim ( F ) {\displaystyle k=\dim(F)} its dimension (0 for vertices, 1 for edges, 2 for faces, etc., up to n for P itself), its interior (higher-dimensional) solid angle ∠ ( F ) {\displaystyle \angle (F)} is defined by choosing a small enough ( n − 1 ) {\displaystyle (n-1)} - sphere centered at some point in the interior of F {\displaystyle F} and finding the surface area contained inside P {\displaystyle P} . Then the Gram–Euler theorem states: [ 3 ] [ 1 ] ∑ F ⊂ P ( − 1 ) dim F ∠ ( F ) = 0 {\displaystyle \sum _{F\subset P}(-1)^{\dim F}\angle (F)=0} In non-Euclidean geometry of constant curvature (i.e. spherical , ϵ = 1 {\displaystyle \epsilon =1} , and hyperbolic , ϵ = − 1 {\displaystyle \epsilon =-1} , geometry) the relation gains a volume term, but only if the dimension n is even: ∑ F ⊂ P ( − 1 ) dim F ∠ ( F ) = ϵ n / 2 ( 1 + ( − 1 ) n ) Vol ( P ) {\displaystyle \sum _{F\subset P}(-1)^{\dim F}\angle (F)=\epsilon ^{n/2}(1+(-1)^{n})\operatorname {Vol} (P)} Here, Vol ( P ) {\displaystyle \operatorname {Vol} (P)} is the normalized (hyper)volume of the polytope (i.e, the fraction of the n -dimensional spherical or hyperbolic space); the angles ∠ ( F ) {\displaystyle \angle (F)} also have to be expressed as fractions (of the ( n -1)-sphere). [ 2 ]
When the polytope is simplicial additional angle restrictions known as Perles relations hold, analogous to the Dehn-Sommerville equations for the number of faces. [ 2 ]
For a two-dimensional polygon , the statement expands into: ∑ v α v − ∑ e π + 2 π = 0 {\displaystyle \sum _{v}\alpha _{v}-\sum _{e}\pi +2\pi =0} where the first term A = ∑ α v {\displaystyle A=\textstyle \sum \alpha _{v}} is the sum of the internal vertex angles, the second sum is over the edges, each of which has internal angle π {\displaystyle \pi } , and the final term corresponds to the entire polygon, which has a full internal angle 2 π {\displaystyle 2\pi } . For a polygon with n {\displaystyle n} faces, the theorem tells us that A − π n + 2 π = 0 {\displaystyle A-\pi n+2\pi =0} , or equivalently, A = π ( n − 2 ) {\displaystyle A=\pi (n-2)} . For a polygon on a sphere, the relation gives the spherical surface area or solid angle as the spherical excess : Ω = A − π ( n − 2 ) {\displaystyle \Omega =A-\pi (n-2)} .
For a three-dimensional polyhedron the theorem reads: ∑ v Ω v − 2 ∑ e θ e + ∑ f 2 π − 4 π = 0 {\displaystyle \sum _{v}\Omega _{v}-2\sum _{e}\theta _{e}+\sum _{f}2\pi -4\pi =0} where Ω v {\displaystyle \Omega _{v}} is the solid angle at a vertex, θ e {\displaystyle \theta _{e}} the dihedral angle at an edge (the solid angle of the corresponding lune is twice as big), the third sum counts the faces (each with an interior hemisphere angle of 2 π {\displaystyle 2\pi } ) and the last term is the interior solid angle (full sphere or 4 π {\displaystyle 4\pi } ).
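As a concrete check of the three-dimensional relation, consider a cube: each of its 8 vertices subtends an interior solid angle of one octant ( π / 2 {\displaystyle \pi /2} ), each of its 12 edges has a dihedral angle of π / 2 {\displaystyle \pi /2} , and it has 6 faces, so 8 ⋅ π 2 − 2 ⋅ 12 ⋅ π 2 + 6 ⋅ 2 π − 4 π = 4 π − 12 π + 12 π − 4 π = 0 {\displaystyle 8\cdot {\tfrac {\pi }{2}}-2\cdot 12\cdot {\tfrac {\pi }{2}}+6\cdot 2\pi -4\pi =4\pi -12\pi +12\pi -4\pi =0} , as required.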
The n-dimensional relation was first proven by Sommerville , Heckman and Grünbaum for the spherical, hyperbolic and Euclidean case, respectively. [ 2 ] | https://en.wikipedia.org/wiki/Gram–Euler_theorem |
In mathematics , particularly linear algebra and numerical analysis , the Gram–Schmidt process or Gram-Schmidt algorithm is a way of finding a set of two or more vectors that are perpendicular to each other.
By technical definition, it is a method of constructing an orthonormal basis from a set of vectors in an inner product space , most commonly the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} equipped with the standard inner product . The Gram–Schmidt process takes a finite , linearly independent set of vectors S = { v 1 , … , v k } {\displaystyle S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}\}} for k ≤ n and generates an orthogonal set S ′ = { u 1 , … , u k } {\displaystyle S'=\{\mathbf {u} _{1},\ldots ,\mathbf {u} _{k}\}} that spans the same k {\displaystyle k} -dimensional subspace of R n {\displaystyle \mathbb {R} ^{n}} as S {\displaystyle S} .
The method is named after Jørgen Pedersen Gram and Erhard Schmidt , but Pierre-Simon Laplace had been familiar with it before Gram and Schmidt. [ 1 ] In the theory of Lie group decompositions , it is generalized by the Iwasawa decomposition .
The application of the Gram–Schmidt process to the column vectors of a full column rank matrix yields the QR decomposition (it is decomposed into an orthogonal and a triangular matrix ).
The vector projection of a vector v {\displaystyle \mathbf {v} } on a nonzero vector u {\displaystyle \mathbf {u} } is defined as [ note 1 ] proj u ( v ) = ⟨ v , u ⟩ ⟨ u , u ⟩ u , {\displaystyle \operatorname {proj} _{\mathbf {u} }(\mathbf {v} )={\frac {\langle \mathbf {v} ,\mathbf {u} \rangle }{\langle \mathbf {u} ,\mathbf {u} \rangle }}\,\mathbf {u} ,} where ⟨ v , u ⟩ {\displaystyle \langle \mathbf {v} ,\mathbf {u} \rangle } denotes the inner product of the vectors u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } . This means that proj u ( v ) {\displaystyle \operatorname {proj} _{\mathbf {u} }(\mathbf {v} )} is the orthogonal projection of v {\displaystyle \mathbf {v} } onto the line spanned by u {\displaystyle \mathbf {u} } . If u {\displaystyle \mathbf {u} } is the zero vector, then proj u ( v ) {\displaystyle \operatorname {proj} _{\mathbf {u} }(\mathbf {v} )} is defined as the zero vector.
Given k {\displaystyle k} nonzero linearly-independent vectors v 1 , … , v k {\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{k}} the Gram–Schmidt process defines the vectors u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} as follows: u 1 = v 1 , e 1 = u 1 ‖ u 1 ‖ u 2 = v 2 − proj u 1 ( v 2 ) , e 2 = u 2 ‖ u 2 ‖ u 3 = v 3 − proj u 1 ( v 3 ) − proj u 2 ( v 3 ) , e 3 = u 3 ‖ u 3 ‖ u 4 = v 4 − proj u 1 ( v 4 ) − proj u 2 ( v 4 ) − proj u 3 ( v 4 ) , e 4 = u 4 ‖ u 4 ‖ ⋮ ⋮ u k = v k − ∑ j = 1 k − 1 proj u j ( v k ) , e k = u k ‖ u k ‖ . {\displaystyle {\begin{aligned}\mathbf {u} _{1}&=\mathbf {v} _{1},&\!\mathbf {e} _{1}&={\frac {\mathbf {u} _{1}}{\|\mathbf {u} _{1}\|}}\\\mathbf {u} _{2}&=\mathbf {v} _{2}-\operatorname {proj} _{\mathbf {u} _{1}}(\mathbf {v} _{2}),&\!\mathbf {e} _{2}&={\frac {\mathbf {u} _{2}}{\|\mathbf {u} _{2}\|}}\\\mathbf {u} _{3}&=\mathbf {v} _{3}-\operatorname {proj} _{\mathbf {u} _{1}}(\mathbf {v} _{3})-\operatorname {proj} _{\mathbf {u} _{2}}(\mathbf {v} _{3}),&\!\mathbf {e} _{3}&={\frac {\mathbf {u} _{3}}{\|\mathbf {u} _{3}\|}}\\\mathbf {u} _{4}&=\mathbf {v} _{4}-\operatorname {proj} _{\mathbf {u} _{1}}(\mathbf {v} _{4})-\operatorname {proj} _{\mathbf {u} _{2}}(\mathbf {v} _{4})-\operatorname {proj} _{\mathbf {u} _{3}}(\mathbf {v} _{4}),&\!\mathbf {e} _{4}&={\mathbf {u} _{4} \over \|\mathbf {u} _{4}\|}\\&{}\ \ \vdots &&{}\ \ \vdots \\\mathbf {u} _{k}&=\mathbf {v} _{k}-\sum _{j=1}^{k-1}\operatorname {proj} _{\mathbf {u} _{j}}(\mathbf {v} _{k}),&\!\mathbf {e} _{k}&={\frac {\mathbf {u} _{k}}{\|\mathbf {u} _{k}\|}}.\end{aligned}}}
The sequence u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} is the required system of orthogonal vectors, and the normalized vectors e 1 , … , e k {\displaystyle \mathbf {e} _{1},\ldots ,\mathbf {e} _{k}} form an orthonormal set . The calculation of the sequence u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} is known as Gram–Schmidt orthogonalization , and the calculation of the sequence e 1 , … , e k {\displaystyle \mathbf {e} _{1},\ldots ,\mathbf {e} _{k}} is known as Gram–Schmidt orthonormalization .
To check that these formulas yield an orthogonal sequence, first compute ⟨ u 1 , u 2 ⟩ {\displaystyle \langle \mathbf {u} _{1},\mathbf {u} _{2}\rangle } by substituting the above formula for u 2 {\displaystyle \mathbf {u} _{2}} : we get zero. Then use this to compute ⟨ u 1 , u 3 ⟩ {\displaystyle \langle \mathbf {u} _{1},\mathbf {u} _{3}\rangle } again by substituting the formula for u 3 {\displaystyle \mathbf {u} _{3}} : we get zero. For arbitrary k {\displaystyle k} the proof is accomplished by mathematical induction .
Geometrically, this method proceeds as follows: to compute u i {\displaystyle \mathbf {u} _{i}} , it projects v i {\displaystyle \mathbf {v} _{i}} orthogonally onto the subspace U {\displaystyle U} generated by u 1 , … , u i − 1 {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{i-1}} , which is the same as the subspace generated by v 1 , … , v i − 1 {\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{i-1}} . The vector u i {\displaystyle \mathbf {u} _{i}} is then defined to be the difference between v i {\displaystyle \mathbf {v} _{i}} and this projection, guaranteed to be orthogonal to all of the vectors in the subspace U {\displaystyle U} .
The Gram–Schmidt process also applies to a linearly independent countably infinite sequence { v i } i . The result is an orthogonal (or orthonormal) sequence { u i } i such that for natural number n : the algebraic span of v 1 , … , v n {\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{n}} is the same as that of u 1 , … , u n {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{n}} .
If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs the 0 vector on the i {\displaystyle i} th step, assuming that v i {\displaystyle \mathbf {v} _{i}} is a linear combination of v 1 , … , v i − 1 {\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{i-1}} . If an orthonormal basis is to be produced, then the algorithm should test for zero vectors in the output and discard them because no multiple of a zero vector can have a length of 1. The number of vectors output by the algorithm will then be the dimension of the space spanned by the original inputs.
A variant of the Gram–Schmidt process using transfinite recursion applied to a (possibly uncountably) infinite sequence of vectors ( v α ) α < λ {\displaystyle (v_{\alpha })_{\alpha <\lambda }} yields a set of orthonormal vectors ( u α ) α < κ {\displaystyle (u_{\alpha })_{\alpha <\kappa }} with κ ≤ λ {\displaystyle \kappa \leq \lambda } such that for any α ≤ λ {\displaystyle \alpha \leq \lambda } , the completion of the span of { u β : β < min ( α , κ ) } {\displaystyle \{u_{\beta }:\beta <\min(\alpha ,\kappa )\}} is the same as that of { v β : β < α } {\displaystyle \{v_{\beta }:\beta <\alpha \}} . In particular, when applied to a (algebraic) basis of a Hilbert space (or, more generally, a basis of any dense subspace), it yields a (functional-analytic) orthonormal basis. Note that in the general case often the strict inequality κ < λ {\displaystyle \kappa <\lambda } holds, even if the starting set was linearly independent, and the span of ( u α ) α < κ {\displaystyle (u_{\alpha })_{\alpha <\kappa }} need not be a subspace of the span of ( v α ) α < λ {\displaystyle (v_{\alpha })_{\alpha <\lambda }} (rather, it's a subspace of its completion).
Consider the following set of vectors in R 2 {\displaystyle \mathbb {R} ^{2}} (with the conventional inner product ) S = { v 1 = [ 3 1 ] , v 2 = [ 2 2 ] } . {\displaystyle S=\left\{\mathbf {v} _{1}={\begin{bmatrix}3\\1\end{bmatrix}},\mathbf {v} _{2}={\begin{bmatrix}2\\2\end{bmatrix}}\right\}.}
Now, perform Gram–Schmidt, to obtain an orthogonal set of vectors: u 1 = v 1 = [ 3 1 ] {\displaystyle \mathbf {u} _{1}=\mathbf {v} _{1}={\begin{bmatrix}3\\1\end{bmatrix}}} u 2 = v 2 − proj u 1 ( v 2 ) = [ 2 2 ] − proj [ 3 1 ] [ 2 2 ] = [ 2 2 ] − 8 10 [ 3 1 ] = [ − 2 / 5 6 / 5 ] . {\displaystyle \mathbf {u} _{2}=\mathbf {v} _{2}-\operatorname {proj} _{\mathbf {u} _{1}}(\mathbf {v} _{2})={\begin{bmatrix}2\\2\end{bmatrix}}-\operatorname {proj} _{\left[{\begin{smallmatrix}3\\1\end{smallmatrix}}\right]}{\begin{bmatrix}2\\2\end{bmatrix}}={\begin{bmatrix}2\\2\end{bmatrix}}-{\frac {8}{10}}{\begin{bmatrix}3\\1\end{bmatrix}}={\begin{bmatrix}-2/5\\6/5\end{bmatrix}}.}
We check that the vectors u 1 {\displaystyle \mathbf {u} _{1}} and u 2 {\displaystyle \mathbf {u} _{2}} are indeed orthogonal: ⟨ u 1 , u 2 ⟩ = ⟨ [ 3 1 ] , [ − 2 / 5 6 / 5 ] ⟩ = − 6 5 + 6 5 = 0 , {\displaystyle \langle \mathbf {u} _{1},\mathbf {u} _{2}\rangle =\left\langle {\begin{bmatrix}3\\1\end{bmatrix}},{\begin{bmatrix}-2/5\\6/5\end{bmatrix}}\right\rangle =-{\frac {6}{5}}+{\frac {6}{5}}=0,} noting that if the dot product of two vectors is 0 then they are orthogonal.
For non-zero vectors, we can then normalize the vectors by dividing out their sizes as shown above: e 1 = 1 10 [ 3 1 ] {\displaystyle \mathbf {e} _{1}={\frac {1}{\sqrt {10}}}{\begin{bmatrix}3\\1\end{bmatrix}}} e 2 = 1 40 25 [ − 2 / 5 6 / 5 ] = 1 10 [ − 1 3 ] . {\displaystyle \mathbf {e} _{2}={\frac {1}{\sqrt {40 \over 25}}}{\begin{bmatrix}-2/5\\6/5\end{bmatrix}}={\frac {1}{\sqrt {10}}}{\begin{bmatrix}-1\\3\end{bmatrix}}.}
Denote by GS ( v 1 , … , v k ) {\displaystyle \operatorname {GS} (\mathbf {v} _{1},\dots ,\mathbf {v} _{k})} the result of applying the Gram–Schmidt process to a collection of vectors v 1 , … , v k {\displaystyle \mathbf {v} _{1},\dots ,\mathbf {v} _{k}} . This yields a map GS : ( R n ) k → ( R n ) k {\displaystyle \operatorname {GS} \colon (\mathbb {R} ^{n})^{k}\to (\mathbb {R} ^{n})^{k}} .
It has the following properties:
Let g : R n → R n {\displaystyle g\colon \mathbb {R} ^{n}\to \mathbb {R} ^{n}} be orthogonal (with respect to the given inner product). Then we have GS ( g ( v 1 ) , … , g ( v k ) ) = ( g ( GS ( v 1 , … , v k ) 1 ) , … , g ( GS ( v 1 , … , v k ) k ) ) {\displaystyle \operatorname {GS} (g(\mathbf {v} _{1}),\dots ,g(\mathbf {v} _{k}))=\left(g(\operatorname {GS} (\mathbf {v} _{1},\dots ,\mathbf {v} _{k})_{1}),\dots ,g(\operatorname {GS} (\mathbf {v} _{1},\dots ,\mathbf {v} _{k})_{k})\right)}
Further, a parametrized version of the Gram–Schmidt process yields a (strong) deformation retraction of the general linear group G L ( R n ) {\displaystyle \mathrm {GL} (\mathbb {R} ^{n})} onto the orthogonal group O ( R n ) {\displaystyle O(\mathbb {R} ^{n})} .
When this process is implemented on a computer, the vectors u k {\displaystyle \mathbf {u} _{k}} are often not quite orthogonal, due to rounding errors . For the Gram–Schmidt process as described above (sometimes referred to as "classical Gram–Schmidt") this loss of orthogonality is particularly bad; therefore, it is said that the (classical) Gram–Schmidt process is numerically unstable .
The Gram–Schmidt process can be stabilized by a small modification; this version is sometimes referred to as modified Gram-Schmidt or MGS. This approach gives the same result as the original formula in exact arithmetic and introduces smaller errors in finite-precision arithmetic.
Instead of computing the vector u k as u k = v k − proj u 1 ( v k ) − proj u 2 ( v k ) − ⋯ − proj u k − 1 ( v k ) , {\displaystyle \mathbf {u} _{k}=\mathbf {v} _{k}-\operatorname {proj} _{\mathbf {u} _{1}}(\mathbf {v} _{k})-\operatorname {proj} _{\mathbf {u} _{2}}(\mathbf {v} _{k})-\cdots -\operatorname {proj} _{\mathbf {u} _{k-1}}(\mathbf {v} _{k}),} it is computed as u k ( 1 ) = v k − proj u 1 ( v k ) , u k ( 2 ) = u k ( 1 ) − proj u 2 ( u k ( 1 ) ) , ⋮ u k ( k − 2 ) = u k ( k − 3 ) − proj u k − 2 ( u k ( k − 3 ) ) , u k ( k − 1 ) = u k ( k − 2 ) − proj u k − 1 ( u k ( k − 2 ) ) , e k = u k ( k − 1 ) ‖ u k ( k − 1 ) ‖ {\displaystyle {\begin{aligned}\mathbf {u} _{k}^{(1)}&=\mathbf {v} _{k}-\operatorname {proj} _{\mathbf {u} _{1}}(\mathbf {v} _{k}),\\\mathbf {u} _{k}^{(2)}&=\mathbf {u} _{k}^{(1)}-\operatorname {proj} _{\mathbf {u} _{2}}\left(\mathbf {u} _{k}^{(1)}\right),\\&\;\;\vdots \\\mathbf {u} _{k}^{(k-2)}&=\mathbf {u} _{k}^{(k-3)}-\operatorname {proj} _{\mathbf {u} _{k-2}}\left(\mathbf {u} _{k}^{(k-3)}\right),\\\mathbf {u} _{k}^{(k-1)}&=\mathbf {u} _{k}^{(k-2)}-\operatorname {proj} _{\mathbf {u} _{k-1}}\left(\mathbf {u} _{k}^{(k-2)}\right),\\\mathbf {e} _{k}&={\frac {\mathbf {u} _{k}^{(k-1)}}{\left\|\mathbf {u} _{k}^{(k-1)}\right\|}}\end{aligned}}}
This method is used in the previous animation, when the intermediate v 3 ′ {\displaystyle \mathbf {v} '_{3}} vector is used when orthogonalizing the blue vector v 3 {\displaystyle \mathbf {v} _{3}} .
Here is another description of the modified algorithm. Given the vectors v 1 , v 2 , … , v n {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\dots ,\mathbf {v} _{n}} , in our first step we produce vectors v 1 , v 2 ( 1 ) , … , v n ( 1 ) {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2}^{(1)},\dots ,\mathbf {v} _{n}^{(1)}} by removing components along the direction of v 1 {\displaystyle \mathbf {v} _{1}} . In formulas, v k ( 1 ) := v k − ⟨ v k , v 1 ⟩ ⟨ v 1 , v 1 ⟩ v 1 {\displaystyle \mathbf {v} _{k}^{(1)}:=\mathbf {v} _{k}-{\frac {\langle \mathbf {v} _{k},\mathbf {v} _{1}\rangle }{\langle \mathbf {v} _{1},\mathbf {v} _{1}\rangle }}\mathbf {v} _{1}} . After this step we already have two of our desired orthogonal vectors u 1 , … , u n {\displaystyle \mathbf {u} _{1},\dots ,\mathbf {u} _{n}} , namely u 1 = v 1 , u 2 = v 2 ( 1 ) {\displaystyle \mathbf {u} _{1}=\mathbf {v} _{1},\mathbf {u} _{2}=\mathbf {v} _{2}^{(1)}} , but we also made v 3 ( 1 ) , … , v n ( 1 ) {\displaystyle \mathbf {v} _{3}^{(1)},\dots ,\mathbf {v} _{n}^{(1)}} already orthogonal to u 1 {\displaystyle \mathbf {u} _{1}} . Next, we orthogonalize those remaining vectors against u 2 = v 2 ( 1 ) {\displaystyle \mathbf {u} _{2}=\mathbf {v} _{2}^{(1)}} . This means we compute v 3 ( 2 ) , v 4 ( 2 ) , … , v n ( 2 ) {\displaystyle \mathbf {v} _{3}^{(2)},\mathbf {v} _{4}^{(2)},\dots ,\mathbf {v} _{n}^{(2)}} by subtraction v k ( 2 ) := v k ( 1 ) − ⟨ v k ( 1 ) , u 2 ⟩ ⟨ u 2 , u 2 ⟩ u 2 {\displaystyle \mathbf {v} _{k}^{(2)}:=\mathbf {v} _{k}^{(1)}-{\frac {\langle \mathbf {v} _{k}^{(1)},\mathbf {u} _{2}\rangle }{\langle \mathbf {u} _{2},\mathbf {u} _{2}\rangle }}\mathbf {u} _{2}} . Now we have stored the vectors v 1 , v 2 ( 1 ) , v 3 ( 2 ) , v 4 ( 2 ) , … , v n ( 2 ) {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2}^{(1)},\mathbf {v} _{3}^{(2)},\mathbf {v} _{4}^{(2)},\dots ,\mathbf {v} _{n}^{(2)}} where the first three vectors are already u 1 , u 2 , u 3 {\displaystyle \mathbf {u} _{1},\mathbf {u} _{2},\mathbf {u} _{3}} and the remaining vectors are already orthogonal to u 1 , u 2 {\displaystyle \mathbf {u} _{1},\mathbf {u} _{2}} . As should be clear now, the next step orthogonalizes v 4 ( 2 ) , … , v n ( 2 ) {\displaystyle \mathbf {v} _{4}^{(2)},\dots ,\mathbf {v} _{n}^{(2)}} against u 3 = v 3 ( 2 ) {\displaystyle \mathbf {u} _{3}=\mathbf {v} _{3}^{(2)}} . Proceeding in this manner we find the full set of orthogonal vectors u 1 , … , u n {\displaystyle \mathbf {u} _{1},\dots ,\mathbf {u} _{n}} . If orthonormal vectors are desired, then we normalize as we go, so that the denominators in the subtraction formulas turn into ones.
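A minimal sketch of this modified algorithm in Python with NumPy, operating on the columns of a matrix; the function name and the column-vector convention are assumptions for illustration:

```python
import numpy as np

def modified_gram_schmidt(V):
    """Orthonormalize the columns of V using modified Gram-Schmidt (MGS).

    Assumes the columns of V are linearly independent.
    """
    V = V.astype(float)          # work on a copy
    n, k = V.shape
    U = np.zeros((n, k))
    for i in range(k):
        # Normalize the current column; it is already orthogonal to u_1 .. u_{i-1}.
        U[:, i] = V[:, i] / np.linalg.norm(V[:, i])
        # Immediately orthogonalize the *remaining* columns against u_i
        # (this is what distinguishes MGS from classical Gram-Schmidt).
        for j in range(i + 1, k):
            V[:, j] -= (U[:, i] @ V[:, j]) * U[:, i]
    return U

# Example from earlier in the article: v1 = (3, 1), v2 = (2, 2) as columns.
V = np.array([[3.0, 2.0],
              [1.0, 2.0]])
Q = modified_gram_schmidt(V)
print(Q)                        # columns ~ (3, 1)/sqrt(10) and (-1, 3)/sqrt(10)
print(np.round(Q.T @ Q, 12))    # ~ identity, confirming orthonormal columns
```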
Classical Gram–Schmidt orthonormalization is straightforward to implement as a short program: the vectors v 1 , ..., v k (the columns of a matrix V ) are replaced by orthonormal vectors (the columns of U ) which span the same subspace, as in the sketch below.
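A minimal sketch of the classical variant, assuming as above that the input vectors are the columns of a NumPy array (compare with the modified version sketched earlier):

```python
import numpy as np

def classical_gram_schmidt(V):
    """Orthonormalize the columns of V using classical Gram-Schmidt.

    Each column is orthogonalized against all previously computed orthonormal
    columns in a single pass, then normalized.
    """
    n, k = V.shape
    U = np.zeros((n, k))
    for j in range(k):
        u = V[:, j].astype(float)
        for i in range(j):
            u -= (U[:, i] @ V[:, j]) * U[:, i]   # subtract proj_{u_i}(v_j)
        U[:, j] = u / np.linalg.norm(u)
    return U
```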
The cost of this algorithm is asymptotically O(nk²) floating-point operations, where n is the dimensionality of the vectors. [ 2 ]
If the rows { v 1 , ..., v k } are written as a matrix A {\displaystyle A} , then applying Gaussian elimination to the augmented matrix [ A A T | A ] {\displaystyle \left[AA^{\mathsf {T}}|A\right]} will produce the orthogonalized vectors in place of A {\displaystyle A} . However the matrix A A T {\displaystyle AA^{\mathsf {T}}} must be brought to row echelon form , using only the row operation of adding a scalar multiple of one row to another. [ 3 ] For example, taking v 1 = [ 3 1 ] , v 2 = [ 2 2 ] {\displaystyle \mathbf {v} _{1}={\begin{bmatrix}3&1\end{bmatrix}},\mathbf {v} _{2}={\begin{bmatrix}2&2\end{bmatrix}}} as above, we have [ A A T | A ] = [ 10 8 3 1 8 8 2 2 ] {\displaystyle \left[AA^{\mathsf {T}}|A\right]=\left[{\begin{array}{rr|rr}10&8&3&1\\8&8&2&2\end{array}}\right]}
And reducing this to row echelon form produces [ 1 .8 .3 .1 0 1 − .25 .75 ] {\displaystyle \left[{\begin{array}{rr|rr}1&.8&.3&.1\\0&1&-.25&.75\end{array}}\right]}
The normalized vectors are then e 1 = 1 .3 2 + .1 2 [ .3 .1 ] = 1 10 [ 3 1 ] {\displaystyle \mathbf {e} _{1}={\frac {1}{\sqrt {.3^{2}+.1^{2}}}}{\begin{bmatrix}.3&.1\end{bmatrix}}={\frac {1}{\sqrt {10}}}{\begin{bmatrix}3&1\end{bmatrix}}} e 2 = 1 .25 2 + .75 2 [ − .25 .75 ] = 1 10 [ − 1 3 ] , {\displaystyle \mathbf {e} _{2}={\frac {1}{\sqrt {.25^{2}+.75^{2}}}}{\begin{bmatrix}-.25&.75\end{bmatrix}}={\frac {1}{\sqrt {10}}}{\begin{bmatrix}-1&3\end{bmatrix}},} as in the example above.
The result of the Gram–Schmidt process may be expressed in a non-recursive formula using determinants .
e j = 1 D j − 1 D j | ⟨ v 1 , v 1 ⟩ ⟨ v 2 , v 1 ⟩ ⋯ ⟨ v j , v 1 ⟩ ⟨ v 1 , v 2 ⟩ ⟨ v 2 , v 2 ⟩ ⋯ ⟨ v j , v 2 ⟩ ⋮ ⋮ ⋱ ⋮ ⟨ v 1 , v j − 1 ⟩ ⟨ v 2 , v j − 1 ⟩ ⋯ ⟨ v j , v j − 1 ⟩ v 1 v 2 ⋯ v j | {\displaystyle \mathbf {e} _{j}={\frac {1}{\sqrt {D_{j-1}D_{j}}}}{\begin{vmatrix}\langle \mathbf {v} _{1},\mathbf {v} _{1}\rangle &\langle \mathbf {v} _{2},\mathbf {v} _{1}\rangle &\cdots &\langle \mathbf {v} _{j},\mathbf {v} _{1}\rangle \\\langle \mathbf {v} _{1},\mathbf {v} _{2}\rangle &\langle \mathbf {v} _{2},\mathbf {v} _{2}\rangle &\cdots &\langle \mathbf {v} _{j},\mathbf {v} _{2}\rangle \\\vdots &\vdots &\ddots &\vdots \\\langle \mathbf {v} _{1},\mathbf {v} _{j-1}\rangle &\langle \mathbf {v} _{2},\mathbf {v} _{j-1}\rangle &\cdots &\langle \mathbf {v} _{j},\mathbf {v} _{j-1}\rangle \\\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{j}\end{vmatrix}}}
u j = 1 D j − 1 | ⟨ v 1 , v 1 ⟩ ⟨ v 2 , v 1 ⟩ ⋯ ⟨ v j , v 1 ⟩ ⟨ v 1 , v 2 ⟩ ⟨ v 2 , v 2 ⟩ ⋯ ⟨ v j , v 2 ⟩ ⋮ ⋮ ⋱ ⋮ ⟨ v 1 , v j − 1 ⟩ ⟨ v 2 , v j − 1 ⟩ ⋯ ⟨ v j , v j − 1 ⟩ v 1 v 2 ⋯ v j | {\displaystyle \mathbf {u} _{j}={\frac {1}{D_{j-1}}}{\begin{vmatrix}\langle \mathbf {v} _{1},\mathbf {v} _{1}\rangle &\langle \mathbf {v} _{2},\mathbf {v} _{1}\rangle &\cdots &\langle \mathbf {v} _{j},\mathbf {v} _{1}\rangle \\\langle \mathbf {v} _{1},\mathbf {v} _{2}\rangle &\langle \mathbf {v} _{2},\mathbf {v} _{2}\rangle &\cdots &\langle \mathbf {v} _{j},\mathbf {v} _{2}\rangle \\\vdots &\vdots &\ddots &\vdots \\\langle \mathbf {v} _{1},\mathbf {v} _{j-1}\rangle &\langle \mathbf {v} _{2},\mathbf {v} _{j-1}\rangle &\cdots &\langle \mathbf {v} _{j},\mathbf {v} _{j-1}\rangle \\\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{j}\end{vmatrix}}}
where D 0 = 1 {\displaystyle D_{0}=1} and, for j ≥ 1 {\displaystyle j\geq 1} , D j {\displaystyle D_{j}} is the Gram determinant
D j = | ⟨ v 1 , v 1 ⟩ ⟨ v 2 , v 1 ⟩ ⋯ ⟨ v j , v 1 ⟩ ⟨ v 1 , v 2 ⟩ ⟨ v 2 , v 2 ⟩ ⋯ ⟨ v j , v 2 ⟩ ⋮ ⋮ ⋱ ⋮ ⟨ v 1 , v j ⟩ ⟨ v 2 , v j ⟩ ⋯ ⟨ v j , v j ⟩ | . {\displaystyle D_{j}={\begin{vmatrix}\langle \mathbf {v} _{1},\mathbf {v} _{1}\rangle &\langle \mathbf {v} _{2},\mathbf {v} _{1}\rangle &\cdots &\langle \mathbf {v} _{j},\mathbf {v} _{1}\rangle \\\langle \mathbf {v} _{1},\mathbf {v} _{2}\rangle &\langle \mathbf {v} _{2},\mathbf {v} _{2}\rangle &\cdots &\langle \mathbf {v} _{j},\mathbf {v} _{2}\rangle \\\vdots &\vdots &\ddots &\vdots \\\langle \mathbf {v} _{1},\mathbf {v} _{j}\rangle &\langle \mathbf {v} _{2},\mathbf {v} _{j}\rangle &\cdots &\langle \mathbf {v} _{j},\mathbf {v} _{j}\rangle \end{vmatrix}}.}
Note that the expression for u k {\displaystyle \mathbf {u} _{k}} is a "formal" determinant, i.e. the matrix contains both scalars and vectors; the meaning of this expression is defined to be the result of a cofactor expansion along the row of vectors.
The determinant formula for the Gram-Schmidt is computationally (exponentially) slower than the recursive algorithms described above; it is mainly of theoretical interest.
Expressed using notation used in geometric algebra , the unnormalized results of the Gram–Schmidt process can be expressed as u k = v k − ∑ j = 1 k − 1 ( v k ⋅ u j ) u j − 1 , {\displaystyle \mathbf {u} _{k}=\mathbf {v} _{k}-\sum _{j=1}^{k-1}(\mathbf {v} _{k}\cdot \mathbf {u} _{j})\mathbf {u} _{j}^{-1}\ ,} which is equivalent to the expression using the proj {\displaystyle \operatorname {proj} } operator defined above. The results can equivalently be expressed as [ 4 ] u k = v k ∧ v k − 1 ∧ ⋅ ⋅ ⋅ ∧ v 1 ( v k − 1 ∧ ⋅ ⋅ ⋅ ∧ v 1 ) − 1 , {\displaystyle \mathbf {u} _{k}=\mathbf {v} _{k}\wedge \mathbf {v} _{k-1}\wedge \cdot \cdot \cdot \wedge \mathbf {v} _{1}(\mathbf {v} _{k-1}\wedge \cdot \cdot \cdot \wedge \mathbf {v} _{1})^{-1},} which is closely related to the expression using determinants above.
Other orthogonalization algorithms use Householder transformations or Givens rotations . The algorithms using Householder transformations are more stable than the stabilized Gram–Schmidt process. On the other hand, the Gram–Schmidt process produces the j {\displaystyle j} th orthogonalized vector after the j {\displaystyle j} th iteration, while orthogonalization using Householder reflections produces all the vectors only at the end. This makes only the Gram–Schmidt process applicable for iterative methods like the Arnoldi iteration .
Yet another alternative is motivated by the use of Cholesky decomposition for inverting the matrix of the normal equations in linear least squares . Let V {\displaystyle V} be a full column rank matrix, whose columns need to be orthogonalized. The matrix V ∗ V {\displaystyle V^{*}V} is Hermitian and positive definite , so it can be written as V ∗ V = L L ∗ , {\displaystyle V^{*}V=LL^{*},} using the Cholesky decomposition . The lower triangular matrix L {\displaystyle L} with strictly positive diagonal entries is invertible . Then columns of the matrix U = V ( L − 1 ) ∗ {\displaystyle U=V\left(L^{-1}\right)^{*}} are orthonormal and span the same subspace as the columns of the original matrix V {\displaystyle V} . The explicit use of the product V ∗ V {\displaystyle V^{*}V} makes the algorithm unstable, especially if the product's condition number is large. Nevertheless, this algorithm is used in practice and implemented in some software packages because of its high efficiency and simplicity.
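A brief sketch of this Cholesky-based construction with NumPy; the random test matrix is only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((6, 3))        # a full-column-rank matrix (with probability 1)

G = V.conj().T @ V                     # Gram matrix V*V, Hermitian positive definite
L = np.linalg.cholesky(G)              # Cholesky factorization G = L L*
U = V @ np.linalg.inv(L).conj().T      # columns of U are orthonormal, same span as V

# Check: U*U = L^{-1} V* V L^{-*} = L^{-1} L L* L^{-*} = I
print(np.allclose(U.conj().T @ U, np.eye(3)))   # True
```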
In quantum mechanics there are several orthogonalization schemes with characteristics better suited for certain applications than original Gram–Schmidt. Nevertheless, it remains a popular and effective algorithm for even the largest electronic structure calculations. [ 5 ]
Gram-Schmidt orthogonalization can be done in strongly-polynomial time . The run-time analysis is similar to that of Gaussian elimination . [ 6 ] : 40 | https://en.wikipedia.org/wiki/Gram–Schmidt_process |
A Gran plot (also known as Gran titration or the Gran method ) is a common means of standardizing a titrate or titrant by estimating the equivalence volume or end point in a strong acid -strong base titration or in a potentiometric titration . Such plots have been also used to calibrate glass electrodes, to estimate the carbonate content of aqueous solutions, and to estimate the K a values ( acid dissociation constants ) of weak acids and bases from titration data. Gran plots are named after Swedish chemist Gunnar Gran, who developed the method in 1950. [ 1 ]
Gran plots use linear approximations of the a priori non-linear relationships between the measured quantity, pH or electromotive potential (emf), and the titrant volume. Other types of concentration measures, such as spectrophotometric absorbances or NMR chemical shifts , can in principle be similarly treated. These approximations are only valid near, but not at, the end point, and so the method differs from end point estimations by way of first- and second- derivative plots, which require data at the end point. Gran plots were originally devised for graphical determinations in pre-computer times, wherein an x-y plot on paper would be manually extrapolated to estimate the x-intercept. The graphing and visual estimation of the end point have been replaced by more accurate least-squares analyses since the advent of modern computers and enabling software packages, especially spreadsheet programs with built-in least-squares functionality.
The Gran plot is based on the Nernst equation , which can be written as

E = E 0 + s log 10 { H + } {\displaystyle E=E^{0}+s\log _{10}\{H^{+}\}}

where E is a measured electrode potential, E 0 is a standard electrode potential, s is the slope, ideally equal to RT/nF, and {H + } is the activity of the hydrogen ion. The expression rearranges to

{ H + } = 10 E − E 0 s o r { H + } = 10 − p H {\displaystyle \{H^{+}\}=10^{\frac {E-E^{0}}{s}}\ or\ \{H^{+}\}=10^{-pH}}

depending on whether the electrode is calibrated in millivolts or pH. For convenience the concentration, [H + ], is used in place of activity. In a titration of strong acid with strong alkali, the analytical concentration of the hydrogen ion is obtained from the initial concentration of acid, C i , and the amount of alkali added during titration.
[ H + ] = C i v i − c O H v v i + v {\displaystyle [H^{+}]={\frac {C_{i}v_{i}-c_{OH}v}{v_{i}+v}}}

where v i is the initial volume of solution, c OH is the concentration of alkali in the burette and v is the titre volume. Equating the two expressions for [H + ] and simplifying, the following expression is obtained

( v i + v ) 10 E − E 0 s = C i v i − c O H v {\displaystyle (v_{i}+v)10^{\frac {E-E^{0}}{s}}=C_{i}v_{i}-c_{OH}v}
A plot of ( v i + v ) 10 E − E 0 s o r ( v i + v ) 10 − p H {\displaystyle (v_{i}+v)10^{\frac {E-E^{0}}{s}}\ or\ (v_{i}+v)10^{-pH}} against v will be a straight line. If E 0 and s are known from electrode calibration, where the line crosses the x-axis indicates the volume at the equivalence point, C i v i = c O H v {\displaystyle C_{i}v_{i}=c_{OH}v} . Alternatively, this plot can be used for electrode calibration by finding the values of E 0 and s that give the best straight line.
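As a simple illustration (synthetic data and variable names are ours, not from the article), the acid-side Gran function can be fitted by least squares and extrapolated to its x-intercept:

```python
# Gran plot sketch for a strong acid titrated with strong base (synthetic data).
import numpy as np

v_i, C_i = 50.0, 0.100          # initial volume (mL) and acid concentration (M)
c_OH = 0.100                    # alkali concentration in the burette (M)
v = np.arange(5.0, 45.0, 2.5)   # titre volumes before the end point (mL)

H = (C_i * v_i - c_OH * v) / (v_i + v)   # [H+] before equivalence
pH = -np.log10(H)

F = (v_i + v) * 10.0 ** (-pH)            # Gran function, linear in v
slope, intercept = np.polyfit(v, F, 1)   # slope ~ -c_OH, intercept ~ C_i*v_i
v_eq = -intercept / slope                # x-intercept: C_i*v_i = c_OH*v_eq
print(f"equivalence volume ~ {v_eq:.2f} mL (expected {C_i * v_i / c_OH:.2f} mL)")
```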
For a strong acid-strong base titration monitored by pH, we have at any ith point in the titration

[ H + ] i − [ O H − ] i = [ H + ] i − K w / [ H + ] i {\displaystyle [H^{+}]_{i}-[OH^{-}]_{i}=[H^{+}]_{i}-K_{w}/[H^{+}]_{i}}
where K w is the water autoprotolysis constant.
If titrating an acid of initial volume v 0 {\displaystyle v_{0^{}}} and concentration [ H + ] 0 {\displaystyle [H^{+}]_{0^{}}} with base of concentration [ O H − ] 0 {\displaystyle [OH^{-}]_{0^{}}} , then at any ith point in the titration with titrant volume v i {\displaystyle v_{i^{}}} ,

v 0 [ H + ] 0 − v i [ O H − ] 0 v 0 + v i = [ H + ] i − K w / [ H + ] i {\displaystyle {\frac {v_{0}[H^{+}]_{0}-v_{i}[OH^{-}]_{0}}{v_{0}+v_{i}}}=[H^{+}]_{i}-K_{w}/[H^{+}]_{i}}
At the equivalence point , the equivalence volume v e = v i {\displaystyle v_{e^{}}=v_{i^{}}} .
Thus,

v e [ O H − ] 0 = v 0 [ H + ] 0 {\displaystyle v_{e}[OH^{-}]_{0}=v_{0}[H^{+}]_{0}}
The equivalence volume is used to compute whichever of [ H + ] 0 {\displaystyle [H^{+}]_{0^{}}} or [ O H − ] 0 {\displaystyle [OH^{-}]_{0^{}}} is unknown.
The pH meter is usually calibrated with buffer solutions at known pH values before starting the titration. The ionic strength can be kept constant by judicious choice of acid and base. For instance, HCl titrated with NaOH of approximately the same concentration will replace H + with an ion (Na + ) of the same charge at the same concentration, to keep the ionic strength fairly constant. Otherwise, a relatively high concentration of background electrolyte can be used, or the activity quotient can be computed. [ 2 ]
Mirror-image plots are obtained if titrating the base with the acid, and the signs of the slopes are reversed.
Hence,
Figure 1 gives sample Gran plots of a strong base-strong acid titration.
The method can be used to estimate the dissociation constants of weak acids, as well as their concentrations (Gran, 1952). With an acid represented by HA, where

K a = [ H + ] [ A − ] [ H A ] {\displaystyle K_{a}={\frac {[H^{+}][A^{-}]}{[HA]}}}
we have at any ith point in the titration of a volume v 0 {\displaystyle v_{0}} of acid at a concentration [ H A ] 0 {\displaystyle [HA]_{0}} by base of concentration [ O H − ] 0 {\displaystyle [OH^{-}]_{0}} . In the linear regions away from equivalence,
are valid approximations, whence
A plot of 10 − p H i v i {\displaystyle 10^{-pH_{i}}v_{i}} versus v i {\displaystyle v_{i^{}}} will have a slope − K a {\displaystyle -K_{a^{}}} over the linear acidic region and an extrapolated x-intercept v e {\displaystyle v_{e^{}}} , from which either [ H A ] 0 {\displaystyle [HA]_{0^{}}} or [ O H − ] 0 {\displaystyle [OH^{-}]_{0^{}}} can be computed. [ 2 ] The alkaline region is treated in the same manner as for a titration of strong acid . Figure 2 gives an example; in this example, the two x-intercepts differ by about 0.2 mL but this is a small discrepancy, given the large equivalence volume (0.5% error).
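A corresponding numerical sketch for the weak-acid case (again synthetic data with illustrative constants; the "exact" pH values are generated from the cubic proton condition rather than from the linear approximation being tested):

```python
# Weak-acid Gran function [H+]_i * v_i vs v_i: slope ~ -Ka, x-intercept ~ v_e.
# Synthetic-data sketch; constants are illustrative (acetic-acid-like Ka).
import numpy as np

Ka, Kw = 1.75e-5, 1.0e-14
v0, HA0, OH0 = 50.0, 0.100, 0.100        # mL, M, M; so v_e = 50 mL
v = np.arange(10.0, 42.5, 2.5)           # titrant volumes in the acidic region

def h_exact(Ca, Cb):
    """Positive root of h^3 + (Ka+Cb)h^2 + (Ka(Cb-Ca)-Kw)h - Kw*Ka = 0."""
    return float(np.roots([1.0, Ka + Cb, Ka * (Cb - Ca) - Kw, -Kw * Ka]).real.max())

Ca = HA0 * v0 / (v0 + v)                 # total acid (all forms) after dilution
Cb = OH0 * v / (v0 + v)                  # added base after dilution
H = np.array([h_exact(a, b) for a, b in zip(Ca, Cb)])

slope, intercept = np.polyfit(v, H * v, 1)
print(f"slope ~ -Ka: {slope:.2e}   x-intercept ~ v_e: {-intercept / slope:.1f} mL")
```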
Similar equations can be written for the titration of a weak base by strong acid (Gran, 1952; Harris, 1998).
Martell and Motekaitis (1992) use the most linear regions and exploit the difference in equivalence volumes between acid-side and base-side plots during an acid-base titration to estimate the adventitious CO 2 content in the base solution. This is illustrated in the sample Gran plots of Figure 1. In that situation, the extra acid used to neutralize the carbonate, by double protonation, in volume v 0 {\displaystyle v_{0^{}}} of titrate is ( v e − v e ′ ) [ H + ] 0 = 2 v 0 [ C O 2 ] 0 {\displaystyle (v_{e}-v_{e}^{\prime })[H^{+}]_{0^{}}=2v_{0}[CO_{2}]_{0}} . In the opposite case of a titration of acid by base, the carbonate content is similarly computed from ( v e ′ − v e ) [ O H − ] 0 = 2 v e ′ [ C O 2 ] 0 {\displaystyle (v_{e}^{\prime }-v_{e})[OH^{-}]_{0^{}}=2v_{e}^{\prime }[CO_{2}]_{0}} , where v e ′ {\displaystyle v_{e}^{\prime }} is the base-side equivalence volume (from Martell and Motekaitis).
When the total CO 2 content is significant, as in natural waters and alkaline effluents, two or three inflections can be seen in the pH-volume curves owing to buffering by higher concentrations of bicarbonate and carbonate. As discussed by Stumm and Morgan (1981), the analysis of such waters can use up to six Gran plots from a single titration to estimate the multiple end points and measure the total alkalinity and the carbonate and/or bicarbonate contents.
To use potentiometric (e.m.f.) measurements E i {\displaystyle E_{i^{}}} in monitoring the H + {\displaystyle H^{+_{}}} concentration in place of p H i {\displaystyle pH_{i^{}}} readings, one can trivially set − l o g 10 [ H + ] i = b 0 − b 1 E i {\displaystyle -log_{10}[H^{+}]_{i}=b_{0}-b_{1}E_{i^{}}} and apply the same equations as above, where b 0 {\displaystyle b_{0^{}}} is the offset correction n F E 0 / R T {\displaystyle nFE_{0^{}}/RT} , and b 1 {\displaystyle b_{1^{}}} is a slope correction n F / R T {\displaystyle nF^{_{}}/RT} (1/59.2 pH units/mV at 25°C), such that − b 1 E i {\displaystyle -b_{1}E_{i^{}}} replaces p H i {\displaystyle pH_{i^{}}} .
Thus, as before for a titration of strong acid by strong base,
Analogous plots can be drawn using data from a titration of base by acid.
Note that the above analysis requires prior knowledge of b 0 {\displaystyle b_{0^{}}} and b 1 {\displaystyle b_{1^{}}} .
If a pH electrode is not well calibrated, an offset correction can be computed in situ from the acid-side Gran slope:
In the sample data illustrated in Figure 1, this offset correction was not insignificant, at -0.054 pH units.
The value of b 1 {\displaystyle b_{1^{}}} , however, may deviate from its theoretical value and can only be assessed by a proper calibration of the electrode. Calibration of an electrode is often performed using buffers of known pH, or by performing a titration of strong acid with strong base. In that case, a constant ionic strength can be maintained, and [ H + ] i {\displaystyle [H^{+_{}}]_{i}} is known at all titration points if both [ H + ] 0 {\displaystyle [H_{}^{+}]_{0}} and [ O H − ] 0 {\displaystyle [OH_{}^{-}]_{0}} are known (and should be directly related to primary standards ). For instance, Martell and Motekaitis (1992) calculated the pH value expected at the start of the titration, having earlier titrated the acid and base solutions against primary standards, then adjusted the pH electrode reading accordingly, but this does not afford a slope correction if one is needed.
Based on earlier work by McBryde (1969), Gans and O'Sullivan (2000) describe an iterative approach to arrive at both b 0 {\displaystyle b_{0^{}}} and b 1 {\displaystyle b_{1^{}}} values in the relation − l o g 10 [ H + ] i = b 0 − b 1 E i {\displaystyle -log_{10}[H^{+}]_{i}=b_{0}-b_{1}E_{i^{}}} , from a titration of strong acid by strong base:
The procedure could in principle be modified for titrations of base by acid. A computer program named GLEE (for GLass Electrode Evaluation) implements this approach on titrations of acid by base for electrode calibration. This program additionally can compute (by a separate, non-linear least-squares process) a 'correction' for the base concentration. An advantage of this method of electrode calibration is that it can be performed in the same medium of constant ionic strength which may later be used for the determination of equilibrium constants . Note that the regular Gran functions will provide the required equivalence volumes and, as b 1 {\displaystyle b_{1^{}}} is initially set at its theoretical value, the initial estimate for b 0 {\displaystyle b_{0^{}}} in step 1 can be had from the slope of the regular acid-side Gran function as detailed earlier. Note too that this procedure computes the CO 2 content and can indeed be combined with a complete standardization of the base, using the definition of v e {\displaystyle v_{e^{}}} to compute [ O H − ] 0 {\displaystyle [OH^{-}]_{0^{}}} . Finally, the usable pH range could be extended by solving the quadratic ( v 0 [ H + ] 0 − v i [ O H − ] 0 ) / ( v 0 + v i ) = [ H + ] i − K w / [ H + ] i {\displaystyle (v_{0{^{}}}[H^{+}]_{0}-v_{i}[OH^{-}]_{0})/(v_{0}+v_{i})=[H^{+}]_{i}-K_{w}/[H^{+}]_{i}} for [ H + ] i {\displaystyle [H^{+}]_{i^{}}} .
Potentiometric data are also used to monitor species other than H + {\displaystyle H^{+_{}}} . When monitoring any species S 1 {\displaystyle S^{1_{}}} by potentiometry, one can apply the same formalism with − l o g 10 [ S 1 ] i = b 0 − b 1 E i {\displaystyle -log_{10}[S^{1}]_{i}=b_{0}-b_{1}E_{i^{}}} . Thus, a titration of a solution of another species S 2 {\displaystyle S^{2_{}}} by species S 1 {\displaystyle S^{1_{}}} is analogous to a pH-monitored titration of base by acid, whence either ( v 0 + v i ) 10 b 1 E i {\displaystyle ({v_{0}+v_{i}})10^{b_{1}E_{i}}} or ( v 0 + v i ) 10 − b 1 E i {\displaystyle ({v_{0}+v_{i}})10^{-b_{1}E_{i}}} plotted versus v i {\displaystyle v_{i^{}}} will have an x-intercept v 0 [ S 2 ] 0 / [ S 1 ] 0 {\displaystyle v_{0}[S^{2}]_{0}/[S^{1}]_{0^{}}} . In the opposite titration of S 1 {\displaystyle S^{1_{}}} by S 2 {\displaystyle S^{2_{}}} , the equivalence volume will be v 0 [ S 1 ] 0 / [ S 2 ] 0 {\displaystyle v_{0}[S^{1}]_{0}/[S^{2}]_{0^{}}} . The significance of the slopes will depend on the interactions between the two species, whether associating in solution or precipitating together (Gran, 1952). Usually, the only result of interest is the equivalence point. However, the before-equivalence slope could in principle be used to assess the solubility product K s p {\displaystyle K_{sp^{}}} in the same way as K w {\displaystyle K_{w^{}}} can be determined from acid-base titrations, although other ion-pair association interactions may be occurring as well. [ 3 ]
To illustrate, consider a titration of Cl − by Ag + monitored potentiometrically:
Hence,
Figure 3 gives sample plots of potentiometric titration data.
In any titration lacking buffering components, both before-equivalence and beyond-equivalence plots should ideally cross the x axis at the same point. Non-ideal behaviour can result from measurement errors ( e.g. a poorly calibrated electrode, an insufficient equilibration time before recording the electrode reading, drifts in ionic strength), sampling errors ( e.g. low data densities in the linear regions) or an incomplete chemical model ( e.g. the presence of titratable impurities such as carbonate in the base, or incomplete precipitation in potentiometric titrations of dilute solutions, for which Gran et al. (1981) propose alternate approaches). Buffle et al. (1972) discuss a number of error sources.
Because the 10 p H i {\displaystyle 10^{pH_{i}}} or 10 − p H i {\displaystyle 10^{-pH_{i}}} terms in the Gran functions only asymptotically tend toward, and never reach, the x axis, curvature approaching the equivalence point is to be expected in all cases. However, there is disagreement among practitioners as to which data to plot, whether using only data on one side of equivalence or on both sides, and whether to select data nearest equivalence or in the most linear portions: [ 4 ] [ 5 ] using the data nearest the equivalence point will enable the two x-intercepts to be more coincident with each other and to better coincide with estimates from derivative plots, while using acid-side data in an acid-base titration presumably minimizes interference from titratable (buffering) impurities, such as bicarbonate/carbonate in the base (see Carbonate content ), and the effect of a drifting ionic strength. In the sample plots displayed in the Figures, the most linear regions (the data represented by filled circles) were selected for the least-squares computations of slopes and intercepts. Data selection is always subjective. | https://en.wikipedia.org/wiki/Gran_plot |
In geometry , the grand 120-cell or grand polydodecahedron is a regular star 4-polytope with Schläfli symbol {5,3,5/2}. It is one of 10 regular Schläfli-Hess polytopes .
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli . It was named by John Horton Conway , extending the naming system by Arthur Cayley for the Kepler–Poinsot solids .
It has the same edge arrangement as the 600-cell and the icosahedral 120-cell , and the same face arrangement as the great 120-cell .
It could be seen as another 4D analogue of the three-dimensional great dodecahedron due to being a pentagonal polytope with enlarged facets .
This 4-polytope article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Grand_120-cell |
In geometry , the grand 600-cell or grand polytetrahedron is a regular star 4-polytope with Schläfli symbol {3, 3, 5/2}. It is one of 10 regular Schläfli-Hess polytopes. It is the only one with 600 cells.
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli . It was named by John Horton Conway , extending the naming system by Arthur Cayley for the Kepler-Poinsot solids .
The grand 600-cell can be seen as the four-dimensional analogue of the great icosahedron (which in turn is analogous to the pentagram ); both of these are the only regular n -dimensional star polytopes which are derived by performing stellational operations on the pentagonal polytope which has simplex faces. It can be constructed analogously to the pentagram, its two-dimensional analogue, via the extension of said ( n-1 )-D simplex faces of the core n D polytope ( tetrahedra for the grand 600-cell, equilateral triangles for the great icosahedron, and line segments for the pentagram) until the figure regains regular faces.
The Grand 600-cell is also dual to the great grand stellated 120-cell , mirroring the great icosahedron's duality with the great stellated dodecahedron (which in turn is also analogous to the pentagram); all of these are the final stellations of the n -dimensional "dodecahedral-type" pentagonal polytope.
It has the same edge arrangement as the great stellated 120-cell and the grand stellated 120-cell , and the same face arrangement as the great icosahedral 120-cell .
This 4-polytope article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Grand_600-cell |
A Grand Unified Theory ( GUT ) is any model in particle physics that merges the electromagnetic , weak , and strong forces (the three gauge interactions of the Standard Model ) into a single force at high energies . Although this unified force has not been directly observed, many GUT models theorize its existence. If the unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct.
Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single combined electroweak interaction . [ 1 ] GUT models predict that at even higher energy , the strong and electroweak interactions will unify into one electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers , but one unified coupling constant . Unifying gravity with the electronuclear interaction would provide a more comprehensive theory of everything (TOE) rather than a Grand Unified Theory. Thus, GUTs are often seen as an intermediate step towards a TOE.
The novel particles predicted by GUT models are expected to have extremely high masses—around the GUT scale of 10^16 GeV/c^2 (only three orders of magnitude below the Planck scale of 10^19 GeV/c^2)—and so are well beyond the reach of any foreseeable particle collider experiments. Therefore, the particles predicted by GUT models cannot be observed directly, and instead the effects of grand unification might be detected through indirect observations of the following:
Some GUTs, such as the Pati–Salam model , predict the existence of magnetic monopoles .
While GUTs might be expected to offer simplicity over the complications present in the Standard Model , realistic models remain complicated because they need to introduce additional fields and interactions, or even additional dimensions of space, in order to reproduce observed fermion masses and mixing angles. This difficulty, in turn, may be related to the existence [ clarification needed ] of family symmetries beyond the conventional GUT models. Due to this and the lack of any observed effect of grand unification so far, there is no generally accepted GUT model.
Models that do not unify the three interactions using one simple group as the gauge symmetry but do so using semisimple groups can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well.
Historically, the first true GUT, which was based on the simple Lie group SU(5) , was proposed by Howard Georgi and Sheldon Glashow in 1974. [ 3 ] The Georgi–Glashow model was preceded by the semisimple Lie algebra Pati–Salam model by Abdus Salam and Jogesh Pati also in 1974, [ 4 ] who pioneered the idea to unify gauge interactions.
The acronym GUT was first coined in 1978 by CERN researchers John Ellis , Andrzej Buras , Mary K. Gaillard , and Dimitri Nanopoulos ; however, in the final version of their paper [ 5 ] they opted for the less anatomical GUM (Grand Unification Mass). Later that year, Nanopoulos was the first to use [ 6 ] the acronym in a paper. [ 7 ]
The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics. While the description of strong and weak interactions within the Standard Model is based on gauge symmetries governed by the simple symmetry groups SU(3) and SU(2) which allow only discrete charges, the remaining component, the weak hypercharge interaction is described by an abelian symmetry U(1) which in principle allows for arbitrary charge assignments. [ note 1 ] The observed charge quantization , namely the postulation that all known elementary particles carry electric charges which are exact multiples of one-third of the "elementary" charge , has led to the idea that hypercharge interactions and possibly the strong and weak interactions might be embedded in one Grand Unified interaction described by a single, larger simple symmetry group containing the Standard Model. This would automatically predict the quantized nature and values of all elementary particle charges. Since this also results in a prediction for the relative strengths of the fundamental interactions which we observe, in particular, the weak mixing angle , grand unification ideally reduces the number of independent input parameters but is also constrained by observations.
Grand unification is reminiscent of the unification of electric and magnetic forces by Maxwell's field theory of electromagnetism in the 19th century, but its physical implications and mathematical structure are qualitatively different.
SU(5) is the simplest GUT. The smallest simple Lie group which contains the standard model , and upon which the first Grand Unified Theory was based, is

SU(5) ⊃ SU(3) × SU(2) × U(1).
Such group symmetries allow the reinterpretation of several known particles, including the photon, W and Z bosons, and gluon, as different states of a single particle field. However, it is not obvious that the simplest possible choices for the extended "Grand Unified" symmetry should yield the correct inventory of elementary particles. The fact that all currently known matter particles fit perfectly into three copies of the smallest group representations of SU(5) and immediately carry the correct observed charges, is one of the first and most important reasons why people believe that a Grand Unified Theory might actually be realized in nature.
The two smallest irreducible representations of SU(5) are 5 (the defining representation) and 10 . (These bold numbers indicate the dimension of the representation.) In the standard assignment, the 5 contains the charge conjugates of the right-handed down-type quark color triplet and a left-handed lepton isospin doublet , while the 10 contains the six up-type quark components, the left-handed down-type quark color triplet, and the right-handed electron . This scheme has to be replicated for each of the three known generations of matter . It is notable that the theory is anomaly free with this matter content.
The hypothetical right-handed neutrinos are a singlet of SU(5) , which means their mass is not forbidden by any symmetry; they do not need spontaneous electroweak symmetry breaking, which explains why their mass would be heavy [ clarification needed ] (see seesaw mechanism ).
The next simple Lie group which contains the standard model is

SO(10).
Here, the unification of matter is even more complete, since the irreducible spinor representation 16 contains both the 5 and 10 of SU(5) and a right-handed neutrino, and thus the complete particle content of one generation of the extended standard model with neutrino masses . This is already the largest simple group that achieves the unification of matter in a scheme involving only the already known matter particles (apart from the Higgs sector ).
Since different standard model fermions are grouped together in larger representations, GUTs specifically predict relations among the fermion masses, such as between the electron and the down quark , the muon and the strange quark , and the tau lepton and the bottom quark for SU(5) and SO(10) . Some of these mass relations hold approximately, but most don't (see Georgi-Jarlskog mass relation ).
The boson matrix for SO(10) is found by taking the 15 × 15 matrix from the 10 + 5 representation of SU(5) and adding an extra row and column for the right-handed neutrino. The bosons are found by adding a partner to each of the 20 charged bosons (2 right-handed W bosons, 6 massive charged gluons and 12 X/Y type bosons) and adding an extra heavy neutral Z-boson to make 5 neutral bosons in total. The boson matrix will have a boson or its new partner in each row and column. These pairs combine to create the familiar 16D Dirac spinor matrices of SO(10) .
In some forms of string theory , including E 8 × E 8 heterotic string theory , the resultant four-dimensional theory after spontaneous compactification on a six-dimensional Calabi–Yau manifold resembles a GUT based on the group E 6 . Notably E 6 is the only exceptional simple Lie group to have any complex representations , a requirement for a theory to contain chiral fermions (namely all weakly-interacting fermions). Hence the other four ( G 2 , F 4 , E 7 , and E 8 ) can't be the gauge group of a GUT. [ citation needed ]
Non-chiral extensions of the Standard Model with vectorlike split-multiplet particle spectra which naturally appear in the higher SU(N) GUTs considerably modify the desert physics and lead to the realistic (string-scale) grand unification for conventional three quark-lepton families even without using supersymmetry (see below). On the other hand, due to a new missing VEV mechanism emerging in the supersymmetric SU(8) GUT the simultaneous solution to the gauge hierarchy (doublet-triplet splitting) problem and problem of unification of flavor can be argued. [ 8 ]
GUTs with four families / generations, SU(8) : Assuming 4 generations of fermions instead of 3 makes a total of 64 types of particles. These can be put into 64 = 8 + 56 representations of SU(8) . This can be divided into SU(5) × SU(3) F × U(1) which is the SU(5) theory together with some heavy bosons which act on the generation number.
GUTs with four families / generations, O(16) : Again assuming 4 generations of fermions, the 128 particles and anti-particles can be put into a single spinor representation of O(16) .
Symplectic gauge groups could also be considered. For example, Sp(8) (which is called Sp(4) in the article symplectic group ) has a representation in terms of 4 × 4 quaternion unitary matrices which has a 16-dimensional real representation and so might be considered as a candidate for a gauge group. Sp(8) has 32 charged bosons and 4 neutral bosons. Its subgroups include SU(4) , so it can at least contain the gluons and photon of SU(3) × U(1) . It is, however, probably not possible to have weak bosons acting on chiral fermions in this representation. A quaternion representation of the fermions might be:
A further complication with quaternion representations of fermions is that there are two types of multiplication: left multiplication and right multiplication which must be taken into account. It turns out that including left and right-handed 4 × 4 quaternion matrices is equivalent to including a single right-multiplication by a unit quaternion which adds an extra SU(2) and so has an extra neutral boson and two more charged bosons. Thus the group of left- and right-handed 4 × 4 quaternion matrices is Sp(8) × SU(2) which does include the standard model bosons:
If ψ {\displaystyle \psi } is a quaternion valued spinor, A μ a b {\displaystyle A_{\mu }^{ab}} is quaternion hermitian 4 × 4 matrix coming from Sp(8) and B μ {\displaystyle B_{\mu }} is a pure vector quaternion (both of which are 4-vector bosons) then the interaction term is:
It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3 × 3 Hermitian matrix with certain additions for the diagonal elements, then these matrices form an exceptional (Grassmann) Jordan algebra , which has the symmetry group of one of the exceptional Lie groups ( F 4 , E 6 , E 7 , or E 8 ) depending on the details.
Because they are fermions the anti-commutators of the Jordan algebra become commutators. It is known that E 6 has subgroup O(10) and so is big enough to include the Standard Model. An E 8 gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. To account for the 248 fermions in the lowest multiplet of E 8 , these would either have to include anti-particles (and so have baryogenesis ), have new undiscovered particles, or have gravity-like ( spin connection ) bosons affecting elements of the particles spin direction. Each of these possesses theoretical problems.
Other structures have been suggested including Lie 3-algebras and Lie superalgebras . Neither of these fit with Yang–Mills theory . In particular Lie superalgebras would introduce bosons with incorrect [ clarification needed ] statistics. Supersymmetry , however, does fit with Yang–Mills.
The unification of forces is possible due to the energy scale dependence of force coupling parameters in quantum field theory called renormalization group "running" , which allows parameters with vastly different values at usual energies to converge to a single value at a much higher energy scale. [ 2 ]
The renormalization group running of the three gauge couplings in the Standard Model has been found to nearly, but not quite, meet at the same point if the hypercharge is normalized so that it is consistent with SU(5) or SO(10) GUTs, which are precisely the GUT groups which lead to a simple fermion unification. This is a significant result, as other Lie groups lead to different normalizations. However, if the supersymmetric extension MSSM is used instead of the Standard Model, the match becomes much more accurate. In this case, the coupling constants of the strong and electroweak interactions meet at the grand unification energy , also known as the GUT scale, of approximately 10^16 GeV.
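A rough one-loop Python sketch of this running (the beta coefficients and the inverse couplings at M_Z are standard approximate textbook values; thresholds and higher-loop effects are ignored) shows the three Standard Model lines approaching but not quite meeting, while the MSSM lines nearly coincide near 2 × 10^16 GeV:

```python
# One-loop running of the inverse gauge couplings, alpha_i^{-1}(mu) =
# alpha_i^{-1}(MZ) - b_i/(2*pi) * ln(mu/MZ).  Illustrative numbers only.
import numpy as np

MZ = 91.19                                       # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])       # approx. (alpha_1, alpha_2, alpha_3)^-1
b_SM   = np.array([41 / 10, -19 / 6, -7.0])      # Standard Model one-loop coefficients
b_MSSM = np.array([33 / 5, 1.0, -3.0])           # MSSM one-loop coefficients

mu = np.logspace(2, 19, 2000)                    # GeV
for name, b in (("SM", b_SM), ("MSSM", b_MSSM)):
    a = alpha_inv_MZ - np.outer(np.log(mu / MZ), b) / (2 * np.pi)
    spread = a.max(axis=1) - a.min(axis=1)       # gap between the three lines
    i = spread.argmin()
    print(f"{name}: closest approach near {mu[i]:.1e} GeV, "
          f"spread in alpha^-1 ~ {spread[i]:.2f}")
```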
It is commonly believed that this matching is unlikely to be a coincidence, and is often quoted as one of the main motivations to further investigate supersymmetric theories despite the fact that no supersymmetric partner particles have been experimentally observed. Also, most model builders simply assume supersymmetry because it solves the hierarchy problem —i.e., it stabilizes the electroweak Higgs mass against radiative corrections . [ 9 ]
Since Majorana masses of the right-handed neutrino are forbidden by SO(10) symmetry, SO(10) GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation ) via the seesaw mechanism . These predictions are independent of the Georgi–Jarlskog mass relations , wherein some GUTs predict other fermion mass ratios.
Several theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes all fundamental forces, including gravitation , is termed a theory of everything. Some common mainstream GUT models are:
Not quite GUTs:
Note : These models refer to Lie algebras not to Lie groups . The Lie group could be [ S U ( 4 ) × S U ( 2 ) × S U ( 2 ) ] / Z 2 , {\displaystyle [\mathrm {SU} (4)\times \mathrm {SU} (2)\times \mathrm {SU} (2)]/\mathbb {Z} _{2},} just to take a random example.
The most promising candidate is SO(10) . [ 10 ] [ 11 ] (Minimal) SO(10) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation . A number of other GUT models are based upon subgroups of SO(10) . They are the minimal left-right model , SU(5) , flipped SU(5) and the Pati–Salam model. The GUT group E 6 contains SO(10) , but models based upon it are significantly more complicated. The primary reason for studying E 6 models comes from E 8 × E 8 heterotic string theory .
GUT models generically predict the existence of topological defects such as monopoles , cosmic strings , domain walls , and others, but none have been observed. Their absence is known as the monopole problem in cosmology . Many GUT models also predict proton decay , although not the Pati–Salam model. As of now, proton decay has never been experimentally observed. The minimal experimental limit on the proton's lifetime essentially rules out minimal SU(5) and heavily constrains the other models. The lack of detected supersymmetry to date also constrains many models.
Some GUT theories like SU(5) and SO(10) suffer from what is called the doublet-triplet problem . These theories predict that for each electroweak Higgs doublet, there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale here). In a theory unifying quarks with leptons , the Higgs doublet would also be unified with a Higgs triplet. Such triplets have not been observed. They would also cause extremely rapid proton decay (far below current experimental limits) and prevent the gauge coupling strengths from running together in the renormalization group.
Most GUT models require a threefold replication of the matter fields. As such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the little hierarchy between the fermion masses for different generations.
A GUT model consists of a gauge group which is a compact Lie group , a connection form for that Lie group, a Yang–Mills action for that connection given by an invariant symmetric bilinear form over its Lie algebra (which is specified by a coupling constant for each factor), a Higgs sector consisting of a number of scalar fields taking on values within real/complex representations of the Lie group and chiral Weyl fermions taking on values within a complex rep of the Lie group. The Lie group contains the Standard Model group and the Higgs fields acquire VEVs leading to a spontaneous symmetry breaking to the Standard Model. The Weyl fermions represent matter.
The discovery of neutrino oscillations indicates that the Standard Model is incomplete, but there is currently no clear evidence that nature is described by any Grand Unified Theory. Neutrino oscillations have led to renewed interest in certain GUTs, such as SO(10) .
Among the few possible experimental tests of certain GUTs are proton decay and fermion masses. There are a few more special tests for supersymmetric GUTs. However, minimum proton lifetimes from research (at or exceeding the 10^34–10^35 year range) have ruled out simpler GUTs and most non-SUSY models. [ 12 ] The maximum upper limit on the proton lifetime (if unstable) is calculated at 6 × 10^39 years for SUSY models and 1.4 × 10^36 years for minimal non-SUSY GUTs. [ 13 ]
The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common energy scale, called the GUT scale, equal approximately to 10^16 GeV (slightly less than the Planck energy of 10^19 GeV), which is somewhat suggestive. This interesting numerical observation is called the gauge coupling unification , and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same by postulating, for instance, that ordinary (non-supersymmetric) SO(10) models break with an intermediate gauge scale, such as that of the Pati–Salam group. | https://en.wikipedia.org/wiki/Grand_Unified_Theory |
The grand potential or Landau potential or Landau free energy is a quantity used in statistical mechanics , especially for irreversible processes in open systems .
The grand potential is the characteristic state function for the grand canonical ensemble .
The grand potential is defined by Φ G = d e f U − T S − μ N {\displaystyle \Phi _{\text{G}}{\stackrel {\mathrm {def} }{{}={}}}U-TS-\mu N} where U is the internal energy , T is the temperature of the system, S is the entropy , μ is the chemical potential , and N is the number of particles in the system.
The change in the grand potential is given by d Φ G = d U − T d S − S d T − μ d N − N d μ = − P d V − S d T − N d μ {\displaystyle {\begin{aligned}d\Phi _{\text{G}}&=dU-T\,dS-S\,dT-\mu d\,N-N\,d\mu \\&=-P\,dV-S\,dT-N\,d\mu \end{aligned}}} where P is pressure and V is volume , using the fundamental thermodynamic relation (combined first and second thermodynamic laws );
d U = T d S − P d V + μ d N {\displaystyle dU=T\,dS-P\,dV+\mu \,dN}
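As a quick symbolic check of this differential (a SymPy sketch, not part of the article's derivation): substituting the fundamental relation into dΦ G = dU − T dS − S dT − μ dN − N dμ leaves exactly −P dV − S dT − N dμ.

```python
# Symbolic check that d(Phi_G) = -P dV - S dT - N dmu once
# dU = T dS - P dV + mu dN is substituted.  SymPy sketch.
import sympy as sp

T, S, P, V, mu, N = sp.symbols('T S P V mu N')
dT, dS, dV, dmu, dN = sp.symbols('dT dS dV dmu dN')

dU = T * dS - P * dV + mu * dN                     # fundamental relation
dPhi = dU - T * dS - S * dT - mu * dN - N * dmu    # product rule on U - T*S - mu*N
print(sp.simplify(dPhi - (-P * dV - S * dT - N * dmu)))   # prints 0
```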
When the system is in thermodynamic equilibrium , Φ G is a minimum. This can be seen by considering that d Φ G is zero if the volume is fixed and the temperature and chemical potential have stopped evolving.
Some authors refer to the grand potential as the Landau free energy or Landau potential and write its definition as: [ 1 ] [ 2 ]
Ω = d e f F − μ N = U − T S − μ N {\displaystyle \Omega {\stackrel {\mathrm {def} }{{}={}}}F-\mu N=U-TS-\mu N}
named after Russian physicist Lev Landau , which may be a synonym for the grand potential, depending on system stipulations. For homogeneous systems, one obtains Ω = − P V {\displaystyle \Omega =-PV} . [ 3 ]
In the case of a scale-invariant type of system (where a system of volume λ V {\displaystyle \lambda V} has exactly the same set of microstates as λ {\displaystyle \lambda } systems of volume V {\displaystyle V} ), then when the system expands new particles and energy will flow in from the reservoir to fill the new volume with a homogeneous extension of the original system.
The pressure, then, must be constant with respect to changes in volume:
( ∂ ⟨ P ⟩ ∂ V ) μ , T = 0 , {\displaystyle \left({\frac {\partial \langle P\rangle }{\partial V}}\right)_{\mu ,T}=0,}
and all extensive quantities (particle number, energy, entropy, potentials, ...) must grow linearly with volume, e.g.
( ∂ ⟨ N ⟩ ∂ V ) μ , T = N V . {\displaystyle \left({\frac {\partial \langle N\rangle }{\partial V}}\right)_{\mu ,T}={\frac {N}{V}}.}
In this case we simply have Φ G = − ⟨ P ⟩ V {\displaystyle \Phi _{\text{G}}=-\langle P\rangle V} , as well as the familiar relationship G = ⟨ N ⟩ μ {\displaystyle G=\langle N\rangle \mu } for the Gibbs free energy .
The value of Φ G {\displaystyle \Phi _{\text{G}}} can be understood as the work that can be extracted from the system by shrinking it down to nothing (putting all the particles and energy back into the reservoir). The fact that Φ G = − ⟨ P ⟩ V {\displaystyle \Phi _{\text{G}}=-\langle P\rangle V} is negative implies that the extraction of particles from the system to the reservoir requires energy input.
Such homogeneous scaling does not exist in many systems. For example, when analyzing the ensemble of electrons in a single molecule or even a piece of metal floating in space, doubling the volume of the space does not double the number of electrons in the material. [ 4 ] The problem here is that, although electrons and energy are exchanged with a reservoir, the material host is not allowed to change.
Generally in small systems, or systems with long range interactions (those outside the thermodynamic limit ), Φ G ≠ − ⟨ P ⟩ V {\displaystyle \Phi _{\text{G}}\neq -\langle P\rangle V} . [ 5 ] | https://en.wikipedia.org/wiki/Grand_potential |
In geometry , the grand stellated 120-cell or grand stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,5,5/2}. It is one of 10 regular Schläfli-Hess polytopes .
It is also one of the two such polytopes that are self-dual.
It has the same edge arrangement as the grand 600-cell , icosahedral 120-cell , and the same face arrangement as the great stellated 120-cell .
Due to its self-duality, it does not have a good three-dimensional analogue, but (like all other star polyhedra and polychora) is analogous to the two-dimensional pentagram .
This 4-polytope article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Grand_stellated_120-cell |
In planetary astronomy , the grand tack hypothesis proposes that Jupiter formed at a distance of 3.5 AU from the Sun , then migrated inward to 1.5 AU, before reversing course due to capturing Saturn in an orbital resonance , eventually halting near its current orbit at 5.2 AU. The reversal of Jupiter's planetary migration is likened to the path of a sailboat changing directions ( tacking ) as it travels against the wind. [ 1 ]
The planetesimal disk is truncated at 1.0 AU by Jupiter's migration, limiting the material available to form Mars . [ 2 ] Jupiter twice crosses the asteroid belt , scattering asteroids outward then inward. The resulting asteroid belt has a small mass, a wide range of inclinations and eccentricities, and a population originating from both inside and outside Jupiter's original orbit. [ 3 ] Debris produced by collisions among planetesimals swept ahead of Jupiter may have driven an early generation of planets into the Sun . [ 4 ]
In the grand tack hypothesis Jupiter underwent a two-phase migration after its formation, migrating inward to 1.5 AU before reversing course and migrating outward. Jupiter's formation took place near the ice line , at roughly 3.5 AU.
After clearing a gap in the gas disk Jupiter underwent type II migration , moving slowly toward the Sun with the gas disk. If uninterrupted, this migration would have left Jupiter in a close orbit around the Sun, similar to hot Jupiters in other planetary systems. [ 5 ] Saturn also migrated toward the Sun, but being smaller it migrated faster, undergoing either type I migration or runaway migration. [ 6 ] Saturn converged on Jupiter and was captured in a 2:3 mean-motion resonance with Jupiter during this migration. An overlapping gap in the gas disk then formed around Jupiter and Saturn, [ 7 ] altering the balance of forces on these planets which began migrating together. Saturn partially cleared its part of the gap reducing the torque exerted on Jupiter by the outer disk.
The net torque on the planets then became positive, with the torques generated by the inner Lindblad resonances exceeding those from the outer disk, and the planets began to migrate outward. [ 8 ] The outward migration was able to continue because interactions between the planets allowed gas to stream through the gap. [ 9 ] The gas exchanged angular momentum with the planets during its passage, adding to the positive balance of torques, allowing the planets to migrate outward relative to the disk; the exchange also transferred mass from the outer disk to the inner disk. [ 10 ] The transfer of gas to the inner disk also slowed the reduction of the inner disk's mass relative to the outer disk as it accreted onto the Sun, which otherwise would weaken the inner torque, ending the giant planets' outward migration. [ 8 ] [ 11 ] In the grand tack hypothesis this process is assumed to have reversed the inward migration of the planets when Jupiter was at 1.5 AU. [ 6 ] The outward migration of Jupiter and Saturn continued until they reached a zero-torque configuration within a flared disk, [ 12 ] [ 11 ] or when the gas disk dissipated. [ 11 ] The whole process is presumed to end when Jupiter reached its approximate current orbit. [ 6 ]
The hypothesis can be applied to multiple phenomena in the Solar System.
The "Mars problem" is a conflict between some simulations of the formation of the terrestrial planets which end with a 0.5–1.0 M E planet in its region, much larger than the actual mass of Mars: 0.107 M E , when begun with planetesimals distributed throughout the inner Solar System. Jupiter's grand tack resolves the Mars problem by limiting the material available to form Mars. [ 13 ]
Jupiter's inward migration alters this distribution of material, [ 14 ] driving planetesimals inward to form a narrow dense band with a mix of materials inside 1.0 AU , [ 15 ] and leaves the Mars region largely empty. [ 16 ] Planetary embryos quickly form in the narrow band. Most of these embryos collide and merge to form the larger terrestrial planets ( Venus and Earth ) over a period of 60 to 130 million years. [ 17 ] Others are scattered outside the band where they are deprived of additional material, slowing their growth, and form the lower-mass terrestrial planets Mars and Mercury . [ 18 ]
Jupiter and Saturn drive most asteroids from their initial orbits during their migrations, leaving behind an excited remnant derived from both inside and outside Jupiter's original location. Before Jupiter's migrations the surrounding regions contained asteroids which varied in composition with their distance from the Sun. [ 19 ] Rocky asteroids dominated the inner region, while more primitive and icy asteroids dominated the outer region beyond the ice line. [ 20 ] As Jupiter and Saturn migrate inward, ~15% of the inner asteroids are scattered outward onto orbits beyond Saturn. [ 2 ] After reversing course, Jupiter and Saturn first encounter these objects, scattering about 0.5% of the original population back inward onto stable orbits. [ 6 ] Later, as Jupiter and Saturn migrate into the outer region, about 0.5% of the primitive asteroids are scattered onto orbits in the outer asteroid belt. [ 6 ] The encounters with Jupiter and Saturn leave many of the captured asteroids with large eccentricities and inclinations . [ 16 ] These may be reduced during the giant planet instability described in the Nice model so that the eccentricity distribution resembles that of the current asteroid belt. [ 21 ] Some of the icy asteroids are also left in orbits crossing the region where the terrestrial planets later formed, allowing water to be delivered to the accreting planets as when the icy asteroids collide with them. [ 22 ] [ 23 ]
The absence of close orbiting super-Earths in the Solar System may also be the result of Jupiter's inward migration. [ 24 ] As Jupiter migrates inward, planetesimals are captured in its mean-motion resonances, causing their orbits to shrink and their eccentricities to grow. A collisional cascade follows as the planetesimals' relative velocities became large enough to produce catastrophic impacts. The resulting debris then spirals inward toward the Sun due to drag from the gas disk. If there were super-Earths in the early Solar System, they would have caught much of this debris in resonances and could have been driven into the Sun as the debris spiraled inward. The current terrestrial planets would then form from planetesimals left behind when Jupiter reversed course. [ 25 ] However, the migration of close orbiting super-Earths into the Sun could be avoided if the debris coalesced into larger objects, reducing gas drag; and if the protoplanetary disk had an inner cavity, their inward migration could be halted near its edge. [ 26 ] If no planets had yet formed in the inner Solar System, the destruction of the larger bodies during the collisional cascade could have left the remaining debris small enough to be pushed outward by the solar wind, which would have been much stronger during the early Solar System, leaving little to form planets inside Mercury's orbit. [ 27 ]
Simulations of the formation of the terrestrial planets using models of the protoplanetary disk that include viscous heating and the migration of the planetary embryos indicate that Jupiter's migration may have reversed at 2.0 AU. In simulations the eccentricities of the embryos are excited by perturbations from Jupiter. As these eccentricities are damped by the denser gas disk of recent models, the semi-major axes of the embryos shrink, shifting the peak density of solids inward. For simulations with Jupiter's migration reversing at 1.5 AU, this resulted in the largest terrestrial planet forming near Venus's orbit rather than at Earth's orbit. Simulations that instead reversed Jupiter's migration at 2.0 AU yielded a closer match to the current Solar System. [ 9 ]
When fragmentation due to hit-and-run collisions is included in simulations with an early instability, the orbits of the terrestrial planets are better reproduced. The larger numbers of small bodies resulting from these collisions reduce the eccentricities and inclinations of the growing planets' orbits via additional collisions and dynamical friction. This also results in a larger fraction of the terrestrial planets' mass being concentrated in Venus and Earth and extends their formation times relative to that of Mars. [ 28 ]
The migration of the giant planets through the asteroid belt creates a spike in impact velocities that could result in the formation of CB chondrites. CB chondrites are metal rich carbonaceous chondrites containing iron/nickel nodules that formed from the crystallization of impact melts 4.8 ±0.3 Myrs after the first solids. The vaporization of these metals requires impacts of greater than 18 km/s, well beyond the maximum of 12.2 km/s in standard accretion models. Jupiter's migration across the asteroid belt increases the eccentricities and inclinations of the asteroids, resulting in a 0.5 Myr period of impact velocities sufficient to vaporize metals. If the formation of CB chondrites was due to Jupiter's migration it would have occurred 4.5-5 Myrs after the formation of the Solar System. [ 29 ]
The presence of a thick atmosphere around Titan and its absence around Ganymede and Callisto may be due to the timing of their formation relative to the grand tack. If Ganymede and Callisto formed before the grand tack their atmospheres would have been lost as Jupiter moved closer to the Sun. However, for Titan to avoid Type I migration into Saturn, and for Titan's atmosphere to survive, it must have formed after the grand tack. [ 30 ] [ 31 ]
Encounters with other embryos could destabilize a disk orbiting Mars, reducing the mass of the moons that form around Mars. After Mars is scattered from the annulus by encounters with other planets, it continues to have encounters with other objects until the planets clear material from the inner Solar System. While these encounters enable the orbit of Mars to become decoupled from the other planets and remain on a stable orbit, they can also perturb the disk of material from which the moons of Mars form. These perturbations cause material to escape from the orbit of Mars or to impact on its surface, reducing the mass of the disk and resulting in the formation of smaller moons. [ 32 ]
Most of the accretion of Mars must have taken place outside the narrow annulus of material formed by the grand tack if Mars has a different composition than Earth and Venus. The planets that grow in the annulus created by the grand tack end with similar compositions. If the grand tack occurred early, while the embryo that became Mars was relatively small, a Mars with a differing composition could form if it was instead scattered outward then inward like the asteroids. The chance of this occurring is roughly 2%. [ 33 ] [ 34 ]
Later studies have shown that the convergent orbital migration of Jupiter and Saturn in the fading solar nebula is unlikely to establish a 3:2 mean-motion resonance. Instead of supporting a faster runaway migration, nebula conditions lead to a slower migration of Saturn and its capture in a 2:1 mean-motion resonance. [ 11 ] [ 35 ] [ 36 ] Capture of Jupiter and Saturn in the 2:1 mean-motion resonance does not typically reverse the direction of migration, but particular nebula configurations have been identified that may drive outward migration. [ 37 ] These configurations, however, tend to excite Jupiter's and Saturn's orbital eccentricity to values between two and three times as large as their actual values. [ 37 ] [ 38 ] Also, if the temperature and viscosity of the gas allow Saturn to produce a deeper gap, the resulting net torque can again become negative, resulting in the inward migration of the system. [ 11 ]
The grand tack scenario ignores the ongoing accretion of gas on both Jupiter and Saturn. [ 39 ] In fact, to drive outward migration and move the planets to the proximity of their current orbits, the solar nebula had to contain a sufficiently large reservoir of gas around the orbits of the two planets. However, this gas would provide a source for accretion, which would affect the growth of Jupiter and Saturn and their mass ratio. [ 11 ] The type of nebula density required for capture in the 3:2 mean-motion resonance is especially dangerous for the survival of the two planets, because it can lead to significant mass growth and ensuing planet-planet scattering. But conditions leading to 2:1 mean-motion resonant systems may also put the planets in danger. [ 40 ] Accretion of gas on both planets also tends to reduce the supply toward the inner disk, lowering the accretion rate toward the Sun. This process works to deplete somewhat the disk interior to Jupiter's orbit, weakening the torques on Jupiter arising from inner Lindblad resonances and potentially ending the planets' outward migration. [ 11 ]
Multiple hypotheses have been offered to explain the small mass of Mars. A small Mars may have been a low-probability event as it occurs in a small, but non-zero, fraction of simulations that begin with planetesimals distributed across the entire inner Solar System. [ 41 ] [ 42 ] [ 43 ] A small Mars could be the result of its region having been largely empty due to solid material drifting farther inward before the planetesimals formed. [ 44 ] [ 45 ] Most of the mass could also have been removed from the Mars region before it formed if the giant planet instability described in the Nice model occurred early. [ 46 ] [ 47 ] If most of the growth of planetesimals and embryos into terrestrial planets was due to pebble accretion , a small Mars could be the result of this process having been less efficient with increasing distances from the Sun. [ 48 ] [ 49 ] Convergent migration of planetary embryos in the gas disk toward 1 AU would result in the formation of terrestrial planets only near this distance, leaving Mars as a stranded embryo. [ 50 ] Sweeping secular resonances during the clearing of the gas disk could also excite inclinations and eccentricities, increasing relative velocities so that collisions resulted in fragmentation instead of accretion. [ 51 ] A number of these hypotheses could also explain the low mass of the asteroid belt.
A number of hypotheses have also been proposed to explain the orbital eccentricities and inclinations of the asteroids and the low mass of the asteroid belt. If the region of the asteroid belt was initially empty due to few planetesimals forming there, it could have been populated by icy planetesimals that were scattered inward during Jupiter's and Saturn's gas accretion, [ 52 ] and by stony asteroids that were scattered outward by the forming terrestrial planets. [ 53 ] [ 54 ] The inward scattered icy planetesimals could also deliver water to the terrestrial region. [ 55 ] An initially low-mass asteroid belt could have had its orbital eccentricities and inclinations excited by secular resonances if the resonant orbits of Jupiter and Saturn became chaotic before the instability of the Nice model. [ 56 ] [ 57 ] The eccentricities and inclinations of the asteroids could also be excited during the giant planet instability, reaching the observed levels if it lasted for a few hundred thousand years. [ 58 ] Gravitational interactions between the asteroids and embryos in an initially massive asteroid belt would enhance these effects by altering the asteroids' semi-major axes, driving many asteroids into unstable orbits where they were removed due to interactions with the planets, resulting in the loss of more than 99% of its mass. [ 59 ] Secular resonance sweeping during the dissipation of the gas disk could have excited the orbits of the asteroids and removed many as they spiraled toward the Sun due to gas drag after their eccentricities were excited. [ 60 ]
Several hypotheses have also been offered for the lack of any close orbiting super-Earths and the small mass of Mercury .
If Jupiter's core formed close to the Sun, its outward migration across the inner Solar System could have pushed material outward in its resonances, leaving the region inside Venus's orbit depleted. [ 61 ] [ 26 ] In a protoplanetary disk that was evolving via a disk wind, planetary embryos could have migrated outward before merging to form planets, leaving the Solar System without planets inside Mercury's orbit. [ 62 ] [ 63 ] Convergent migration of planetary embryos in the gas disk toward 1 AU would also have resulted in the formation of large terrestrial planets near this distance leaving Mercury as a stranded embryo. [ 50 ] An early generation of inner planets could have been lost due to catastrophic collisions during an instability, resulting in the debris being ground small enough to be lost due to Poynting-Robertson drag. [ 64 ] [ 65 ] If planetesimal formation only occurred early, the inner edge of the planetesimal disk might have been located at the silicate condensation line at this time. [ 66 ] The formation of planetesimals closer than Mercury's orbit may have required that the magnetic field of the star be aligned with the rotation of the disk, enabling the depletion of the gas so that solid to gas ratios reached values sufficient for streaming instabilities to occur. [ 67 ] [ 68 ] The formation of super-Earths may require a higher flux of inward drifting pebbles than occurred in the early Solar System. [ 69 ] Planetesimals orbiting in a protoplanetary disk closer than 0.6 AU may have eroded away due to a headwind. [ 70 ] An early Solar System that was largely depleted of material could have resulted in the formation of small planets that were lost or destroyed in an early instability leaving only Mercury or the formation of only Mercury. [ 71 ] [ 72 ] | https://en.wikipedia.org/wiki/Grand_tack_hypothesis |
In mathematics , the infinite series 1 − 1 + 1 − 1 + ⋯ , also written {\displaystyle \sum _{n=0}^{\infty }(-1)^{n},}
is sometimes called Grandi's series , after Italian mathematician , philosopher , and priest Guido Grandi , who gave a memorable treatment of the series in 1703. It is a divergent series , meaning that the sequence of partial sums of the series does not converge.
However, though it is divergent, it can be manipulated to yield a number of mathematically interesting results. Many summation methods are used in mathematics to assign numerical values even to divergent series; for example, the Cesàro summation and the Ramanujan summation of this series are both 1 / 2 .
One obvious method to find the sum of the series 1 − 1 + 1 − 1 + ⋯ would be to treat it like a telescoping series and perform the subtractions in place: {\displaystyle (1-1)+(1-1)+(1-1)+\cdots =0+0+0+\cdots =0.}
On the other hand, a similar bracketing procedure leads to the apparently contradictory result {\displaystyle 1+(-1+1)+(-1+1)+(-1+1)+\cdots =1+0+0+0+\cdots =1.}
Thus, by applying parentheses to Grandi's series in different ways, one can obtain either 0 or 1 as a "value". This is closely akin to the general problem of conditional convergence , and variations of this idea, called the Eilenberg–Mazur swindle , are sometimes used in knot theory and algebra . By taking the average of these two "values", one can justify assigning the series the value 1 / 2 .
Treating Grandi's series as a divergent geometric series and using the same algebraic methods that evaluate convergent geometric series to obtain a third value:
{\displaystyle {\begin{aligned}S&=1-1+1-1+\ldots ,{\text{ so}}\\1-S&=1-(1-1+1-1+\ldots )=1-1+1-1+\ldots =S\\1-S&=S\\1&=2S,\end{aligned}}}
resulting in {\displaystyle S=1/2} . The same conclusion results from calculating {\displaystyle -S} (from {\displaystyle -S=(1-S)-1} ), subtracting the result from {\displaystyle S} , and solving {\displaystyle 2S=1} . [ 1 ]
The above manipulations do not consider what the sum of a series rigorously means and how said algebraic methods can be applied to divergent geometric series . Still, to the extent that it is important to be able to bracket series at will, and that it is more important to be able to perform arithmetic with them, one can arrive at two conclusions: the series 1 − 1 + 1 − 1 + ⋯ has no sum, and yet its sum should be 1 / 2 .
In fact, both of these statements can be made precise and formally proven, but only using well-defined mathematical concepts that arose in the 19th century. After the late 17th-century introduction of calculus in Europe , but before the advent of modern rigour , the tension between these answers fueled what has been characterized as an "endless" and "violent" dispute between mathematicians . [ 3 ]
For any number {\displaystyle r} in the interval {\displaystyle (-1,1)} , the sum to infinity of a geometric series can be evaluated via {\displaystyle \sum _{n=0}^{\infty }r^{n}={\frac {1}{1-r}}.}
For any {\displaystyle \varepsilon \in (0,2)} , one thus finds {\displaystyle \sum _{n=0}^{\infty }(\varepsilon -1)^{n}={\frac {1}{2-\varepsilon }},}
and so the limit {\displaystyle \varepsilon \to 0} of these series evaluations is {\displaystyle \lim _{\varepsilon \to 0}\sum _{n=0}^{\infty }(\varepsilon -1)^{n}=\lim _{\varepsilon \to 0}{\frac {1}{2-\varepsilon }}={\frac {1}{2}}.}
However, as mentioned, the series obtained by switching the limits, {\displaystyle \sum _{n=0}^{\infty }\lim _{\varepsilon \to 0}(\varepsilon -1)^{n}=\sum _{n=0}^{\infty }(-1)^{n},} is divergent.
In terms of complex analysis , 1 / 2 is thus seen to be the value at z = −1 of the analytic continuation of the series {\displaystyle \textstyle \sum _{n=0}^{\infty }z^{n}} , which is only defined on the complex unit disk, | z | < 1 .
In modern mathematics, the sum of an infinite series is defined to be the limit of the sequence of its partial sums , if it exists. The sequence of partial sums of Grandi's series is 1, 0, 1, 0, ..., which clearly does not approach any number (although it does have two accumulation points at 0 and 1). Therefore, Grandi's series is divergent .
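A minimal Python sketch (added here purely as an illustration; it is not part of the cited treatments) makes both points concrete: the partial sums oscillate between 1 and 0 and so have no limit, while their running (Cesàro) averages approach 1/2.

# Partial sums of Grandi's series and their Cesàro (running-average) means.
terms = [(-1) ** n for n in range(1000)]            # 1, -1, 1, -1, ...

partial_sums = []
total = 0
for t in terms:
    total += t
    partial_sums.append(total)                      # 1, 0, 1, 0, ...

# Cesàro mean after k+1 partial sums.
cesaro_means = [sum(partial_sums[: k + 1]) / (k + 1) for k in range(len(partial_sums))]

print(partial_sums[:6])    # [1, 0, 1, 0, 1, 0] -- no limit
print(cesaro_means[-1])    # 0.5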
It can be shown that it is not valid to perform many seemingly innocuous operations on a series, such as reordering individual terms, unless the series is absolutely convergent . Otherwise these operations can alter the result of summation. [ 4 ] Further, the terms of Grandi's series can be rearranged to have its accumulation points at any interval of two or more consecutive integer numbers, not only 0 or 1. For instance, the series {\displaystyle 1+1+1+1+1-1-1+1+1-1-1+\cdots }
(in which, after five initial +1 terms, the terms alternate in pairs of +1 and −1 terms – the infinitude of both +1s and −1s allows any finite number of 1s or −1s to be prepended, by Hilbert's paradox of the Grand Hotel ) is a permutation of Grandi's series in which each value in the rearranged series corresponds to a value that is at most four positions away from it in the original series; its accumulation points are 3, 4, and 5.
Around 1987, Anna Sierpińska introduced Grandi's series to a group of 17-year-old precalculus students at a Warsaw lyceum . She focused on humanities students with the expectation that their mathematical experience would be less significant than that of their peers studying mathematics and physics, so the epistemological obstacles they exhibit would be more representative of the obstacles that may still be present in lyceum students.
Sierpińska initially expected the students to balk at assigning a value to Grandi's series, at which point she could shock them by claiming that 1 − 1 + 1 − 1 + ··· = 1 / 2 as a result of the geometric series formula. Ideally, by searching for the error in reasoning and by investigating the formula for various common ratios, the students would "notice that there are two kinds of series and an implicit conception of convergence will be born". [ 5 ] However, the students showed no shock at being told that 1 − 1 + 1 − 1 + ··· = 1 / 2 or even that 1 + 2 + 4 + 8 + ⋯ = −1 . Sierpińska remarks that a priori , the students' reaction shouldn't be too surprising given that Leibniz and Grandi thought 1 / 2 to be a plausible result;
The students were ultimately not immune to the question of convergence; Sierpińska succeeded in engaging them in the issue by linking it to decimal expansions the following day. As soon as 0.999... = 1 caught the students by surprise, the rest of her material "went past their ears". [ 5 ]
In another study conducted in Treviso , Italy around the year 2000, third-year and fourth-year Liceo Scientifico pupils (between 16 and 18 years old) were given cards asking the following:
The students had been introduced to the idea of an infinite set, but they had no prior experience with infinite series. They were given ten minutes without books or calculators. The 88 responses were categorized as follows:
The researcher, Giorgio Bagni, interviewed several of the students to determine their reasoning. Some 16 of them justified an answer of 0 using logic similar to that of Grandi and Riccati. Others justified 1 / 2 as being the average of 0 and 1. Bagni notes that their reasoning, while similar to Leibniz's, lacks the probabilistic basis that was so important to 18th-century mathematics. He concludes that the responses are consistent with a link between historical development and individual development, although the cultural context is different. [ 6 ]
Joel Lehmann describes the process of distinguishing between different sum concepts as building a bridge over a conceptual crevasse: the confusion over divergence that dogged 18th-century mathematics.
As a result, many students develop an attitude similar to Euler's:
Lehmann recommends meeting this objection with the same example that was advanced against Euler's treatment of Grandi's series by Jean-Charles Callet. Euler had viewed the sum as the evaluation at x = 1 of the geometric series {\displaystyle 1-x+x^{2}-x^{3}+\cdots ={\frac {1}{1+x}}} , giving the sum 1 / 2 . However, Callet pointed out that one could instead view Grandi's series as the evaluation at x = 1 of a different series, {\displaystyle 1-x^{2}+x^{3}-x^{5}+x^{6}-\cdots ={\frac {1+x}{1+x+x^{2}}}} , giving the sum 2 / 3 . Lehmann argues that seeing such a conflicting outcome in intuitive evaluations may motivate the need for rigorous definitions and attention to detail. [ 8 ]
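The conflict Callet identified can be checked directly. The following SymPy sketch (an illustration assuming the SymPy library is available; it is not taken from Lehmann's or Callet's texts) expands both closed forms and evaluates them at x = 1, recovering the competing values 1/2 and 2/3.

import sympy as sp

x = sp.symbols('x')

euler = 1 / (1 + x)                   # closed form of 1 - x + x^2 - x^3 + ...
callet = (1 + x) / (1 + x + x**2)     # closed form of 1 - x^2 + x^3 - x^5 + x^6 - ...

# The Maclaurin expansions match the two rival rewritings of Grandi's series.
print(sp.series(euler, x, 0, 7))
print(sp.series(callet, x, 0, 7))

# The closed forms disagree at x = 1.
print(euler.subs(x, 1), callet.subs(x, 1))    # 1/2 2/3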
The series 1 − 2 + 3 − 4 + 5 − 6 + 7 − 8 + ... ( up to infinity) is also divergent, but some methods may be used to sum it to 1 / 4 . This is the square of the value most summation methods assign to Grandi's series, which is reasonable as it can be viewed as the Cauchy product of two copies of Grandi's series. | https://en.wikipedia.org/wiki/Grandi's_series |
The grandmother hypothesis is a hypothesis to explain the existence of menopause in human life history by identifying the adaptive value of extended kin networking. It builds on the previously postulated " mother hypothesis " which states that as mothers age, the costs of reproducing become greater, and energy devoted to those activities would be better spent helping her offspring in their reproductive efforts. [ 1 ] It suggests that by redirecting their energy onto those of their offspring, grandmothers can better ensure the survival of their genes through younger generations. By providing sustenance and support to their kin, grandmothers not only ensure that their genetic interests are met, but they also enhance their social networks which could translate into better immediate resource acquisition. [ 2 ] [ 3 ] This effect could extend past kin into larger community networks and benefit wider group fitness . [ 4 ]
One explanation for this was presented by G.C. Williams, who was the first to posit [ 5 ] that menopause might be an adaptation. Williams suggested that at some point it became more advantageous for women to redirect reproductive efforts into increased support of existing offspring. Since a female's dependent offspring would die as soon as she did, he argued, older mothers should stop producing new offspring and focus on those existing. In so doing, they would avoid the age-related risks associated with reproduction and thereby eliminate a potential threat to the continued survival of current offspring. The evolutionary reasoning behind this is driven by related theories.
Kin selection provides the framework for an adaptive strategy by which altruistic behavior is bestowed on closely related individuals because easily identifiable markers exist to indicate them as likely to reciprocate. Kin selection is implicit in theories regarding the successful propagation of genetic material through reproduction, as helping an individual more likely to share one's genetic material would better ensure the survival of at least a portion of it. Hamilton's rule suggests that individuals preferentially help those more related to them when costs to themselves are minimal. This is modeled mathematically as {\displaystyle rb>c} . Grandmothers would, therefore, be expected to forgo their own reproduction once the benefits of helping those individuals ( b ) multiplied by the relatedness to that individual ( r ) outweighed the costs of the grandmother not reproducing ( c ).
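As an illustrative worked instance (the relatedness value below is the standard grandparent–grandchild figure; the rearranged inequality is simply Hamilton's rule restated, not a claim from the cited sources): a grandmother shares r = 1/4 of her genes with each grandchild, so helping is favored whenever {\displaystyle rb>c\quad \Rightarrow \quad {\tfrac {1}{4}}b>c\quad \Leftrightarrow \quad b>4c,} that is, whenever the benefit conferred on the grandchild is more than four times the grandmother's own cost.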
Evidence of kin selection emerged as correlated with climate-driven changes, around 1.8–1.7 million years ago, in female foraging and food sharing practices. [ 6 ] These adjustments increased juvenile dependency, forcing mothers to opt for a low-ranked, common food source ( tubers ) that required adult skill to harvest and process. [ 6 ] Such demands constrained female IBIs (Inter Birth Intervals) thus providing an opportunity for selection to favor the grandmother hypothesis.
Parental investment, originally put forth by Robert Trivers , is defined as any benefit a parent confers on an offspring at a cost to its ability to invest elsewhere. [ 7 ] This theory serves to explain the dynamic sex difference in investment toward offspring observed in most species. It is evident first in gamete size, as eggs are larger and far more energetically expensive than sperm. Females are also much more sure of their genetic relationship with their offspring, as birth serves as a very reliable marker of relatedness. This paternity uncertainty that males experience makes them less likely than females to invest, since it would be costly for males to provide sustenance to another male's offspring. This translates into the grandparental generation, as grandmothers should be much more likely than grandfathers to invest energy into the offspring of their children, and more so in the offspring of their daughters than sons.
Evolutionary theory dictates that all organisms invest heavily in reproduction in order to replicate their genes. According to parental investment, human females will invest heavily in their young because the number of mating opportunities available to them and the number of offspring they are able to produce in a given amount of time are limited by the biology of their sex. This inter birth interval (IBI) is a limiting factor in how many children a woman can have because of the extended developmental period that human children experience. Extended childhood, like the extended post-reproductive lifespan for females, is relatively unique to humans. [ 8 ] Because of this correlation, human grandmothers are well-poised to provide supplemental parental care to their offspring's children. Since their grandchildren still carry a portion of their genes, it is still in the grandmother's genetic interest to ensure those children survive to reproduction.
The mismatch between the rates of degradation of somatic cells versus gametes in human females provides an unsolved paradox. Somatic cells decline more slowly, and humans invest more in somatic longevity relative to other species. [ 9 ] Since natural selection has a much stronger influence on younger generations, deleterious mutations during later life become harder to select out of the population. [ 10 ]
In female placentals , the number of ovarian oocytes is fixed during embryonic development, possibly as an adaptation to reduce the accumulation of mutations , [ 11 ] which then mature or degrade over the life course. At birth there are, typically, one million ova. However, by menopause, only approximately 400 eggs would have actually matured. [ 12 ] In humans, the rate of follicular atresia increases at older ages (around 38–40), for reasons that are not known. [ 13 ] In chimpanzees, our closest nonhuman genetic relatives, recent research indicates a menopausal age of roughly 50, similar to that of human females, in captive chimpanzees, [ 14 ] with similar findings reflected in a study of the Ngogo (Uganda) wild chimpanzee community reported in October 2023. [ 15 ] The report of the latter study questioned the grandmother hypothesis by observing that "...chimpanzees have very different living arrangements than humans. Older female chimpanzees typically do not live near their daughters or provide care for grandchildren, yet females at Ngogo often live past their childbearing years." Previously, a very similar rate of oocyte atresia until the age of 40 had been posited in chimps and humans, at which point humans experienced a far accelerated rate compared to chimpanzees. [ 16 ]
The aging process in humans leaves a dilemma in that females live past their ability to reproduce. The question posed to evolutionary researchers then becomes: why do human bodies live on so robustly and for so long past their reproductive potential, and could there be an adaptive benefit to abandoning one's own attempts at reproduction to assist kin?
The practice of dividing parenting responsibilities among non-parents affords females a great advantage in that they can dedicate more effort and energy toward having an increased number of offspring. While this practice is observed in several species, [ 17 ] it has been an especially successful strategy for humans who rely extensively on social networks. One observational study of the Aka foragers of Central Africa demonstrated how allomaternal investment toward an offspring increased specifically during times that the mother's investment in subsistence and economic activities increased. [ 18 ]
If the grandmother effect were true, post-menopausal women should continue to work after the cessation of fertility and use the proceeds to preferentially provision their kin. Studies of Hadza women have provided such evidence. A modern hunter-gatherer group in Tanzania, the post-menopausal Hadza women often help their grandchildren by foraging for food staples that younger children are inefficient at acquiring successfully. [ 8 ] Children, therefore, require the assistance of an adult to gain this crucial source of sustenance. Often, however, mothers are inhibited by the care of younger offspring and are less available to help their older children forage. [ 8 ] In this regard, the Hadza grandmothers become vital to the care of existing grandchildren, and allow reproductive-age women to redirect energy from existing offspring into younger offspring or other reproductive efforts.
However, some commentators felt that the role of Hadza men, who contribute 96% of the mean daily intake of protein, was ignored, [ 8 ] though the authors have addressed this criticism in numerous publications. [ 8 ] [ 19 ] [ 20 ] [ 21 ] Other studies also demonstrated reservations about behavioral similarities between the Hadza and our ancestors. [ 22 ]
Because grandmothers should be expected to provide preferential treatment to the offspring they are most certain of their relationship to, there should be differences in the help a grandmother provides to each grandchild according to that relationship. Studies have found that not only does the maternal or paternal relationship of the grandparent affect whether or how much help a grandchild receives, but also what kind of help. Paternal grandmothers often had a detrimental effect on infant mortality. [ 23 ] [ 24 ] Also, maternal grandmothers concentrate on offspring survival, whereas paternal grandmothers increase birth rates. [ 25 ] These findings are consistent with ideas of parental investment and paternity uncertainty. Equally, a grandmother could be both a maternal and a paternal grandmother; in dividing resources, a daughter's offspring should then be favored.
Other studies have focused on the genetic relationship between grandmothers and grandchildren. Such studies have found that the effects of maternal / paternal grandmothers on grandsons / granddaughters may vary based on degree of genetic relatedness, with paternal grandmothers having positive effects on granddaughters but detrimental effects on grandsons, [ 26 ] and paternity uncertainty may be less important than chromosome inheritance. [ 27 ]
Some critics have cast doubt on the hypothesis because while it addresses how grandparental care could have maintained longer female post-reproductive lifespans, it does not provide an explanation for how it would have evolved in the first place. One theory is that the number of caregivers has a positive relationship with the likelihood of offspring reaching adulthood, suggesting that grandparents who contribute to the care of their grandchildren are more likely to have their genes passed down. Some versions of the grandmother hypothesis asserted that it helped explain the longevity of human senescence . However, demographic data show that, historically, rising numbers of older people in a population correlated with lower numbers of younger people. [ 28 ] This suggests that at some point grandmothers were not helpful toward the survival of their grandchildren, and does not explain why the first grandmother would forgo her own reproduction to help her offspring and grandchildren.
In addition, all variations on the mother, or grandmother effect, fail to explain longevity with continued spermatogenesis in males.
Another problem concerning the grandmother hypothesis is that it requires a history of female philopatry . Though some studies suggest that hunter-gatherer societies are patrilocal , [ 29 ] mounting evidence shows that residence is fluid among hunter-gatherers [ 30 ] [ 31 ] and that married women in at least one patrilineal society visit their kin during times when kin-based support can be especially beneficial to a woman's reproductive success . [ 32 ] One study does suggest, however, that maternal kin were essential to the fitness of sons as fathers in a patrilocal society. [ 33 ]
It also fails to explain the detrimental effects of losing ovarian follicular activity. While continued post-menopausal synthesis of estrogen occurs in peripheral tissues through the adrenal pathways, [ 34 ] these women undoubtedly face an increased risk of conditions associated with lower levels of estrogen: osteoporosis , osteoarthritis , Alzheimer's disease and coronary artery disease . [ 35 ]
However, cross-cultural studies of menopause have found that menopausal symptoms are quite variable among different populations, and that some populations of females do not recognize, and may not even experience, these "symptoms". [ 36 ] This high level of variability in menopausal symptoms across populations brings into question the plausibility of menopause as a sort of " culling agent" to eliminate non-reproductive females from competition with younger, fertile members of the species. This also faces the task of explaining the paradox between the typical age for menopause onset and the life expectancy of female humans. | https://en.wikipedia.org/wiki/Grandmother_hypothesis |
Granny dumping (informal) is a form of modern senicide . The term was introduced in the early 1980s by professionals in the medical and social work fields. Granny dumping is defined by the Oxford English Dictionary as "the abandonment of an elderly person in a public place such as a hospital or nursing home , especially by a relative". [ 1 ] It may be carried out by family members who are unable or unwilling to continue providing care due to financial problems, burnout, lack of resources (such as home health or assisted living options), or stress. [ 2 ] However, instances of institutional granny dumping, by hospitals and care facilities, have also been known to occur. [ 3 ] The "dumping" may involve the literal abandonment of an elderly person, who is taken to a location such as a hospital waiting area or emergency room and then left, or the refusal to return to collect an elderly person after the person is discharged from a hospital visit or hotel stay. While leaving an elderly person in a hospital or nursing facility is a common form of the practice, there have been instances of elderly people being "dumped" in other locations, such as the side of a public street. [ 4 ]
A practice known as ubasute has existed in Japanese legend for centuries, involving stories of senile elders who were brought to mountaintops by poor citizens who were unable to look after them. The widespread economic and demographic problems facing Japan have seen abandonment on the rise, with relatives dropping off seniors at hospitals or charities. [ 5 ] An estimated 70,000 elderly Americans (male and female in equal numbers) were abandoned in 1992, according to a report issued by the American College of Emergency Physicians . [ 6 ] In this same study, ACEP received informal surveys from 169 hospital Emergency Departments and reported an average of 8 "granny dumping" abandonments per week. According to the New York Times, 1 in 5 people are now caring for an elderly parent, and people are spending more time caring for an elderly parent than for their own children. Social workers have said that this may be the result of millions of people being near the breaking point of looking after their elderly parents who are in poor health. [ 7 ]
In the US, granny dumping is more likely to happen in states such as Florida, Texas, and California, where there are large populations of retirement communities. Congress has attempted to step in by mandating that emergency departments see all patients. In some US states, and some other countries, the practice is illegal, or is subject to efforts to declare it illegal. [ 8 ]
However, Medicaid is covering a smaller and smaller share of medical bills through reimbursement (in 1989 it covered 78%, and that number is decreasing) and through reduced eligibility. [ 9 ] In some cases, hospitals may not want to take the risk of having a patient who cannot pay, so they will attempt to transfer the patient's care to another hospital. According to the Consolidated Omnibus Budget Reconciliation Act of 1985, signed into law by Ronald Reagan , a hospital can transfer at the patient's request, or providers must sign a document explaining why they believe a patient's care would be better served at another facility. With 40% of revenue coming from Medicaid and Medicare, a hospital must earn 8 cents per dollar from other payers to compensate for losing 7 cents per dollar on Medicaid/Medicare patients. Hospitals had to charge private payers an additional 2 billion dollars to cover costs for Medicare/Medicaid patients in 1989. [ 9 ]
In cases where granny dumping is practiced by family members or caregivers, the dumping falls into two categories: temporary or permanent. Temporary abandonment of elderly persons is generally due to the difficulty or expense of finding temporary care for a person with complex medical needs. Needing a break, or wishing to go on a holiday, the normal caregivers will take the elderly person in their care to a hospital emergency room, or possibly a hotel, and then leave, with the plan to return once the vacation is over.
Incidents of granny dumping often happen before long weekends and may peak before Christmas when families head off on holidays. Caregivers in both Australia and New Zealand report that old people without acute medical problems are dropped off at hospitals. As a result, hospitals and care facilities have to carry an extra burden on their limited resources. [ 10 ] [ 11 ]
In Poland, the practice of dumping elderly persons before Christmas or Easter is known among emergency and ambulance personnel as Babka Świąteczna, i.e. "Holiday Granny"; since babka also denotes a traditional cake, the phrase can equally be read as "holiday cake".
Caregivers may also intend the abandonment to be permanent. In such cases, the caregivers will refuse to return to collect the elderly person, even when contacted by officials. Caregivers may go to great lengths to abandon the elderly person in a place far from their home location to prevent being tracked down and having the elderly person returned to their care.
Permanent abandonment might be done because the caregiver is mentally, physically, or financially unable to continue to provide care, or conscientiously as a tool and method of forcing institutions and government assistance to step in and provide placement and support which would otherwise be unavailable or denied to the caregiver or elderly person.
Caregivers who abandon their elderly may face criminal charges or legal repercussions for doing so, dependent on their local laws.
A hospital or care facility's legal obligation in such cases can be complicated. The protocols to handle a permanently abandoned elderly person are unclear and vary between institutions. However, the expense of providing emergency or long-term care to an abandoned elderly person can represent a considerable burden on a facility's budget, capacity, and manpower. This has led to institutional granny-dumping, where a hospital or nursing facility likewise abandons the elderly person to avoid the expense of their care. [ 9 ]
Hospitals generally seek to place an abandoned elderly person with a long-term care or nursing facility, but such facilities may have no capacity, or may refuse to take the patient, who may have no ability to pay. When this occurs, hospitals are faced with the dilemma of either providing care themselves at great expense, or similarly dumping the patient by taking them off of hospital property and leaving them. [ 12 ] [ 3 ]
Nursing homes may similarly abandon low-income residents by evicting them and leaving them in hotels, homeless shelters, or on the street. [ 13 ] Nursing homes may refuse to readmit residents after a trip home. In a granny dumping practice also called hospital dumping , residents may be sent to a hospital for temporary treatment and not permitted to return. [ 14 ]
Another form of institutional granny dumping may occur when a nursing home closes, and staff abandon residents in the facility, or leave them in hotels, homeless shelters, or similar. During the COVID-19 pandemic, institutional granny dumping by nursing homes became a widespread problem in the United States as above average numbers of care facilities closed with no alternatives to provide care for the displaced residents. [ 13 ] | https://en.wikipedia.org/wiki/Granny_dumping |
Grant Robert Sutherland AC (born 2 June 1945) is a retired Australian human geneticist and cytogeneticist . He was the Director, Department of Cytogenetics and Molecular Genetics, Adelaide Women's and Children's Hospital for 27 years (1975-2002), then became the Foundation Research Fellow there until 2007. He is an Emeritus Professor in the Departments of Paediatrics and Genetics at the University of Adelaide .
He developed methods to allow the reliable observation of fragile sites on chromosomes . These studies culminated in the recognition of fragile X syndrome as the most common familial form of intellectual impairment, allowing carriers to be identified and improving prenatal diagnosis. Clinically, his book on genetic counselling for chromosome abnormalities has become the standard work in this area. He is a past President of the Human Genetics Society of Australasia and of the Human Genome Organisation .
Sutherland was born in Bairnsdale, Victoria , on 2 June 1945.
His father had served as a soldier in World War II and qualified for the soldier settlement farm scheme; as such, when Grant was 12, the family moved to a dairy farm at Numurkah . As a teenager, he bred budgerigars , which he credits for starting his interest in genetics.
After completing his secondary schooling at Numurkah High School, he left home and moved to Melbourne. [ 1 ]
He studied at the University of Melbourne , graduating in 1967 with a BSc major in genetics and a sub-major in zoology.
During vacations, he worked at the CSIRO as a technician, in the team that was developing a vaccine for contagious bovine pleuropneumonia .
Still at the University of Melbourne, he went on to graduate with an MSc in 1971. He undertook his doctoral studies at the University of Edinburgh , graduating with a PhD in 1974 and a DSc in 1984, presenting the thesis Studies in human genetics and cytogenetics . [ 2 ] [ 3 ] [ 1 ]
After graduating with his BSc in 1967, Sutherland started work as a cytogeneticist in the Chromosome Laboratory of the Mental Health Authority, Melbourne. In 1971, he became the Cytogeneticist-in-Charge in the Department of Pathology, Royal Hospital for Sick Children, Edinburgh , a role he held until 1974. [ 3 ]
After graduating with his PhD, in 1975, Sutherland took up the role of Director of the Department of Cytogenetics and Molecular Genetics at the Women's and Children's Hospital (WCH) in Adelaide. In 2002, he moved to the role of Foundation Research Fellow at WCH, a position which he held until 2007.
In 1990, he also took on the role of Affiliate Professor in the Departments of Paediatrics and Genetics, University of Adelaide , and became Emeritus Professor in 2017. [ 3 ] [ 4 ]
While at WCH, Sutherland's principal focus was on chromosomal fragile sites .
Large family studies of genetic diseases revealed unexpected patterns, where some men were " carriers " who did not display the disease themselves but passed it on to their daughters. This was contrary to conventional genetic wisdom: "There was no way a male could pass on an X-linked disease without having it himself, or so we thought," Sutherland said. "We'd go to medical conferences with photos of these men, photos of their businesses and copies of their university degrees to show the sceptics they were normal. They didn't believe that a male could have this genetic mutation and be OK." [ 5 ]
The explanation was in the DNA, which Sutherland commenced mapping in detail. He found that the fragile X fault behaved differently to most genetic mutations; it builds up as it replicates through generations until it reaches a threshold where the full-blown syndrome is triggered. Such a disease mechanism, where genetic abnormalities accumulate until they reach a critical level, had not been observed before.
He developed techniques to observe fragile sites, which allowed him to specify critical DNA fragments on the fragile X chromosome and led him to identify fragile X syndrome as the most common cause of hereditary intellectual disability; in Australia it affects about 60 children each year. These findings allowed him to improve diagnostic tools and techniques, making identification of carriers more reliable and ultimately improving prenatal diagnosis. [ 3 ] [ 6 ] [ 7 ] [ 4 ] [ 5 ]
As part of the Human Genome Project , his group mapped much of chromosome 16 and carried out positional cloning of genes on this chromosome. [ 8 ]
In 1998, Sutherland and Associate Professor Eric Haan discovered Sutherland–Haan Syndrome, which is another genetic disease that causes intellectual and physical problems among males. In 2004, they identified the specific genetic sequences that cause the condition. The discovery means that future generations who are at risk will be able to know if they are carriers and to test in utero for the disease. [ 9 ]
The proposal of prenatal testing to diagnose genetic diseases has sometimes been controversial for Sutherland, because it raises the question of what to do if problems are detected. [ 10 ]
Sutherland was the president of the Human Genome Organization (HUGO) from 1996 to 1997. [ 11 ] He was also involved in establishing, in 1977, the professional body that grew into the Human Genetics Society of Australasia , and he served as its president from 1989 to 1991. [ 3 ] [ 12 ] [ 13 ] [ 14 ] [ 8 ] [ 15 ]
In the 1998 Australia Day Honours , Sutherland was appointed a Companion of the Order of Australia (AC) for service to science [ 16 ] [ 17 ] and in 2001, he was awarded a Centenary Medal . [ 18 ]
Other significant awards include:
Since 1994 he has been an Honorary Fellow of the Royal College of Pathologists of Australasia. [ 3 ] Professional society fellowships include the Royal Society of London (1996) [ 24 ] and the Australian Academy of Science (1997). [ 8 ] In 2005, the Human Genetics Society of Australasia introduced the annual "Sutherland Lecture" in his honour, allowing outstanding mid-career researchers to showcase their work. [ 24 ] [ 26 ]
Scopus lists 458 documents by Sutherland, and calculates his h-index as 83. [ 27 ] | https://en.wikipedia.org/wiki/Grant_Robert_Sutherland |
Granularity (also called graininess ) is the degree to which a material or system is composed of distinguishable pieces, "granules" or "grains" (metaphorically).
It can either refer to the extent to which a larger entity is subdivided, or the extent to which groups of smaller indistinguishable entities have joined together to become larger distinguishable entities.
Coarse-grained materials or systems have fewer, larger discrete components than fine-grained materials or systems.
The concepts granularity , coarseness , and fineness are relative, and are used when comparing systems or descriptions of systems. An example of increasingly fine granularity: a list of nations in the United Nations , a list of all states/provinces in those nations, a list of all cities in those states, etc.
A fine-grained description of a system is a detailed, exhaustive, low-level model of it. A coarse-grained description is a model where some of this fine detail has been smoothed over or averaged out. The replacement of a fine-grained description with a lower-resolution coarse-grained model is called coarse-graining . (See for example the second law of thermodynamics )
In molecular dynamics , coarse graining consists of replacing an atomistic description of a biological molecule with a lower-resolution coarse-grained model that averages or smooths away fine details.
Coarse-grained models have been developed for investigating the longer time- and length-scale dynamics that are critical to many biological processes, such as lipid membranes and proteins. [ 1 ] These concepts not only apply to biological molecules but also inorganic molecules.
Coarse graining may remove certain degrees of freedom , such as the vibrational modes between two atoms, or represent the two atoms as a single particle. The extent to which a system may be coarse-grained is bounded simply by the accuracy in the dynamics and structural properties one wishes to replicate. This modern area of research is in its infancy, and although it is commonly used in biological modeling, the analytic theory behind it is poorly understood.
In parallel computing , granularity means the amount of computation in relation to communication, i.e., the ratio of computation to the amount of communication. [ 2 ]
Fine-grained parallelism means individual tasks are relatively small in terms of code size and execution time. The data is transferred among processors frequently in amounts of one or a few memory words. Coarse-grained is the opposite: data is communicated infrequently, after larger amounts of computation.
The finer the granularity, the greater the potential for parallelism and hence speed-up, but the greater the overheads of synchronization and communication. [ 3 ] Granularity disintegrators exist as well and are important to understand in order to determine the appropriate level of granularity. [ 4 ]
In order to attain the best parallel performance, the best balance between load and communication overhead needs to be found. If the granularity is too fine, the performance can suffer from the increased communication overhead. On the other hand, if the granularity is too coarse, the performance can suffer from load imbalance.
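As a rough illustration in Python (the function name, task sizes, and chunk sizes below are invented for this sketch, not taken from the cited sources), the chunksize argument of a process pool is one practical granularity knob: very small chunks expose maximum parallelism but pay scheduling and communication overhead on almost every task, while very large chunks amortize that overhead at the risk of load imbalance.

from multiprocessing import Pool
import time

def work(n):
    # Stand-in for one unit of computation.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2000] * 20000
    for chunksize in (1, 10, 100, 1000):        # fine -> coarse granularity
        start = time.perf_counter()
        with Pool(4) as pool:
            pool.map(work, tasks, chunksize=chunksize)
        print(chunksize, round(time.perf_counter() - start, 3))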
In reconfigurable computing and supercomputing , these terms refer to the data path width. The use of about one-bit wide processing elements like the configurable logic blocks (CLBs) in an FPGA is called fine-grained computing or fine-grained reconfigurability, whereas using wide data paths, such as 32-bit-wide resources like microprocessor CPUs or data-stream-driven data path units (DPUs) like in a reconfigurable datapath array ( rDPA ), is called coarse-grained computing or coarse-grained reconfigurability.
The granularity of data refers to the size in which data fields are sub-divided. For example, a postal address can be recorded, with coarse granularity , as a single field holding the whole address; with fine granularity , as multiple fields (street address, city, state or province, postal code, country); or with even finer granularity , splitting the street address itself into separate components such as house number and street name.
Finer granularity has overheads for data input and storage. This manifests itself in a higher number of objects and methods in the object-oriented programming paradigm or more subroutine calls for procedural programming and parallel computing environments. It does however offer benefits in flexibility of data processing in treating each data field in isolation if required. A performance problem caused by excessive granularity may not reveal itself until scalability becomes an issue.
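A small Python sketch (the field names and the address itself are purely illustrative) shows the same postal address stored at coarse, fine, and finer granularity; only the finer representations allow a single component, such as the postal code, to be processed in isolation.

# Coarse granularity: one opaque field.
address_coarse = {"address": "12 Example Street, Springfield, EX 12345, Examplia"}

# Fine granularity: one field per major component.
address_fine = {
    "street": "12 Example Street",
    "city": "Springfield",
    "region": "EX",
    "postal_code": "12345",
    "country": "Examplia",
}

# Even finer granularity: the street line split into its own parts.
address_finer = {
    "house_number": "12",
    "street_name": "Example",
    "street_type": "Street",
    "city": "Springfield",
    "region": "EX",
    "postal_code": "12345",
    "country": "Examplia",
}

# Only the finer forms let a single component be read or validated directly.
print(address_fine["postal_code"])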
Within database design and data warehouse design, data grain can also refer to the smallest combination of columns in a table which makes the rows (also called records) unique. [ 5 ] | https://en.wikipedia.org/wiki/Granularity |
In mathematics , specifically number theory , Granville numbers , also known as S {\displaystyle {\mathcal {S}}} -perfect numbers, are an extension of the perfect numbers .
In 1996, Andrew Granville proposed the following construction of a set {\displaystyle {\mathcal {S}}} : [ 1 ] let 1 belong to {\displaystyle {\mathcal {S}}} , and for each natural number n > 1, include n in {\displaystyle {\mathcal {S}}} whenever the sum of its proper divisors that are already in {\displaystyle {\mathcal {S}}} is at most n ; that is, whenever {\displaystyle \sum _{d\mid n,\ d<n,\ d\in {\mathcal {S}}}d\leq n.}
A Granville number is an element of {\displaystyle {\mathcal {S}}} for which equality holds, that is, n is a Granville number if it is equal to the sum of its proper divisors that are also in {\displaystyle {\mathcal {S}}} . Granville numbers are also called {\displaystyle {\mathcal {S}}} -perfect numbers. [ 2 ]
The elements of {\displaystyle {\mathcal {S}}} can be k -deficient, k -perfect, or k -abundant. In particular, 2-perfect numbers are a proper subset of {\displaystyle {\mathcal {S}}} . [ 1 ]
Numbers that fulfill the strict form of the inequality in the above definition are known as {\displaystyle {\mathcal {S}}} -deficient numbers. That is, the {\displaystyle {\mathcal {S}}} -deficient numbers are the natural numbers for which the sum of their divisors in {\displaystyle {\mathcal {S}}} is strictly less than themselves: {\displaystyle \sum _{d\mid n,\ d<n,\ d\in {\mathcal {S}}}d<n.}
Numbers that fulfill equality in the above definition are known as {\displaystyle {\mathcal {S}}} -perfect numbers. [ 1 ] That is, the {\displaystyle {\mathcal {S}}} -perfect numbers are the natural numbers that are equal to the sum of their divisors in {\displaystyle {\mathcal {S}}} : {\displaystyle \sum _{d\mid n,\ d<n,\ d\in {\mathcal {S}}}d=n.} The first few {\displaystyle {\mathcal {S}}} -perfect numbers are 6, 24, 28, 96, 126, 224, 384, 496, ...
Every perfect number is also {\displaystyle {\mathcal {S}}} -perfect. [ 1 ] However, there are numbers such as 24 which are {\displaystyle {\mathcal {S}}} -perfect but not perfect. The only known {\displaystyle {\mathcal {S}}} -perfect number with three distinct prime factors is 126 = 2 · 3² · 7. [ 2 ]
Every number of the form {\displaystyle 2^{n-1}(2^{n}-1)(2^{n})^{m}} , where m ≥ 0 and {\displaystyle 2^{n}-1} is prime, is a Granville number. There are therefore infinitely many Granville numbers, and each member of this infinite family has exactly two distinct prime factors: 2 and a Mersenne prime . Other Granville numbers include 126, 5540590, 9078520, 22528935, 56918394 and 246650552, which have 3, 5, 5, 5, 5 and 5 distinct prime factors respectively.
Numbers that violate the inequality in the above definition are known as {\displaystyle {\mathcal {S}}} -abundant numbers. That is, the {\displaystyle {\mathcal {S}}} -abundant numbers are the natural numbers for which the sum of their divisors in {\displaystyle {\mathcal {S}}} is strictly greater than themselves: {\displaystyle \sum _{d\mid n,\ d<n,\ d\in {\mathcal {S}}}d>n.}
They belong to the complement of {\displaystyle {\mathcal {S}}} . The first few {\displaystyle {\mathcal {S}}} -abundant numbers are 12, 18, 20, 30, 42, 48, ...
Every deficient number and every perfect number is in {\displaystyle {\mathcal {S}}} because the restriction of the divisors sum to members of {\displaystyle {\mathcal {S}}} either decreases the divisors sum or leaves it unchanged. The first natural number that is not in {\displaystyle {\mathcal {S}}} is the smallest abundant number , which is 12. The next two abundant numbers, 18 and 20, are also not in {\displaystyle {\mathcal {S}}} . However, the fourth abundant number, 24, is in {\displaystyle {\mathcal {S}}} because the sum of its proper divisors in {\displaystyle {\mathcal {S}}} is: {\displaystyle 1+2+3+4+6+8=24.}
In other words, 24 is abundant but not {\displaystyle {\mathcal {S}}} -abundant because 12 is not in {\displaystyle {\mathcal {S}}} . In fact, 24 is {\displaystyle {\mathcal {S}}} -perfect: it is the smallest number that is {\displaystyle {\mathcal {S}}} -perfect but not perfect.
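A short Python sketch (written here as an illustration of the definition above, not taken from Granville's paper) builds the set S incrementally and classifies each number as S-deficient, S-perfect (a Granville number), or S-abundant; run up to 500 it reproduces the values discussed in this article, including 24, 96, 126 and 224.

def classify(limit):
    """Classify 2..limit as 'S-deficient', 'S-perfect' (Granville) or 'S-abundant'."""
    in_s = {1}
    labels = {}
    for n in range(2, limit + 1):
        s = sum(d for d in range(1, n) if n % d == 0 and d in in_s)
        if s < n:
            labels[n] = "S-deficient"
            in_s.add(n)
        elif s == n:
            labels[n] = "S-perfect"       # a Granville number
            in_s.add(n)
        else:
            labels[n] = "S-abundant"      # excluded from S
    return labels

labels = classify(500)
print([n for n, kind in labels.items() if kind == "S-perfect"])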
The smallest odd abundant number that is in {\displaystyle {\mathcal {S}}} is 2835, and the smallest pair of consecutive numbers that are not in {\displaystyle {\mathcal {S}}} are 5984 and 5985. [ 1 ]
Grape syrup is a condiment made with concentrated grape juice. It is thick and sweet because of its high ratio of sugar to water. Grape syrup is made by boiling grapes, removing their skins, and squeezing them through a sieve to extract the juice. Like other fruit syrups , a common use of grape syrup is as a topping to sweet cakes, such as pancakes or waffles .
The ancient Greek name for grape syrup is siraios ( σιραίος ), in the general category of hepsema ( ἕψημα ), which translates to 'boiled'. [ 1 ] The Greek name was used in Crete and, in modern times, in Cyprus . [ 2 ]
Petimezi is the name for a type of Mediterranean grape syrup. The word comes from the Turkish pekmez , which usually refers to grape syrup, but is also used to refer to mulberry and other fruit syrups. [ 3 ] [ 4 ]
Vincotto (not to be confused with vino cotto ) is the southern Italian term for grape syrup. It is made only from cooked wine grape must ( mosto cotto ), with no fermentation involved. There is no alcohol or vinegar content, and no additives, preservatives or sweeteners are added. It is both a condiment and ingredient used in either sweet or savory dishes.
One of the earliest mentions of grape syrup comes from the fifth-century BC Greek physician Hippocrates , who refers to hépsēma ( ἕψημα ), the Greek name for the condiment. [ 5 ] The fifth-century BC Athenian playwright Aristophanes also makes a reference to it, as does Roman-era Greek physician Galen . [ 5 ]
Grape syrup was known by different names in Ancient Roman cuisine depending on the boiling procedure. Defrutum , carenum , and sapa were reductions of must . They were made by boiling down grape juice or must in large kettles until it had been reduced to two-thirds of the original volume, carenum ; half the original volume, defrutum ; or one-third, sapa . The Greek name for this variant of grape syrup was siraion ( σίραιον ). [ 6 ]
The main culinary use of defrutum was to help preserve and sweeten wine , but it was also added to fruit and meat dishes as a sweetening and souring agent and even given to food animals such as ducks and suckling pigs to improve the taste of their flesh. Defrutum was mixed with garum to make the popular condiment oenogarum . Quince and melon were preserved in defrutum and honey through the winter, and some Roman women used defrutum or sapa as a cosmetic . Defrutum was often used as a food preservative in provisions for Roman troops. [ 7 ]
There is some confusion as to the amount of reduction for sapa and defrutum . As James Grout explains in his Encyclopedia Romana , [ 8 ] ancient authors report different reductions, as follows:
The elder Cato, Columella, and Pliny all describe how unfermented grape juice ( mustum , must) was boiled to concentrate its natural sugars. "A product of art, not of nature," the must was reduced to one half ( defrutum ) or even one third its volume ( sapa ) (Pliny, XIV.80), [ 9 ] although the terms are not always consistent. Columella identifies defrutum as "must of the sweetest possible flavour" that has been boiled down to a third of its volume (XXI.1). [ 10 ] Isidore of Seville, writing in the seventh century AD, says that it is sapa that has been reduced by a third but goes on to imagine that defrutum is so called because it has been cheated or defrauded ( defrudare ) (Etymologies, XX.3.15). [ 11 ] Varro reverses Pliny's proportions altogether (quoted in Nonius Marcellus, De Conpendiosa Doctrina, XVIII.551M). [ 12 ]
Defrutum is mentioned in almost all Roman books dealing with cooking or household management. Pliny the Elder recommended that defrutum only be boiled at the time of the new moon , while Cato the Censor suggested that only the sweetest possible defrutum should be used.
In ancient Rome , grape syrup was often boiled in lead pots, which sweetened the syrup through the leaching of the sweet-tasting chemical compound lead acetate into the syrup. Incidentally, this is thought to have caused lead poisoning for Romans consuming the syrup. [ 13 ] [ 14 ] A 2009 History Channel documentary produced a batch of historically accurate defrutum in lead-lined vessels and tested the liquid, finding a lead level of 29,000 parts per billion (ppb), which is 2,900 times higher than contemporary American drinking water limit of 10 ppb. These levels are easily high enough to cause either acute lead toxicity if consumed in large amounts or chronic lead poisoning when consumed in smaller quantities over a longer period of time (as defrutum was typically used). [ 14 ]
However, the use of leaden cookware, though popular, was not the general standard of use. Copper cookware was used far more generally and no indication exists as to how often sapa was added or in what quantity. There is not, however, scholarly agreement on the circumstances and quantity of lead in these ancient Roman condiments. For instance, the original research was done by Jerome Nriagu , but was criticized by John Scarborough, a pharmacologist and classicist, who characterized Nriagu's research as "so full of false evidence, miscitations, typographical errors, and a blatant flippancy regarding primary sources that the reader cannot trust the basic arguments." [ 15 ]
Grape syrup has been used in the Levant since antiquity, as evidenced by a document from Nessana in the northern Negev , within modern Israel , that mentions grape syrup production. Sources describing the Muslim conquest of the Levant in 636 note that when Jews met with Rashidun caliph Umar , who camped in Jabiyah , southern Golan , they claimed that due to the harsh climate and plagues, they had to drink wine. Umar suggested honey instead, but they said it was not beneficial for them. As a compromise, Umar agreed they could make a dish from grape syrup without intoxicating effects. They boiled grape juice until two-thirds evaporated and presented it to Umar, who noted it reminded him of an ointment for camels. Botanist Zohar Amar estimates that this explains the winepresses from Mishnaic and Talmudic times found in the Mount Hermon area, which are similar to those used for grape syrup production in modern times. [ 16 ]
Islamic law increased the prevalence of grape syrup in the region due to the prohibition of wine , a practice that was strictly enforced during the Mamluk period , when grape syrup became a common wine substitute among Muslims. Rabbi Joseph Tov Elem , who lived in Jerusalem around 1370, proposed that the honey mentioned in the Bible is actually grape syrup. Obadiah of Bertinoro also mentioned grape syrup among various types of honey sold in Jerusalem, and Meshullam of Volterra described it as "hard as a rock and very fine." Baalbek , in modern Lebanon, was particularly renowned for its dibs production, and Ibn Battuta detailed the production process, noting the use of a type of soil to harden the syrup so that it remained intact even if the container broke. In the 15th century, hashish users mixed it with dibs to mitigate its effects. Rabbis such as Nissim of Gerona and Obadiah of Bertinoro discussed its kashrut . In the early Ottoman period, there was sometimes a special tax on raisins and dibs. In the 19th century, Hebron exported significant quantities of grape syrup to Egypt , as documented by Samson Bloch and Samuel David Luzzatto . [ 16 ]
In early Islam, hépsēma was known in Arabic as tilā’ . Early caliphs distributed tilā’ to Muslim troops along with other foodstuffs, considering that it was no longer intoxicating. However, fermentation could resume in the amphorae, and in the late 710s, Caliph ‘ Umar II prohibited drinking this beverage. [ 17 ]
The ancient Greek name hépsēma (now pronounced épsēma in Cypriot Greek ) is still used to refer to the condiment, which is still made in Cyprus .
Petimezi ( Greek : πετιμέζι Greek pronunciation: [petiˈmezi] ), also called epsima ( έψημα ) and in English grapemust or grape molasses , is a syrup that is reduced until it becomes dark and syrupy. Petimezi keeps indefinitely. Its flavor is sweet with slightly bitter undertones. The syrup may be light or dark colored, depending on the grapes used. Before the wide availability of inexpensive cane sugar, petimezi was a common sweetener in Greek cooking , along with carob syrup and honey . Petimezi is still used today in desserts and as a sweet topping for some foods. Though petimezi can be homemade, [ 20 ] [ 21 ] it is also sold commercially under different brand names.
Fruits and vegetables that have been candied by boiling in petimezi ( epsima ) are called retselia .
From late August until the beginning of December, many Greek bakeries make and sell dark crunchy and fragrant petimezi cookies, moustokoúloura ( μουστοκούλουρα ).
Petimezopita ( πετιμεζόπιτα ) is a spiced cake with petimezi . [ 22 ]
In Iranian cuisine , grape syrup (in Persian : شیره انگور ) is used to sweeten ardeh (tahini) , which is consumed at breakfast. An alternative is date syrup , which is also widely used in Middle Eastern cooking.
Saba (from the Latin word sapa , with the same meaning), vincotto or vino cotto is commonly used in Italy, especially in the regions of Emilia Romagna , Marche , Calabria , and Sardinia , where it is considered a traditional flavor.
In North Macedonia , a form of grape syrup known as madjun ( Macedonian : Гроздов маџун ) has been produced for centuries, commonly used as a sweetener, but also as traditional medicine. It never contains any added sugar.
In South Africa , the grape syrup is known as moskonfyt .
Arrope is a form of grape concentrate typically produced in Spain . Often derived from grape varieties such as Pedro Ximénez , it is made by boiling unfermented grape juice until the volume is reduced by at least 50%, and its viscosity reduced to a syrup . [ 23 ] [ 24 ] The final product is a thick liquid with cooked caramel flavours, and its use is frequent as an additive for dark, sweet wines such as sweet styles of sherry , Malaga , and Marsala . [ 24 ]
In Turkey , grape syrup is known as pekmez .
Grape syrup is known as dibs or dibs al-anab in the countries of the Levant ( Palestine , Jordan , Lebanon , Israel and Syria ). It is usually used as a sweetener and as part of desserts alongside carob syrup and bee honey. In areas of Palestine, it is also used to sweeten wine and eaten with leben and toasted nuts such as walnuts and almonds for breakfast. [ citation needed ] The syrup is made in Druze villages in the northern Golan Heights . [ 16 ]
In some areas, it is combined with tahini to make a dip called dibs wa tahini ( Arabic : دبس وطحينة ), and then eaten with bread (typically pita ), similar to pekmez , date syrup , or carob syrup . [ 25 ]
Grape syrup is particularly popular in the city of Hebron , where the cultivation of grapes is also widespread; [ 25 ] there it is eaten in a variety of dishes, in combination with tahini to make a dip, or with snow to make ice cream. [ 26 ]
GraphCrunch is a comprehensive, parallelizable, and easily extendible open source software tool for analyzing and modeling large biological networks (or graphs ); it compares real-world networks against a series of random graph models with respect to a multitude of local and global network properties. [ 1 ] It is available here .
Recent technological advances in experimental biology have yielded large amounts of biological network data. Many other real-world phenomena have also been described in terms of large networks (also called graphs), such as various types of social and technological networks. Thus, understanding these complex phenomena has become an important scientific problem that has led to intensive research in network modeling and analyses.
An important step towards understanding biological networks is finding an adequate network model. Evaluating the fit of a model network to the data is a formidable challenge, since network comparisons are computationally infeasible and thus have to rely on heuristics, or "network properties." GraphCrunch automates the process of generating random networks drawn from a series of random graph models and evaluating the fit of the network models to a real-world network with respect to a variety of global and local network properties.
GraphCrunch performs the following tasks: 1) computes user specified global and local properties of an input real-world network, 2) creates a user specified number of random networks belonging to user specified random graph models, 3) compares how closely each model network reproduces a range of global and local properties (specified in point 1 above) of the real-world network, and 4) produces the statistics of network property similarities between the data and the model networks.
GraphCrunch currently supports five different types of random graph models:
GraphCrunch currently supports seven global and local network properties:
Instructions on how to install and run GraphCrunch are available at https://web.archive.org/web/20100717040957/http://www.ics.uci.edu/~bio-nets/graphcrunch/ .
GraphCrunch has been used to find an optimal network model for protein-protein interaction networks, [ 2 ] [ 3 ] as well as for protein structure networks. [ 3 ] [ 4 ] | https://en.wikipedia.org/wiki/GraphCrunch |
GraphExeter is a material consisting of a few graphene sheets with a layer of ferric chloride molecules in between each graphene sheet. [ 1 ] [ 2 ] It was created by The Centre for Graphene Science at the University of Exeter in collaboration with the University of Bath . [ 3 ] | https://en.wikipedia.org/wiki/GraphExeter |
In computer science , a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics .
A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points ), together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines ); in a directed graph they are also known as arrows or arcs . The vertices may be part of the graph structure, or may be external entities represented by integer indices or references .
A graph data structure may also associate to each edge some edge value , such as a symbolic label or a numeric attribute (cost, capacity, length, etc.).
The basic operations provided by a graph data structure G usually include: [ 1 ]
Structures that associate values to the edges usually also provide: [ 1 ]
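The operation lists referenced above are not reproduced in this excerpt. As a hedged illustration only, the sketch below shows one conventional minimal interface for an undirected adjacency-list graph with optional edge values; the exact operation names are an assumption, not a quotation of the cited source.

```python
class Graph:
    """Minimal undirected graph ADT backed by an adjacency list (dict of dicts)."""

    def __init__(self):
        self._adj = {}                      # vertex -> {neighbour: edge_value}

    def add_vertex(self, v):
        self._adj.setdefault(v, {})

    def add_edge(self, u, v, value=None):
        self.add_vertex(u)
        self.add_vertex(v)
        self._adj[u][v] = value
        self._adj[v][u] = value

    def remove_edge(self, u, v):
        self._adj[u].pop(v, None)
        self._adj[v].pop(u, None)

    def adjacent(self, u, v):
        return v in self._adj.get(u, {})

    def neighbors(self, v):
        return list(self._adj.get(v, {}))

    def get_edge_value(self, u, v):
        return self._adj[u][v]

g = Graph()
g.add_edge("a", "b", value=3.5)
assert g.adjacent("a", "b") and g.get_edge_value("b", "a") == 3.5
```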
The following table gives the time complexity cost of performing various operations on graphs, for each of these representations, with | V | the number of vertices and | E | the number of edges. [ citation needed ] In the matrix representations, the entries encode the cost of following an edge. The cost of edges that are not present are assumed to be ∞.
Adjacency lists are generally preferred for the representation of sparse graphs , while an adjacency matrix is preferred if the graph is dense; that is, the number of edges | E | {\displaystyle |E|} is close to the number of vertices squared, | V | 2 {\displaystyle |V|^{2}} , or if one must be able to quickly look up if there is an edge connecting two vertices. [ 5 ] [ 6 ]
The time complexity of operations in the adjacency list representation can be improved by storing the sets of adjacent vertices in more efficient data structures, such as hash tables or balanced binary search trees (the latter representation requires that vertices are identified by elements of a linearly ordered set, such as integers or character strings). A representation of adjacent vertices via hash tables leads to an amortized average time complexity of O ( 1 ) {\displaystyle O(1)} to test adjacency of two given vertices and to remove an edge and an amortized average time complexity [ 7 ] of O ( deg ( x ) ) {\displaystyle O(\deg(x))} to remove a given vertex x of degree deg ( x ) {\displaystyle \deg(x)} . The time complexity of the other operations and the asymptotic space requirement do not change.
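A minimal sketch of this idea in Python, where each adjacency set is a hash-based set, giving expected O(1) adjacency tests and edge deletions and O(deg(x)) vertex deletion; this illustrates the representation rather than serving as a reference implementation.

```python
from collections import defaultdict

adj = defaultdict(set)          # vertex -> hash set of adjacent vertices

def add_edge(u, v):
    adj[u].add(v)
    adj[v].add(u)

def has_edge(u, v):             # expected O(1)
    return v in adj[u]

def remove_edge(u, v):          # expected O(1)
    adj[u].discard(v)
    adj[v].discard(u)

def remove_vertex(x):           # expected O(deg(x))
    for y in adj.pop(x, set()):
        adj[y].discard(x)

add_edge(1, 2); add_edge(1, 3)
remove_vertex(1)
assert not has_edge(2, 1)
```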
The parallelization of graph problems faces significant challenges: data-driven computation, unstructured problems, poor locality , and a high ratio of data access to computation. [ 8 ] [ 9 ] The graph representation used for parallel architectures plays a significant role in facing those challenges. Poorly chosen representations may unnecessarily drive up the communication cost of the algorithm, which will decrease its scalability . In the following, shared and distributed memory architectures are considered.
In the case of a shared memory model, the graph representations used for parallel processing are the same as in the sequential case, [ 10 ] since parallel read-only access to the graph representation (e.g. an adjacency list ) is efficient in shared memory.
In the distributed memory model, the usual approach is to partition the vertex set V {\displaystyle V} of the graph into p {\displaystyle p} sets V 0 , … , V p − 1 {\displaystyle V_{0},\dots ,V_{p-1}} . Here, p {\displaystyle p} is the amount of available processing elements (PE). The vertex set partitions are then distributed to the PEs with matching index, additionally to the corresponding edges. Every PE has its own subgraph representation, where edges with an endpoint in another partition require special attention. For standard communication interfaces like MPI , the ID of the PE owning the other endpoint has to be identifiable. During computation in a distributed graph algorithms, passing information along these edges implies communication. [ 10 ]
Partitioning the graph needs to be done carefully: there is a trade-off between low communication and evenly sized partitions. [ 11 ] But partitioning a graph is an NP-hard problem, so it is not feasible to compute optimal partitions in practice. Instead, the following heuristics are used.
1D partitioning: Every processor gets n / p {\displaystyle n/p} vertices and the corresponding outgoing edges. This can be understood as a row-wise or column-wise decomposition of the adjacency matrix. For algorithms operating on this representation, this requires an All-to-All communication step as well as O ( m ) {\displaystyle {\mathcal {O}}(m)} message buffer sizes, as each PE potentially has outgoing edges to every other PE. [ 12 ]
2D partitioning: Every processor gets a submatrix of the adjacency matrix. Assume the processors are aligned in a rectangle p = p r × p c {\displaystyle p=p_{r}\times p_{c}} , where p r {\displaystyle p_{r}} and p c {\displaystyle p_{c}} are the amount of processing elements in each row and column, respectively. Then each processor gets a submatrix of the adjacency matrix of dimension ( n / p r ) × ( n / p c ) {\displaystyle (n/p_{r})\times (n/p_{c})} . This can be visualized as a checkerboard pattern in a matrix. [ 12 ] Therefore, each processing unit can only have outgoing edges to PEs in the same row and column. This bounds the amount of communication partners for each PE to p r + p c − 1 {\displaystyle p_{r}+p_{c}-1} out of p = p r × p c {\displaystyle p=p_{r}\times p_{c}} possible ones.
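A minimal sketch of the two vertex-to-processor assignments described above, assuming n vertices numbered 0..n−1 and a contiguous block distribution; real frameworks also distribute the corresponding edges and handle remainders and load balance more carefully, so this only illustrates the index arithmetic.

```python
def owner_1d(v, n, p):
    """1D partitioning: processor that owns vertex v (contiguous blocks of ~n/p vertices)."""
    block = (n + p - 1) // p          # ceil(n / p)
    return v // block

def owner_2d(u, v, n, pr, pc):
    """2D partitioning: grid position (row, col) of the PE owning adjacency-matrix entry (u, v)."""
    row_block = (n + pr - 1) // pr
    col_block = (n + pc - 1) // pc
    return (u // row_block, v // col_block)

# Example: 10 vertices on 4 PEs (1D) and on a 2 x 2 processor grid (2D).
print([owner_1d(v, 10, 4) for v in range(10)])      # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
print(owner_2d(7, 2, 10, 2, 2))                     # (1, 0)
```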
Graphs with trillions of edges occur in machine learning , social network analysis , and other areas. Compressed graph representations have been developed to reduce I/O and memory requirements. General techniques such as Huffman coding are applicable, but the adjacency list or adjacency matrix can be processed in specific ways to increase efficiency. [ 13 ]
Breadth-first search (BFS) and depth-first search (DFS) are two closely-related approaches that are used for exploring all of the nodes in a given connected component . Both start with an arbitrary node, the " root ". [ 14 ] | https://en.wikipedia.org/wiki/Graph_(abstract_data_type) |
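A minimal sketch of the two traversals just described, over an adjacency-list graph: BFS uses a FIFO queue and DFS an explicit stack, and both visit exactly the connected component containing the chosen root.

```python
from collections import deque

def bfs(adj, root):
    """Breadth-first order of the component containing root; adj maps vertex -> iterable of neighbours."""
    seen, order, queue = {root}, [], deque([root])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def dfs(adj, root):
    """Depth-first order of the component containing root, using an explicit stack."""
    seen, order, stack = set(), [], [root]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        order.append(v)
        stack.extend(adj[v])
    return order

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(bfs(adj, 0))   # [0, 1, 2, 3]
print(dfs(adj, 0))   # [0, 2, 1, 3]
```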
In discrete mathematics , particularly in graph theory , a graph is a structure consisting of a set of objects where some pairs of the objects are in some sense "related". The objects are represented by abstractions called vertices (also called nodes or points ) and each of the related pairs of vertices is called an edge (also called link or line ). [ 1 ] Typically, a graph is depicted in diagrammatic form as a set of dots or circles for the vertices, joined by lines or curves for the edges.
The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this graph is undirected because any person A can shake hands with a person B only if B also shakes hands with A . In contrast, if an edge from a person A to a person B means that A owes money to B , then this graph is directed, because owing money is not necessarily reciprocated.
Graphs are the basic subject studied by graph theory. The word "graph" was first used in this sense by J. J. Sylvester in 1878 due to a direct relation between mathematics and chemical structure (what he called a chemico-graphical image). [ 2 ] [ 3 ]
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures .
A graph (sometimes called an undirected graph to distinguish it from a directed graph , or a simple graph to distinguish it from a multigraph ) [ 4 ] [ 5 ] is a pair G = ( V , E ) , where V is a set whose elements are called vertices (singular: vertex), and E is a set of unordered pairs { v 1 , v 2 } {\displaystyle \{v_{1},v_{2}\}} of vertices, whose elements are called edges (sometimes links or lines ).
The vertices u and v of an edge { u , v } are called the edge's endpoints . The edge is said to join u and v and to be incident on them. A vertex may belong to no edge, in which case it is not joined to any other vertex and is called isolated . When an edge { u , v } {\displaystyle \{u,v\}} exists, the vertices u and v are called adjacent .
A multigraph is a generalization that allows multiple edges to have the same pair of endpoints. In some texts, multigraphs are simply called graphs. [ 6 ] [ 7 ]
Sometimes, graphs are allowed to contain loops , which are edges that join a vertex to itself. To allow loops, the pairs of vertices in E must be allowed to have the same node twice. Such generalized graphs are called graphs with loops or simply graphs when it is clear from the context that loops are allowed.
Generally, the vertex set V is taken to be finite (which implies that the edge set E is also finite). Sometimes infinite graphs are considered, but they are usually viewed as a special kind of binary relation , because most results on finite graphs either do not extend to the infinite case or need a rather different proof.
An empty graph is a graph that has an empty set of vertices (and thus an empty set of edges). The order of a graph is its number | V | of vertices, usually denoted by n . The size of a graph is its number | E | of edges, typically denoted by m . However, in some contexts, such as for expressing the computational complexity of algorithms, the term size is used for the quantity | V | + | E | (otherwise, a non-empty graph could have size 0). The degree or valency of a vertex is the number of edges that are incident to it; for graphs with loops, a loop is counted twice.
In a graph of order n , the maximum degree of each vertex is n − 1 (or n + 1 if loops are allowed, because a loop contributes 2 to the degree), and the maximum number of edges is n ( n − 1)/2 (or n ( n + 1)/2 if loops are allowed).
The edges of a graph define a symmetric relation on the vertices, called the adjacency relation . Specifically, two vertices x and y are adjacent if { x , y } is an edge. A graph is fully determined by its adjacency matrix A , which is an n × n square matrix, with A ij specifying the number of connections from vertex i to vertex j . For a simple graph, A ij is either 0, indicating disconnection, or 1, indicating connection; moreover A ii = 0 because an edge in a simple graph cannot start and end at the same vertex. Graphs with self-loops will be characterized by some or all A ii being equal to a positive integer, and multigraphs (with multiple edges between vertices) will be characterized by some or all A ij being equal to a positive integer. Undirected graphs will have a symmetric adjacency matrix (meaning A ij = A ji ).
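A short sketch of building the adjacency matrix described above for a simple undirected graph from an edge list; vertices are assumed to be numbered 0..n−1.

```python
import numpy as np

def adjacency_matrix(n, edges):
    """Adjacency matrix of a simple undirected graph on vertices 0..n-1."""
    A = np.zeros((n, n), dtype=int)
    for i, j in edges:
        A[i, j] = 1
        A[j, i] = 1          # symmetric for an undirected graph; diagonal stays zero
    return A

A = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])   # a 4-cycle
print(A)
print(A.sum(axis=1))          # vertex degrees: [2 2 2 2]
```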
A directed graph or digraph is a graph in which edges have orientations.
In one restricted but very common sense of the term, [ 8 ] a directed graph is a pair G = ( V , E ) comprising:
To avoid ambiguity, this type of object may be called precisely a directed simple graph .
In the edge ( x , y ) directed from x to y , the vertices x and y are called the endpoints of the edge, x the tail of the edge and y the head of the edge. The edge is said to join x and y and to be incident on x and on y . A vertex may exist in a graph and not belong to an edge. The edge ( y , x ) is called the inverted edge of ( x , y ) . Multiple edges , not allowed under the definition above, are two or more edges with both the same tail and the same head.
In one more general sense of the term allowing multiple edges, [ 8 ] a directed graph is sometimes defined to be an ordered triple G = ( V , E , ϕ ) comprising:
To avoid ambiguity, this type of object may be called precisely a directed multigraph .
A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x {\displaystyle x} to itself is the edge (for a directed simple graph) or is incident on (for a directed multigraph) ( x , x ) {\displaystyle (x,x)} which is not in { ( x , y ) ∣ ( x , y ) ∈ V 2 and x ≠ y } {\displaystyle \{(x,y)\mid (x,y)\in V^{2}\;{\textrm {and}}\;x\neq y\}} . So to allow loops the definitions must be expanded. For directed simple graphs, the definition of E {\displaystyle E} should be modified to E ⊆ V 2 {\displaystyle E\subseteq V^{2}} . For directed multigraphs, the definition of ϕ {\displaystyle \phi } should be modified to ϕ : E → V 2 {\displaystyle \phi :E\to V^{2}} . To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver ) respectively.
The edges of a directed simple graph permitting loops G is a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G . Specifically, for each edge ( x , y ) , its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y .
A mixed graph is a graph in which some edges may be directed and some may be undirected. It is an ordered triple G = ( V , E , A ) for a mixed simple graph and G = ( V , E , A , ϕ E , ϕ A ) for a mixed multigraph with V , E (the undirected edges), A (the directed edges), ϕ E and ϕ A defined as above. Directed and undirected graphs are special cases.
A weighted graph or a network [ 9 ] [ 10 ] is a graph in which a number (the weight) is assigned to each edge. [ 11 ] Such weights might represent for example costs, lengths or capacities, depending on the problem at hand. Such graphs arise in many contexts, for example in shortest path problems such as the traveling salesman problem .
One definition of an oriented graph is that it is a directed graph in which at most one of ( x , y ) and ( y , x ) may be edges of the graph. That is, it is a directed graph that can be formed as an orientation of an undirected (simple) graph.
Some authors use "oriented graph" to mean the same as "directed graph". Some authors use "oriented graph" to mean any orientation of a given undirected graph or multigraph.
A regular graph is a graph in which each vertex has the same number of neighbours, i.e., every vertex has the same degree. A regular graph with vertices of degree k is called a k ‑regular graph or regular graph of degree k .
A complete graph is a graph in which each pair of vertices is joined by an edge. A complete graph contains all possible edges.
A finite graph is a graph in which the vertex set and the edge set are finite sets . Otherwise, it is called an infinite graph .
Most commonly in graph theory it is implied that the graphs discussed are finite. If the graphs are infinite, that is usually specifically stated.
In an undirected graph, an unordered pair of vertices { x , y } is called connected if a path leads from x to y . Otherwise, the unordered pair is called disconnected .
A connected graph is an undirected graph in which every unordered pair of vertices in the graph is connected. Otherwise, it is called a disconnected graph .
In a directed graph, an ordered pair of vertices ( x , y ) is called strongly connected if a directed path leads from x to y . Otherwise, the ordered pair is called weakly connected if an undirected path leads from x to y after replacing all of its directed edges with undirected edges. Otherwise, the ordered pair is called disconnected .
A strongly connected graph is a directed graph in which every ordered pair of vertices in the graph is strongly connected. Otherwise, it is called a weakly connected graph if every ordered pair of vertices in the graph is weakly connected. Otherwise it is called a disconnected graph .
A k-vertex-connected graph or k-edge-connected graph is a graph in which no set of k − 1 vertices (respectively, edges) exists that, when removed, disconnects the graph. A k -vertex-connected graph is often called simply a k-connected graph .
A bipartite graph is a simple graph in which the vertex set can be partitioned into two sets, W and X , so that no two vertices in W share a common edge and no two vertices in X share a common edge. Alternatively, it is a graph with a chromatic number of 2.
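The equivalence with 2-colorability gives a simple test: attempt to 2-color the graph by breadth-first search and report failure if two adjacent vertices receive the same color. A minimal sketch, assuming the graph is given as an adjacency-list dict:

```python
from collections import deque

def is_bipartite(adj):
    """True if the undirected graph (dict: vertex -> neighbours) is 2-colorable."""
    color = {}
    for start in adj:                      # handle disconnected graphs
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in color:
                    color[w] = 1 - color[v]
                    queue.append(w)
                elif color[w] == color[v]:
                    return False           # an edge inside one color class
    return True                            # the two color classes give the sets W and X

even_cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
odd_cycle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_bipartite(even_cycle), is_bipartite(odd_cycle))   # True False
```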
In a complete bipartite graph , the vertex set is the union of two disjoint sets, W and X , so that every vertex in W is adjacent to every vertex in X but there are no edges within W or X .
A path graph or linear graph of order n ≥ 2 is a graph in which the vertices can be listed in an order v 1 , v 2 , …, v n such that the edges are the { v i , v i +1 } where i = 1, 2, …, n − 1. Path graphs can be characterized as connected graphs in which the degree of all but two vertices is 2 and the degree of the two remaining vertices is 1. If a path graph occurs as a subgraph of another graph, it is a path in that graph.
A planar graph is a graph whose vertices and edges can be drawn in a plane such that no two of the edges intersect.
A cycle graph or circular graph of order n ≥ 3 is a graph in which the vertices can be listed in an order v 1 , v 2 , …, v n such that the edges are the { v i , v i +1 } where i = 1, 2, …, n − 1, plus the edge { v n , v 1 } . Cycle graphs can be characterized as connected graphs in which the degree of all vertices is 2. If a cycle graph occurs as a subgraph of another graph, it is a cycle or circuit in that graph.
A tree is an undirected graph in which any two vertices are connected by exactly one path , or equivalently a connected acyclic undirected graph.
A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees.
A polytree (or directed tree or oriented tree or singly connected network ) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree.
A polyforest (or directed forest or oriented forest ) is a directed acyclic graph whose underlying undirected graph is a forest.
More advanced kinds of graphs are:
Two edges of a graph are called adjacent if they share a common vertex. Two edges of a directed graph are called consecutive if the head of the first one is the tail of the second one. Similarly, two vertices are called adjacent if they share a common edge ( consecutive if the first one is the tail and the second one is the head of an edge), in which case the common edge is said to join the two vertices. An edge and a vertex on that edge are called incident .
The graph with only one vertex and no edges is called the trivial graph . A graph with only vertices and no edges is known as an edgeless graph . The graph with no vertices and no edges is sometimes called the null graph or empty graph , but the terminology is not consistent and not all mathematicians allow this object.
Normally, the vertices of a graph, by their nature as elements of a set, are distinguishable. This kind of graph may be called vertex-labeled . However, for many questions it is better to treat vertices as indistinguishable. (Of course, the vertices may be still distinguishable by the properties of the graph itself, e.g., by the numbers of incident edges.) The same remarks apply to edges, so graphs with labeled edges are called edge-labeled . Graphs with labels attached to edges or vertices are more generally designated as labeled . Consequently, graphs in which vertices are indistinguishable and edges are indistinguishable are called unlabeled . (In the literature, the term labeled may apply to other kinds of labeling, besides that which serves only to distinguish different vertices or edges.)
The category of directed multigraphs permitting loops is the comma category Set ↓ D where D : Set → Set is the functor taking a set s to s × s .
There are several operations that produce new graphs from initial ones, which might be classified into the following categories:
In a hypergraph , an edge can join any positive number of vertices.
An undirected graph can be seen as a simplicial complex consisting of 1- simplices (the edges) and 0-simplices (the vertices). As such, complexes are generalizations of graphs since they allow for higher-dimensional simplices.
Every graph gives rise to a matroid .
In model theory , a graph is just a structure . But in that case, there is no limitation on the number of edges: it can be any cardinal number , see continuous graph .
In computational biology , power graph analysis introduces power graphs as an alternative representation of undirected graphs.
In geographic information systems , geometric networks are closely modeled after graphs, and borrow many concepts from graph theory to perform spatial analysis on road networks or utility grids. | https://en.wikipedia.org/wiki/Graph_(discrete_mathematics) |
In mathematics , the graph Fourier transform is a mathematical transform which eigendecomposes the Laplacian matrix of a graph into eigenvalues and eigenvectors . Analogously to the classical Fourier transform , the eigenvalues represent frequencies and eigenvectors form what is known as a graph Fourier basis .
The graph Fourier transform is important in spectral graph theory . It is widely applied in the recent study of graph-structured learning algorithms , such as graph convolutional networks .
Given an undirected weighted graph G = ( V , E ) {\displaystyle G=(V,E)} , where V {\displaystyle V} is the set of nodes with | V | = N {\displaystyle |V|=N} ( N {\displaystyle N} being the number of nodes) and E {\displaystyle E} is the set of edges, a graph signal f : V → R {\displaystyle f:V\rightarrow \mathbb {R} } is a function defined on the vertices of the graph G {\displaystyle G} . The signal f {\displaystyle f} maps every vertex { v i } i = 1 , … , N {\displaystyle \{v_{i}\}_{i=1,\ldots ,N}} to a real number f ( i ) {\displaystyle f(i)} . Any graph signal can be projected on the eigenvectors of the Laplacian matrix L {\displaystyle L} . [ 1 ] Let λ l {\displaystyle \lambda _{l}} and μ l {\displaystyle \mu _{l}} be the l th {\displaystyle l_{\text{th}}} eigenvalue and eigenvector of the Laplacian matrix L {\displaystyle L} (the eigenvalues are sorted in an increasing order, i.e., 0 = λ 0 ≤ λ 1 ≤ ⋯ ≤ λ N − 1 {\displaystyle 0=\lambda _{0}\leq \lambda _{1}\leq \cdots \leq \lambda _{N-1}} [ 2 ] ), the graph Fourier transform (GFT) f ^ {\displaystyle {\hat {f}}} of a graph signal f {\displaystyle f} on the vertices of G {\displaystyle G} is the expansion of f {\displaystyle f} in terms of the eigenfunctions of L {\displaystyle L} . [ 3 ] It is defined as: [ 1 ] [ 4 ]
f ^ ( λ l ) = ∑ i = 1 N f ( i ) μ l ∗ ( i ) {\displaystyle {\hat {f}}(\lambda _{l})=\sum _{i=1}^{N}f(i)\mu _{l}^{*}(i)} , where μ l ∗ = μ l T {\displaystyle \mu _{l}^{*}=\mu _{l}^{\text{T}}} .
Since L {\displaystyle L} is a real symmetric matrix , its eigenvectors { μ l } l = 0 , ⋯ , N − 1 {\displaystyle \{\mu _{l}\}_{l=0,\cdots ,N-1}} form an orthogonal basis . Hence an inverse graph Fourier transform (IGFT) exists, and it is written as: [ 4 ] f ( i ) = ∑ l = 0 N − 1 f ^ ( λ l ) μ l ( i ) {\displaystyle f(i)=\sum _{l=0}^{N-1}{\hat {f}}(\lambda _{l})\mu _{l}(i)}
Analogously to the classical Fourier transform , graph Fourier transform provides a way to represent a signal in two different domains: the vertex domain and the graph spectral domain . Note that the definition of the graph Fourier transform and its inverse depend on the choice of Laplacian eigenvectors, which are not necessarily unique. [ 3 ] The eigenvectors of the normalized Laplacian matrix are also a possible base to define the forward and inverse graph Fourier transform.
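A minimal numerical sketch of the forward and inverse transforms, using the combinatorial Laplacian L = D − W of a small weighted graph; the eigenvector ordering follows the increasing-eigenvalue convention above, and numpy's symmetric eigensolver is assumed.

```python
import numpy as np

# Weighted adjacency matrix of a small undirected graph (4 nodes on a path).
W = np.array([[0, 1, 0, 0],
              [1, 0, 2, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))
L = D - W                                   # combinatorial graph Laplacian

eigvals, U = np.linalg.eigh(L)              # columns of U: eigenvectors, eigenvalues ascending

f = np.array([1.0, 0.0, -1.0, 2.0])         # a graph signal, one value per vertex

f_hat = U.T @ f                             # graph Fourier transform (real symmetric case)
f_rec = U @ f_hat                           # inverse graph Fourier transform

assert np.allclose(f, f_rec)                # orthogonality of U makes the transform invertible
print(eigvals)                              # the "graph frequencies"; the smallest is ~0
```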
The Parseval relation holds for the graph Fourier transform, [ 5 ] that is, for any f , h ∈ R N {\displaystyle f,h\in \mathbb {R} ^{N}}
This gives us Parseval's identity : [ 3 ]
The definition of convolution between two functions f {\displaystyle f} and g {\displaystyle g} cannot be directly applied to graph signals, because the signal translation is not defined in the context of graphs. [ 4 ] However, by replacing the complex exponential shift in classical Fourier transform with the graph Laplacian eigenvectors, convolution of two graph signals can be defined as: [ 3 ]
The generalized convolution operator satisfies the following properties: [ 3 ]
As previously stated, the classical translation operator T v {\displaystyle T_{v}} cannot be generalized to the graph setting. One way to define a generalized translation operator is through generalized convolution with a delta function centered at vertex n {\displaystyle n} : [ 2 ] ( T n f ) ( i ) = N ( f ∗ δ n ) ( i ) = N ∑ l = 0 N − 1 f ^ ( λ l ) u l ∗ ( n ) u l ( i ) , {\displaystyle \left(T_{n}f\right)(i)={\sqrt {N}}\left(f*\delta _{n}\right)(i){=}{\sqrt {N}}\sum _{l=0}^{N-1}{\hat {f}}\left(\lambda _{l}\right)u_{l}^{*}(n)u_{l}(i),}
where δ i ( n ) = { 1 , if i = n , 0 , otherwise. {\displaystyle \delta _{i}(n)={\begin{cases}1,&{\text{if }}i=n,\\0,&{\text{otherwise.}}\end{cases}}}
The normalization constant N {\displaystyle {\sqrt {N}}} ensures that the translation operator preserves the signal mean, [ 4 ] i.e.,
The generalized translation operator satisfies the following properties: [ 3 ]
For any f , g ∈ R N {\displaystyle f,g\in \mathbb {R} ^{N}} , and j , k ∈ { 1 , 2 , … , N } {\displaystyle j,k\in \{1,2,\dots ,N\}} ,
Representing signals in frequency domain is a common approach to data compression. As graph signals can be sparse in their graph spectral domain, the graph Fourier transform can also be used for image compression . [ 6 ] [ 7 ]
Similar to classical noise reduction of signals based on Fourier transform, graph filters based on the graph Fourier transform can be designed for graph signal denoising. [ 8 ]
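As a hedged illustration of such a filter, the sketch below applies an ideal low-pass graph filter: transform the noisy signal with the GFT, keep only the coefficients of the k smallest Laplacian eigenvalues, and invert. The cut-off choice is an assumption made for illustration, not a prescription from the cited works.

```python
import numpy as np

def low_pass_denoise(L, noisy_signal, keep=2):
    """Ideal low-pass graph filter: zero out all but the `keep` lowest graph frequencies."""
    eigvals, U = np.linalg.eigh(L)          # eigenvalues in ascending order
    coeffs = U.T @ noisy_signal             # graph Fourier transform
    coeffs[keep:] = 0.0                     # discard high "graph frequencies"
    return U @ coeffs                       # inverse graph Fourier transform

# Path graph on 4 vertices, smooth signal plus noise.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W
rng = np.random.default_rng(0)
smooth = np.array([1.0, 1.2, 1.4, 1.6])
noisy = smooth + 0.3 * rng.standard_normal(4)
print(low_pass_denoise(L, noisy, keep=2))
```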
As the graph Fourier transform enables the definition of convolution on graphs, it makes it possible to adapt conventional convolutional neural networks (CNN) to work on graphs. Graph-structured semi-supervised learning algorithms such as the graph convolutional network (GCN) are able to propagate the labels of a graph signal throughout the graph with a small subset of labeled nodes, theoretically operating as a first-order approximation of spectral graph convolutions without computing the graph Laplacian and its eigendecomposition. [ 9 ]
GSPBOX [ 10 ] [ 11 ] is a toolbox for signal processing of graphs, including the graph Fourier transform. It supports both Python and MATLAB languages. | https://en.wikipedia.org/wiki/Graph_Fourier_transform |
Graph Theory, 1736–1936 is a book in the history of mathematics on graph theory . It focuses on the foundational documents of the field, beginning with the 1736 paper of Leonhard Euler on the Seven Bridges of Königsberg and ending with the first textbook on the subject, published in 1936 by Dénes Kőnig . Graph Theory, 1736–1936 was edited by Norman L. Biggs , E. Keith Lloyd, and Robin J. Wilson , and published in 1976 by the Clarendon Press . [ 1 ] [ 2 ] [ 3 ] [ 4 ] The Oxford University Press published a paperback second edition in 1986, [ 5 ] with a corrected reprint in 1998. [ 6 ]
Graph Theory, 1736–1936 contains copies, extracts, and translations of 37 original sources in graph theory, grouped into ten chapters [ 1 ] and punctuated by commentary on their meaning and context. [ 2 ] It begins with Euler's 1736 paper "Solutio problematis ad geometriam situs pertinentis" on the seven bridges of Königsberg (both in the original Latin and in English translation) and ending with Dénes Kőnig's book Theorie der endlichen und unendlichen Graphen . [ 5 ] [ 6 ] The source material touches on recreational mathematics , chemical graph theory , the analysis of electrical circuits , and applications of graph theory in abstract algebra . [ 5 ] Also included are background material and portraits on the mathematicians who originally developed this material. [ 6 ]
The chapters of the book organize the material into topics within graph theory, rather than being strictly chronological. [ 2 ] The first chapter, on paths, includes maze-solving algorithms as well as Euler's work on Euler tours . Next, a chapter on circuits includes material on knight's tours in chess (a topic that long predates Euler), Hamiltonian cycles , and the work of Thomas Kirkman on polyhedral graphs . Next follow chapters on spanning trees and Cayley's formula , chemical graph theory and graph enumeration , and planar graphs , Kuratowski's theorem , and Euler's polyhedral formula . There are three chapters on the four color theorem and graph coloring , a chapter on algebraic graph theory , and a final chapter on graph factorization . Appendices provide a brief update on graph history since 1936, biographies of the authors of the works included in the book, and a comprehensive bibliography. [ 1 ] [ 2 ]
Reviewer Ján Plesník names the book the first ever published on the history of graph theory, [ 1 ] and although Hazel Perfect notes that parts of it can be difficult to read, [ 3 ] Plesník states that it can also be used as "a self-contained introduction" to the field, [ 1 ] and Edward Maziarz suggests its use as a textbook for graph theory courses. [ 2 ] Perfect calls the book "fascinating ... full of information", thoroughly researched and carefully written, [ 3 ] and Maziarz finds inspiring the ways in which it describes serious mathematics as arising from frivolous starting points. [ 2 ] Fernando Q. Gouvêa calls it a "must-have" for anyone interested in graph theory, [ 6 ] and Philip Peak also recommends it to anyone interested more generally in the history of mathematics. [ 4 ] | https://en.wikipedia.org/wiki/Graph_Theory,_1736–1936 |
In mathematics , especially in the fields of universal algebra and graph theory , a graph algebra is a way of giving a directed graph an algebraic structure . It was introduced by McNulty and Shallon, [ 1 ] and has seen many uses in the field of universal algebra since then.
Let D = ( V , E ) be a directed graph , and 0 an element not in V . The graph algebra associated with D has underlying set V ∪ { 0 } {\displaystyle V\cup \{0\}} , and is equipped with a multiplication defined by the rules
This notion has made it possible to use the methods of graph theory in universal algebra and several other areas of discrete mathematics and computer science . Graph algebras have been used, for example, in constructions concerning dualities , [ 2 ] equational theories , [ 3 ] flatness , [ 4 ] groupoid rings , [ 5 ] topologies , [ 6 ] varieties , [ 7 ] finite-state machines , [ 8 ] [ 9 ] tree languages and tree automata , [ 10 ] etc. | https://en.wikipedia.org/wiki/Graph_algebra |
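The multiplication rules are not reproduced in the excerpt above. In the usual McNulty–Shallon formulation, the product x·y equals x when (x, y) is an edge of D and equals the absorbing element 0 otherwise; the sketch below encodes that reading and should be checked against the cited definition.

```python
def graph_algebra_product(edges, x, y, zero="0"):
    """Multiplication of the graph algebra of a directed graph (standard formulation):
    x * y = x if (x, y) is an edge, and the absorbing element 0 otherwise
    (in particular whenever either factor is 0)."""
    if x == zero or y == zero:
        return zero
    return x if (x, y) in edges else zero

E = {("a", "b"), ("b", "b")}
print(graph_algebra_product(E, "a", "b"))   # 'a'  (since (a, b) is an edge)
print(graph_algebra_product(E, "b", "a"))   # '0'  (no edge (b, a))
```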
Graph algebra is systems-centric modeling tool for the social sciences . [ 1 ] It was first developed by Sprague, Pzeworski, and Cortes [ 2 ] as a hybridized version of engineering plots to describe social phenomena. | https://en.wikipedia.org/wiki/Graph_algebra_(social_sciences) |
In graph theory , a graph amalgamation is a relationship between two graphs (one graph is an amalgamation of another). Similar relationships include subgraphs and minors . Amalgamations can provide a way to reduce a graph to a simpler graph while keeping certain structure intact. The amalgamation can then be used to study properties of the original graph in an easier to understand context. Applications include embeddings, [ 1 ] computing genus distribution, [ 2 ] and Hamiltonian decompositions .
Let G {\displaystyle G} and H {\displaystyle H} be two graphs with the same number of edges where G {\displaystyle G} has more vertices than H {\displaystyle H} . Then we say that H {\displaystyle H} is an amalgamation of G {\displaystyle G} if there is a bijection ϕ : E ( G ) → E ( H ) {\displaystyle \phi :E(G)\to E(H)} and a surjection ψ : V ( G ) → V ( H ) {\displaystyle \psi :V(G)\to V(H)} and the following hold:
Note that while G {\displaystyle G} can be a graph or a pseudograph , it will usually be the case that H {\displaystyle H} is a pseudograph.
Edge colorings are invariant to amalgamation. This is obvious, as all of the edges between the two graphs are in bijection with each other. However, what may not be obvious is that if G {\displaystyle G} is a complete graph of the form K 2 n + 1 {\displaystyle K_{2n+1}} , and we color the edges so as to specify a Hamiltonian decomposition (a decomposition into Hamiltonian cycles), then those edges also form a Hamiltonian decomposition in H {\displaystyle H} .
Figure 1 illustrates an amalgamation of K 5 {\displaystyle K_{5}} . The invariance of edge coloring and Hamiltonian Decomposition can be seen clearly. The function ϕ {\displaystyle \phi } is a bijection and is given as letters in the figure. The function ψ {\displaystyle \psi } is given in the table below.
One of the ways in which amalgamations can be used is to find Hamiltonian decompositions of complete graphs with 2 n + 1 vertices. [ 4 ] The idea is to take a graph and produce an amalgamation of it which is edge colored in n {\displaystyle n} colors and satisfies certain properties (called an outline Hamiltonian decomposition). We can then 'reverse' the amalgamation and we are left with K 2 n + 1 {\displaystyle K_{2n+1}} colored in a Hamiltonian decomposition.
In [ 3 ] Hilton outlines a method for doing this, as well as a method for finding all Hamiltonian Decompositions without repetition. The methods rely on a theorem he provides which states (roughly) that if we have an outline Hamiltonian decomposition, we could have arrived at it by first starting with a Hamiltonian decomposition of the complete graph and then finding an amalgamation for it. | https://en.wikipedia.org/wiki/Graph_amalgamation |
In the mathematical field of graph theory , an automorphism of a graph is a form of symmetry in which the graph is mapped onto itself while preserving the edge– vertex connectivity.
Formally, an automorphism of a graph G = ( V , E ) is a permutation σ of the vertex set V , such that the pair of vertices ( u , v ) form an edge if and only if the pair ( σ ( u ), σ ( v )) also form an edge. That is, it is a graph isomorphism from G to itself. Automorphisms may be defined in this way both for directed graphs and for undirected graphs .
The composition of two automorphisms is another automorphism, and the set of automorphisms of a given graph, under the composition operation, forms a group , the automorphism group of the graph. In the opposite direction, by Frucht's theorem , all groups can be represented as the automorphism group of a connected graph – indeed, of a cubic graph . [ 1 ] [ 2 ]
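For very small graphs the automorphism group can be enumerated by brute force directly from the definition: test every vertex permutation for edge preservation. A minimal sketch (exponential in the number of vertices, so only illustrative; the practical tools named below use far better algorithms):

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """All automorphisms of a simple undirected graph, returned as vertex -> image dicts."""
    vertices = list(vertices)
    edge_set = {frozenset(e) for e in edges}
    autos = []
    for perm in permutations(vertices):
        sigma = dict(zip(vertices, perm))
        # A permutation of a finite simple graph is an automorphism iff it maps every edge to an edge.
        if all(frozenset((sigma[u], sigma[v])) in edge_set for u, v in edges):
            autos.append(sigma)
    return autos

# 4-cycle: its automorphism group is the dihedral group of order 8.
autos = automorphisms(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)])
print(len(autos))   # 8
```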
Constructing the automorphism group of a graph, in the form of a list of generators, is polynomial-time equivalent to the graph isomorphism problem , and therefore solvable in quasi-polynomial time , that is with running time 2 O ( ( log n ) c ) {\displaystyle 2^{O((\log n)^{c})}} for some fixed c > 0 {\displaystyle c>0} . [ 3 ] [ 4 ] Consequently, like the graph isomorphism problem, the problem of finding a graph's automorphism group is known to belong to the complexity class NP , but not known to be in P nor to be NP-complete , and therefore may be NP-intermediate .
The easier problem of testing whether a graph has any symmetries (nontrivial automorphisms), known as the graph automorphism problem , also has no known polynomial time solution. [ 5 ] There is a polynomial time algorithm for solving the graph automorphism problem for graphs where vertex degrees are bounded by a constant. [ 6 ] The graph automorphism problem is polynomial-time many-one reducible to the graph isomorphism problem, but the converse reduction is unknown. [ 3 ] [ 7 ] [ 8 ] By contrast, hardness is known when the automorphisms are constrained in a certain fashion; for instance, determining the existence of a fixed-point-free automorphism (an automorphism that fixes no vertex) is NP-complete, and the problem of counting such automorphisms is ♯P-complete . [ 5 ] [ 8 ]
While no worst-case polynomial-time algorithms are known for the general Graph Automorphism problem, finding the automorphism group (and printing out an irredundant set of generators) for many large graphs arising in applications is rather easy. Several open-source software tools are available for this task, including NAUTY , [ 9 ] BLISS [ 10 ] and SAUCY . [ 11 ] [ 12 ] SAUCY and BLISS are particularly efficient for sparse graphs, e.g., SAUCY processes some graphs with millions of vertices in mere seconds. However, BLISS and NAUTY can also produce Canonical Labeling , whereas SAUCY is currently optimized for solving Graph Automorphism. An important observation is that for a graph on n vertices, the automorphism group can be specified by no more than n − 1 {\displaystyle n-1} generators, and the above software packages are guaranteed to satisfy this bound as a side-effect of their algorithms (minimal sets of generators are harder to find and are not particularly useful in practice). It also appears that the total support (i.e., the number of vertices moved) of all generators is limited by a linear function of n , which is important in runtime analysis of these algorithms. However, this has not been established for a fact, as of March 2012.
Practical applications of Graph Automorphism include graph drawing and other visualization tasks, solving structured instances of Boolean Satisfiability arising in the context of Formal verification and Logistics . Molecular symmetry can predict or explain chemical properties.
Several graph drawing researchers have investigated algorithms for drawing graphs in such a way that the automorphisms of the graph become visible as symmetries of the drawing. This may be done either by using a method that is not designed around symmetries, but that automatically generates symmetric drawings when possible, [ 13 ] or by explicitly identifying symmetries and using them to guide vertex placement in the drawing. [ 14 ] It is not always possible to display all symmetries of the graph simultaneously, so it may be necessary to choose which symmetries to display and which to leave unvisualized.
Several families of graphs are defined by having certain types of automorphisms:
Inclusion relationships between these families are indicated by the following table: | https://en.wikipedia.org/wiki/Graph_automorphism |
In mathematics , particularly in game theory and mathematical economics , a function is graph continuous if its graph —the set of all input-output pairs—is a closed set in the product topology of the domain and codomain. In simpler terms, if a sequence of points on the graph converges, its limit point must also belong to the graph. This concept, related to the closed graph property in functional analysis , allows for a broader class of discontinuous payoff functions while enabling equilibrium analysis in economic models.
Graph continuity gained prominence through the work of Partha Dasgupta and Eric Maskin in their 1986 paper on the existence of equilibria in discontinuous economic games. [ 1 ] Unlike standard continuity , which requires small changes in inputs to produce small changes in outputs, graph continuity permits certain well-behaved discontinuities. This property is crucial for establishing equilibria in settings such as auction theory , oligopoly models, and location competition , where payoff discontinuities naturally arise.
Consider a game with N {\displaystyle N} agents with agent i {\displaystyle i} having strategy A i ⊆ R {\displaystyle A_{i}\subseteq \mathbb {R} } ; write a {\displaystyle \mathbf {a} } for an N-tuple of actions (i.e. a ∈ ∏ j = 1 N A j {\displaystyle \mathbf {a} \in \prod _{j=1}^{N}A_{j}} ) and a − i = ( a 1 , a 2 , … , a i − 1 , a i + 1 , … , a N ) {\displaystyle \mathbf {a} _{-i}=(a_{1},a_{2},\ldots ,a_{i-1},a_{i+1},\ldots ,a_{N})} as the vector of all agents' actions apart from agent i {\displaystyle i} .
Let U i : A ⟶ R {\displaystyle U_{i}:A\longrightarrow \mathbb {R} } , where A = ∏ j = 1 N A j {\displaystyle A=\prod _{j=1}^{N}A_{j}} , be the payoff function for agent i {\displaystyle i} .
A game is defined as [ ( A i , U i ) ; i = 1 , … , N ] {\displaystyle [(A_{i},U_{i});i=1,\ldots ,N]} .
Function U i : A ⟶ R {\displaystyle U_{i}:A\longrightarrow \mathbb {R} } is graph continuous if for all a ∈ A {\displaystyle \mathbf {a} \in A} there exists a function F i : A − i ⟶ A i {\displaystyle F_{i}:A_{-i}\longrightarrow A_{i}} such that U i ( F i ( a − i ) , a − i ) {\displaystyle U_{i}(F_{i}(\mathbf {a} _{-i}),\mathbf {a} _{-i})} is continuous at a − i {\displaystyle \mathbf {a} _{-i}} .
Dasgupta and Maskin named this property "graph continuity" because, if one plots a graph of a player's payoff as a function of his own strategy (keeping the other players' strategies fixed), then a graph-continuous payoff function will result in this graph changing continuously as one varies the strategies of the other players.
The property is interesting in view of the following theorem.
If, for 1 ≤ i ≤ N {\displaystyle 1\leq i\leq N} , A i ⊆ R m {\displaystyle A_{i}\subseteq \mathbb {R} ^{m}} is non-empty, convex , and compact ; and if U i : A ⟶ R {\displaystyle U_{i}:A\longrightarrow \mathbb {R} } is quasi-concave in a i {\displaystyle a_{i}} , upper semi-continuous in a {\displaystyle \mathbf {a} } , and graph continuous, then the game [ ( A i , U i ) ; i = 1 , … , N ] {\displaystyle [(A_{i},U_{i});i=1,\ldots ,N]} possesses a pure strategy Nash equilibrium . | https://en.wikipedia.org/wiki/Graph_continuous_function |
In mathematics , the energy of a graph is the sum of the absolute values of the eigenvalues of the adjacency matrix of the graph. This quantity is studied in the context of spectral graph theory .
More precisely, let G be a graph with n vertices . It is assumed that G is a simple graph , that is, it does not contain loops or parallel edges. Let A be the adjacency matrix of G and let λ i {\displaystyle \lambda _{i}} , i = 1 , … , n {\displaystyle i=1,\ldots ,n} , be the eigenvalues of A . Then the energy of the graph is defined as: E ( G ) = ∑ i = 1 n | λ i | {\displaystyle E(G)=\sum _{i=1}^{n}\left|\lambda _{i}\right|} .
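A direct numerical sketch of this definition: form the adjacency matrix, take its eigenvalues (real, since the matrix is symmetric for a simple undirected graph), and sum their absolute values.

```python
import numpy as np

def graph_energy(A):
    """Energy of a simple graph: sum of absolute values of adjacency eigenvalues."""
    eigenvalues = np.linalg.eigvalsh(A)      # A is real symmetric
    return float(np.abs(eigenvalues).sum())

# Complete graph K_3: eigenvalues are 2, -1, -1, so the energy is 4.
A_K3 = np.array([[0, 1, 1],
                 [1, 0, 1],
                 [1, 1, 0]])
print(graph_energy(A_K3))    # ~4.0
```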
This graph theory -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Graph_energy |
In information theory , the graph entropy is a measure of the information rate achievable by communicating symbols over a channel in which certain pairs of values may be confused. [ 1 ] This measure, first introduced by Körner in the 1970s, [ 2 ] [ 3 ] has since also proven itself useful in other settings, including combinatorics. [ 4 ]
Let G = ( V , E ) {\displaystyle G=(V,E)} be an undirected graph . The graph entropy of G {\displaystyle G} , denoted H ( G ) {\displaystyle H(G)} , is defined as H ( G ) = min X , Y I ( X ; Y ) {\displaystyle H(G)=\min _{X,Y}I(X;Y)}
where X {\displaystyle X} is chosen uniformly from V {\displaystyle V} , Y {\displaystyle Y} ranges over independent sets of G, the joint distribution of X {\displaystyle X} and Y {\displaystyle Y} is such that X ∈ Y {\displaystyle X\in Y} with probability one, and I ( X ; Y ) {\displaystyle I(X;Y)} is the mutual information of X {\displaystyle X} and Y {\displaystyle Y} . [ 5 ]
That is, if we let I {\displaystyle {\mathcal {I}}} denote the independent vertex sets in G {\displaystyle G} , we wish to find the joint distribution X , Y {\displaystyle X,Y} on V × I {\displaystyle V\times {\mathcal {I}}} with the lowest mutual information such that (i) the marginal distribution of the first term is uniform and (ii) in samples from the distribution, the second term contains the first term almost surely. The mutual information of X {\displaystyle X} and Y {\displaystyle Y} is then called the entropy of G {\displaystyle G} .
Additionally, simple formulas exist for certain families of graphs.
Here, we use properties of graph entropy to provide a simple proof that a complete graph G {\displaystyle G} on n {\displaystyle n} vertices cannot be expressed as the union of fewer than log 2 n {\displaystyle \log _{2}n} bipartite graphs.
Proof By monotonicity, no bipartite graph can have graph entropy greater than that of a complete bipartite graph, which is bounded by 1 {\displaystyle 1} . Thus, by sub-additivity, the union of k {\displaystyle k} bipartite graphs cannot have entropy greater than k {\displaystyle k} . Now let G = ( V , E ) {\displaystyle G=(V,E)} be a complete graph on n {\displaystyle n} vertices. By the properties listed above, H ( G ) = log 2 n {\displaystyle H(G)=\log _{2}n} . Therefore, the union of fewer than log 2 n {\displaystyle \log _{2}n} bipartite graphs cannot have the same entropy as G {\displaystyle G} , so G {\displaystyle G} cannot be expressed as such a union. ◼ {\displaystyle \blacksquare } | https://en.wikipedia.org/wiki/Graph_entropy |
In graph theory , graph equations are equations in which the unknowns are graphs . One of the central questions of graph theory concerns the notion of isomorphism . We ask: When are two graphs the same? (i.e., graph isomorphism ) The graphs in question may be expressed differently in terms of graph equations. [ 1 ]
What are the graphs ( solutions ) G and H such that the line graph of G is the same as the total graph of H ? (What are G and H such that L ( G ) = T ( H ) ?).
For example, G = K 3 , and H = K 2 are the solutions of the graph equation L ( K 3 ) = T ( K 2 ) and G = K 4 , and H = K 3 are the solutions of the graph equation L ( K 4 ) = T ( K 3 ).
Note that T ( K 3 ) is a 4- regular graph on 6 vertices. | https://en.wikipedia.org/wiki/Graph_equation |
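The second example can be checked computationally: build the line graph of K 4 with networkx, build the total graph of K 3 from its definition (vertices are the vertices and edges of K 3 , with adjacency given by adjacency or incidence in K 3 ), and test isomorphism. A minimal sketch, with the total-graph construction written out from its standard definition as an assumption:

```python
import networkx as nx

def total_graph(G):
    """Total graph T(G): nodes are V(G) and E(G); two nodes are adjacent iff the
    corresponding elements of G are adjacent or incident."""
    T = nx.Graph()
    T.add_nodes_from(G.nodes())
    T.add_nodes_from(frozenset(e) for e in G.edges())
    T.add_edges_from(G.edges())                                  # vertex-vertex adjacency
    for e in G.edges():
        for f in G.edges():
            if e != f and set(e) & set(f):                       # edges sharing an endpoint
                T.add_edge(frozenset(e), frozenset(f))
    for e in G.edges():
        for v in e:                                              # vertex incident to edge
            T.add_edge(v, frozenset(e))
    return T

L_K4 = nx.line_graph(nx.complete_graph(4))
T_K3 = total_graph(nx.complete_graph(3))
print(nx.is_isomorphic(L_K4, T_K3))   # True: both are the 4-regular graph on 6 vertices (the octahedron)
```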
In graph theory , a factor of a graph G is a spanning subgraph , i.e., a subgraph that has the same vertex set as G . A k -factor of a graph is a spanning k - regular subgraph, and a k -factorization partitions the edges of the graph into disjoint k -factors. A graph G is said to be k -factorable if it admits a k -factorization. In particular, a 1-factor is a perfect matching , and a 1-factorization of a k -regular graph is a proper edge coloring with k colors. A 2-factor is a collection of disjoint cycles that spans all vertices of the graph.
If a graph is 1-factorable then it has to be a regular graph . However, not all regular graphs are 1-factorable. A k -regular graph is 1-factorable if it has chromatic index k ; examples of such graphs include:
However, there are also k -regular graphs that have chromatic index k + 1, and these graphs are not 1-factorable; examples of such graphs include:
A 1-factorization of a complete graph corresponds to pairings in a round-robin tournament . The 1-factorization of complete graphs is a special case of Baranyai's theorem concerning the 1-factorization of complete hypergraphs .
One method for constructing a 1-factorization of a complete graph on an even number of vertices involves placing all but one of the vertices in a regular polygon , with the remaining vertex at the center. With this arrangement of vertices, one way of constructing a 1-factor of the graph is to choose an edge e from the center to a single polygon vertex together with all possible edges that lie on lines perpendicular to e . The 1-factors that can be constructed in this way form a 1-factorization of the graph.
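The polygon-plus-center construction just described is the classic "circle method" for round-robin scheduling. A compact sketch that produces the 2n − 1 one-factors of K 2 n , with the highest-numbered vertex playing the role of the center:

```python
def one_factorization(n2):
    """1-factorization of the complete graph on n2 vertices (n2 even), by the circle method.
    Vertex n2-1 is the 'center'; the other vertices sit on a regular polygon."""
    assert n2 % 2 == 0
    m = n2 - 1                       # number of polygon vertices = number of 1-factors
    factors = []
    for r in range(m):
        factor = [(r, m)]            # edge from the rotated polygon vertex to the center
        for k in range(1, m // 2 + 1):
            factor.append(((r + k) % m, (r - k) % m))   # chords perpendicular to that edge
        factors.append(factor)
    return factors

for f in one_factorization(6):
    print(sorted(tuple(sorted(e)) for e in f))
# Each factor is a perfect matching, and every edge of K_6 appears in exactly one factor.
```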
The number of distinct 1-factorizations of K 2 , K 4 , K 6 , K 8 , ... is 1, 1, 6, 6240, 1225566720, 252282619805368320, 98758655816833727741338583040, ... ( OEIS : A000438 ).
Let G be a k -regular graph with 2 n nodes. If k is sufficiently large, it is known that G has to be 1-factorable:
The 1-factorization conjecture [ 3 ] is a long-standing conjecture that states that k ≈ n is sufficient. In precise terms, the conjecture is:
The overfull conjecture implies the 1-factorization conjecture.
A perfect pair from a 1-factorization is a pair of 1-factors whose union induces a Hamiltonian cycle .
A perfect 1-factorization (P1F) of a graph is a 1-factorization having the property that every pair of 1-factors is a perfect pair. A perfect 1-factorization should not be confused with a perfect matching (also called a 1-factor).
In 1964, Anton Kotzig conjectured that every complete graph K 2 n where n ≥ 2 has a perfect 1-factorization. So far, it is known that the following graphs have a perfect 1-factorization: [ 4 ]
If the complete graph K n +1 has a perfect 1-factorization, then the complete bipartite graph K n , n also has a perfect 1-factorization. [ 5 ]
If a graph is 2-factorable, then it has to be 2 k -regular for some integer k . Julius Petersen showed in 1891 that this necessary condition is also sufficient: any 2 k -regular graph is 2-factorable . [ 6 ]
If a connected graph is 2 k -regular and has an even number of edges, it may also be k -factored, by choosing each of the two factors to be an alternating subset of the edges of an Euler tour . [ 7 ] This applies only to connected graphs; disconnected counterexamples include disjoint unions of odd cycles, or of copies of K 2 k +1 .
The Oberwolfach problem concerns the existence of 2-factorizations of complete graphs into isomorphic subgraphs. It asks for which subgraphs this is possible. This is known when the subgraph is connected (in which case it is a Hamiltonian cycle and this special case is the problem of Hamiltonian decomposition ) but the general case remains open . | https://en.wikipedia.org/wiki/Graph_factorization |
Flattenability in some d {\displaystyle d} -dimensional normed vector space is a property of graphs which states that any embedding , or drawing , of the graph in some high dimension d ′ {\displaystyle d'} can be "flattened" down to live in d {\displaystyle d} -dimensions, such that the distances between pairs of points connected by edges are preserved. A graph G {\displaystyle G} is d {\displaystyle d} -flattenable if every distance constraint system (DCS) with G {\displaystyle G} as its constraint graph has a d {\displaystyle d} -dimensional framework . Flattenability was first called realizability, [ 1 ] but the name was changed to avoid confusion with a graph having some DCS with a d {\displaystyle d} -dimensional framework. [ 2 ]
Flattenability has connections to structural rigidity , tensegrities , Cayley configuration spaces , and a variant of the graph realization problem .
A distance constraint system ( G , δ ) {\displaystyle (G,\delta )} , where G = ( V , E ) {\displaystyle G=(V,E)} is a graph and δ : E → R | E | {\displaystyle \delta :E\rightarrow \mathbb {R} ^{|E|}} is an assignment of distances onto the edges of G {\displaystyle G} , is d {\displaystyle d} -flattenable in some normed vector space R d {\displaystyle \mathbb {R} ^{d}} if there exists a framework of ( G , δ ) {\displaystyle (G,\delta )} in d {\displaystyle d} -dimensions.
A graph G = ( V , E ) {\displaystyle G=(V,E)} is d {\displaystyle d} -flattenable in R d {\displaystyle \mathbb {R} ^{d}} if every distance constraint system with G {\displaystyle G} as its constraint graph is d {\displaystyle d} -flattenable.
Flattenability can also be defined in terms of Cayley configuration spaces; see connection to Cayley configuration spaces below.
Closure under subgraphs. Flattenability is closed under taking subgraphs. [ 1 ] To see this, observe that for some graph G {\displaystyle G} , all possible embeddings of a subgraph H {\displaystyle H} of G {\displaystyle G} are contained in the set of all embeddings of G {\displaystyle G} .
Minor-closed. Flattenability is a minor-closed property by a similar argument as above. [ 1 ]
Flattening dimension. The flattening dimension of a flattenable graph G {\displaystyle G} in some normed vector space is the lowest dimension d {\displaystyle d} such that G {\displaystyle G} is d {\displaystyle d} -flattenable. The flattening dimension of a graph is closely related to its gram dimension. [ 3 ] The following is an upper-bound on the flattening dimension of an arbitrary graph under the l 2 {\displaystyle l_{2}} -norm.
Theorem. [ 4 ] The flattening dimension of a graph G = ( V , E ) {\displaystyle G=\left(V,E\right)} under the l 2 {\displaystyle l_{2}} -norm is at most O ( | E | ) {\displaystyle O\left({\sqrt {\left|E\right|}}\right)} .
For a detailed treatment of this topic, see Chapter 11.2 of Deza & Laurent. [ 5 ]
This section concerns flattenability results in Euclidean space , where distance is measured using the l 2 {\displaystyle l_{2}} norm, also called the Euclidean norm .
The following theorem is folklore and shows that the only forbidden minor for 1-flattenability is the complete graph K 3 {\displaystyle K_{3}} .
Theorem. A graph is 1-flattenable if and only if it is a forest .
Proof. A proof can be found in Belk & Connelly. [ 1 ] For one direction, a forest is a collection of trees, and any distance constraint system whose graph is a tree can be realized in 1-dimension. For the other direction, if a graph G {\displaystyle G} is not a forest, then it has the complete graph K 3 {\displaystyle K_{3}} as a minor. Consider the DCS that assigns the distance 1 to the edges of the K 3 {\displaystyle K_{3}} minor and the distance 0 to all other edges. This DCS has a realization in 2-dimensions as the 1-skeleton of a triangle, but it has no realization in 1-dimension.
This proof allowed for distances on edges to be 0, but the argument holds even when this is not allowed. See Belk & Connelly [ 1 ] for a detailed explanation.
The following theorem is folklore and shows that the only forbidden minor for 2-flattenability is the complete graph K 4 {\displaystyle K_{4}} .
Theorem. A graph is 2-flattenable if and only if it is a partial 2-tree .
Proof. A proof can be found in Belk & Connelly. [ 1 ] For one direction, since flattenability is closed under taking subgraphs, it is sufficient to show that 2-trees are 2-flattenable. A 2-tree with n {\displaystyle n} vertices can be constructed recursively by taking a 2-tree with n − 1 {\displaystyle n-1} vertices and connecting a new vertex to the vertices of an existing edge. The base case is the K 3 {\displaystyle K_{3}} . Proceed by induction on the number of vertices n {\displaystyle n} . When n = 3 {\displaystyle n=3} , consider any distance assignment δ {\displaystyle \delta } on the edges K 3 {\displaystyle K_{3}} . Note that if δ {\displaystyle \delta } does not obey the triangle inequality , then this DCS does not have a realization in any dimension. Without loss of generality, place the first vertex v 1 {\displaystyle v_{1}} at the origin and the second vertex v 2 {\displaystyle v_{2}} along the x-axis such that δ 12 {\displaystyle \delta _{12}} is satisfied. The third vertex v 3 {\displaystyle v_{3}} can be placed at an intersection of the circles with centers v 1 {\displaystyle v_{1}} and v 2 {\displaystyle v_{2}} and radii δ 13 {\displaystyle \delta _{13}} and δ 23 {\displaystyle \delta _{23}} respectively. This method of placement is called a ruler and compass construction . Hence, K 3 {\displaystyle K_{3}} is 2-flattenable.
Now, assume a 2-tree with k {\displaystyle k} vertices is 2-flattenable. By definition, a 2-tree with k + 1 {\displaystyle k+1} vertices is a 2-tree with k {\displaystyle k} vertices, say T {\displaystyle T} , and an additional vertex u {\displaystyle u} connected to the vertices of an existing edge in T {\displaystyle T} . By the inductive hypothesis, T {\displaystyle T} is 2-flattenable. Finally, by a similar ruler and compass construction argument as in the base case, u {\displaystyle u} can be placed such that it lies in the plane. Thus, 2-trees are 2-flattenable by induction.
If a graph G {\displaystyle G} is not a partial 2-tree, then it contains K 4 {\displaystyle K_{4}} as a minor. Assigning the distance of 1 to the edges of the K 4 {\displaystyle K_{4}} minor and the distance of 0 to all other edges yields a DCS with a realization in 3-dimensions as the 1-skeleton of a tetrahedron. However, this DCS has no realization in 2-dimensions: when attempting to place the fourth vertex using a ruler and compass construction, the three circles defined by the fourth vertex do not all intersect.
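The circle-intersection step used repeatedly in the ruler-and-compass argument can be sketched numerically: place one vertex at the origin, one on the x-axis, and put each new vertex at an intersection of two circles around already-placed vertices. A minimal sketch, assuming the given distances satisfy the triangle inequality (the non-degenerate case):

```python
import math

def place_third_point(p1, p2, d13, d23):
    """Return one intersection of the circle of radius d13 about p1 with the
    circle of radius d23 about p2 (assumes the circles intersect)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d12 = math.hypot(dx, dy)
    a = (d13**2 - d23**2 + d12**2) / (2 * d12)      # distance from p1 to the foot of the chord
    h = math.sqrt(max(d13**2 - a**2, 0.0))          # half-chord height above the line p1-p2
    fx, fy = p1[0] + a * dx / d12, p1[1] + a * dy / d12
    return (fx - h * dy / d12, fy + h * dx / d12)

# Realize a triangle with side lengths 3, 4, 5 in the plane.
v1 = (0.0, 0.0)
v2 = (3.0, 0.0)                                     # placed along the x-axis at distance 3
v3 = place_third_point(v1, v2, 4.0, 5.0)
print(v3, math.dist(v1, v3), math.dist(v2, v3))     # (0.0, 4.0) with distances 4.0 and 5.0
```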
Example. Consider the graph in figure 2. Adding the edge A C ¯ {\displaystyle {\bar {AC}}} turns it into a 2-tree; hence, it is a partial 2-tree. Thus, it is 2-flattenable.
Example. The wheel graph W 5 {\displaystyle W_{5}} contains K 4 {\displaystyle K_{4}} as a minor. Thus, it is not 2-flattenable.
The class of 3-flattenable graphs strictly contains the class of partial 3-trees. [ 1 ] More precisely, the forbidden minors for partial 3-trees are the complete graph K 5 {\displaystyle K_{5}} , the 1-skeleton of the octahedron K 2 , 2 , 2 {\displaystyle K_{2,2,2}} , V 8 {\displaystyle V_{8}} , and C 5 × C 2 {\displaystyle C_{5}\times C_{2}} , but V 8 {\displaystyle V_{8}} , and C 5 × C 2 {\displaystyle C_{5}\times C_{2}} are 3-flattenable. [ 6 ] These graphs are shown in Figure 3. Furthermore, the following theorem from Belk & Connelly [ 1 ] shows that the only forbidden minors for 3-flattenability are K 5 {\displaystyle K_{5}} and K 2 , 2 , 2 {\displaystyle K_{2,2,2}} .
Theorem. [ 1 ] A graph is 3-flattenable if and only if it does not have K 5 {\displaystyle K_{5}} or K 2 , 2 , 2 {\displaystyle K_{2,2,2}} as a minor.
Proof Idea: The proof given in Belk & Connelly [ 1 ] assumes that V 8 {\displaystyle V_{8}} , and C 5 × C 2 {\displaystyle C_{5}\times C_{2}} are 3-realizable. This is proven in the same article using mathematical tools from rigidity theory, specifically those concerning tensegrities. The complete graph K 5 {\displaystyle K_{5}} is not 3-flattenable, and the same argument that shows K 4 {\displaystyle K_{4}} is not 2-flattenable and K 3 {\displaystyle K_{3}} is not 1-flattenable works here: assigning the distance 1 to the edges of K 5 {\displaystyle K_{5}} yields a DCS with no realization in 3-dimensions. Figure 4 gives a visual proof that the graph K 2 , 2 , 2 {\displaystyle K_{2,2,2}} is not 3-flattenable. Vertices 1, 2, and 3 form a degenerate triangle. For the edges between vertices 1- 5, edges ( 1 , 4 ) {\displaystyle (1,4)} and ( 3 , 4 ) {\displaystyle (3,4)} are assigned the distance 2 {\displaystyle {\sqrt {2}}} and all other edges are assigned the distance 1. Vertices 1- 5 have unique placements in 3-dimensions, up to congruence. Vertex 6 has 2 possible placements in 3-dimensions: 1 on each side of the plane Π {\displaystyle \Pi } defined by vertices 1, 2, and 4. Hence, the edge ( 5 , 6 ) {\displaystyle (5,6)} has two distance values that can be realized in 3-dimensions. However, vertex 6 can revolve around the plane Π {\displaystyle \Pi } in 4-dimensions while satisfying all constraints, so the edge ( 5 , 6 ) {\displaystyle (5,6)} has infinitely many distance values that can only be realized in 4-dimensions or higher. Thus, K 2 , 2 , 2 {\displaystyle K_{2,2,2}} is not 3-flattenable. The fact that these graphs are not 3-flattenable proves that any graph with either K 5 {\displaystyle K_{5}} or K 2 , 2 , 2 {\displaystyle K_{2,2,2}} as a minor is not 3-flattenable.
The other direction shows that if a graph G {\displaystyle G} does not have K 5 {\displaystyle K_{5}} or K 2 , 2 , 2 {\displaystyle K_{2,2,2}} as a minor, then G {\displaystyle G} can be constructed from partial 3-trees, V 8 {\displaystyle V_{8}} , and C 5 × C 2 {\displaystyle C_{5}\times C_{2}} via 1-sums , 2-sums, and 3-sums. These graphs are all 3-flattenable and these operations preserve 3-flattenability, so G {\displaystyle G} is 3-flattenable.
The techniques in this proof yield the following result from Belk & Connelly. [ 1 ]
Theorem. [ 1 ] Every 3-realizable graph is a subgraph of a graph that can be obtained by a sequence of 1-sums, 2-sums, and 3-sums of the graphs K 4 {\displaystyle K_{4}} , V 8 {\displaystyle V_{8}} , and C 5 × C 2 {\displaystyle C_{5}\times C_{2}} .
Example. The previous theorem can be applied to show that the 1-skeleton of a cube is 3-flattenable. Start with the graph K 4 {\displaystyle K_{4}} , which is the 1-skeleton of a tetrahedron. On each face of the tetrahedron, perform a 3-sum with another K 4 {\displaystyle K_{4}} graph, i.e. glue two tetrahedra together on their faces. The resulting graph contains the cube as a subgraph and is 3-flattenable.
Giving a forbidden minor characterization of d {\displaystyle d} -flattenable graphs, for dimension d > 3 {\displaystyle d>3} , is an open problem. For any dimension d {\displaystyle d} , K d + 2 {\displaystyle K_{d+2}} and the 1-skeleton of the d {\displaystyle d} -dimensional analogue of an octahedron are forbidden minors for d {\displaystyle d} -flattenability. [ 1 ] It is conjectured that the number of forbidden minors for d {\displaystyle d} -flattenability grows asymptotically like the number of forbidden minors for partial d {\displaystyle d} -trees; for comparison, there are over 75 {\displaystyle 75} forbidden minors for partial 4-trees. [ 1 ]
An alternative characterization of d {\displaystyle d} -flattenable graphs relates flattenability to Cayley configuration spaces. [ 2 ] [ 7 ] See the section on the connection to Cayley configuration spaces .
Given a distance constraint system and a dimension d {\displaystyle d} , the graph realization problem asks for a d {\displaystyle d} -dimensional framework of the DCS, if one exists. There are algorithms to realize d {\displaystyle d} -flattenable graphs in d {\displaystyle d} -dimensions, for d ≤ 3 {\displaystyle d\leq 3} , that run in polynomial time in the size of the graph. For d = 1 {\displaystyle d=1} , realizing each tree in a forest in 1-dimension is trivial to accomplish in polynomial time. An algorithm for d = 2 {\displaystyle d=2} is mentioned in Belk & Connelly. [ 1 ] For d = 3 {\displaystyle d=3} , the algorithm in So & Ye [ 8 ] obtains a framework r {\displaystyle r} of a DCS using semidefinite programming techniques and then utilizes the "folding" method described in Belk [ 6 ] to transform r {\displaystyle r} into a 3-dimensional framework.
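For the trivial case d = 1 mentioned above, a realization can be written down directly. The sketch below is illustrative only (it is not the algorithm of So & Ye or of Belk & Connelly); it realizes a forest on the line by rooting each tree at coordinate 0 and placing every other vertex at its parent's coordinate plus the required edge length:

```python
def realize_forest_1d(n, edges):
    """edges: dict mapping frozenset({u, v}) -> required distance."""
    adj = {v: [] for v in range(n)}
    for e, length in edges.items():
        u, v = tuple(e)
        adj[u].append((v, length))
        adj[v].append((u, length))
    coord = {}
    for root in range(n):
        if root in coord:
            continue
        coord[root] = 0.0
        stack = [root]
        while stack:
            u = stack.pop()
            for v, length in adj[u]:
                if v not in coord:
                    coord[v] = coord[u] + length   # "+" always satisfies the constraint in a tree
                    stack.append(v)
    return coord

# A path on 3 vertices plus an isolated vertex.
print(realize_forest_1d(4, {frozenset({0, 1}): 2.0, frozenset({1, 2}): 0.5}))
```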
This section concerns flattenability results for graphs under general p {\displaystyle p} -norms , for 1 ≤ p ≤ ∞ {\displaystyle 1\leq p\leq \infty } .
Determining the flattenability of a graph under a general p {\displaystyle p} -norm can be accomplished using methods in algebraic geometry , as suggested in Belk & Connelly. [ 1 ] The question of whether a graph G = ( V , E ) {\displaystyle G=(V,E)} is d {\displaystyle d} -flattenable is equivalent to determining if two semi-algebraic sets are equal. One algorithm to compare two semi-algebraic sets takes ( 4 | E | ) O ( n d | V | 2 ) {\displaystyle (4|E|)^{O\left(nd|V|^{2}\right)}} time. [ 9 ]
For general l p {\displaystyle l_{p}} -norms, there is a close relationship between flattenability and Cayley configuration spaces . [ 2 ] [ 7 ] The following theorem and its corollary are found in Sitharam & Willoughby. [ 2 ]
Theorem. [ 2 ] A graph G {\displaystyle G} is d {\displaystyle d} -flattenable if and only if for every subgraph H = G ∖ F {\displaystyle H=G\setminus F} of G {\displaystyle G} resulting from removing a set of edges F {\displaystyle F} from G {\displaystyle G} and any l p p {\displaystyle l_{p}^{p}} -distance vector δ H {\displaystyle \delta _{H}} such that the DCS ( H , δ H ) {\displaystyle (H,\delta _{H})} has a realization, the d {\displaystyle d} -dimensional Cayley configuration space of ( H , δ H ) {\displaystyle (H,\delta _{H})} over F {\displaystyle F} is convex.
Corollary. A graph G {\displaystyle G} is not d {\displaystyle d} -flattenable if there exists some subgraph H = G ∖ F {\displaystyle H=G\setminus F} of G {\displaystyle G} and some l p p {\displaystyle l_{p}^{p}} -distance vector δ H {\displaystyle \delta _{H}} such that the d {\displaystyle d} -dimensional Cayley configuration space of ( H , δ H ) {\displaystyle (H,\delta _{H})} over F {\displaystyle F} is not convex.
The l 1 {\displaystyle l_{1}} and l ∞ {\displaystyle l_{\infty }} norms are equivalent up to rotating axes in 2-dimensions, [ 5 ] so 2-flattenability results for either norm hold for both. This section uses the l 1 {\displaystyle l_{1}} -norm. The complete graph K 4 {\displaystyle K_{4}} is 2-flattenable under the l 1 {\displaystyle l_{1}} -norm and K 5 {\displaystyle K_{5}} is 3-flattenable, but not 2-flattenable. [ 10 ] These facts contribute to the following results on 2-flattenability under the l 1 {\displaystyle l_{1}} -norm found in Sitharam & Willoughby. [ 2 ]
Observation. [ 2 ] The set of 2-flattenable graphs under the l 1 {\displaystyle l_{1}} -norm (and l ∞ {\displaystyle l_{\infty }} -norm) strictly contains the set of 2-flattenable graphs under the l 2 {\displaystyle l_{2}} -norm.
Theorem. [ 2 ] A 2-sum of 2-flattenable graphs is 2-flattenable if and only if at most one graph has a K 4 {\displaystyle K_{4}} minor.
The fact that K 4 {\displaystyle K_{4}} is 2-flattenable but K 5 {\displaystyle K_{5}} is not has implications for the forbidden minor characterization of 2-flattenable graphs under the l 1 {\displaystyle l_{1}} -norm. Specifically, the minors of K 5 {\displaystyle K_{5}} could be forbidden minors for 2-flattenability. The following results explore these possibilities and give the complete set of forbidden minors.
Theorem. [ 2 ] The banana graph, or K 5 {\displaystyle K_{5}} with one edge removed, is not 2-flattenable.
Observation. [ 2 ] The graph obtained by removing two edges that are incident to the same vertex from K 5 {\displaystyle K_{5}} is 2-flattenable.
Observation. [ 2 ] Connected graphs on 5 vertices with 7 edges are 2-flattenable.
The only minor of K 5 {\displaystyle K_{5}} left is the wheel graph W 5 {\displaystyle W_{5}} , and the following result shows that this is one of the forbidden minors.
Theorem. [ 11 ] A graph is 2-flattenable under the l 1 {\displaystyle l_{1}} - or l ∞ {\displaystyle l_{\infty }} -norm if and only if it does not have either of the following graphs as minors:
This section relates flattenability to concepts in structural (combinatorial) rigidity theory , such as the rigidity matroid . The following results concern the l p p {\displaystyle l_{p}^{p}} -distance cone Φ n , l p {\displaystyle \Phi _{n,l_{p}}} , i.e., the set of all l p p {\displaystyle l_{p}^{p}} -distance vectors that can be realized as a configuration of n {\displaystyle n} points in some dimension. A proof that this set is a cone can be found in Ball. [ 12 ] The d {\displaystyle d} -stratum of this cone Φ n , l p d {\displaystyle \Phi _{n,l_{p}}^{d}} is the set of vectors that can be realized as a configuration of n {\displaystyle n} points in d {\displaystyle d} -dimensions. The projection of Φ n , l p {\displaystyle \Phi _{n,l_{p}}} or Φ n , l p d {\displaystyle \Phi _{n,l_{p}}^{d}} onto the edges of a graph G {\displaystyle G} is the set of l p p {\displaystyle l_{p}^{p}} distance vectors that can be realized as the edge-lengths of some embedding of G {\displaystyle G} .
A generic property of a graph G {\displaystyle G} is one that almost all frameworks of distance constraint systems, whose graph is G {\displaystyle G} , have. A framework of a DCS ( G , δ ) {\displaystyle (G,\delta )} under an l p {\displaystyle l_{p}} -norm is a generic framework (with respect to d {\displaystyle d} -flattenability) if the following two conditions hold:
Condition (1) ensures that the neighborhood Ω {\displaystyle \Omega } has full rank. In other words, Ω {\displaystyle \Omega } has dimension equal to the flattening dimension of the complete graph K n {\displaystyle K_{n}} under the l p {\displaystyle l_{p}} -norm. See Kitson [ 13 ] for a more detailed discussion of generic framework for l p {\displaystyle l_{p}} -norms. The following results are found in Sitharam & Willoughby. [ 2 ]
Theorem. [ 2 ] A graph G {\displaystyle G} is d {\displaystyle d} -flattenable if and only if every generic framework of G {\displaystyle G} is d {\displaystyle d} -flattenable.
Theorem. [ 2 ] d {\displaystyle d} -flattenability is not a generic property of graphs, even for the l 2 {\displaystyle l_{2}} -norm.
Theorem. [ 2 ] A generic d {\displaystyle d} -flattenable framework of a graph G {\displaystyle G} exists if and only if G {\displaystyle G} is independent in the generic d {\displaystyle d} -dimensional rigidity matroid.
Corollary. [ 2 ] A graph G {\displaystyle G} is d {\displaystyle d} -flattenable only if G {\displaystyle G} is independent in the d {\displaystyle d} -dimensional rigidity matroid.
Theorem. [ 2 ] For general l p {\displaystyle l_{p}} -norms, a graph G {\displaystyle G} is | https://en.wikipedia.org/wiki/Graph_flattenability |
In algebraic topology and graph theory , graph homology describes the homology groups of a graph , where the graph is considered as a topological space . It formalizes the idea of the number of "holes" in the graph. It is a special case of simplicial homology , as a graph is a special case of a simplicial complex. Since a finite graph is a 1-complex (i.e., its 'faces' are the vertices – which are 0-dimensional, and the edges – which are 1-dimensional), the only non-trivial homology groups are the 0th group and the 1st group. [ 1 ]
The general formula for the 1st homology group of a topological space X is: H 1 ( X ) := ker ∂ 1 / im ∂ 2 {\displaystyle H_{1}(X):=\ker \partial _{1}{\big /}\operatorname {im} \partial _{2}} The example below explains these symbols and concepts in full detail on a graph.
Let X be a directed graph with 3 vertices {x, y, z} and 4 edges {a: x → y, b: y → z, c: z → x, d: z → x}. It has several cycles :
If we cut the plane along the loop a + b + d, and then cut at c and "glue" at d, we get a cut along the loop a + b + c. This can be represented by the following relation: (a + b + d) + (c − d) = (a + b + c). To formally define this relation, we define the following commutative groups:
Most elements of C 1 are not cycles, for example a + b, 2a + 5b − c, etc. are not cycles. To formally define a cycle, we first define boundaries . The boundary of an edge is denoted by the ∂ 1 {\displaystyle \partial _{1}} operator and defined as its target minus its source, so ∂ 1 ( a ) = y − x , ∂ 1 ( b ) = z − y , ∂ 1 ( c ) = ∂ 1 ( d ) = x − z . {\displaystyle \partial _{1}(a)=y-x,~\partial _{1}(b)=z-y,~\partial _{1}(c)=\partial _{1}(d)=x-z.} So ∂ 1 {\displaystyle \partial _{1}} is a mapping from the group C 1 to the group C 0 . Since a, b, c, d are the generators of C 1 , this ∂ 1 {\displaystyle \partial _{1}} naturally extends to a group homomorphism from C 1 to C 0 . In this homomorphism, ∂ 1 ( a + b + c ) = ∂ 1 ( a ) + ∂ 1 ( b ) + ∂ 1 ( c ) = ( y − x ) + ( z − y ) + ( x − z ) = 0 {\displaystyle \partial _{1}(a+b+c)=\partial _{1}(a)+\partial _{1}(b)+\partial _{1}(c)=(y-x)+(z-y)+(x-z)=0} . Similarly, ∂ 1 {\displaystyle \partial _{1}} maps any cycle in C 1 to the zero element of C 0 . In other words, the set of cycles in C 1 generates the null space (the kernel) of ∂ 1 {\displaystyle \partial _{1}} . In this case, the kernel of ∂ 1 {\displaystyle \partial _{1}} has two generators: one corresponds to a + b + c and the other to a + b + d (the third cycle, c − d, is a linear combination of the first two). So ker ∂ 1 {\displaystyle \ker \partial _{1}} is isomorphic to Z 2 .
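The kernel computation above can be checked mechanically. The following sketch is illustrative only; it uses the SymPy library, with the vertex order x, y, z and edge order a, b, c, d of the example, and confirms that the null space of the matrix of ∂ 1 has rank 2:

```python
import sympy as sp

# Columns are the edges a, b, c, d; rows are the vertices x, y, z.
# Each column is "target minus source", e.g. a: x -> y gives the column (-1, +1, 0).
d1 = sp.Matrix([
    [-1,  0,  1,  1],   # x
    [ 1, -1,  0,  0],   # y
    [ 0,  1, -1, -1],   # z
])

kernel = d1.nullspace()
print(len(kernel))        # 2, so ker d1 is isomorphic to Z^2
for vec in kernel:
    print(vec.T)          # basis vectors corresponding to a + b + c and a + b + d
```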
In a general topological space, we would define higher-dimensional chains. In particular, C 2 would be the free abelian group on the set of 2-dimensional objects. However, in a graph there are no such objects, so C 2 is a trivial group. Therefore, the image of the second boundary operator, ∂ 2 {\displaystyle \partial _{2}} , is trivial too. Therefore: H 1 ( X ) = ker ∂ 1 / im ∂ 2 ≅ Z 2 / Z 0 = Z 2 {\displaystyle H_{1}(X)=\ker \partial _{1}{\big /}\operatorname {im} \partial _{2}\cong \mathbb {Z} ^{2}/\mathbb {Z} ^{0}=\mathbb {Z} ^{2}} This corresponds to the intuitive fact that the graph has two "holes". The exponent is the number of holes.
The above example can be generalized to an arbitrary connected graph G = ( V , E ). Let T be a spanning tree of G . Every edge in E \ T corresponds to a cycle; these are exactly the linearly independent cycles. Therefore, the first homology group H 1 of a graph is the free abelian group with | E \ T | generators. This number equals | E | − | V | + 1; so: [ 1 ] H 1 ( X ) ≅ Z | E | − | V | + 1 . {\displaystyle H_{1}(X)\cong \mathbb {Z} ^{|E|-|V|+1}.} In a disconnected graph, when C is the set of connected components, a similar computation shows: H 1 ( X ) ≅ Z | E | − | V | + | C | . {\displaystyle H_{1}(X)\cong \mathbb {Z} ^{|E|-|V|+|C|}.} In particular, the first group is trivial if and only if X is a forest .
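The rank |E| − |V| + |C| can be computed for any finite graph without constructing a spanning tree explicitly, for example with a small union–find routine. The sketch below is illustrative only, and the names are invented for the example:

```python
def first_betti_number(num_vertices, edges):
    """Rank of H1 of a (multi)graph: |E| - |V| + number of connected components."""
    parent = list(range(num_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    components = len({find(v) for v in range(num_vertices)})
    return len(edges) - num_vertices + components

# The example graph: 3 vertices and 4 edges -> two independent cycles.
print(first_betti_number(3, [(0, 1), (1, 2), (2, 0), (2, 0)]))   # 2
```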
The general formula for the 0th homology group of a topological space X is: H 0 ( X ) := ker ∂ 0 / im ∂ 1 {\displaystyle H_{0}(X):=\ker \partial _{0}{\big /}\operatorname {im} \partial _{1}}
Consider again the graph with 3 vertices {x, y, z} and 4 edges {a: x → y, b: y → z, c: z → x, d: z → x}. Recall that the group C 0 is generated by the set of vertices. Since there are no (−1)-dimensional elements, the group C −1 is trivial, and so the entire group C 0 is the kernel of the corresponding boundary operator: ker ∂ 0 = C 0 {\displaystyle \ker \partial _{0}=C_{0}} = the free abelian group generated by {x, y, z}.
The image of ∂ 1 {\displaystyle \partial _{1}} contains an element for each pair of vertices that are boundaries of an edge, i.e., it is generated by the differences {y − x, z − y, x − z}. To calculate the quotient group, it is convenient to think of all the elements of im ∂ 1 {\displaystyle \operatorname {im} \partial _{1}} as "equivalent to zero". This means that x, y and z are equivalent – they are in the same equivalence class of the quotient. In other words, H 0 ( X ) {\displaystyle H_{0}(X)} is generated by a single element (any vertex can generate it). So it is isomorphic to Z .
The above example can be generalized to any connected graph . Starting from any vertex, it is possible to get to any other vertex by adding to it one or more expressions corresponding to edges (e.g. starting from x, one can get to z by adding y − x and z − y). Since the elements of im ∂ 1 {\displaystyle \operatorname {im} \partial _{1}} are all equivalent to zero, it means that all vertices of the graph are in a single equivalence class, and therefore H 0 ( X ) {\displaystyle H_{0}(X)} is isomorphic to Z .
In general, the graph can have several connected components . Let C be the set of components. Then, every connected component is an equivalence class in the quotient group. Therefore: H 0 ( X ) ≅ Z | C | . {\displaystyle H_{0}(X)\cong \mathbb {Z} ^{|C|}.} It can be generated by any | C |-tuple of vertices, one from each component.
Often, it is convenient to assume that the 0th homology of a connected graph is trivial (so that, if the graph contains a single point, then all its homologies are trivial). This leads to the definition of the reduced homology . For a graph, the reduced 0th homology is: H 0 ~ ( X ) ≅ Z | C | − 1 . {\displaystyle {\tilde {H_{0}}}(X)\cong \mathbb {Z} ^{|C|-1}.} This "reduction" affects only the 0th homology; the reduced homologies of higher dimensions are equal to the standard homologies.
A graph has only vertices (0-dimensional elements) and edges (1-dimensional elements). We can generalize the graph to an abstract simplicial complex by adding elements of a higher dimension. Then, the concept of graph homology is generalized by the concept of simplicial homology .
In the above example graph, we can add a two-dimensional "cell" enclosed between the edges c and d; let's call it A and assume that it is oriented clockwise. Define C 2 as the free abelian group generated by the set of two-dimensional cells, which in this case is a singleton {A}. Each element of C 2 is called a 2-dimensional chain .
Just like the boundary operator from C 1 to C 0 , which we denote by ∂ 1 {\displaystyle \partial _{1}} , there is a boundary operator from C 2 to C 1 , which we denote by ∂ 2 {\displaystyle \partial _{2}} . In particular, the boundary of the 2-dimensional cell A consists of the 1-dimensional edges c and d, where c is in the "correct" orientation and d is in a "reverse" orientation; therefore: ∂ 2 ( A ) = c − d {\displaystyle \partial _{2}(A)=c-d} . The sequence of chains and boundary operators can be presented as follows: C 2 → ∂ 2 C 1 → ∂ 1 C 0 {\displaystyle C_{2}\xrightarrow {\partial _{2}} C_{1}\xrightarrow {\partial _{1}} C_{0}} The addition of the 2-dimensional cell A implies that its boundary, c − d, no longer represents a hole (it is homotopic to a single point). Therefore, the group of "holes" now has a single generator, namely a + b + c (it is homotopic to a + b + d). The first homology group is now defined as the quotient group : H 1 ( X ) := ker ∂ 1 / im ∂ 2 {\displaystyle H_{1}(X):=\ker \partial _{1}{\big /}\operatorname {im} \partial _{2}} Here, ker ∂ 1 {\displaystyle \ker \partial _{1}} is the group of 1-dimensional cycles, which is isomorphic to Z 2 , and im ∂ 2 {\displaystyle \operatorname {im} \partial _{2}} is the group of 1-dimensional cycles that are boundaries of 2-dimensional cells, which is isomorphic to Z . Hence, their quotient H 1 is isomorphic to Z . This corresponds to the fact that X now has a single hole. Previously, the image of ∂ 2 {\displaystyle \partial _{2}} was the trivial group , so the quotient was equal to ker ∂ 1 {\displaystyle \ker \partial _{1}} .
Suppose now that we add another oriented 2-dimensional cell B between the edges c and d, such that ∂ 2 ( B ) = ∂ 2 ( A ) = c − d {\displaystyle \partial _{2}(\mathrm {B} )=\partial _{2}(\mathrm {A} )=c-d} . Now C 2 is the free abelian group generated by {A, B}. This does not change H 1 – it is still isomorphic to Z ( X still has a single 1-dimensional hole). But now C 2 contains the two-dimensional cycle A − B, so ∂ 2 {\displaystyle \partial _{2}} has a non-trivial kernel. This cycle generates the second homology group, corresponding to the fact that there is a single two-dimensional hole: H 2 ( X ) := ker ∂ 2 ≅ Z {\displaystyle H_{2}(X):=\ker \partial _{2}\cong \mathbb {Z} }
We can proceed and add a 3-cell – a solid 3-dimensional object (called C) bounded by A and B. Define C 3 as the free abelian group generated by {C}, and the boundary operator ∂ 3 : C 3 → C 2 {\displaystyle \partial _{3}:C_{3}\to C_{2}} . We can orient C such that ∂ 3 ( C ) = A − B {\displaystyle \partial _{3}(\mathrm {C} )=\mathrm {A} -\mathrm {B} } ; note that the boundary of C is a cycle in C 2 . Now the second homology group is: H 2 ( X ) := ker ∂ 2 / im ∂ 3 ≅ 0 {\displaystyle H_{2}(X):=\ker \partial _{2}{\big /}\operatorname {im} \partial _{3}\cong {0}} corresponding to the fact that there are no two-dimensional holes (C "fills the hole" between A and B).
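Over the rationals, the rank of each homology group can be read off from the boundary matrices as rank H k = nullity(∂ k ) − rank(∂ k+1 ), which is the general quotient construction described next. The following sketch is illustrative only; it uses SymPy, reports only the free rank of each group (ignoring possible torsion), and reproduces H 0 ≅ Z, H 1 ≅ Z and H 2 ≅ Z for the stage of the example with the two cells A and B, before the 3-cell C is added:

```python
import sympy as sp

def homology_ranks(boundary_maps, num_vertices):
    """boundary_maps[k] is the matrix of d_{k+1}: C_{k+1} -> C_k, as a sympy Matrix."""
    dims = [num_vertices] + [m.cols for m in boundary_maps]   # dimensions of C_0 .. C_K
    ranks = []
    for k in range(len(dims)):
        rank_dk = boundary_maps[k - 1].rank() if k >= 1 else 0            # rank of d_k
        nullity_dk = dims[k] - rank_dk                                    # rank of ker d_k
        rank_dk1 = boundary_maps[k].rank() if k < len(boundary_maps) else 0
        ranks.append(nullity_dk - rank_dk1)                               # rank of H_k
    return ranks

# The running example with the 2-cells A and B (so C_2 has two generators).
d1 = sp.Matrix([[-1, 0, 1, 1], [1, -1, 0, 0], [0, 1, -1, -1]])
d2 = sp.Matrix([[0, 0], [0, 0], [1, 1], [-1, -1]])   # both A and B map to c - d
print(homology_ranks([d1, d2], num_vertices=3))      # [1, 1, 1] -> ranks of H0, H1, H2
```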
In general, one can define chains of any dimension. If the maximum dimension of a chain is k , then we get the following sequence of groups: C k → ∂ k C k − 1 ⋯ C 1 → ∂ 1 C 0 {\displaystyle C_{k}\xrightarrow {\partial _{k}} C_{k-1}\cdots C_{1}\xrightarrow {\partial _{1}} C_{0}} It can be proved that any boundary of a ( k + 1)-dimensional cell is a k -dimensional cycle. In other words, for any k , im ∂ k + 1 {\displaystyle \operatorname {im} \partial _{k+1}} (the group of boundaries of k + 1 elements) is contained in ker ∂ k {\displaystyle \ker \partial _{k}} (the group of k -dimensional cycles). Therefore, the quotient ker ∂ k / im ∂ k + 1 {\displaystyle \ker \partial _{k}{\big /}\operatorname {im} \partial _{k+1}} is well-defined, and it is defined as the k th homology group: H k ( X ) := ker ∂ k / im ∂ k + 1 {\displaystyle H_{k}(X):=\ker \partial _{k}{\big /}\operatorname {im} \partial _{k+1}} | https://en.wikipedia.org/wiki/Graph_homology |
Graph paper , coordinate paper , grid paper , or squared paper is writing paper that is printed with fine lines making up a regular grid . It is available either as loose leaf paper or bound in notebooks or graph books.
It is commonly found in mathematics and engineering education settings, exercise books , and in laboratory notebooks .
The lines are often used as guides for mathematical notation , plotting graphs of functions or experimental data , and drawing curves .
The Metropolitan Museum of Art owns a pattern book dated to around 1596 in which each page bears a grid printed with a woodblock . The owner has used these grids to create block pictures in black and white and in colour. [ 1 ]
The first commercially published "coordinate paper" is usually attributed to a Dr. Buxton of England, who patented paper printed with a rectangular coordinate grid, in 1794. [ 2 ] A century later, E. H. Moore, a distinguished mathematician at the University of Chicago, advocated usage of paper or exercise books with "squared lines" by students of high schools and universities. [ 3 ] The 1906 edition of Algebra for Beginners by H. S. Hall and S. R. Knight included a strong statement that "the squared paper should be of good quality and accurately ruled to inches and tenths of an inch. Experience shows that anything on a smaller scale (such as 'millimeter' paper) is practically worthless in the hands of beginners." [ 4 ]
The term "graph paper" did not catch on quickly in American usage. A School Arithmetic (1919) by H. S. Hall and F. H. Stevens had a chapter on graphing with "squared paper". Analytic Geometry (1937) by W. A. Wilson and J. A. Tracey used the phrase "coordinate paper". The term "squared paper" remained in British usage for longer; for example it was used in Public School Arithmetic (2023) by W. M. Baker and A. A. Bourne published in London. [ 4 ]
In general, graphs showing grids are sometimes called Cartesian graphs because the squares can be used to map measurements onto a Cartesian coordinate system.
In mathematics, a graph polynomial is a graph invariant whose value is a polynomial . Invariants of this type are studied in algebraic graph theory . [ 1 ] Important graph polynomials include: | https://en.wikipedia.org/wiki/Graph_polynomial |
In graph theory , a graph property or graph invariant is a property of graphs that depends only on the abstract structure, not on graph representations such as particular labellings or drawings of the graph. [ 1 ]
While graph drawing and graph representation are valid topics in graph theory, in order to focus only on the abstract structure of graphs, a graph property is defined to be a property preserved under all possible isomorphisms of a graph. In other words, it is a property of the graph itself, not of a specific drawing or representation of the graph.
Informally, the term "graph invariant" is used for properties expressed quantitatively, while "property" usually refers to descriptive characterizations of graphs. For example, the statement "the graph does not have vertices of degree 1" is a "property" while "the number of vertices of degree 1 in a graph" is an "invariant".
More formally, a graph property is a class of graphs with the property that any two isomorphic graphs either both belong to the class, or both do not belong to it. [ 1 ] Equivalently, a graph property may be formalized using the indicator function of the class, a function from graphs to Boolean values that is true for graphs in the class and false otherwise; again, any two isomorphic graphs must have the same function value as each other. A graph invariant or graph parameter may similarly be formalized as a function from graphs to a broader class of values, such as integers, real numbers , sequences of numbers, or polynomials , that again has the same value for any two isomorphic graphs. [ 2 ]
Many graph properties are well-behaved with respect to certain natural partial orders or preorders defined on graphs:
These definitions may be extended from properties to numerical invariants of graphs: a graph invariant is hereditary, monotone, or minor-closed if the function formalizing the invariant forms a monotonic function from the corresponding partial order on graphs to the real numbers.
Additionally, graph invariants have been studied with respect to their behavior with regard to disjoint unions of graphs:
In addition, graph properties can be classified according to the type of graph they describe: whether the graph is undirected or directed , whether the property applies to multigraphs , etc. [ 1 ]
The target set of a function that defines a graph invariant may be one of:
Easily computable graph invariants are instrumental for fast recognition of graph isomorphism , or rather non-isomorphism, since for any invariant at all, two graphs with different values cannot (by definition) be isomorphic. Two graphs with the same invariants may or may not be isomorphic, however.
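As a concrete illustration of the last point, the following sketch (illustrative only; it uses the SymPy library and a naive exponential-time deletion–contraction recursion) computes the chromatic polynomial of two non-isomorphic graphs, the claw and the path on four vertices mentioned below, and confirms that the invariant coincides:

```python
import sympy as sp

k = sp.symbols('k')

def chromatic_polynomial(vertices, edges):
    """Deletion-contraction: P(G) = P(G - e) - P(G / e); base case is k^|V| when there are no edges."""
    edges = [frozenset(e) for e in edges]
    if not edges:
        return k ** len(vertices)
    e, rest = edges[0], edges[1:]
    u, v = tuple(e)
    # Contract v into u: rename v to u in the remaining edges, dropping loops and duplicates.
    contracted = {frozenset({u if w == v else w for w in f}) for f in rest}
    contracted = [f for f in contracted if len(f) == 2]
    return sp.expand(chromatic_polynomial(vertices, rest)
                     - chromatic_polynomial([w for w in vertices if w != v], contracted))

claw = chromatic_polynomial([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)])
path = chromatic_polynomial([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)])
print(claw)                  # k**4 - 3*k**3 + 3*k**2 - k
print(claw.equals(path))     # True, although the two graphs are not isomorphic
```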
A graph invariant I ( G ) is called complete if the identity of the invariants I ( G ) and I ( H ) implies the isomorphism of the graphs G and H . Finding an efficiently-computable such invariant (the problem of graph canonization ) would imply an easy solution to the challenging graph isomorphism problem . However, even polynomial-valued invariants such as the chromatic polynomial are not usually complete. The claw graph and the path graph on 4 vertices both have the same chromatic polynomial, for example. | https://en.wikipedia.org/wiki/Graph_property |
Graphane is a two-dimensional polymer of carbon and hydrogen with the formula unit (CH) n where n is large. [ 1 ] Partial hydrogenation results in hydrogenated graphene, which a 2009 TEM study by Elias et al. reported as "direct evidence for a new graphene-based derivative". The authors viewed the result as pointing to "a whole range of new two-dimensional crystals with designed electronic and other properties". The reported band gap ranges from 0 to 0.8 eV. [ 2 ]
Its preparation was reported in 2009. [ 2 ] Graphane can be formed by electrolytic hydrogenation of graphene, few-layer graphene or high-oriented pyrolytic graphite . In the last case mechanical exfoliation of hydrogenated top layers can be used. [ 3 ]
The first theoretical description of graphane was reported in 2003. [ 4 ] The structure was found, using a cluster expansion method, to be the most stable of all the possible hydrogenation ratios of graphene. [ 4 ] In 2007, researchers found that the compound is more stable than other compounds containing carbon and hydrogen, such as benzene , cyclohexane and polyethylene . [ 1 ] This group named the predicted compound graphane, because it is the fully saturated version of graphene.
Graphane is effectively made up of cyclohexane units, and, in parallel to cyclohexane, the most stable structural conformation is not planar, but an out-of-plane structure, including the chair and boat conformers, in order to minimize ring strain and allow for the ideal tetrahedral bond angle of 109.5° for sp 3 -bonded atoms. However, in contrast to cyclohexane, graphane cannot interconvert between these different conformers because not only are they topologically different, but they are also different structural isomers with different configurations. The chair conformer has the hydrogens alternating above or below the plane from carbon to neighboring carbon, while the boat conformer has the hydrogen atoms alternating in pairs above and below the plane. There are also other possible conformational isomers, including the twist-boat and twist-boat-chair. As with cyclohexane, the most stable conformer for graphane is the chair, followed by the twist-boat structure. [ 5 ] [ 6 ] While the buckling of the chair conformer would imply lattice shrinkage, [ 6 ] calculations show the lattice actually expands by approximately 30% [ 7 ] due to the opposing effect on the lattice spacing of the longer carbon-carbon (C-C) bonds, as the sp 3 -bonding of graphane yields longer C-C bonds of 1.52 Å compared to the sp 2 -bonding of graphene which yields shorter C-C bonds of 1.42 Å. [ 7 ] As just established, theoretically if graphane was perfect and everywhere in its stable chair conformer, the lattice would expand; however, the existence of domains where the locally stable twist-boat conformer dominates “contribute to the experimentally observed lattice contraction.” [ 6 ] When experimentalists have characterized graphane, they have found a distribution of lattice spacings, corresponding to different domains exhibiting different conformers. [ 6 ] Any disorder in hydrogenation conformation tends to contract the lattice constant by about 2.0%. [ 8 ]
Graphane is an insulator. Chemical functionalization of graphene with hydrogen may be a suitable method to open a band gap in graphene. [ 1 ] P-doped graphane is proposed to be a high-temperature BCS theory superconductor with a T c above 90 K . [ 9 ]
Partial hydrogenation leads to hydrogenated graphene rather than (fully hydrogenated) graphane. [ 2 ] Such compounds are usually named as "graphane-like" structures. Graphane and graphane-like structures can be formed by electrolytic hydrogenation of graphene or few-layer graphene or high-oriented pyrolytic graphite . In the last case mechanical exfoliation of hydrogenated top layers can be used. [ 3 ]
Hydrogenation of graphene on substrate affects only one side, preserving hexagonal symmetry. One-sided hydrogenation of graphene is possible due to the existence of ripplings. Because the latter are distributed randomly, the obtained material is disordered in contrast to two-sided graphane. [ 2 ] Annealing allows the hydrogen to disperse, reverting to graphene. [ 10 ] Simulations revealed the underlying kinetic mechanism. [ 11 ]
p-Doped graphane is postulated to be a high-temperature BCS theory superconductor with a T c above 90 K . [ 9 ]
Graphane has been proposed for hydrogen storage. [ 1 ] Hydrogenation decreases the dependence of the lattice constant on temperature, which indicates a possible application in precision instruments. [ 8 ] | https://en.wikipedia.org/wiki/Graphane |
GrapheneOS [ b ] is an open-source , privacy- and security-focused Android operating system that runs on selected Google Pixel devices, including smartphones , tablets and foldables . [ 5 ]
The main developer , Daniel Micay, originally worked on CopperheadOS , until a schism over software licensing between the co-founders of Copperhead Limited led to Micay's dismissal from the company in 2018. [ 6 ] After the incident, Micay continued working on the Android Hardening project, [ 6 ] [ 7 ] which was renamed as GrapheneOS [ 7 ] and announced in April 2019. [ 6 ]
In March 2022, two GrapheneOS apps, "Secure Camera" and "Secure PDF Viewer", were released on the Google Play Store . [ 8 ]
Also in March 2022, GrapheneOS reportedly released Android 12L for Google Pixel devices before Google did, second to ProtonAOSP. [ 9 ]
In May 2023, Micay announced he would step down as lead developer of GrapheneOS and as a GrapheneOS Foundation director. [ 10 ] As of September 2024, the GrapheneOS Foundation's Federal Corporation Information lists Micay as one of its directors. [ 2 ]
By default Google apps are not installed with GrapheneOS, [ 5 ] [ 12 ] but users can install a sandboxed version of Google Play Services from the pre-installed "App Store". [ 12 ] The sandboxed Google Play Services allows access to the Google Play Store and apps dependent on it, along with features including push notifications and in-app payments. [ 12 ]
Around January 2024, Android Auto support was added to GrapheneOS, allowing users to install it via the App Store. [ 13 ] The sandboxed Google Play compatibility layer settings add a new permission menu with four toggles for granting the minimal access required for wired Android Auto, wireless Android Auto, audio routing and phone calls. [ 14 ]
GrapheneOS introduces revocable network access and sensors permission toggles for each installed app. [ 5 ] [ 15 ] GrapheneOS also introduces a PIN scrambling option for the lock screen . [ 16 ]
GrapheneOS randomizes Wi-Fi MAC addresses per connection (to a Wi-Fi network) by default, instead of the Android per-network default. [ 6 ] [ 17 ]
GrapheneOS includes automatic phone reboot when not in use, automatic WiFi and Bluetooth disabling, and system-level disabling of USB-C port, microphone, camera, and sensors for apps. Additionally, it offers the "Contact Scopes" feature, which allows users to select which contacts an app can access. [ 18 ]
A hardened Chromium -based web browser and WebView implementation known as Vanadium is developed by GrapheneOS and included as the default web browser and WebView. [ 15 ] It includes automatic updates, process and site-level sandboxing, and built-in ad and tracker blocking. [ 19 ]
Auditor, a hardware-based attestation app developed by GrapheneOS to "provide strong hardware-based verification of the authenticity and integrity of the firmware / software on the device", is also included. [ 18 ]
Apps like Secure Camera and Secure PDF Viewer offer advanced privacy features such as automatic removal of Exif metadata and protection against malicious code in PDF files. [ 20 ]
GrapheneOS currently is only compatible with Google Pixel devices, [ 21 ] due to specific requirements that GrapheneOS has for adding support for a new device, including an unlockable bootloader and proper implementation of verified boot. [ 22 ] [ 23 ]
The operating system can be installed from various platforms, including Windows, macOS, Linux, and Android devices. Two installation methods are available: a WebUSB -based installer, recommended for most users, and a command-line based installer, intended for more experienced users. [ 24 ]
In 2019, Georg Pichler of Der Standard , and other news sources, quoted Edward Snowden saying on Twitter , "If I were configuring a smartphone today, I'd use Daniel Micay's GrapheneOS as the base operating system." [ 25 ]
In discussing why services should not force users to install proprietary apps, Lennart Mühlenmeier of netzpolitik.org suggested GrapheneOS as an alternative to Apple or Google. [ 26 ]
Svět Mobilně and Webtekno repeated the suggestions that GrapheneOS is a good security- and privacy-oriented replacement for standard Android. [ 27 ] [ 28 ]
In a detailed review of GrapheneOS for Golem.de , Moritz Tremmel and Sebastian Grüner said they were able to use GrapheneOS similarly to other Android systems, while enjoying more freedom from Google, without noticing differences from "additional memory protection, but that's the way it should be." They concluded GrapheneOS cannot change how "Android devices become garbage after three years at the latest", but "it can better secure the devices during their remaining life while protecting privacy." [ 6 ]
In June 2021, reviews of GrapheneOS, KaiOS , AliOS , and Tizen OS , were published in Cellular News. The review of GrapheneOS called it "arguably the best mobile operating system in terms of privacy and security." However, they criticized GrapheneOS for its inconvenience to users, saying "GrapheneOS is completely de-Googled and will stay that way forever—at least according to the developers." They also noticed a "slight performance decrease" and said "it might take two full seconds for an app—even if it’s just the Settings app—to fully load." [ 29 ]
In March 2022, writing for How-To Geek Joe Fedewa said that Google apps were not included due to concerns over privacy, and GrapheneOS also did not include a default app store . Instead, Fedewa suggested, F-Droid could be used. [ 5 ]
In 2022, Jonathan Lamont of MobileSyrup reviewed GrapheneOS installed on a Pixel 3 , after one week of use. He called GrapheneOS install process "straightforward" and concluded that he liked GrapheneOS overall, but criticized the post-install as "often not a seamless experience like using an unmodified Pixel or an iPhone ", attributing his experience to his "over-reliance on Google apps" and the absence of some "smart" features in GrapheneOS default keyboard and camera apps, in comparison to software from Google. [ 12 ]
In his initial impressions post a week prior, Lamont said that after an easy install there were issues with permissions for Google's Messages app, and difficulty importing contacts; Lamont then concluded, "Anyone looking for a straightforward experience may want to avoid GrapheneOS or other privacy-oriented Android experiences since the privacy gains often come at the expense of convenience and ease of use." [ 30 ]
In July 2022, Charlie Osborne of ZDNET suggested that individuals who suspect a Pegasus infection use a secondary device with GrapheneOS for secure communication. [ 31 ]
In January 2023, a Swiss startup company, Apostrophy AG, announced AphyOS, which is a subscription fee-based Android operating system and services "built atop" GrapheneOS. [ 32 ] [ 33 ] | https://en.wikipedia.org/wiki/GrapheneOS |
A graphic designer is a practitioner who follows the discipline of graphic design , either within companies or organizations or independently. They are professionals in design and visual communication , with their primary focus on transforming linguistic messages into graphic manifestations, whether tangible or intangible. [ 1 ] [ 2 ] They are responsible for planning, designing, projecting, and conveying messages or ideas through visual communication. [ 3 ] Graphic design is one of the most in-demand professions with significant job opportunities, as it allows leveraging technological advancements and working online from anywhere in the world. [ 4 ]
The history of graphic design shows that the field has always been a skill-demanding profession, owing to the variety of printing responsibilities it involved. [ 1 ] Unlike the pre-digital era, when the design craft was a rather exclusive practice, the field today is far more accessible and open to everyone. [ 4 ] This easy access attracts many individuals to the field. The concept of a graphic designer is fluid: technically, anyone who knows how to use design software and manipulate provided templates can be called a 'graphic designer'. [ 5 ] The profession is unusual in that, unlike many traditional jobs, one can work as a freelance designer without an official certification.
However, the design industry currently includes an increasing number of 'self-taught' and 'informally trained' graphic designers who do not have any formal design education. [ 6 ]
The Industrial Revolution in England drew a distinct line between fine art and commercial art, and this split shaped graphic design as a modern design profession. [ citation needed ] During the First and Second World Wars, graphic designers were needed to unite, persuade and inform citizens through printed media. [ 7 ] The post-war era shifted designers' focus to advertising and the promotion of consumerism. [ 8 ] The profession of graphic design emerged from the printing and publishing industry, and the term has been in wide use since the 1950s. [ 9 ]
Generally, a graphic designer works in areas such as branding , corporate identity , advertising , technical and artistic drawing , multimedia , etc. It is a profession that exposes individuals to various academic fields during their university career, [ 5 ] [ 6 ] [ 10 ] because they need to understand human anatomy , psychology , photography , painting and printing techniques , mathematics , marketing , digital animation , and 3D modeling ; some professionals even complement their skills with programming . [ 11 ] This breadth provides a comprehensive view of a company by addressing the three essential factors evaluated: structure, team, and product. [ 12 ] Graphic designers are usually expected to have process management, conceptual design, technical design and software skills to apply for a graphic designer position. [ 13 ]
Graphic design encompasses various areas of expertise, which are categorised into the following levels of qualification:
Professional requirements for graphic designers vary from one place to another. Their role and responsibilities evolve and morph each year, adapting to the current technologies and market demands. A practitioner essentially has two primary roles in the process: satisfying the design brief and executing the job. [ 14 ]
Designers should undergo specialized training, including advanced education and practical experience (internship) to develop skills and expertise in the workplace, which is necessary to obtain a credential that allows them to practice the profession. [ 15 ] [ 16 ] Practical, technical, and academic requirements to become a graphic designer vary by country or jurisdiction, although the formal study of design in academic institutions has played a crucial role in the overall development of the profession. [ 17 ] [ 18 ] [ 19 ] Graphic designers can work with singular clients or multiple people including collaborations. This is where communication is crucial because misunderstandings can lead to setbacks. [ 20 ]
The primary responsibility of graphic designers is to manipulate visual and textual content. [ 21 ] Today, graphic designers are much more than visual decorators; they are expected to be versatile and to have a range of skills beyond the design realm.
Graphic design is usually tightly connected with stakeholders and commerce, which means that graphic designers' decisions depend on clients’ vision.
A graphic designer is a versatile practitioner capable of visually communicating messages through the skilful use of typography , imagery, compositional layout, visual hierarchy , colour combinations, and more.
The main goal of graphic designers is to effectively communicate messages relying on text and images. [ 22 ] Designers focus on imposing an order and structure to the manipulated content to facilitate and ease the communication process, while optimizing the likelihood that the conveyed message will be received and understood by the target audience. [ 23 ] Additionally, designers aim to create aesthetically appealing products and invent creative approaches to the design process. [ 24 ]
Depending on the type of employment, there are, besides the primary goals, secondary goals such as:
One of the most important attributes of a graphic designer is creative thinking , which is among the most prominent expectations placed on designers, since designing is a creative process . Creativity allows graphic designers to stand out from others. Practitioners aim to experiment and to find unconventional methods in order to create more unique, effective and distinctive products.
According to the Bureau of Labor Statistics , the median salary for graphic designers is $58,900 as of May 2023. The bottom 10% earned less than $36,420 while the top 10% earned more than $100,450. [ 25 ]
Designers should be able to solve visual communication problems or challenges. In doing so, the designer must identify the communications issue, gather and analyze information related to the issue, and generate potential approaches aimed at solving the problem. Iterative prototyping and user testing can be used to determine the success or failure of a visual service. Approaches to a communications problem are developed in the context of an audience and a media channel. Graphic designers must understand the social and cultural norms of that audience in order to develop visual services that are perceived as relevant, understandable and effective. [ 26 ] Directly speaking with individuals from set audiences can prevent any complications. [ 27 ] A good graphic designer is able to adapt existing historical or contemporary models, derive unique approaches from detailed research, and apply them to solve complex problems in an effortless manner. [ 28 ]
Graphic designers should also have a thorough understanding of production and rendering methods. Some of the technologies and methods of production are drawing, offset printing, photography, and time-based and interactive media (film, video, computer multimedia). Frequently, designers are also called upon to manage color in different media. [ 26 ] For instance, graphic designers use different colors for digital and print advertisements. RGB — standing for red, green, blue — is an additive color model used for digital media designs. However, the CMYK color model is made up of subtractive colors — cyan, magenta, yellow, and black — and used in designing print media. The reason for the different models is that when designing print ads, colors look different on the screen and when printed onto paper. For example, the colors appear darker on paper than on screen. [ 29 ]
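The relationship between the two colour models can be illustrated with the usual naive conversion formula. The sketch below is illustrative only: it ignores the ICC colour profiles a real prepress workflow would apply, and the function name is invented for the example:

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion: r, g, b in 0..255 -> c, m, y, k in 0..1."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r, g, b)          # use black ink for the shared dark component
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))       # pure red -> (0.0, 1.0, 1.0, 0.0)
```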
While the development of AI technologies brings advantages to the design realm, it is important to note a drastic shift in the visual communication sphere that affects its practitioners. The position of the human designer is challenged by the advancement of artificial intelligence (AI) and machine learning (ML) in graphic design, altering how the profession is defined and perceived. According to some theories, human designers will need to shift their focus to the facilitation and curation of context-sensitive design services. The idea that humans are positioned inside a deficit narrative is one of the most common motifs in research on automated design technologies: humans are portrayed as imperfect and untrustworthy in comparison to machines, unable to perform jobs with the same accuracy and speed. Such progress is appealing under neoliberal capitalism; beyond the costs of acquisition and upkeep, machines might execute a greater workload faster and without payment. [ 30 ]
According to some analysts, graphic designers will act as intermediaries between customers and computer-generated goods. In such a case, the task of the designer is not to give form to a product, but to seed the system and evaluate the results: the human designer uses their expertise, skills and knowledge to understand and improve outcomes to the satisfaction of a client. Designers are then more concerned with making sure the product is sound and of the appropriate quality. It is suggested that the designer will collaborate with automated designers as part of a larger digital ecosystem rather than serving as the "master" of tools. [ 31 ]
Some experts emphasise a future in which machines replace designers almost entirely, as they become better and more efficient at tasks that usually are done by human designers. [ 32 ] | https://en.wikipedia.org/wiki/Graphic_designer |
In game theory , the graphical form or graphical game is an alternate compact representation of strategic interactions that efficiently models situations where players' outcomes depend only on a subset of other players. [ 1 ] First formalized by Michael Kearns , Michael Littman , and Satinder Singh in 2001, this approach complements traditional representations such as the normal form and extensive form by leveraging concepts from graph theory to achieve more concise game descriptions.
In a graphical game representation, players are depicted as nodes in a graph , with edges connecting players whose decisions directly affect each other. Each player's utility function depends only on their own strategy and the strategies of their immediate neighbors in the graph, rather than on all players' actions. This framework is particularly valuable for modeling social network interactions, economic networks, and localized competitive scenarios where players primarily respond to those in their immediate vicinity.
The graphical approach offers significant advantages when representing large games with limited interaction patterns, as it can exponentially reduce the amount of information needed to fully describe the game. This compact representation facilitates more efficient computational analysis for complex multi-agent systems across fields such as artificial intelligence , economics , and network science .
A graphical game is represented by a graph G {\displaystyle G} , in which each player is represented by a node, and there is an edge between two nodes i {\displaystyle i} and j {\displaystyle j} iff their utility functions are dependent on the strategy which the other player will choose. Each node i {\displaystyle i} in G {\displaystyle G} has a function u i : { 1 … m } d i + 1 → R {\displaystyle u_{i}:\{1\ldots m\}^{d_{i}+1}\rightarrow \mathbb {R} } , where d i {\displaystyle d_{i}} is the degree of vertex i {\displaystyle i} . u i {\displaystyle u_{i}} specifies the utility of player i {\displaystyle i} as a function of his strategy as well as those of his neighbors.
For a general n {\displaystyle n} players game, in which each player has m {\displaystyle m} possible strategies, the size of a normal form representation would be O ( m n ) {\displaystyle O(m^{n})} . The size of the graphical representation for this game is O ( m d ) {\displaystyle O(m^{d})} where d {\displaystyle d} is the maximal node degree in the graph. If d ≪ n {\displaystyle d\ll n} , then the graphical game representation is much smaller.
In the case where each player's utility function depends only on one other player:
The maximal degree of the graph is 1, and the game can be described as n {\displaystyle n} functions (tables) of size m 2 {\displaystyle m^{2}} . So, the total size of the input will be n m 2 {\displaystyle nm^{2}} .
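The saving can be made concrete with a small sketch (a hypothetical 3-player example, not taken from the cited paper; the utility values are arbitrary) that stores one table per player, indexed only by that player's strategy and the strategies of its neighbours:

```python
from itertools import product

m = 2                                        # strategies per player
neighbours = {0: [1], 1: [0, 2], 2: [1]}     # a path graph on 3 players

# One table per player, keyed by (own strategy, neighbour strategies...); values are arbitrary here.
utility = {
    i: {profile: float(sum(profile)) for profile in product(range(m), repeat=1 + len(nbrs))}
    for i, nbrs in neighbours.items()
}

graphical_size = sum(len(t) for t in utility.values())       # sum of m^(d_i + 1) entries
normal_form_size = len(neighbours) * m ** len(neighbours)    # n * m^n entries
print(graphical_size, normal_form_size)                      # 16 vs 24
```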
Finding a Nash equilibrium in a game takes exponential time in the size of the representation. If the graphical representation of the game is a tree, we can find the equilibrium in polynomial time. In the general case, where the maximal degree of a node is 3 or more, the problem is NP-complete .
Graphical models have become powerful frameworks for protein structure prediction , protein–protein interaction , and free energy calculations for protein structures. Using a graphical model to represent the protein structure allows the solution of many problems including secondary structure prediction, protein-protein interactions, protein-drug interaction, and free energy calculations.
There are two main approaches to using graphical models in protein structure modeling. The first approach uses discrete variables for representing the coordinates or the dihedral angles of the protein structure. The variables are originally all continuous values and, to transform them into discrete values, a discretization process is typically applied. The second approach uses continuous variables for the coordinates or dihedral angles.
Markov random fields , also known as undirected graphical models, are common representations for this problem. Given an undirected graph G = ( V , E ), a set of random variables X = ( X v ) v ∈ V indexed by V , form a Markov random field with respect to G if they satisfy the pairwise Markov property: any two non-adjacent variables are conditionally independent given all other variables, that is, X u and X v are conditionally independent given X V ∖ { u , v } whenever { u , v } is not an edge of G .
In the discrete model, the continuous variables are discretized into a set of favorable discrete values. If the variables of choice are dihedral angles , the discretization is typically done by mapping each value to the corresponding rotamer conformation.
Let X = { X b , X s } be the random variables representing the entire protein structure. X b can be represented by a set of 3-d coordinates of the backbone atoms, or equivalently, by a sequence of bond lengths and dihedral angles . The probability of a particular conformation x can then be written as p ( X = x | Θ ),
where Θ {\displaystyle \Theta } represents any parameters used to describe this model, including sequence information, temperature etc. Frequently the backbone is assumed to be rigid with a known conformation, and the problem is then transformed to a side-chain placement problem. The structure of the graph is also encoded in Θ {\displaystyle \Theta } . This structure shows which two variables are conditionally independent. As an example, side chain angles of two residues far apart can be independent given all other angles in the protein. To extract this structure, researchers use a distance threshold, and only a pair of residues which are within that threshold are considered connected (i.e. have an edge between them).
Given this representation, the probability of a particular side chain conformation x s given the backbone conformation x b can be expressed as p ( X s = x s | X b = x b ) = (1/ Z ) ∏ c ∈ C ( G ) Φ( x c ),
where C ( G ) is the set of all cliques in G , Φ {\displaystyle \Phi } is a potential function defined over the variables, and Z is the partition function .
To completely characterize the MRF, it is necessary to define the potential function Φ {\displaystyle \Phi } . To simplify, the cliques of a graph are usually restricted to only the cliques of size 2, which means the potential function is only defined over pairs of variables. In Goblin System , these pairwise functions are defined as the Boltzmann weight of the pairwise interaction energy, Φ( x s i p , x s j q ) = exp( − E ( x s i p , x s j q ) / ( k B T ) ),
where E ( x s i p , x s j q ) {\displaystyle E(x_{s}^{i_{p}},x_{s}^{j_{q}})} is the energy of interaction between rotamer state p of residue X i s {\displaystyle X_{i}^{s}} and rotamer state q of residue X j s {\displaystyle X_{j}^{s}} and k B {\displaystyle k_{B}} is the Boltzmann constant .
Using a PDB file, this model can be built over the protein structure. From this model, free energy can be calculated.
It has been shown that the free energy of a system is calculated as G = E − T S ,
where E is the enthalpy of the system, T the temperature, and S the entropy. Now if we associate a probability with each state of the system, ( p ( x ) for each conformation value, x ), G can be rewritten as G = ∑ x p ( x ) E ( x ) + k B T ∑ x p ( x ) ln p ( x ).
Calculating p(x) on discrete graphs is done by the generalized belief propagation algorithm. This algorithm calculates an approximation to the probabilities, and it is not guaranteed to converge to a final value set. However, in practice, it has been shown to converge successfully in many cases.
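For a system small enough to enumerate exhaustively, the free energy can be evaluated without belief propagation, which also checks the identity G = − k B T ln Z at the Boltzmann distribution. The sketch below is illustrative only; the energy table and temperature value are invented for the example:

```python
import math
from itertools import product

kT = 0.593                                    # k_B * T in kcal/mol near 298 K

def energy(x):
    """Hypothetical pairwise energy over three two-state 'rotamer' variables."""
    pair = {(0, 0): 0.0, (0, 1): 1.2, (1, 0): 1.2, (1, 1): 0.4}
    return pair[(x[0], x[1])] + pair[(x[1], x[2])]

states = list(product([0, 1], repeat=3))
weights = [math.exp(-energy(x) / kT) for x in states]
Z = sum(weights)
p = [w / Z for w in weights]                  # Boltzmann distribution

E_avg = sum(pi * energy(x) for pi, x in zip(p, states))
entropy = -sum(pi * math.log(pi) for pi in p)  # entropy in units of k_B
G = E_avg - kT * entropy
print(G, -kT * math.log(Z))                    # the two values agree
```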
Graphical models can still be used when the variables of choice are continuous. In these cases, the probability distribution is represented as a multivariate probability distribution over continuous variables. Each family of distribution will then impose certain properties on the graphical model. Multivariate Gaussian distribution is one of the most convenient distributions in this problem. The simple form of the probability and the direct relation with the corresponding graphical model makes it a popular choice among researchers.
Gaussian graphical models are multivariate probability distributions encoding a network of dependencies among variables. Let Θ = [ θ 1 , θ 2 , … , θ n ] {\displaystyle \Theta =[\theta _{1},\theta _{2},\dots ,\theta _{n}]} be a set of n {\displaystyle n} variables, such as n {\displaystyle n} dihedral angles , and let f ( Θ = D ) {\displaystyle f(\Theta =D)} be the value of the probability density function at a particular value D . A multivariate Gaussian graphical model defines this probability as follows: f ( Θ = D ) = (1/ Z ) exp( −(1/2) ( D − μ )^T Σ^{−1} ( D − μ ) ),
Where Z = ( 2 π ) n / 2 | Σ | 1 / 2 {\displaystyle Z=(2\pi )^{n/2}|\Sigma |^{1/2}} is the closed form for the partition function . The parameters of this distribution are μ {\displaystyle \mu } and Σ {\displaystyle \Sigma } . μ {\displaystyle \mu } is the vector of mean values of each variable, and Σ − 1 {\displaystyle \Sigma ^{-1}} , the inverse of the covariance matrix , also known as the precision matrix . Precision matrix contains the pairwise dependencies between the variables. A zero value in Σ − 1 {\displaystyle \Sigma ^{-1}} means that conditioned on the values of the other variables, the two corresponding variable are independent of each other.
To learn the graph structure as a multivariate Gaussian graphical model, we can use either L-1 regularization or neighborhood selection algorithms. These algorithms simultaneously learn a graph structure and the edge strength of the connected nodes. An edge strength corresponds to the potential function defined on the corresponding two-node clique . We use a training set of a number of PDB structures to learn the μ {\displaystyle \mu } and Σ − 1 {\displaystyle \Sigma ^{-1}} .
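A minimal sketch of this learning step is given below; it uses synthetic data standing in for dihedral-angle samples (a real application would extract the angles from PDB structures) and the L1-regularised estimator from scikit-learn:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_angles = 500, 5
# Synthetic "dihedral angle" samples with a chain of dependencies 0-1-2-3-4.
X = rng.normal(size=(n_samples, n_angles))
for j in range(1, n_angles):
    X[:, j] += 0.8 * X[:, j - 1]

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_
# Zero (or near-zero) entries mean conditional independence given the other angles;
# the learned graph should connect only consecutive variables in the chain.
print(np.round(precision, 2))
```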
Once the model is learned, we can repeat the same step as in the discrete case, to get the density functions at each node, and use analytical form to calculate the free energy. Here, the partition function already has a closed form , so the inference , at least for the Gaussian graphical models is trivial. If the analytical form of the partition function is not available, particle filtering or expectation propagation can be used to approximate Z , and then perform the inference and calculate free energy. | https://en.wikipedia.org/wiki/Graphical_models_for_protein_structure |
Graphical unitary group approach (GUGA) is a technique used to construct Configuration state functions (CSFs) in computational quantum chemistry . As reflected in its name, the method uses the mathematical properties of the unitary group .
The foundation of the unitary group approach (UGA) can be traced to the work of Moshinsky. [ 1 ] Later, Shavitt [ 2 ] [ 3 ] introduced the graphical aspect (GUGA) drawing on the earlier work of Paldus. [ 4 ]
Computer programs based on the GUGA method have been shown to be highly efficient, [ 5 ] [ 6 ] offering certain performance advantages over the older, sometimes called traditional, techniques for CSF construction. However, traditional methods can offer other advantages, [ 7 ] such as the ability to handle degenerate symmetry point groups, such as C ∞ v {\displaystyle C_{\infty v}} .
This quantum chemistry -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Graphical_unitary_group_approach |
GRaphic Animation System for Professionals ( GRASP ) was the first multimedia animation program for the IBM PC family of computers. It was also at one time the most widely used animation format. [ 1 ]
Originally conceived by Doug Wolfgram under the name FlashGun, the first public version of GRASP was the Graphical System for Presentation . The original software was written by Doug Wolfgram and Rob Neville. It later became the GRaphic Animation System for Professionals. Many regard this as the birth of the multimedia industry.
In 1984 Doug Wolfgram conceived of the idea of an animation scripting language that would allow graphics images to move smoothly across a computer screen under program control. Persyst Systems hired Wolfgram's company to develop some graphics and animation for their new graphics card, the BoB board . [ 2 ] The marketing manager from Persyst then moved to AST computer where he brought in Wolfgram to do similar animation work for the AST line of peripheral cards for PCs.
Wolfgram saw the growing demand for multimedia so he brought in John Bridges , with whom he had co-developed PCPaint for Mouse Systems in 1984. Together they co-developed the early versions of GRASP for Wolfgram's company, Microtex Industries . Subsequent versions followed. Version 1.10c was released in September 1986. [ 3 ]
Starting with John and Doug's source code for PCPaint , the painting aspects were chopped out and instead a simple font editor for Doug's slideshow program FlashGun was created. The graphics library was used to make a simple script playback that had a command for each graphics library function. It also originally used the assembly language fades from FlashGun for a "FADE" command, but those image fade routines were mode specific (CGA) and difficult to enhance, so they were rewritten along with the script parts. All the files were stored in a ZIB archive; John Bridges' program ZIB was renamed GLIB, and the archives it produced were GL files.
In 1987, GRASP 2.0 was released and was no longer distributed as shareware. It became a commercial product published in the USA by Paul Mace Software. John Bridges assumed responsibility for development of the core engine while Wolfgram developed fades, external utilities and new commands.
In 1988, GRASP 3.0 was released, followed in October 1988 by GRASP 3.5, bundled with Pictor Paint , an improved PCPaint minus publishing features. GRASP 3.5 "[supported] a wide range of video formats, including CGA, EGA, Hercules, VGA and all popular enhanced VGA modes up to 800 x 600 pixels and 1,024 x 768 pixels resolution. The software [displayed] and [edited] images in several standard formats, including PC Paintbrush (PCX) and GIF." [ 4 ]
Award-winning animator Tom Guthery claims that by using GRASP in 1990 his early animated computer programs "[gave] smooth movement and detailed animation to a degree that many programmers had thought impossible at the time". [ 5 ]
In February 1991 GRASP 4.0 was released, with the ability to create "self-executing" demos (bind to make EXE added), AutoDesk FLI/FLC support, PC Speaker Digitized Sound, and a robust programming environment. It also included ARTOOLS , a collection of image manipulation tools which included an early morphing utility which tracked all points in source and destination images, creating all the in-between frames. Later that year HRFE (High Res Flic Enhancement) was offered as an add-on for GRASP, "[enabling] GRASP to recognize, import, manipulate and compile animations created in Autodesk Animator Pro environment." [ 6 ]
In a published paper critiquing GRASP 4.0, the authors Stuart White and John Lenarcic said that "The GRASP language offers creative freedom in the development of interactive multimedia presentations, especially to seasoned programmers with an artistic inclination." [ 7 ]
A stripped-down version of GRASP 4.0 was also included with copies of Philip Shaddock's Multimedia Creations: Hands-On Workshop for Exploring Animation and Sound . [ 8 ]
In June 1993, Multi-Media GRASP 1.0 (also known as MMGRASP and MultiMedia GRaphic Animation System for Professionals Version 5.0) was released with TrueColor support.
Authorship and ownership
Early in 1990 Doug Wolfgram sold his remaining rights to GRASP (and PCPaint) to John Bridges.
In 1994, GRASP development stopped when John Bridges terminated his publishing contract with Paul Mace Software. In 1995, John created GLPro for IMS Communications Ltd , the newest incarnation of John's ideas behind GRASP updated for Windows . In 2002, John Bridges created AfterGRASP , a successor to GRASP and GLPro.
GLPro was a multimedia authoring application for MS-DOS and Microsoft Windows . GLPro is a contraction of Graphics Language Professional, and was written by John Bridges as a successor to GRASP. Windows support in GLPro was released in the summer of 1996.
Unlike competing technologies such as Macromedia Director , GLPro took a very minimalist approach, providing an extensive scripting language rather than a lot of WYSIWYG tools within a Graphical User Interface. Everything was accomplished by writing code using its BASIC -like syntax. The scripting language was not object oriented , and as a result consisted of a very large number of specialised commands. The programmer was not able to create new classes or extend the language. It was criticised for its syntactical inconsistency, steep learning curve, and the fact that it did not deliver a cross-platform multimedia solution. Despite this, it was enthusiastically received by a number of users, many dating back to the early GRASP under MS-DOS days.
An unusual design philosophy behind GLPro was that it did not rely on external OS services to handle many media types, such as MP3 audio, MPEG video, etc. Instead it contained its own player code. The thinking was that by avoiding OS services for these tasks, the end user would be spared the problem of needing to install additional components before being able to run a multimedia title on their machine - it was intended to "just work". Although an advantage for some standalone projects, this philosophy suffered from an inability to keep up with new media developments.
GLPro was moved into a separate company, GMedia , in early 2000, which closed their doors in February 2001 just as the native Macintosh and Linux support was entering public beta testing. Bridges is no longer involved in its development, and as of February 2002 is developing a new multimedia authoring system called AfterGRASP designed to be backwards compatible with GLPro with less emphasis on built-in media playback support.
GLPro is currently owned by Comlet Technologies, LLC. and is one of the primary languages used in its Comlets Message System product. | https://en.wikipedia.org/wiki/Graphics_Animation_System_for_Professionals |
Graphics BBS ( GBBS ) was a bulletin board system server developed from 1989 to 1992 by Eric Anderson as part of his thesis at Chisholm Institute of Technology . Although it had superior graphics capabilities compared to RIP , it was harder to integrate into existing BBS's, and so was ultimately less popular. [ 1 ]
GBBS allowed sending graphics defined by BASIC commands, as well as GIF images. Since the images were cached between sessions, each image only needed to be downloaded once, so these connections were often as fast as a text BBS.
The software was primarily used around Melbourne until the Internet killed the old bulletin boards. [ 2 ]
This software article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Graphics_BBS |
Most of the synthesized zinc oxide (ZnO) nanostructures in different geometric configurations such as nanowires , nanorods , nanobelts and nanosheets are usually in the wurtzite crystal structure . However, it was found from density functional theory calculations that for ultra-thin films of ZnO, the graphite-like structure was energetically more favourable as compared to the wurtzite structure. [ 1 ] [ 2 ] This phase transformation from the wurtzite lattice to the graphite-like structure is stable only for ZnO films up to a thickness of several Zn-O layers, and it was subsequently verified by experiment. [ 3 ] A similar phase transition was also observed in ZnO nanowires when they were subjected to uniaxial tensile loading. [ 4 ] However, with the use of the first-principles all electron full-potential method, it was observed that the wurtzite to graphite-like phase transformation for ultra-thin ZnO films will not occur in the presence of a significant amount of oxygen vacancies (V o ) at the Zn-terminated (0001) surface of the thin film. [ 5 ] The absence of the structural phase transformation was explained in terms of the Coulomb attraction at the surfaces. [ 5 ] The graphitic ZnO thin films are structurally similar to multilayer graphite and are expected to have interesting mechanical and electronic properties for potential nanoscale applications. In addition, density functional theory calculations and experimental observations also indicate that the concentration of V o is highest near the surfaces compared to the inner parts of the nanostructures. [ 6 ] [ 7 ] This is due to the lower V o defect formation energies near the surfaces of the nanostructures as compared to their interior. [ 6 ] [ 7 ]
This nanotechnology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Graphite-like_zinc_oxide_nanostructure |
In the area of solid state chemistry , graphite intercalation compounds are a family of materials prepared from graphite. In particular, the sheets of carbon that comprise graphite can be pried apart by the insertion ( intercalation ) of ions. The graphite is viewed as a host and the inserted ions as guests . The materials have the formula (guest)C n where n ≥ 6. The insertion of the guests increases the distance between the carbon sheets. Common guests are reducing agents such as alkali metals . Strong oxidants also intercalate into graphite. Intercalation involves electron transfer into or out of the carbon sheets. So, in some sense, graphite intercalation compounds are salts. Intercalation is often reversible: the inserted ions can be removed and the sheets of carbon collapse to a graphite-like structure.
The properties of graphite intercalation compounds differ from those of the parent graphite. [ 1 ] [ 2 ]
These materials are prepared by treating graphite with a strong oxidant or a strong reducing agent:
The reaction is reversible.
The host (graphite) and the guest X interact by charge transfer . An analogous process is the basis of commercial lithium-ion batteries .
In a graphite intercalation compound not every layer is necessarily occupied by guests. In so-called stage 1 compounds , graphite layers and intercalated layers alternate and in stage 2 compounds , two graphite layers with no guest material in between alternate with an intercalated layer. The actual composition may vary and therefore these compounds are an example of non-stoichiometric compounds. It is customary to specify the composition together with the stage. The layers are pushed apart upon incorporation of the guest ions.
One of the best studied graphite intercalation compounds, KC 8 , is prepared by melting potassium over graphite powder. The potassium is absorbed into the graphite and the material changes color from black to bronze. [ 3 ] The resulting solid is pyrophoric . [ 4 ] The composition is explained by assuming that the potassium to potassium distance is twice the distance between hexagons in the carbon framework. The bond between anionic graphite layers and potassium cations is ionic. The electrical conductivity of the material is greater than that of α-graphite. [ 4 ] [ 5 ] KC 8 is a superconductor with a very low critical temperature T c = 0.14 K. [ 6 ] Heating KC 8 leads to the formation of a series of decomposition products as the K atoms are eliminated: [ citation needed ]
Via the intermediates KC 24 (blue in color), [ 3 ] KC 36 , KC 48 , ultimately the compound KC 60 results.
The stoichiometry MC 8 is observed for M = K, Rb and Cs. For smaller ions M = Li + , Sr 2+ , Ba 2+ , Eu 2+ , Yb 3+ , and Ca 2+ , the limiting stoichiometry is MC 6 . [ 6 ] Calcium graphite CaC 6 is obtained by immersing highly oriented pyrolytic graphite in liquid Li–Ca alloy for 10 days at 350 °C. The crystal structure of CaC 6 belongs to the R 3 m space group. The graphite interlayer distance increases upon Ca intercalation from 3.35 to 4.524 Å, and the carbon-carbon distance increases from 1.42 to 1.444 Å.
With barium and ammonia , the cations are solvated, giving the stoichiometry ( Ba(NH 3 ) 2.5 C 10.9 (stage 1)) or those with caesium , hydrogen and potassium ( CsC 8 ·K 2 H 4/3 C 8 (stage 1)). [ clarification needed ]
In situ adsorption on free-standing graphene and intercalation in bilayer graphene of the alkali metals K, Cs, and Li was observed by means of low-energy electron microscopy. [ 7 ]
Different from other alkali metals, the amount of Na intercalation is very small. Quantum-mechanical calculations show that this originates from a quite general phenomenon: among the alkali and alkaline earth metals, Na and Mg generally have the weakest chemical binding to a given substrate, compared with the other elements in the same group of the periodic table. [ 8 ] The phenomenon arises from the competition between trends in the ionization energy and the ion–substrate coupling, down the columns of the periodic table. [ 8 ] However, considerable Na intercalation into graphite can occur in cases when the ion is wrapped in a solvent shell through the process of co-intercalation. A complex magnesium(I) species has also been intercalated into graphite. [ 9 ]
The intercalation compounds graphite bisulfate and graphite perchlorate can be prepared by treating graphite with strong oxidizing agents in the presence of strong acids. In contrast to the potassium and calcium graphites, the carbon layers are oxidized in this process:
In graphite perchlorate, planar layers of carbon atoms are 794 picometers apart, separated by ClO 4 − ions. Cathodic reduction of graphite perchlorate is analogous to heating KC 8 , which leads to a sequential elimination of HClO 4 .
Both graphite bisulfate and graphite perchlorate are better conductors as compared to graphite, as predicted by using a positive-hole mechanism. [ 4 ] Reaction of graphite with [O 2 ] + [AsF 6 ] − affords the salt [C 8 ] + [AsF 6 ] − . [ 4 ]
A number of metal halides intercalate into graphite. The chloride derivatives have been most extensively studied. Examples include MCl 2 (M = Zn, Ni, Cu, Mn), MCl 3 (M = Al, Fe, Ga), MCl 4 (M = Zr, Pt), etc. [ 1 ] The materials consist of close-packed metal halide layers between sheets of carbon. The derivative C ~8 FeCl 3 exhibits spin glass behavior. [ 10 ] It proved to be a particularly fertile system on which to study phase transitions. [ citation needed ] A stage n magnetic graphite intercalation compound has n graphite layers separating successive magnetic layers. As the stage number increases, the interaction between spins in successive magnetic layers becomes weaker and 2D magnetic behaviour may arise.
Chlorine and bromine reversibly intercalate into graphite. Iodine does not. Fluorine reacts irreversibly. In the case of bromine, the following stoichiometries are known: C n Br for n = 8, 12, 14, 16, 20, and 28.
Because it forms irreversibly, carbon monofluoride is often not classified as an intercalation compound. It has the formula (CF) x . It is prepared by reaction of gaseous fluorine with graphitic carbon at 215–230 °C. The color is greyish, white, or yellow. The bond between the carbon and fluorine atoms is covalent. Tetracarbon monofluoride ( C 4 F ) is prepared by treating graphite with a mixture of fluorine and hydrogen fluoride at room temperature. The compound has a blackish-blue color. Carbon monofluoride is not electrically conductive. It has been studied as a cathode material in one type of primary (non-rechargeable) lithium batteries .
Graphite oxide is an unstable yellow solid.
Graphite intercalation compounds have fascinated materials scientists for many years owing to their diverse electronic and electrical properties.
Among the superconducting graphite intercalation compounds, CaC 6 exhibits the highest critical temperature T c = 11.5 K, which further increases under applied pressure (15.1 K at 8 GPa). [ 6 ] Superconductivity in these compounds is thought to be related to the role of an interlayer state, a free electron like band lying roughly 2 eV (0.32 aJ) above the Fermi level ; superconductivity only occurs if the interlayer state is occupied. [ 11 ] Angle-resolved photoemission spectroscopy measurements on pure CaC 6 , conducted using high quality ultraviolet light, revealed the opening of a superconducting gap in the π* band, indicating a substantial contribution to the total electron–phonon-coupling strength from the π*-interlayer interband interaction. [ 11 ]
The bronze-colored material KC 8 is one of the strongest reducing agents known. It has also been used as a catalyst in polymerizations and as a coupling reagent for aryl halides to biphenyls . [ 12 ] In one study, freshly prepared KC 8 was treated with 1-iodododecane delivering a modification ( micrometre scale carbon platelets with long alkyl chains sticking out providing solubility) that is soluble in chloroform . [ 12 ] Another potassium graphite compound, KC 24 , has been used as a neutron monochromator. A new essential application for potassium graphite was introduced by the invention of the potassium-ion battery . Like the lithium-ion battery , the potassium-ion battery should use a carbon-based anode instead of a metallic anode. In this circumstance, the stable structure of potassium graphite is an important advantage. | https://en.wikipedia.org/wiki/Graphite_intercalation_compound |
Graphitic carbon nitride (g-C 3 N 4 ) is a family of carbon nitride compounds with a general formula near to C 3 N 4 (albeit typically with non-zero amounts of hydrogen) and two major substructures based on heptazine and poly(triazine imide) units which, depending on reaction conditions, exhibit different degrees of condensation , properties and reactivities .
Graphitic carbon nitride can be made by polymerization of cyanamide , dicyandiamide or melamine . The initially formed polymeric C 3 N 4 structure, melon , with pendant amino groups , is a highly ordered polymer . Further reaction leads to more condensed and less defective C 3 N 4 species, based on tri-s-triazine (C 6 N 7 ) units as elementary building blocks. [ 2 ]
Graphitic carbon nitride can also be prepared by electrodeposition on Si (100) substrate from a saturated acetone solution of cyanuric trichloride and melamine (ratio =1: 1.5) at room temperature. [ 3 ]
Well-crystallized graphitic carbon nitride nanocrystallites can also be prepared via benzene-thermal reaction between C 3 N 3 Cl 3 and NaNH 2 at 180–220 °C for 8–12 h. [ 4 ]
Recently, a new method of synthesizing graphitic carbon nitrides by heating a mixture of melamine and uric acid to 400-600 °C in the presence of alumina has been reported. Alumina favored the deposition of the graphitic carbon nitride layers on the exposed surface. This method can be regarded as an in situ chemical vapor deposition (CVD). [ 5 ]
Characterization of crystalline g-C 3 N 4 can be carried out by identifying the triazine ring existing in the products by X-ray photoelectron spectroscopy (XPS) measurements, photoluminescence spectra and Fourier transform infrared spectroscopy (FTIR) spectrum (peaks at 800 cm −1 , 1310 cm −1 and 1610 cm −1 ). [ 4 ]
Due to the special semiconductor properties of carbon nitrides, they show unexpected catalytic activity for a variety of reactions, such as for the activation of benzene , trimerization reactions, and also the activation of carbon dioxide ( artificial photosynthesis ). [ 2 ]
A commercial graphitic carbon nitride is available under the brand name Nicanite. In its micron-sized graphitic form, it can be used for tribological coatings, biocompatible medical coatings, chemically inert coatings, insulators and for energy storage solutions. [ 6 ] Graphitic carbon nitride is reported as one of the best hydrogen storage materials. [ 7 ] [ 8 ] It can also be used as a support for catalytic nanoparticles . [ 1 ]
Due to their properties (primarily large, tuneable band gaps and efficient intercalation of salts) graphitic carbon nitrides are under research for a variety of applications: | https://en.wikipedia.org/wiki/Graphitic_carbon_nitride |
Graphitization is a process of transforming a carbonaceous material, such as coal or the carbon in certain forms of iron alloys, into graphite . [ 1 ]
The graphitization process involves a restructuring of the molecular structure of the carbon material. In the initial state, these materials can have an amorphous structure or a crystalline structure different from graphite. Graphitization generally occurs at high temperatures (up to 3,000 °C (5,430 °F)), and can be accelerated by catalysts such as iron or nickel . [ 2 ]
When carbonaceous material is exposed to high temperatures for an extended period of time, the carbon atoms begin to rearrange and form layered crystal planes. In the structure of graphite, carbon atoms are arranged in flat hexagonal sheets that are stacked on top of each other. These crystal planes give graphite its characteristic flake structure, giving it specific properties such as good electrical and thermal conductivity, low friction and excellent lubrication.
Graphitization can be observed in various contexts. For example, it occurs naturally during the formation of certain types of coal or graphite in the Earth's crust . It can also be artificially induced during the manufacture of specific carbon materials, such as graphite electrodes used in fuel cells, nuclear reactors or metallurgical applications. [ 3 ]
Graphitization is of particular interest in the field of metallurgy. Some iron alloys, such as cast iron, can undergo graphitization heat treatment to improve their mechanical properties and machinability. During this process, the carbon dissolved in the iron alloy matrix separates and restructures as graphite, which gives the cast iron its specific characteristics, such as improved ductility and wear resistance. | https://en.wikipedia.org/wiki/Graphitization |
Graphmatica is a graphing program created by Keith Hertzer, [ 1 ] a graduate of the University of California, Berkeley . It runs on Microsoft Windows (all versions), Mac OS X 10.5 and higher, and iOS 5.0 and higher.
Graphmatica for Windows and Macs is distributed free of charge for evaluation purposes. After one month, non-commercial users are asked to pay a $25 licensing fee. Other licensing plans are available for commercial users.
Graphmatica for iOS is distributed via the Apple App Store.
Graphmatica can graph Cartesian functions , relations , and inequalities, plus polar , parametric and ordinary differential equations .
This article about mathematics software is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Graphmatica |
In graph theory and statistics , a graphon (also known as a graph limit ) is a symmetric measurable function W : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W:[0,1]^{2}\to [0,1]} , that is important in the study of dense graphs . Graphons arise both as a natural notion for the limit of a sequence of dense graphs, and as the fundamental defining objects of exchangeable random graph models. Graphons are tied to dense graphs by the following pair of observations: the random graph models defined by graphons give rise to dense graphs almost surely , and, by the regularity lemma , graphons capture the structure of arbitrary large dense graphs.
A graphon is a symmetric measurable function W : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W:[0,1]^{2}\to [0,1]} . Usually a graphon is understood as defining an exchangeable random graph model according to the following scheme: each vertex j {\displaystyle j} of the graph is assigned an independent random value u j {\displaystyle u_{j}} chosen uniformly from [ 0 , 1 ] {\displaystyle [0,1]} , and each edge ( i , j ) {\displaystyle (i,j)} is then included in the graph independently with probability W ( u i , u j ) {\displaystyle W(u_{i},u_{j})} .
A random graph model is an exchangeable random graph model if and only if it can be defined in terms of a (possibly random) graphon in this way.
The model based on a fixed graphon W {\displaystyle W} is sometimes denoted G ( n , W ) {\displaystyle \mathbb {G} (n,W)} , by analogy with the Erdős–Rényi model of random graphs.
A graph generated from a graphon W {\displaystyle W} in this way is called a W {\displaystyle W} -random graph.
It follows from this definition and the law of large numbers that, if W ≠ 0 {\displaystyle W\neq 0} , exchangeable random graph models are dense almost surely. [ 1 ]
The simplest example of a graphon is W ( x , y ) ≡ p {\displaystyle W(x,y)\equiv p} for some constant p ∈ [ 0 , 1 ] {\displaystyle p\in [0,1]} . In this case the associated exchangeable random graph model is the Erdős–Rényi model G ( n , p ) {\displaystyle G(n,p)} that includes each edge independently with probability p {\displaystyle p} .
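To make the sampling scheme concrete, the following is a minimal sketch (illustrative only, with assumed graphons and sizes) of drawing a W-random graph: each vertex receives an independent uniform label and each edge is included independently with probability given by the graphon evaluated at the two labels. The constant graphon recovers the Erdős–Rényi model; the second graphon is the half-graphon discussed later in the article.

```python
import numpy as np

def sample_w_random_graph(W, n, rng=None):
    """Draw an n-vertex graph from the exchangeable model defined by graphon W."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)                     # latent uniform label for each vertex
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.uniform() < W(u[i], u[j]):   # include each edge independently
                A[i, j] = A[j, i] = 1
    return A

# Constant graphon: recovers the Erdos-Renyi model G(n, p).
p = 0.3
er_graphon = lambda x, y: p

# A non-constant example: the half-graphon.
half_graphon = lambda x, y: 1.0 if abs(x - y) >= 0.5 else 0.0

A1 = sample_w_random_graph(er_graphon, 50)
A2 = sample_w_random_graph(half_graphon, 50)
print(A1.sum() // 2, A2.sum() // 2)             # edge counts of the two samples
```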
If we instead start with a graphon that is piecewise constant, obtained by dividing the unit square [ 0 , 1 ] 2 {\displaystyle [0,1]^{2}} into k × k {\displaystyle k\times k} equal blocks and setting W {\displaystyle W} equal to a constant p ℓ m {\displaystyle p_{\ell m}} on the ( ℓ , m ) {\displaystyle (\ell ,m)} block, then
the resulting exchangeable random graph model is the k {\displaystyle k} community stochastic block model , a generalization of the Erdős–Rényi model.
We can interpret this as a random graph model consisting of k {\displaystyle k} distinct Erdős–Rényi graphs with parameters p ℓ ℓ {\displaystyle p_{\ell \ell }} respectively, with bigraphs between them where each possible edge between blocks ( ℓ , ℓ ) {\displaystyle (\ell ,\ell )} and ( m , m ) {\displaystyle (m,m)} is included independently with probability p ℓ m {\displaystyle p_{\ell m}} .
Many other popular random graph models can be understood as exchangeable random graph models defined by some graphon, a detailed survey is included in Orbanz and Roy. [ 1 ]
A random graph of size n {\displaystyle n} can be represented as a random n × n {\displaystyle n\times n} adjacency matrix . In order to impose consistency (in the sense of projectivity ) between random graphs of different sizes it is natural to study the sequence of adjacency matrices arising as the upper-left n × n {\displaystyle n\times n} sub-matrices of some infinite array of random variables; this allows us to generate G n {\displaystyle G_{n}} by adding a node to G n − 1 {\displaystyle G_{n-1}} and sampling the edges ( j , n ) {\displaystyle (j,n)} for j < n {\displaystyle j<n} . With this perspective, random graphs are defined as random infinite symmetric arrays ( X i j ) {\displaystyle (X_{ij})} .
Following the fundamental importance of exchangeable sequences in classical probability, it is natural to look for an analogous notion in the random graph setting. One such notion is given by jointly exchangeable matrices; i.e. random matrices satisfying
for all permutations σ {\displaystyle \sigma } of the natural numbers, where = d {\displaystyle {\overset {d}{=}}} means equal in distribution . Intuitively, this condition means that the distribution of the random graph is unchanged by a relabeling of its vertices: that is, the labels of the vertices carry no information.
There is a representation theorem for jointly exchangeable random adjacency matrices, analogous to de Finetti’s representation theorem for exchangeable sequences. This is a special case of the Aldous–Hoover theorem for jointly exchangeable arrays and, in this setting, asserts that the random matrix ( X i j ) {\displaystyle (X_{ij})} is generated by first sampling an independent uniform random value u i {\displaystyle u_{i}} on [ 0 , 1 ] {\displaystyle [0,1]} for each index i {\displaystyle i} and then, conditionally on these values, setting X i j = X j i = 1 {\displaystyle X_{ij}=X_{ji}=1} independently with probability W ( u i , u j ) {\displaystyle W(u_{i},u_{j})} ,
where W : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W:[0,1]^{2}\to [0,1]} is a (possibly random) graphon. That is, a random graph model has a jointly exchangeable adjacency matrix if and only if it is a jointly exchangeable random graph model defined in terms of some graphon.
Due to identifiability issues, it is impossible to estimate either the graphon function W {\displaystyle W} or the node latent positions u i , {\displaystyle u_{i},} and there are two main directions of graphon estimation. One direction aims at estimating W {\displaystyle W} up to an equivalence class, [ 2 ] [ 3 ] while the other estimates the probability matrix induced by W {\displaystyle W} . [ 4 ] [ 5 ]
Any graph on n {\displaystyle n} vertices { 1 , 2 , … , n } {\displaystyle \{1,2,\dots ,n\}} can be identified with its adjacency matrix A G {\displaystyle A_{G}} .
This matrix corresponds to a step function W G : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W_{G}:[0,1]^{2}\to [0,1]} , defined by partitioning [ 0 , 1 ] {\displaystyle [0,1]} into intervals I 1 , I 2 , … , I n {\displaystyle I_{1},I_{2},\dots ,I_{n}} such that I j {\displaystyle I_{j}} has interior ( j − 1 n , j n ) {\displaystyle \left({\frac {j-1}{n}},{\frac {j}{n}}\right)} and for each ( x , y ) ∈ I i × I j {\displaystyle (x,y)\in I_{i}\times I_{j}} , setting W G ( x , y ) {\displaystyle W_{G}(x,y)} equal to the ( i , j ) th {\displaystyle (i,j)^{\text{th}}} entry of A G {\displaystyle A_{G}} .
This function W G {\displaystyle W_{G}} is the associated graphon of the graph G {\displaystyle G} .
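As a small illustration of this construction (a sketch, not taken from any reference implementation, with an assumed example graph), the associated step-function graphon can be evaluated directly from the adjacency matrix by mapping a point of the unit square to the corresponding pair of intervals.

```python
import numpy as np

def associated_graphon(A):
    """Return W_G, the step-function graphon of a graph with adjacency matrix A."""
    n = len(A)
    def W(x, y):
        # interval I_j has interior ((j-1)/n, j/n); min() handles the endpoint 1.0
        i = min(int(x * n), n - 1)
        j = min(int(y * n), n - 1)
        return A[i][j]
    return W

# Example: a triangle (3-cycle) on vertices 0, 1, 2.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
W = associated_graphon(A)
print(W(0.1, 0.5), W(0.1, 0.2))   # 1 (different intervals), 0 (same interval)
```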
In general, if we have a sequence of graphs ( G n ) {\displaystyle (G_{n})} where the number of vertices of G n {\displaystyle G_{n}} goes to infinity, we can analyze the limiting behavior of the sequence by considering the limiting behavior of the functions ( W G n ) {\displaystyle (W_{G_{n}})} .
If these graphs converge (according to some suitable definition of convergence ), then we expect the limit of these graphs to correspond to the limit of these associated functions.
This motivates the definition of a graphon (short for "graph function") as a symmetric measurable function W : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W:[0,1]^{2}\to [0,1]} which captures the notion of a limit of a sequence of graphs. It turns out that for sequences of dense graphs, several apparently distinct notions of convergence are equivalent and under all of them the natural limit object is a graphon. [ 6 ]
Take a sequence of ( G n ) {\displaystyle (G_{n})} Erdős–Rényi random graphs G n = G ( n , p ) {\displaystyle G_{n}=G(n,p)} with some fixed parameter p {\displaystyle p} .
Intuitively, as n {\displaystyle n} tends to infinity, the limit of this sequence of graphs is determined solely by edge density of these graphs.
In the space of graphons, it turns out that such a sequence converges almost surely to the constant W ( x , y ) ≡ p {\displaystyle W(x,y)\equiv p} , which captures the above intuition.
Take the sequence ( H n ) {\displaystyle (H_{n})} of half-graphs , defined by taking H n {\displaystyle H_{n}} to be the bipartite graph on 2 n {\displaystyle 2n} vertices u 1 , u 2 , … , u n {\displaystyle u_{1},u_{2},\dots ,u_{n}} and v 1 , v 2 , … , v n {\displaystyle v_{1},v_{2},\dots ,v_{n}} such that u i {\displaystyle u_{i}} is adjacent to v j {\displaystyle v_{j}} precisely when i ≤ j {\displaystyle i\leq j} . If the vertices are listed in the presented order, then the adjacency matrix A H n {\displaystyle A_{H_{n}}} has two corners of "half square" block matrices filled with ones, with the rest of the entries equal to zero. For example, the adjacency matrix of H 3 {\displaystyle H_{3}} is given by
[ 0 0 0 1 1 1 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 1 1 1 0 0 0 ] . {\displaystyle {\begin{bmatrix}0&0&0&1&1&1\\0&0&0&0&1&1\\0&0&0&0&0&1\\1&0&0&0&0&0\\1&1&0&0&0&0\\1&1&1&0&0&0\end{bmatrix}}.}
As n {\displaystyle n} gets large, these corners of ones "smooth" out.
Matching this intuition, the sequence ( H n ) {\displaystyle (H_{n})} converges to the half-graphon W {\displaystyle W} defined by W ( x , y ) = 1 {\displaystyle W(x,y)=1} when | x − y | ≥ 1 / 2 {\displaystyle |x-y|\geq 1/2} and W ( x , y ) = 0 {\displaystyle W(x,y)=0} otherwise.
Take the sequence ( K n , n ) {\displaystyle (K_{n,n})} of complete bipartite graphs with equal sized parts.
If we order the vertices by placing all vertices in one part at the beginning and placing the vertices of the other part at the end, the adjacency matrix of ( K n , n ) {\displaystyle (K_{n,n})} looks like a block off-diagonal matrix, with two blocks of ones and two blocks of zeros.
For example, the adjacency matrix of K 2 , 2 {\displaystyle K_{2,2}} is given by
[ 0 0 1 1 0 0 1 1 1 1 0 0 1 1 0 0 ] . {\displaystyle {\begin{bmatrix}0&0&1&1\\0&0&1&1\\1&1&0&0\\1&1&0&0\end{bmatrix}}.}
As n {\displaystyle n} gets larger, this block structure of the adjacency matrix remains constant, so that this sequence of graphs converges to a "complete bipartite" graphon W {\displaystyle W} defined by W ( x , y ) = 1 {\displaystyle W(x,y)=1} whenever min ( x , y ) ≤ 1 / 2 {\displaystyle \min(x,y)\leq 1/2} and max ( x , y ) > 1 / 2 {\displaystyle \max(x,y)>1/2} , and setting W ( x , y ) = 0 {\displaystyle W(x,y)=0} otherwise.
If we instead order the vertices of K n , n {\displaystyle K_{n,n}} by alternating between parts, the adjacency matrix has a chessboard structure of zeros and ones.
For example, under this ordering, the adjacency matrix of K 2 , 2 {\displaystyle K_{2,2}} is given by
[ 0 1 0 1 1 0 1 0 0 1 0 1 1 0 1 0 ] . {\displaystyle {\begin{bmatrix}0&1&0&1\\1&0&1&0\\0&1&0&1\\1&0&1&0\end{bmatrix}}.}
As n {\displaystyle n} gets larger, the adjacency matrices become a finer and finer chessboard.
Despite this behavior, we still want the limit of ( K n , n ) {\displaystyle (K_{n,n})} to be unique and result in the graphon from example 3.
This means that when we formally define convergence for a sequence of graphs, the definition of a limit should be agnostic to relabelings of the vertices.
Take a random sequence ( G n ) {\displaystyle (G_{n})} of W {\displaystyle W} -random graphs by drawing G n ∼ G ( n , W ) {\displaystyle G_{n}\sim \mathbb {G} (n,W)} for some fixed graphon W {\displaystyle W} .
Then just like in the first example from this section, it turns out that ( G n ) {\displaystyle (G_{n})} converges to W {\displaystyle W} almost surely.
Given graph G {\displaystyle G} with associated graphon W = W G {\displaystyle W=W_{G}} , we can recover graph theoretic properties and parameters of G {\displaystyle G} by integrating transformations of W {\displaystyle W} . For example, the edge density (i.e. average degree divided by number of vertices) of G {\displaystyle G} is given by the integral ∫ 0 1 ∫ 0 1 W ( x , y ) d x d y . {\displaystyle \int _{0}^{1}\int _{0}^{1}W(x,y)\;\mathrm {d} x\,\mathrm {d} y.} This is because W {\displaystyle W} is { 0 , 1 } {\displaystyle \{0,1\}} -valued, and each edge ( i , j ) {\displaystyle (i,j)} in G {\displaystyle G} corresponds to a region I i × I j {\displaystyle I_{i}\times I_{j}} of area 1 / n 2 {\displaystyle 1/n^{2}} where W {\displaystyle W} equals 1 {\displaystyle 1} .
Similar reasoning shows that the triangle density in G {\displaystyle G} is equal to 1 6 ∫ 0 1 ∫ 0 1 ∫ 0 1 W ( x , y ) W ( y , z ) W ( z , x ) d x d y d z . {\displaystyle {\frac {1}{6}}\int _{0}^{1}\int _{0}^{1}\int _{0}^{1}W(x,y)W(y,z)W(z,x)\;\mathrm {d} x\,\mathrm {d} y\,\mathrm {d} z.}
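The two integrals above can be checked numerically for the step-function graphon of a small graph. The following sketch (with an assumed example graph; Monte Carlo sample size chosen arbitrarily) compares crude Monte Carlo estimates of the integrals with the exact combinatorial counts obtained from the adjacency matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed example graph: a 5-cycle plus one chord (vertices 1-4).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 0, 1, 0]])
n = len(A)
W = lambda x, y: A[min(int(x * n), n - 1)][min(int(y * n), n - 1)]

# Monte Carlo estimates of the integrals over the unit square / unit cube.
m = 100_000
xs, ys, zs = rng.uniform(size=(3, m))
edge_density_mc = np.mean([W(x, y) for x, y in zip(xs, ys)])
tri_density_mc = np.mean([W(x, y) * W(y, z) * W(z, x)
                          for x, y, z in zip(xs, ys, zs)]) / 6

# Exact values computed combinatorially from the adjacency matrix.
edge_density_exact = A.sum() / n**2                    # 2|E| / n^2
tri_density_exact = np.trace(A @ A @ A) / (6 * n**3)   # number of triangles / n^3
print(edge_density_mc, edge_density_exact)
print(tri_density_mc, tri_density_exact)
```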
There are many different ways to measure the distance between two graphs.
If we are interested in metrics that "preserve" extremal properties of graphs, then we should restrict our attention to metrics that identify random graphs as similar.
For example, if we randomly draw two graphs independently from an Erdős–Rényi model G ( n , p ) {\displaystyle G(n,p)} for some fixed p {\displaystyle p} , the distance between these two graphs under a "reasonable" metric should be close to zero with high probability for large n {\displaystyle n} .
Naively, given two graphs on the same vertex set, one might define their distance as the number of edges that must be added or removed to get from one graph to the other, i.e. their edit distance . However, the edit distance does not identify random graphs as similar; in fact, two graphs drawn independently from G ( n , 1 2 ) {\displaystyle G(n,{\tfrac {1}{2}})} have an expected (normalized) edit distance of 1 2 {\displaystyle {\tfrac {1}{2}}} .
There are two natural metrics that behave well on dense random graphs in the sense that we want.
The first is a sampling metric, which says that two graphs are close if their distributions of subgraphs are close.
The second is an edge discrepancy metric, which says two graphs are close when their edge densities are close on all their corresponding subsets of vertices.
Miraculously, a sequence of graphs converges with respect to one metric precisely when it converges with respect to the other.
Moreover, the limit objects under both metrics turn out to be graphons.
The equivalence of these two notions of convergence mirrors how various notions of quasirandom graphs are equivalent. [ 7 ]
One way to measure the distance between two graphs G {\displaystyle G} and H {\displaystyle H} is to compare their relative subgraph counts.
That is, for each graph F {\displaystyle F} we can compare the number of copies of F {\displaystyle F} in G {\displaystyle G} and F {\displaystyle F} in H {\displaystyle H} .
If these numbers are close for every graph F {\displaystyle F} , then intuitively G {\displaystyle G} and H {\displaystyle H} are similar looking graphs. Rather than dealing directly with subgraphs, however, it turns out to be easier to work with graph homomorphisms. This is fine when dealing with large, dense graphs, since in this scenario the number of subgraphs and the number of graph homomorphisms from a fixed graph are asymptotically equal.
Given two graphs F {\displaystyle F} and G {\displaystyle G} , the homomorphism density t ( F , G ) {\displaystyle t(F,G)} of F {\displaystyle F} in G {\displaystyle G} is defined to be the number of graph homomorphisms from F {\displaystyle F} to G {\displaystyle G} , divided by the total number | V ( G ) | | V ( F ) | {\displaystyle |V(G)|^{|V(F)|}} of maps from the vertices of F {\displaystyle F} to the vertices of G {\displaystyle G} .
In other words, t ( F , G ) {\displaystyle t(F,G)} is the probability a randomly chosen map from the vertices of F {\displaystyle F} to the vertices of G {\displaystyle G} sends adjacent vertices in F {\displaystyle F} to adjacent vertices in G {\displaystyle G} .
Graphons offer a simple way to compute homomorphism densities.
Indeed, given a graph G {\displaystyle G} with associated graphon W G {\displaystyle W_{G}} and another F {\displaystyle F} , we have
t ( F , G ) = ∫ ∏ ( i , j ) ∈ E ( F ) W G ( x i , x j ) { d x i } i ∈ V ( F ) {\displaystyle t(F,G)=\int \prod _{(i,j)\in E(F)}W_{G}(x_{i},x_{j})\;\left\{\mathrm {d} x_{i}\right\}_{i\in V(F)}}
where the integral is multidimensional, taken over the unit hypercube [ 0 , 1 ] V ( F ) {\displaystyle [0,1]^{V(F)}} .
This follows from the definition of an associated graphon, by considering when the above integrand is equal to 1 {\displaystyle 1} .
We can then extend the definition of homomorphism density to arbitrary graphons W {\displaystyle W} , by using the same integral and defining
t ( F , W ) = ∫ ∏ ( i , j ) ∈ E ( F ) W ( x i , x j ) { d x i } i ∈ V ( F ) {\displaystyle t(F,W)=\int \prod _{(i,j)\in E(F)}W(x_{i},x_{j})\;\left\{\mathrm {d} x_{i}\right\}_{i\in V(F)}}
for any graph F {\displaystyle F} .
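A direct, if crude, way to evaluate this integral for an arbitrary graphon is Monte Carlo sampling over the unit hypercube. The sketch below illustrates the definition rather than any optimized method; the target graph F, the graphon W and the sample size are all assumed for the example.

```python
import numpy as np

def hom_density(F_edges, n_vertices, W, samples=100_000, seed=0):
    """Monte Carlo estimate of t(F, W) = E[ prod_{(i,j) in E(F)} W(x_i, x_j) ]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        x = rng.uniform(size=n_vertices)        # one point of the cube [0,1]^{V(F)}
        prod = 1.0
        for (i, j) in F_edges:
            prod *= W(x[i], x[j])
        total += prod
    return total / samples

# Assumed example: F is the 4-cycle and W(x, y) = x*y.
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
W = lambda x, y: x * y
print(hom_density(C4, 4, W))                    # exact value is (1/3)^4 ~ 0.0123
```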
Given this setup, we say a sequence of graphs ( G n ) {\displaystyle (G_{n})} is left-convergent if for every fixed graph F {\displaystyle F} , the sequence of homomorphism densities ( t ( F , G n ) ) {\displaystyle \left(t(F,G_{n})\right)} converges.
Although not evident from the definition alone, if ( G n ) {\displaystyle (G_{n})} converges in this sense, then there always exists a graphon W {\displaystyle W} such that for every graph F {\displaystyle F} , we have lim n → ∞ t ( F , G n ) = t ( F , W ) {\displaystyle \lim _{n\to \infty }t(F,G_{n})=t(F,W)} simultaneously.
Take two graphs G {\displaystyle G} and H {\displaystyle H} on the same vertex set.
Because these graphs share the same vertices, one way to measure their distance is to restrict to subsets X , Y {\displaystyle X,Y} of the vertex set, and for each such pair of subsets compare the number of edges e G ( X , Y ) {\displaystyle e_{G}(X,Y)} from X {\displaystyle X} to Y {\displaystyle Y} in G {\displaystyle G} to the number of edges e H ( X , Y ) {\displaystyle e_{H}(X,Y)} between X {\displaystyle X} and Y {\displaystyle Y} in H {\displaystyle H} . If these numbers are similar for every pair of subsets (relative to the total number of vertices), then that suggests G {\displaystyle G} and H {\displaystyle H} are similar graphs.
As a preliminary formalization of this notion of distance, for any pair of graphs G {\displaystyle G} and H {\displaystyle H} on the same vertex set V {\displaystyle V} of size | V | = n {\displaystyle |V|=n} , define the labeled cut distance between G {\displaystyle G} and H {\displaystyle H} to be
d ◻ ( G , H ) = 1 n 2 max X , Y ⊆ V | e G ( X , Y ) − e H ( X , Y ) | . {\displaystyle d_{\square }(G,H)={\frac {1}{n^{2}}}\max _{X,Y\subseteq V}\left|e_{G}(X,Y)-e_{H}(X,Y)\right|.}
In other words, the labeled cut distance encodes the maximum discrepancy of the edge densities between G {\displaystyle G} and H {\displaystyle H} .
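For small graphs this maximum can be computed by brute force directly from the definition above; the following sketch (exponential in the number of vertices, with an assumed toy example) is illustrative only.

```python
from itertools import combinations

def labeled_cut_distance(A, B):
    """d_box(G, H) = (1/n^2) * max_{X,Y} |e_G(X,Y) - e_H(X,Y)| for two graphs on
    the same n vertices, given as adjacency matrices A and B (brute force)."""
    n = len(A)
    vertices = range(n)
    subsets = [list(c) for r in range(n + 1) for c in combinations(vertices, r)]
    best = 0.0
    for X in subsets:
        for Y in subsets:
            eG = sum(A[i][j] for i in X for j in Y)
            eH = sum(B[i][j] for i in X for j in Y)
            best = max(best, abs(eG - eH))
    return best / n**2

# Assumed toy example: a path 0-1-2 versus a triangle on the same 3 vertices.
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(labeled_cut_distance(path, tri))
```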
We can generalize this concept to graphons by expressing the edge density 1 n 2 e G ( X , Y ) {\displaystyle {\tfrac {1}{n^{2}}}e_{G}(X,Y)} in terms of the associated graphon W G {\displaystyle W_{G}} , giving the equality
d ◻ ( G , H ) = max X , Y ⊆ V | ∫ I X ∫ I Y W G ( x , y ) − W H ( x , y ) d x d y | {\displaystyle d_{\square }(G,H)=\max _{X,Y\subseteq V}\left|\int _{I_{X}}\int _{I_{Y}}W_{G}(x,y)-W_{H}(x,y)\;\mathrm {d} x\,\mathrm {d} y\right|}
where I X , I Y ⊆ [ 0 , 1 ] {\displaystyle I_{X},I_{Y}\subseteq [0,1]} are unions of intervals corresponding to the vertices in X {\displaystyle X} and Y {\displaystyle Y} . Note that this definition can still be used even when the graphs being compared do not share a vertex set.
This motivates the following more general definition.
Definition 1. For any symmetric, measurable function f : [ 0 , 1 ] 2 → R {\displaystyle f:[0,1]^{2}\to \mathbb {R} } , define the cut norm of f {\displaystyle f} to be the quantity
‖ f ‖ ◻ = sup S , T ⊆ [ 0 , 1 ] | ∫ S ∫ T f ( x , y ) d x d y | {\displaystyle \lVert f\rVert _{\square }=\sup _{S,T\subseteq [0,1]}\left|\int _{S}\int _{T}f(x,y)\;\mathrm {d} x\,\mathrm {d} y\right|} taken over all measurable subsets S , T {\displaystyle S,T} of the unit interval. [ 6 ]
This captures our earlier notion of labeled cut distance, as we have the equality ‖ W G − W H ‖ ◻ = d ◻ ( G , H ) {\displaystyle \lVert W_{G}-W_{H}\rVert _{\square }=d_{\square }(G,H)} .
This distance measure still has one major limitation: it can assign nonzero distance to two isomorphic graphs.
To make sure isomorphic graphs have distance zero, we should compute the minimum cut norm over all possible "relabellings" of the vertices.
This motivates the following definition of the cut distance.
Definition 2. For any pair of graphons U {\displaystyle U} and W {\displaystyle W} , define their cut distance to be
δ ◻ ( U , W ) = inf φ ‖ U − W φ ‖ ◻ {\displaystyle \delta _{\square }(U,W)=\inf _{\varphi }\lVert U-W^{\varphi }\rVert _{\square }} where W φ ( x , y ) = W ( φ ( x ) , φ ( y ) ) {\displaystyle W^{\varphi }(x,y)=W(\varphi (x),\varphi (y))} is the composition of W {\displaystyle W} with the map φ {\displaystyle \varphi } , and the infimum is taken over all measure-preserving bijections from the unit interval to itself. [ 8 ]
The cut distance between two graphs is defined to be the cut distance between their associated graphons.
We now say that a sequence of graphs ( G n ) {\displaystyle (G_{n})} is convergent under the cut distance if it is a Cauchy sequence under the cut distance δ ◻ {\displaystyle \delta _{\square }} . Although not a direct consequence of the definition, if such a sequence of graphs is Cauchy, then it always converges to some graphon W {\displaystyle W} .
As it turns out, for any sequence of graphs ( G n ) {\displaystyle (G_{n})} , left-convergence is equivalent to convergence under the cut distance, and furthermore, the limit graphon W {\displaystyle W} is the same. We can also consider convergence of graphons themselves using the same definitions, and the same equivalence is true. In fact, both notions of convergence are related more strongly through what are called counting lemmas . [ 6 ]
Counting Lemma. For any pair of graphons U {\displaystyle U} and W {\displaystyle W} , we have
| t ( F , U ) − t ( F , W ) | ≤ e ( F ) δ ◻ ( U , W ) {\displaystyle |t(F,U)-t(F,W)|\leq e(F)\delta _{\square }(U,W)} for all graphs F {\displaystyle F} .
The name "counting lemma" comes from the bounds that this lemma gives on homomorphism densities t ( F , W ) {\displaystyle t(F,W)} , which are analogous to subgraph counts of graphs. This lemma is a generalization of the graph counting lemma that appears in the field of regularity partitions , and it immediately shows that convergence under the cut distance implies left-convergence.
Inverse Counting Lemma. For every real number ε > 0 {\displaystyle \varepsilon >0} , there exist a real number η > 0 {\displaystyle \eta >0} and a positive integer k {\displaystyle k} such that for any pair of graphons U {\displaystyle U} and W {\displaystyle W} with
| t ( F , U ) − t ( F , W ) | ≤ η {\displaystyle |t(F,U)-t(F,W)|\leq \eta } for all graphs F {\displaystyle F} satisfying v ( F ) ≤ k {\displaystyle v(F)\leq k} ,
we must have δ ◻ ( U , W ) < ε {\displaystyle \delta _{\square }(U,W)<\varepsilon } .
This lemma shows that left-convergence implies convergence under the cut distance.
We can make the cut-distance into a metric by taking the set of all graphons and identifying two graphons U ∼ W {\displaystyle U\sim W} whenever δ ◻ ( U , W ) = 0 {\displaystyle \delta _{\square }(U,W)=0} .
The resulting space of graphons is denoted W ~ 0 {\displaystyle {\widetilde {\mathcal {W}}}_{0}} , and together with δ ◻ {\displaystyle \delta _{\square }} forms a metric space .
This space turns out to be compact .
Moreover, it contains the set of all finite graphs, represented by their associated graphons, as a dense subset .
These observations show that the space of graphons is a completion of the space of graphs with respect to the cut distance. One immediate consequence of this is the following.
Corollary 1. For every real number ε > 0 {\displaystyle \varepsilon >0} , there is an integer N {\displaystyle N} such that for every graphon W {\displaystyle W} , there is a graph G {\displaystyle G} with at most N {\displaystyle N} vertices such that δ ◻ ( W , W G ) < ε {\displaystyle \delta _{\square }(W,W_{G})<\varepsilon } .
To see why, let G {\displaystyle {\mathcal {G}}} be the set of graphs. Consider for each graph G ∈ G {\displaystyle G\in {\mathcal {G}}} the open ball B ◻ ( G , ε ) {\displaystyle B_{\square }(G,\varepsilon )} containing all graphons W {\displaystyle W} such that δ ◻ ( W , W G ) < ε {\displaystyle \delta _{\square }(W,W_{G})<\varepsilon } . The set of open balls for all graphs covers W ~ 0 {\displaystyle {\widetilde {\mathcal {W}}}_{0}} , so compactness implies that there is a finite subcover { B ◻ ( G , ε ) ∣ G ∈ G 0 } {\displaystyle \{B_{\square }(G,\varepsilon )\mid G\in {\mathcal {G}}_{0}\}} for some finite subset G 0 ⊂ G {\displaystyle {\mathcal {G}}_{0}\subset {\mathcal {G}}} . We can now take N {\displaystyle N} to be the largest number of vertices among the graphs in G 0 {\displaystyle {\mathcal {G}}_{0}} .
Compactness of the space of graphons ( W ~ 0 , δ ◻ ) {\displaystyle ({\widetilde {\mathcal {W}}}_{0},\delta _{\square })} can be thought of as an analytic formulation of Szemerédi's regularity lemma ; in fact, a stronger result than the original lemma. [ 9 ] Szemeredi's regularity lemma can be translated into the language of graphons as follows. Define a step function to be a graphon W {\displaystyle W} that is piecewise constant, i.e. for some partition P {\displaystyle {\mathcal {P}}} of [ 0 , 1 ] {\displaystyle [0,1]} , W {\displaystyle W} is constant on S × T {\displaystyle S\times T} for all S , T ∈ P {\displaystyle S,T\in {\mathcal {P}}} . The statement that a graph G {\displaystyle G} has a regularity partition is equivalent to saying that its associated graphon W G {\displaystyle W_{G}} is close to a step function.
The proof of compactness requires only the weak regularity lemma :
Weak Regularity Lemma for Graphons. For every graphon W {\displaystyle W} and ε > 0 {\displaystyle \varepsilon >0} , there is a step function W ′ {\displaystyle W'} with at most ⌈ 4 1 / ε 2 ⌉ {\displaystyle \lceil 4^{1/\varepsilon ^{2}}\rceil } steps such that ‖ W − W ′ ‖ ◻ ≤ ε {\displaystyle \lVert W-W'\rVert _{\square }\leq \varepsilon } .
but it can be used to prove stronger regularity results, such as the strong regularity lemma :
Strong Regularity Lemma for Graphons. For every sequence ε = ( ε 0 , ε 1 , … ) {\displaystyle \mathbf {\varepsilon } =(\varepsilon _{0},\varepsilon _{1},\dots )} of positive real numbers, there is a positive integer S {\displaystyle S} such that for every graphon W {\displaystyle W} , there is a graphon W ′ {\displaystyle W'} and a step function U {\displaystyle U} with k < S {\displaystyle k<S} steps such that ‖ W − W ′ ‖ 1 ≤ ε 0 {\displaystyle \lVert W-W'\rVert _{1}\leq \varepsilon _{0}} and ‖ W ′ − U ‖ ◻ ≤ ε k . {\displaystyle \lVert W'-U\rVert _{\square }\leq \varepsilon _{k}.}
The proof of the strong regularity lemma is similar in concept to Corollary 1 above. It turns out that every graphon W {\displaystyle W} can be approximated with a step function U {\displaystyle U} in the L 1 {\displaystyle L_{1}} norm , showing that the balls B 1 ( U , ε 0 ) {\displaystyle B_{1}(U,\varepsilon _{0})} cover W ~ 0 {\displaystyle {\widetilde {\mathcal {W}}}_{0}} . These sets are not open in the δ ◻ {\displaystyle \delta _{\square }} metric, but they can be enlarged slightly to be open. Now, we can take a finite subcover, and one can show that the desired condition follows.
The analytic nature of graphons allows greater flexibility in attacking inequalities related to homomorphisms.
For example, Sidorenko's conjecture is a major open problem in extremal graph theory , which asserts that for any graph G {\displaystyle G} on n {\displaystyle n} vertices with average degree p n {\displaystyle pn} (for some p ∈ [ 0 , 1 ] {\displaystyle p\in [0,1]} ) and bipartite graph H {\displaystyle H} on v {\displaystyle v} vertices and e {\displaystyle e} edges, the number of homomorphisms from H {\displaystyle H} to G {\displaystyle G} is at least p e n v {\displaystyle p^{e}n^{v}} . [ 10 ] Since this quantity is the expected number of labeled subgraphs of H {\displaystyle H} in a random graph G ( n , p ) {\displaystyle G(n,p)} , the conjecture can be interpreted as the claim that for any bipartite graph H {\displaystyle H} , the random graph achieves (in expectation) the minimum number of copies of H {\displaystyle H} over all graphs with some fixed edge density.
Many approaches to Sidorenko's conjecture formulate the problem as an integral inequality on graphons, which then allows the problem to be attacked using other analytical approaches. [ 11 ]
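In graphon language the conjecture is commonly written as the following integral inequality (a standard reformulation, given here for orientation rather than quoted from a particular source), valid for every bipartite graph H {\displaystyle H} :
t ( H , W ) = ∫ ∏ ( i , j ) ∈ E ( H ) W ( x i , x j ) { d x i } i ∈ V ( H ) ≥ ( ∫ 0 1 ∫ 0 1 W ( x , y ) d x d y ) e ( H ) = t ( K 2 , W ) e ( H ) {\displaystyle t(H,W)=\int \prod _{(i,j)\in E(H)}W(x_{i},x_{j})\;\left\{\mathrm {d} x_{i}\right\}_{i\in V(H)}\geq \left(\int _{0}^{1}\!\int _{0}^{1}W(x,y)\,\mathrm {d} x\,\mathrm {d} y\right)^{e(H)}=t(K_{2},W)^{e(H)}} .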
Graphons are naturally associated with dense simple graphs. There are extensions of this model to dense directed weighted graphs, often referred to as decorated graphons. [ 12 ] There are also recent extensions to the sparse graph regime, from both the perspective of random graph models [ 13 ] and graph limit theory. [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Graphon |
Graphotype was a brand name used by the Addressograph-Multigraph Company for its range of metal plate embossing machines. [ 1 ]
The machines were originally used to create address plates for the Addressograph system and mark military style identity tags and other industrial nameplates .
The machines came in a number of variants with sliding, hand wheel or keyboard selected letters. The keyboard models and some rotary select units were motorised to allow faster operation.
The machines used in making Addressograph plates would deboss (stamp into the plate) the letters, resulting in a well defined printing surface on the reverse side, resembling the typewriter fonts of the day, that would be used to transfer the details (usually customer addresses) onto envelopes or form letters. The same style was used from the early 1940s to the 1980s for the US military identification tags, and the tag details could be transferred onto medical charts using a hand held imprinter in field hospital conditions.
These same machines also found use in marking other nameplates and rating plates in industry; for this, an embossed marking style (raised letters in the style found on contemporary credit cards) was preferred for ease of reading and for maintaining a flat surface on the back of the plate. Military tags moved over to this style when the imprinting use was deprecated in the late 1960s, and new machines would only be supplied as embossing units, as the address plate market had been taken over by the computer revolution.
In graph theory , a class of graphs is said to have few cliques if every member of the class has a polynomial number of maximal cliques. [ 1 ] Certain generally NP-hard computational problems are solvable in polynomial time on such classes of graphs, [ 1 ] [ 2 ] making graphs with few cliques of interest in computational graph theory , network analysis , and other branches of applied mathematics . [ 3 ] Informally, a family of graphs has few cliques if the graphs do not have a large number of large clusters.
A clique of a graph is a complete subgraph , while a maximal clique is a clique that is not properly contained in another clique. One can regard a clique as a cluster of vertices, since they are by definition all connected to each other by an edge. The concept of clusters is ubiquitous in data analysis , such as on the analysis of social networks . For that reason, limiting the number of possible maximal cliques has computational ramifications for algorithms on graphs or networks.
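As a concrete illustration of why bounding the number of maximal cliques matters computationally (a sketch, not tied to any particular reference, with an assumed toy graph), maximal cliques can be enumerated with the Bron–Kerbosch-style routine available in NetworkX; on graphs with few cliques the number of results, and hence the running time of such output-sensitive algorithms, stays polynomial.

```python
import networkx as nx

# Assumed toy example: two overlapping clusters sharing one vertex.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2),      # triangle {0, 1, 2}
                  (2, 3), (2, 4), (3, 4)])     # triangle {2, 3, 4}

maximal_cliques = list(nx.find_cliques(G))     # Bron-Kerbosch enumeration
print(maximal_cliques)                         # e.g. [[0, 1, 2], [2, 3, 4]] (order may vary)
print(len(maximal_cliques))                    # number of maximal cliques
```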
Formally, let X {\displaystyle X} be a class of graphs. If for every n {\displaystyle n} - vertex graph G {\displaystyle G} in X {\displaystyle X} , there exists a polynomial f ( n ) {\displaystyle f(n)} such that G {\displaystyle G} has O ( f ( n ) ) {\displaystyle O(f(n))} maximal cliques, then X {\displaystyle X} is said to be a class of graphs with few cliques. [ 1 ] | https://en.wikipedia.org/wiki/Graphs_with_few_cliques |
In algebraic number theory , the Gras conjecture ( Gras 1977 ) relates the p -parts of the Galois eigenspaces of an ideal class group to the group of global units modulo cyclotomic units . It was proved by Mazur & Wiles (1984) as a corollary of their work on the main conjecture of Iwasawa theory . Kolyvagin (1990) later gave a simpler proof using Euler systems . A version of the Gras conjecture applying to ray class groups was later proven by Timothy All.
This number theory -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gras_conjecture |
In fluid mechanics (especially fluid thermodynamics ), the Grashof number ( Gr , after Franz Grashof [ a ] ) is a dimensionless number which approximates the ratio of the buoyancy to viscous forces acting on a fluid . It frequently arises in the study of situations involving natural convection and is analogous to the Reynolds number ( Re ). [ 2 ]
Free convection is caused by a change in density of a fluid due to a temperature change or gradient . Usually the density decreases due to an increase in temperature and causes the fluid to rise. This motion is caused by the buoyancy force. The major force that resists the motion is the viscous force. The Grashof number is a way to quantify the opposing forces. [ 3 ]
The Grashof number is: G r L = g β ( T s − T ∞ ) L 3 ν 2 {\displaystyle \mathrm {Gr} _{L}={\frac {g\beta (T_{s}-T_{\infty })L^{3}}{\nu ^{2}}}}
where: g is the acceleration due to Earth's gravity, β is the coefficient of volumetric thermal expansion, T s is the surface temperature, T ∞ is the bulk (ambient) temperature, L is the characteristic length and ν is the kinematic viscosity of the fluid.
The L and D subscripts indicate the length scale basis for the Grashof number.
The transition to turbulent flow occurs in the range 10 8 < Gr L < 10 9 for natural convection from vertical flat plates. At higher Grashof numbers, the boundary layer is turbulent; at lower Grashof numbers, the boundary layer is laminar, that is, in the range 10 3 < Gr L < 10 6 .
There is an analogous form of the Grashof number used in cases of natural convection mass transfer problems. In the case of mass transfer, natural convection is caused by concentration gradients rather than temperature gradients. [ 2 ]
Gr_c = gβ*(C_{a,s} − C_{a,a})L³/ν²
where
β* = −(1/ρ)(∂ρ/∂C_a)_{T,p}
and: C_{a,s} is the concentration of species a at the surface, C_{a,a} is the concentration of species a in the ambient fluid, L is the characteristic length, and ν is the kinematic viscosity.
The Rayleigh number , shown below, is a dimensionless number that characterizes convection problems in heat transfer. A critical value exists for the Rayleigh number , above which fluid motion occurs. [ 3 ]
Ra_x = Gr_x Pr
The ratio of the Grashof number to the square of the Reynolds number may be used to determine if forced or free convection may be neglected for a system, or if there's a combination of the two . This characteristic ratio is known as the Richardson number ( Ri ). If the ratio is much less than one, then free convection may be ignored. If the ratio is much greater than one, forced convection may be ignored. Otherwise, the regime is combined forced and free convection. [ 2 ]
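The regime test described in this paragraph can be written as a short helper. The cut-offs used below (a factor of ten on either side of unity) are an assumption made only for illustration, since the text just says "much less" or "much greater" than one.

```python
# Sketch: Richardson number Ri = Gr / Re**2 and the mixed-convection test.
def richardson(gr, re):
    """Ri = Gr / Re^2."""
    return gr / re**2

def convection_regime(gr, re, factor=10.0):
    """Classify the regime; 'factor' encodes 'much greater/less than one' (assumed)."""
    ri = richardson(gr, re)
    if ri < 1.0 / factor:
        return "free convection negligible (forced convection dominates)"
    if ri > factor:
        return "forced convection negligible (free convection dominates)"
    return "combined forced and free convection"

print(convection_regime(gr=3.2e8, re=1.0e5))  # Ri = 0.032 -> forced dominates
```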
The first step to deriving the Grashof number is manipulating the volume expansion coefficient, β, as follows.
β = (1/v)(∂v/∂T)_p = −(1/ρ)(∂ρ/∂T)_p
The v in the equation above, which represents specific volume, is not the same as the v in the subsequent sections of this derivation, which will represent a velocity. This partial relation of the volume expansion coefficient, β, with respect to fluid density, ρ, given constant pressure, can be rewritten as
ρ = ρ₀(1 − βΔT)
where: ρ₀ is the bulk fluid density, ρ is the boundary layer fluid density, and ΔT = T − T₀ is the temperature difference between the boundary layer and the bulk fluid.
There are two different ways to find the Grashof number from this point. One involves the energy equation while the other incorporates the buoyant force due to the difference in density between the boundary layer and bulk fluid.
This discussion involving the energy equation is with respect to rotationally symmetric flow. This analysis will take into consideration the effect of gravitational acceleration on flow and heat transfer. The mathematical equations to follow apply both to rotationally symmetric flow and to two-dimensional planar flow.
∂(ρu r₀ⁿ)/∂s + ∂(ρv r₀ⁿ)/∂y = 0
where: s is the coordinate along the surface (the streamwise direction), y is the coordinate normal to the surface, u and v are the velocity components in the s and y directions, and r₀ is the radius measured from the axis of symmetry.
In this equation the superscript n distinguishes rotationally symmetric flow (n = 1) from planar, two-dimensional flow (n = 0).
The momentum equation, with the physical fluid properties included, is:
ρ(u ∂u/∂s + v ∂u/∂y) = ∂/∂y(μ ∂u/∂y) − dp/ds + ρg.
From here we can further simplify the momentum equation by setting the bulk fluid velocity to 0 (u = 0).
dp/ds = ρ₀g
This relation shows that the pressure gradient is simply a product of the bulk fluid density and the gravitational acceleration. The next step is to plug in the pressure gradient into the momentum equation.
u ∂u/∂s + v ∂u/∂y = ν(∂²u/∂y²) + g(ρ − ρ₀)/ρ = ν(∂²u/∂y²) − (ρ₀/ρ)gβ(T − T₀)
where the volume expansion coefficient to density relationship ρ − ρ₀ = −ρ₀β(T − T₀) found above and the kinematic viscosity relationship ν = μ/ρ were substituted into the momentum equation: u(∂u/∂s) + v(∂u/∂y) = ν(∂²u/∂y²) − (ρ₀/ρ)gβ(T − T₀)
To find the Grashof number from this point, the preceding equation must be non-dimensionalized. This means that every variable in the equation should have no dimension and should instead be a ratio characteristic to the geometry and setup of the problem. This is done by dividing each variable by corresponding constant quantities. Lengths are divided by a characteristic length, L_c. Velocities are divided by appropriate reference velocities, V, which, considering the Reynolds number, gives V = Re_L ν / L_c. Temperatures are divided by the appropriate temperature difference, (T_s − T₀). These dimensionless parameters look like the following:
The asterisks represent dimensionless parameters. Combining these dimensionless variables with the momentum equation gives the following simplified equation.
where:
The dimensionless parameter enclosed in the brackets in the preceding equation is known as the Grashof number: Gr_L = gβ(T_s − T₀)L_c³/ν².
Another form of dimensional analysis that will result in the Grashof number is known as the Buckingham π theorem . This method takes into account the buoyancy force per unit volume, F_b, due to the density difference in the boundary layer and the bulk fluid.
F_b = (ρ − ρ₀)g
This equation can be manipulated to give,
F_b = −βgρ₀ΔT.
The variables used in the Buckingham π method are listed below, along with their symbols and dimensions.
With reference to the Buckingham π theorem there are 9 − 5 = 4 dimensionless groups. Choose L, μ, k, g and β as the reference variables. Thus the π groups are as follows:
Solving these π groups gives:
From the two groups π₂ and π₃, the product forms the Grashof number: Gr = gβ(T_s − T₀)L³ρ²/μ².
Taking ν = μ/ρ and ΔT = (T_s − T₀), the preceding equation can be rendered as the same result from deriving the Grashof number from the energy equation.
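A quick check on the outcome of either derivation is to verify that gβ(ΔT)L³/ν² carries no net dimensions. The sketch below tracks exponents of the base dimensions by hand rather than relying on a units library; the bookkeeping itself is the only thing assumed.

```python
# Sketch: verify that Gr = g * beta * dT * L^3 / nu^2 is dimensionless.
# Dimensions are tracked as exponent dictionaries over the bases m, s, K.
from collections import Counter

def combine(*terms):
    """Multiply dimensional terms given as (exponent_dict, power) pairs."""
    total = Counter()
    for dims, power in terms:
        for base, exp in dims.items():
            total[base] += exp * power
    return {b: e for b, e in total.items() if e != 0}

g    = {"m": 1, "s": -2}   # gravitational acceleration
beta = {"K": -1}           # volumetric thermal expansion coefficient
dT   = {"K": 1}            # temperature difference
L    = {"m": 1}            # characteristic length
nu   = {"m": 2, "s": -1}   # kinematic viscosity

dims = combine((g, 1), (beta, 1), (dT, 1), (L, 3), (nu, -2))
print(dims)     # {} -> dimensionless, as expected for the Grashof number
assert dims == {}
```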
In forced convection the Reynolds number governs the fluid flow. But, in natural convection the Grashof number is the dimensionless parameter that governs the fluid flow. Using the energy equation and the buoyant force combined with dimensional analysis provides two different ways to derive the Grashof number.
It is also possible to derive the Grashof number by physical definition of the number as follows:
Gr = Buoyancy force / Friction force = mg/(τA) = L³ρβ(ΔT)g / (μ(V/L)L²) = L²β(ΔT)g/(νV)
However, the above expression, especially the final part on the right-hand side, differs slightly from the Grashof number found in the literature. The following dimensionally correct scale for the dynamic viscosity can be used to arrive at the final form.
μ = ρVL. Writing this scale into the expression for Gr gives:
Gr = L³β(ΔT)g/ν². Physical reasoning is helpful for grasping the meaning of the number. On the other hand, the following velocity definition can be used as a characteristic velocity value for making certain velocities nondimensional.
V = L²β(ΔT)g/(ν Gr)
Recent research has examined the effects of the Grashof number on the flow of different fluids driven by convection over various surfaces. [ 4 ] Using the slope of the linear regression line through the data points, it was concluded that an increase in the value of the Grashof number, or of any buoyancy-related parameter, implies an increase in the wall temperature; this weakens the bonds within the fluid, decreases the strength of the internal friction, and makes gravity strong enough that the specific weight differs appreciably between the fluid layers immediately adjacent to the wall. The effects of the buoyancy parameter are highly significant in the laminar flow within the boundary layer formed on a vertically moving cylinder. This is only achievable when the prescribed surface temperature (PST) and prescribed wall heat flux (WHF) are considered. It can be concluded that the buoyancy parameter has a negligible positive effect on the local Nusselt number. This is only true when the magnitude of the Prandtl number is small or the prescribed wall heat flux (WHF) is considered. The Sherwood number, Bejan number, entropy generation, Stanton number and pressure gradient are increasing properties of the buoyancy-related parameter, while the concentration profiles, frictional force, and motile microorganism profiles are decreasing properties. | https://en.wikipedia.org/wiki/Grashof_number |
A grass mountain ( German : Grasberg ) in topography is a mountain covered with low vegetation , typically in the Alps and often steep-sided. [ 1 ] The nature of such cover, which often grows particularly well on sedimentary rock , will reflect local conditions.
The following mountain ranges of the Eastern Alps in Europe are often referred to as grass mountains ( Grasberge ):
Other areas where grass mountains occur include: the gorges of the Himalayas , [ 6 ] Scotland , [ 6 ] Poland's Tatra Mountains , [ 7 ] and Lofoten . [ 8 ]
Negotiating the steep grass-covered sides of grass mountains requires a special type of climbing known as grass climbing ( Grasklettern ). [ 12 ] | https://en.wikipedia.org/wiki/Grass_mountain |
A grassed waterway is a 2-to-48-meter-wide (6.6-to-157.5-foot) native grassland strip of green belt . It is generally installed in the thalweg , the deepest continuous line along a valley or watercourse , of a cultivated dry valley in order to control erosion . A study carried out over eight years on a grassed waterway in Bavaria showed that it can lead to several other types of positive impacts, e.g. on biodiversity . [ 1 ]
Confusion between "grassed waterway" and "vegetative filter strips" should be avoided. The latter are generally narrower (only a few metres wide) and are typically installed along rivers as well as along or within cultivated fields. However, buffer strip can be a synonym when shrubs and trees are added to the plant component, as can riparian zone .
Runoff generated on cropland during storms or long winter rains concentrates in the thalweg where it can lead to rill or gully erosion.
Rills and gullies further concentrate runoff and speed up its transfer, which can worsen damage occurring downstream. This can result in a muddy flood .
In this context, a grassed waterway increases soil cohesion and roughness and prevents the formation of rills and gullies. Furthermore, it can slow down runoff and allow its re-infiltration during long winter rains. In contrast, its infiltration capacity is generally not sufficient to reinfiltrate runoff produced by heavy spring and summer storms. It can therefore be useful to combine it with extra measures, like the installation of earthen dams across the grassed waterway, in order to buffer runoff temporarily. [ 2 ] | https://en.wikipedia.org/wiki/Grassed_waterway |
Grassing is one of the oldest methods of bleaching textile goods. The grassing method has long been used in Europe to bleach linen and cotton based fabrics. [ 1 ]
The linens were laid out on the grass for over seven days after boiling with the "lyes of ashes" and rinsing. [ 2 ] The atmospheric oxygen and the oxygen released by the grass provide the whitening action. The cloth becomes whiter day by day until it attains full whiteness. It was a slow process, but safer for the treated material: chemical bleaching may harm the cloth, whereas grassing hardly affects the cloth's strength. [ 1 ] [ 3 ] [ 4 ]
A bleachfield was an open area used for spreading cloth, typically a field near a watercourse used by a bleachery. Bleachfields were common in and around the mill towns during the British Industrial Revolution . [ 5 ]
With the discovery of chlorine in the late 18th century, chemical bleaching took over from grassing, as it was quicker and could be done indoors. [ 1 ] [ 5 ] [ 2 ]
It is the conjugated double bonds of the substrate that make the substrate capable of absorbing visible light. The absorption of light makes the cloth look yellowish. Bleaching with oxygen removes the chromophoric sites and makes the cloth whiter. Oxygen is a degrading bleaching agent; its bleaching action is based on destroying the phenolic groups and the carbon–carbon double bonds. [ 6 ] A major source of chemical bleaching is hydrogen peroxide (H₂O₂), which contains a single oxygen–oxygen bond (–O–O–). When the bond breaks, it gives rise to a very reactive oxygen species, which is the active agent of the bleach. Around sixty percent of the world's hydrogen peroxide is used in chemical bleaching of textiles and wood pulp. [ 7 ] | https://en.wikipedia.org/wiki/Grassing_(textiles) |
In mathematics , a Grassmann–Cayley algebra is the exterior algebra with an additional product, which may be called the shuffle product or the regressive product. [ 1 ] It is the most general structure in which projective properties are expressed in a coordinate-free way. [ 2 ] The technique is based on work by German mathematician Hermann Grassmann on exterior algebra , and subsequently by British mathematician Arthur Cayley 's work on matrices and linear algebra .
It is a form of modeling algebra for use in projective geometry . [ citation needed ]
The technique uses subspaces as basic elements of computation, a formalism which allows the translation of synthetic geometric statements into invariant algebraic statements. This can create a useful framework for the modeling of conics and quadrics among other forms, and in tensor mathematics. It also has a number of applications in robotics , particularly for the kinematical analysis of manipulators.
This linear algebra -related article is a stub . You can help Wikipedia by expanding it .
This geometry-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Grassmann–Cayley_algebra |
A tubular grate heater is any grate or heat exchanger for a fireplace , designed from metal tubing. [ 1 ] Through the tubing is circulated home air that becomes heated by the fire, and then vented back into the room and home. It is a heat recovery device that improves the efficiency and ability of a fireplace to get the heat from the fire out and into the home. From simple to ornate, they can contribute significantly to the overall comfort of a room and potentially to a whole house. [ 2 ] This in turn will reduce the amount of firewood needed to achieve the same comfort level, potentially reducing heating costs and expenses.
Heaters increase the efficiency of a fireplace and hence the amount of heat that makes it from the fireplace out into the home. They work by having naturally convected and forced air funneled into the metal heat exchanger tubing that is then heated by the coals and/or fire. They draw in cold air from the floor and blow heated air back out into the home. This adds an element of conductive and convective heating to the radiant heat typical of a basic fireplace. Grate heaters have been called many things: heatilator, hearth heater, fireplace blower, fireplace grate heater, fireplace furnace, tubular grate heater, etc.
The ideal tubular grate heater would be built like an ideal heat exchanger: with as large a surface area as possible, from material suited to minimizing the heater's thermal deterioration while providing good thermal conductivity, and with a high airflow rate, similar to a home furnace . However, the unique environment of a fireplace and the burning of gas, wood, coal, pellets, etc., require specific heater designs and material construction, making few, if any, grate heaters compatible with all fuels.
The most critical elements of any tubular grate heater are: | https://en.wikipedia.org/wiki/Grate_heater |
Grating-coupled interferometry (GCI) is a biophysical characterization method mainly used in biochemistry and drug discovery for label-free analysis of molecular interactions . Similar to other optical methods such as surface plasmon resonance (SPR) or bio-layer interferometry (BLI), it is based on measuring refractive index changes within an evanescent field near a sensor surface. After immobilizing a target to the sensor surface, analyte molecules in solution which bind to that target cause a small increase in local refractive index. By monitoring these refractive changes over time characteristics such as kinetic rates and affinity constants of the analyte-target binding, or analyte concentrations, can be determined. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
GCI is based on phase-shifting waveguide interferometry . Light of the sensing arm of the interferometer is coupled into a monomode waveguide through a first grating, and undergoes a phase change until it reaches a second grating, depending on the local refractive index within the evanescent field (see image). The second grating is used for coupling in light of the reference arm of the interferometer, and interference created by the superposition of the sensing and reference waves after the second grating translates the phase changes into an intensity modulation. By rapid phase modulation of one of the arms using a liquid crystal element, and thanks to the long interaction length with the sample, extremely high sensitivities with respect to surface refractive index can be achieved even at acquisition rates above 10 Hz. Since the interference is created on chip and not through free-space propagation, a high robustness with respect to ambient disturbances such as vibrations or temperature changes is achieved. [ citation needed ]
This biophysics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Grating-coupled_interferometry |
Gravimetric analysis describes a set of methods used in analytical chemistry for the quantitative determination of an analyte (the ion being analyzed) based on its mass. The principle of this type of analysis is that once an ion's mass has been determined as a unique compound, that known measurement can then be used to determine the same analyte's mass in a mixture, as long as the relative quantities of the other constituents are known. [ 1 ]
The four main types of this method of analysis are precipitation , volatilization , electro-analytical and miscellaneous physical methods. [ 2 ] The methods involve changing the phase of the analyte to separate it in its pure form from the original mixture and are quantitative measurements.
The precipitation method is the one used for the determination of the amount of calcium in water. Using this method, an excess of oxalic acid, H₂C₂O₄, is added to a measured, known volume of water. By adding a reagent , here ammonium oxalate, the calcium will precipitate as calcium oxalate. The proper reagent, when added to aqueous solution, will produce highly insoluble precipitates from the positive and negative ions that would otherwise be soluble with their counterparts (equation 1). [ 3 ]
The reaction is:
Formation of calcium oxalate:
Ca²⁺ (aq) + C₂O₄²⁻ → CaC₂O₄
The precipitate is collected, dried and ignited to high (red) heat which converts it entirely to calcium oxide.
The reaction, in which pure calcium oxide is formed, is:
CaC₂O₄ → CaO (s) + CO (g) + CO₂ (g)
The pure precipitate is cooled, then measured by weighing, and the difference in weights before and after reveals the mass of analyte lost, in this case calcium oxide. [ 4 ] [ 5 ] That number can then be used to calculate the amount, or the percent concentration, of it in the original mix. [ 2 ] [ 4 ] [ 5 ]
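The back-calculation described above amounts to simple stoichiometry: moles of CaO equal moles of calcium in the original sample. A minimal sketch, in which the recovered CaO mass and the sample volume are made-up illustrative values:

```python
# Sketch: calcium content of a water sample from the mass of CaO collected.
M_CA = 40.078    # g/mol, molar mass of calcium
M_CAO = 56.077   # g/mol, molar mass of calcium oxide

def calcium_from_cao(mass_cao_g, sample_volume_l):
    """Return (grams of Ca, concentration in mg/L) for the original sample."""
    n_cao = mass_cao_g / M_CAO       # mol CaO = mol Ca (1:1 stoichiometry)
    mass_ca = n_cao * M_CA
    return mass_ca, 1000.0 * mass_ca / sample_volume_l

# Hypothetical result: 0.0280 g of CaO recovered from a 0.500 L water sample
mass_ca, conc = calcium_from_cao(0.0280, 0.500)
print(f"Ca mass = {mass_ca:.4f} g, concentration = {conc:.1f} mg/L")  # ~0.0200 g, ~40 mg/L
```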
Volatilization methods can be either direct or indirect . Water eliminated in a quantitative manner from many inorganic substances by ignition is an example of a direct determination. It is collected on a solid desiccant and its mass determined by the gain in mass of the desiccant.
Another direct volatilization method involves carbonates which generally decompose to release carbon dioxide when acids are used. Because carbon dioxide is easily evolved when heat is applied, its mass is directly established by the measured increase in the mass of the absorbent solid used. [ 6 ] [ 7 ]
Determination of the amount of water by measuring the loss in mass of the sample during heating is an example of an indirect method. It is well known that changes in mass occur due to decomposition of many substances when heat is applied, regardless of the presence or absence of water. Because one must make the assumption that water was the only component lost, this method is less satisfactory than direct methods.
This often faulty and misleading assumption has proven to be wrong on more than a few occasions. Many processes other than water loss can lead to a loss of mass with the addition of heat, and a number of other factors may contribute to it. The widened margin of error created by this all-too-often false assumption is not one to be lightly disregarded, as the consequences could be far-reaching.
Nevertheless, the indirect method, although less reliable than direct, is still widely used in commerce. For example, it's used to measure the moisture content of cereals, where a number of imprecise and inaccurate instruments are available for this purpose.
In volatilization methods, removal of the analyte involves separation by heating or chemically decomposing a volatile sample at a suitable temperature. [ 2 ] [ 8 ] In other words, thermal or chemical energy is used to precipitate a volatile species. [ 9 ] For example, the water content of a compound can be determined by vaporizing the water using thermal energy (heat). Heat can also be used, if oxygen is present, for combustion to isolate the suspect species and obtain the desired results.
The two most common gravimetric methods using volatilization are those for water and carbon dioxide. [ 2 ] An example of this method is the isolation of sodium hydrogen carbonate (sodium bicarbonate, the main ingredient in most antacid tablets) from a mixture of carbonate and bicarbonate. [ 2 ] The total amount of this analyte, in whatever form, is obtained by addition of an excess of dilute sulfuric acid to the analyte in solution.
In this reaction, nitrogen gas is introduced through a tube into the flask which contains the solution. As it passes through, it gently bubbles. The gas then exits, first passing a drying agent (here CaSO₄, the common desiccant Drierite ). It then passes a mixture of the drying agent and sodium hydroxide which lies on asbestos or Ascarite II , a non-fibrous silicate containing sodium hydroxide. [ 10 ] The mass of the carbon dioxide is obtained by measuring the increase in mass of this absorbent. [ 2 ] This is performed by measuring the difference in weight of the tube containing the Ascarite before and after the procedure.
As the solution is heated, the carbon dioxide is driven off (reaction 3) and swept out of the flask by the nitrogen stream. The drying agent (CaSO₄) absorbs any aerosolized water and/or water vapor. The mix of the drying agent and NaOH absorbs the CO₂ and any water that may have been produced as a result of the absorption by the NaOH (reaction 4). [ 11 ]
The reactions are:
Reaction 3 – evolution of carbon dioxide
NaHCO₃ (aq) + H₂SO₄ (aq) → CO₂ (g) + H₂O (l) + NaHSO₄ (aq). [ 11 ]
Reaction 4 – absorption of CO₂ and residual water
CO₂ (g) + 2 NaOH (s) → Na₂CO₃ (s) + H₂O (l). [ 11 ]
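The arithmetic behind this determination can be sketched as follows: the gain in mass of the absorbent gives the moles of CO₂, and the 1:1 stoichiometry of reaction 3 converts that to the mass of sodium hydrogen carbonate in the sample. The absorbent mass gain used below is a made-up illustrative value.

```python
# Sketch: mass of NaHCO3 inferred from the CO2 captured by the NaOH/Ascarite tube.
M_CO2 = 44.01      # g/mol
M_NAHCO3 = 84.007  # g/mol

def nahco3_from_co2_gain(mass_gain_g):
    """One mole of CO2 is evolved per mole of NaHCO3 (reaction 3)."""
    n_co2 = mass_gain_g / M_CO2
    return n_co2 * M_NAHCO3

print(f"NaHCO3 in sample: {nahco3_from_co2_gain(0.220):.3f} g")  # about 0.420 g
```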
A chunk of ore is to be analyzed for sulfur content. It is treated with concentrated nitric acid and potassium chlorate to convert all of the sulfur to sulfate (SO₄²⁻). The nitrate and chlorate are removed by treating the solution with concentrated HCl. The sulfate is precipitated with barium (Ba²⁺) and weighed as BaSO₄.
Gravimetric analysis, if methods are followed carefully, provides for exceedingly precise analysis. In fact, gravimetric analysis was used to determine the atomic masses of many elements in the periodic table to six-figure accuracy. Gravimetry provides very little room for instrumental error and does not require a series of standards for calculation of an unknown. Also, methods often do not require expensive equipment. Gravimetric analysis, due to its high degree of accuracy when performed correctly, can also be used to calibrate other instruments in lieu of reference standards. Gravimetric analysis is also used in undergraduate chemistry and biochemistry teaching, where it gives students experience of a graduate-level laboratory and is an effective teaching tool for those who want to attend medical school or a research graduate school.
Gravimetric analysis usually only provides for the analysis of a single element, or a limited group of elements, at a time. Comparing modern dynamic flash combustion coupled with gas chromatography with traditional combustion analysis shows that the former is both faster and allows for simultaneous determination of multiple elements, while traditional determination allowed only for the determination of carbon and hydrogen. Methods are often convoluted, and a slight mis-step in a procedure can often mean disaster for the analysis (colloid formation in precipitation gravimetry, for example). Compare this with robust methods such as spectrophotometry and one will find that analysis by those methods is much more efficient.
Diverse ions have a screening effect on dissociated ions, which leads to extra dissociation. Solubility shows a clear increase in the presence of diverse ions as the solubility product increases. Consider the following example:
Find the solubility of AgCl (K_sp = 1.0 × 10⁻¹⁰) in 0.1 M NaNO₃. The activity coefficients for silver and chloride are 0.75 and 0.76, respectively.
We can no longer use the thermodynamic equilibrium constant (i.e. the one valid in the absence of diverse ions); we have to consider the concentration equilibrium constant, or use activities instead of concentrations if we use K_th: K_sp = a_Ag⁺ a_Cl⁻ = γ_Ag⁺[Ag⁺] γ_Cl⁻[Cl⁻]
We have calculated the solubility of AgCl in pure water to be 1.0 × 10⁻⁵ M. If we compare this value to that obtained in the presence of diverse ions, we see a percent increase in solubility of {(1.3 × 10⁻⁵ − 1.0 × 10⁻⁵) / 1.0 × 10⁻⁵} × 100 = 30%.
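The numbers in this example follow from s = √(K_sp/(γ_Ag γ_Cl)); a short sketch reproducing the roughly 30% increase quoted above:

```python
# Sketch: solubility of AgCl in 0.1 M NaNO3 using activity coefficients.
from math import sqrt

K_SP = 1.0e-10                 # thermodynamic solubility product of AgCl
GAMMA_AG, GAMMA_CL = 0.75, 0.76

s_pure = sqrt(K_SP)                               # solubility in pure water, mol/L
s_diverse = sqrt(K_SP / (GAMMA_AG * GAMMA_CL))    # with diverse-ion screening

increase_pct = 100.0 * (s_diverse - s_pure) / s_pure
print(f"s(pure) = {s_pure:.2e} M, s(0.1 M NaNO3) = {s_diverse:.2e} M")
print(f"increase = {increase_pct:.0f}%")          # about 30%
```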
Therefore, once again we have evidence for an increase in dissociation, or a shift of the equilibrium to the right, in the presence of diverse ions. | https://en.wikipedia.org/wiki/Gravimetric_analysis |
A gravimetric blender is an item of industrial equipment used in the plastics industry to accurately weigh two or more components and then mix them together prior to processing in an injection molding machine, plastics extrusion , or blow moulding machine. [ 1 ] [ 2 ]
There are two types of gravimetric blenders:
1. Loss in weight
This type of gravimetric blender measures the "loss in weight" from two or more hoppers using a load cell under each hopper. Material is usually dispensed from the hoppers using a screw conveyor . All materials are dispensed together and the rate of dosing from each hopper is controlled to ensure the correct blend is achieved.
2. Gain in weight (sometimes called a batch blender)
A gain in weight gravimetric blender has two or more hoppers arranged above a weigh-pan. These hoppers contain the components which are to be mixed, at the base of each hopper there is a valve to control the dispensing of material from the component hopper into the weigh-pan. The components are dispensed one at a time into the weigh pan until the target or batch weight is reached. Once the batch has been weighed out the contents of the weigh-pan are dispensed into a mixing chamber where they are blended. The resulting mixture exits the base of the mixing chamber into the processing machine.
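As a rough illustration of the gain-in-weight sequence just described, the batch weight can be split into per-component dispense targets from a requested recipe; the component names, percentages and batch size below are hypothetical.

```python
# Sketch: per-component dispense targets for a gain-in-weight batch blender.
def batch_targets(batch_weight_g, recipe_pct):
    """recipe_pct maps component name -> percentage of the batch (must total 100)."""
    assert abs(sum(recipe_pct.values()) - 100.0) < 1e-6, "recipe must total 100%"
    return {name: batch_weight_g * pct / 100.0 for name, pct in recipe_pct.items()}

recipe = {"virgin": 75.0, "regrind": 22.0, "masterbatch": 3.0}  # hypothetical recipe
for name, grams in batch_targets(2000.0, recipe).items():
    print(f"dispense {grams:7.1f} g of {name}")  # components are dispensed one at a time
```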
A typical application of a gravimetric blender would be mixing virgin plastic granules, recycled plastic, and masterbatch (an additive used to colour plastics) together. | https://en.wikipedia.org/wiki/Gravimetric_blender |
Gravitational-wave astronomy is a subfield of astronomy concerned with the detection and study of gravitational waves emitted by astrophysical sources. [ 1 ]
Gravitational waves are minute distortions or ripples in spacetime caused by the acceleration of massive objects. They are produced by cataclysmic events such as the merger of binary black holes , the coalescence of binary neutron stars , supernova explosions and processes including those of the early universe shortly after the Big Bang . Studying them offers a new way to observe the universe, providing valuable insights into the behavior of matter under extreme conditions. Similar to electromagnetic radiation (such as light wave, radio wave, infrared radiation and X-rays) which involves transport of energy via propagation of electromagnetic field fluctuations, gravitational radiation involves fluctuations of the relatively weaker gravitational field. The existence of gravitational waves was first suggested by Oliver Heaviside in 1893 and then later conjectured by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves before they were predicted by Albert Einstein in 1916 as a corollary to his theory of general relativity .
In 1978, Russell Alan Hulse and Joseph Hooton Taylor Jr. provided the first experimental evidence for the existence of gravitational waves by observing two neutron stars orbiting each other and won the 1993 Nobel Prize in physics for their work. In 2015, nearly a century after Einstein's forecast, the first direct observation of gravitational waves as a signal from the merger of two black holes confirmed the existence of these elusive phenomena and opened a new era in astronomy. Subsequent detections have included binary black hole mergers, neutron star collisions, and other violent cosmic events. Gravitational waves are now detected using laser interferometry , which measures tiny changes in the length of two perpendicular arms caused by passing waves. Observatories like LIGO (Laser Interferometer Gravitational-wave Observatory), Virgo and KAGRA (Kamioka Gravitational Wave Detector) use this technology to capture the faint signals from distant cosmic events. LIGO co-founders Barry C. Barish , Kip S. Thorne , and Rainer Weiss were awarded the 2017 Nobel Prize in Physics for their ground-breaking contributions in gravitational wave astronomy.
When distant astronomical objects are observed using electromagnetic waves, different phenomena like scattering, absorption, reflection, refraction, etc. cause information loss. There are various regions in space only partially penetrable by photons, such as the insides of nebulae, the dense dust clouds at the galactic core, the regions near black holes, etc. Gravitational astronomy has the potential to be used in parallel with electromagnetic astronomy to study the universe at a better resolution. In an approach known as multi-messenger astronomy , gravitational wave data is combined with data from other wavelengths to get a more complete picture of astrophysical phenomena. Gravitational wave astronomy helps understand the early universe , test theories of gravity , and reveal the distribution of dark matter and dark energy . In particular, it can help find the Hubble constant , which describes the rate of accelerated expansion of the universe. All of these open doors to a physics beyond the Standard Model (BSM).
Challenges that remain in the field include noise interference, the lack of ultra-sensitive instruments, and the detection of low-frequency waves. Ground-based detectors face problems with seismic vibrations produced by environmental disturbances and the limitation of the arm length of detectors due to the curvature of the Earth’s surface. In the future, the field of gravitational wave astronomy will try to develop upgraded detectors and next-generation observatories, along with possible space-based detectors such as LISA ( Laser Interferometer Space Antenna ). LISA will be able to listen to distant sources such as compact supermassive black holes in the galactic core and primordial black holes, as well as low-frequency sources such as binary white dwarf mergers and sources from the early universe. [ 2 ]
Gravitational waves are waves of the intensity of gravity generated by the accelerated masses of an orbital binary system that propagate as waves outward from their source at the speed of light . They were first proposed by Oliver Heaviside in 1893 and then later by Henri Poincaré in 1905 as waves similar to electromagnetic waves but the gravitational equivalent.
Gravitational waves were later predicted in 1916 by Albert Einstein on the basis of his general theory of relativity as ripples in spacetime . Einstein himself later came to doubt their physical existence for a time. [ 3 ] Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation . Newton's law of universal gravitation , part of classical mechanics , does not provide for their existence, since that law is predicated on the assumption that physical interactions propagate instantaneously (at infinite speed) – showing one of the ways the methods of Newtonian physics are unable to explain phenomena associated with relativity.
The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar , which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell A. Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery.
Direct observation of gravitational waves was not made until 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss , Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves.
In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs , neutron stars , and black holes ; events such as supernovae ; and the formation of the early universe shortly after the Big Bang .
Collaboration between detectors aids in collecting unique and valuable information, owing to different specifications and sensitivity of each.
There are several ground-based laser interferometers which span several miles/kilometers, including: the two Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors in Washington and Louisiana, USA; Virgo , at the European Gravitational Observatory in Italy; GEO600 in Germany, and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. While LIGO, Virgo, and KAGRA have made joint observations to date, GEO600 is currently utilized for trial and test runs due to lower sensitivity of its instruments and has not participated in joint runs with the others recently.
In 2015, the LIGO project was the first to directly observe gravitational waves using laser interferometers. [ 5 ] [ 6 ] The LIGO detectors observed gravitational waves from the merger of two stellar-mass black holes , matching predictions of general relativity . [ 7 ] [ 8 ] [ 9 ] These observations demonstrated the existence of binary stellar-mass black hole systems, and were the first direct detection of gravitational waves and the first observation of a binary black hole merger. [ 10 ] This finding has been characterized as revolutionary to science, because of the verification of our ability to use gravitational-wave astronomy to progress in our search and exploration of dark matter and the big bang .
An alternative means of observation is using pulsar timing arrays (PTAs). There are three consortia, the European Pulsar Timing Array (EPTA), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), and the Parkes Pulsar Timing Array (PPTA), which co-operate as the International Pulsar Timing Array . These use existing radio telescopes, but since they are sensitive to frequencies in the nanohertz range, many years of observation are needed to detect a signal and detector sensitivity improves gradually. Current bounds are approaching those expected for astrophysical sources. [ 11 ]
In June 2023, four PTA collaborations, the three mentioned above and the Chinese Pulsar Timing Array, delivered independent but similar evidence for a stochastic background of nanohertz gravitational waves. [ 14 ] Each provided an independent first measurement of the theoretical Hellings-Downs curve , i.e., the quadrupolar correlation between two pulsars as a function of their angular separation in the sky, which is a telltale sign of the gravitational wave origin of the observed background. [ 15 ] [ 16 ] [ 17 ] [ 18 ] The sources of this background remain to be identified, although binaries of supermassive black holes are the most likely candidates. [ 19 ]
Further in the future, there is the possibility of space-borne detectors. The European Space Agency has selected a gravitational-wave mission for its L3 mission slot, due to launch in 2034; the current concept is the evolved Laser Interferometer Space Antenna (eLISA). [ 20 ] Also in development is the Japanese Deci-hertz Interferometer Gravitational wave Observatory (DECIGO).
Astronomy has traditionally relied on electromagnetic radiation . Originating with the visible band, as technology advanced, it became possible to observe other parts of the electromagnetic spectrum , from radio to gamma rays . Each new frequency band gave a new perspective on the Universe and heralded new discoveries. [ 21 ] During the 20th century, indirect and later direct measurements of high-energy, massive particles provided an additional window into the cosmos. Late in the 20th century, the detection of solar neutrinos founded the field of neutrino astronomy , giving an insight into previously inaccessible phenomena, such as the inner workings of the Sun . [ 22 ] [ 23 ] The observation of gravitational waves provides a further means of making astrophysical observations.
Russell Hulse and Joseph Taylor were awarded the 1993 Nobel Prize in Physics for showing that the orbital decay of a pair of neutron stars, one of them a pulsar, fits general relativity's predictions of gravitational radiation. [ 24 ] Subsequently, many other binary pulsars (including one double pulsar system ) have been observed, all fitting gravitational-wave predictions. [ 25 ] In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss , Kip Thorne and Barry Barish for their role in the first detection of gravitational waves. [ 26 ] [ 27 ] [ 28 ]
Gravitational waves provide complementary information to that provided by other means. By combining observations of a single event made using different means, it is possible to gain a more complete understanding of the source's properties. This is known as multi-messenger astronomy . Gravitational waves can also be used to observe systems that are invisible (or almost impossible to detect) by any other means. For example, they provide a unique method of measuring the properties of black holes.
Gravitational waves can be emitted by many systems, but, to produce detectable signals, the source must consist of extremely massive objects moving at a significant fraction of the speed of light . The main source is a binary of two compact objects . Example systems include:
In addition to binaries, there are other potential sources:
Gravitational waves interact only weakly with matter. This is what makes them difficult to detect. It also means that they can travel freely through the Universe, and are not absorbed or scattered like electromagnetic radiation. It is therefore possible to see to the center of dense systems, like the cores of supernovae or the Galactic Center . It is also possible to see further back in time than with electromagnetic radiation, as the early universe was opaque to light prior to recombination , but transparent to gravitational waves. [ 46 ]
The ability of gravitational waves to move freely through matter also means that gravitational-wave detectors , unlike telescopes , are not pointed to observe a single field of view but observe the entire sky. Detectors are more sensitive in some directions than others, which is one reason why it is beneficial to have a network of detectors. [ 47 ] Localization of sources is also poor, due to the small number of detectors.
Cosmic inflation , a hypothesized period when the universe rapidly expanded during the first 10⁻³⁶ seconds after the Big Bang , would have given rise to gravitational waves; that would have left a characteristic imprint in the polarization of the CMB radiation. [ 48 ] [ 49 ]
It is possible to calculate the properties of the primordial gravitational waves from measurements of the patterns in the microwave radiation, and use those calculations to learn about the early universe. [ how? ]
As a young area of research, gravitational-wave astronomy is still in development; however, there is consensus within the astrophysics community that this field will evolve to become an established component of 21st century multi-messenger astronomy . [ 50 ]
Gravitational-wave observations complement observations in the electromagnetic spectrum . [ 51 ] [ 50 ] These waves also promise to yield information in ways not possible via detection and analysis of electromagnetic waves. Electromagnetic waves can be absorbed and re-radiated in ways that make extracting information about the source difficult. Gravitational waves, however, only interact weakly with matter, meaning that they are not scattered or absorbed. This should allow astronomers to view the center of a supernova, stellar nebulae, and even colliding galactic cores in new ways.
Ground-based detectors have yielded new information about the inspiral phase and mergers of binary systems of two stellar mass black holes , and merger of two neutron stars . They could also detect signals from core-collapse supernovae , and from periodic sources such as pulsars with small deformations. If there is truth to speculation about certain kinds of phase transitions or kink bursts from long cosmic strings in the very early universe (at cosmic times around 10⁻²⁵ seconds), these could also be detectable. [ 52 ] Space-based detectors like LISA should detect objects such as binaries consisting of two white dwarfs , and AM CVn stars (a white dwarf accreting matter from its binary partner, a low-mass helium star), and also observe the mergers of supermassive black holes and the inspiral of smaller objects (between one and a thousand solar masses ) into such black holes. LISA should also be able to listen to the same kind of sources from the early universe as ground-based detectors, but at even lower frequencies and with greatly increased sensitivity. [ 53 ]
Detecting emitted gravitational waves is a difficult endeavor. It involves ultra-stable high-quality lasers and detectors calibrated with a sensitivity of at least 2·10⁻²² Hz⁻¹/², as shown at the ground-based detector GEO600. [ 54 ] It has also been proposed that even from large astronomical events, such as supernova explosions, these waves are likely to degrade to vibrations as small as an atomic diameter. [ 55 ]
Pinpointing the location where gravitational waves come from is also a challenge, but deflected waves from gravitational lensing combined with machine learning could make it easier and more accurate. [ 56 ] Just as the light from the SN Refsdal supernova was detected a second time almost a year after it was first discovered, because gravitational lensing sent some of the light on a different path through the universe, the same approach could be used for gravitational waves. [ 57 ] While still at an early stage, a technique similar to the triangulation used by cell phones to determine their location in relation to GPS satellites will help astronomers track down the origin of the waves. [ 58 ] | https://en.wikipedia.org/wiki/Gravitational-wave_astronomy |
In physics , the gravitational Aharonov-Bohm effect is a phenomenon involving the behavior of particles acting according to quantum mechanics while under the influence of a classical gravitational field . It is the gravitational analog of the well-known Aharonov–Bohm effect , which is about the quantum mechanical behavior of particles in a classical electromagnetic field .
There are many variants of the Aharonov-Bohm effect in electromagnetism. Here we review an electric version of the Aharonov-Bohm effect that is most similar to the gravitational effect which has been experimentally observed. This electric effect is caused by a charged particle (say, an electron) being in a superposition of traveling down two different paths. In both paths, the electric field that the electron sees is zero everywhere along the path, but the scalar electric potential that the electron sees is not the same for both paths.
In the above figure, the beamsplitter puts the electron in a superposition of taking the upper path and taking the lower path. In both paths, when the electron gets to the mirror, it is stopped and held there. During that time when the electron is held in place at a mirror, 2 electric charges each with charge Q are brought near the upper mirror in a symmetric manner such that the net electric field caused by the 2 charges at the upper mirror is 0. We assume that the lower mirror is far enough away from the upper mirror such that the electric potential (and electric field) caused by the 2 charges is 0 at the lower mirror. So, this creates an electric potential difference between the upper and lower mirrors equal to ΔU = 2Q/(4πε₀r), where r is the distance of the charges from the mirror and ε₀ is the electric constant . The electron is held there for a time T, after which the charges are moved away and the electron is allowed to continue moving along its path. Assuming that the time we take to move the 2 charges to and from the mirror is much smaller than T, this time that the electron spends at the mirror causes a phase shift equal to
Δφ = −eΔU T/ħ = −2eQT/(4πε₀rħ)
where e is the elementary charge .
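For orientation only, the phase shift just given can be evaluated numerically; the charge Q, separation r and holding time T below are arbitrary illustrative values, not parameters of any actual experiment.

```python
# Sketch: electric Aharonov-Bohm phase shift  dphi = -2*e*Q*T / (4*pi*eps0*r*hbar)
from math import pi

E = 1.602176634e-19       # C, elementary charge
EPS0 = 8.8541878128e-12   # F/m, electric constant
HBAR = 1.054571817e-34    # J*s, reduced Planck constant

def electric_ab_phase(Q, r, T):
    """Phase for an electron held at potential 2Q/(4*pi*eps0*r) for time T."""
    delta_U = 2.0 * Q / (4.0 * pi * EPS0 * r)  # potential difference, volts
    return -E * delta_U * T / HBAR             # radians

# Illustrative numbers only: Q = 1e-18 C, r = 1 cm, T = 1 microsecond
print(f"dphi = {electric_ab_phase(Q=1e-18, r=0.01, T=1e-6):.3e} rad")
```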
When the 2 paths of the interferometer are recombined, we see a different interference pattern depending on whether we brought the charges near the upper mirror to create a potential difference. This is surprising, because no matter whether we brought the charges near the upper mirror to create a potential difference, the electron always remains at a location where the electric field is zero (to be more precise, the wavefunction of the electron is only ever nonzero at locations where the electric field is 0).
This electric Aharonov-Bohm effect has not been experimentally observed, unlike the magnetic effect. It is not generally feasible to trap an electron at a "mirror" in the interferometer while the potential is turned on and off, which is necessary in this setup to ensure that the electron stays in a region where the field is 0 while the potential is varied. Proposals for experimentally observing the effect instead involve shielding the electron from any electric field by having it travel through a conducting cylinder while the potential is varied. [ 1 ] In contrast, one experiment proposal for the gravitational Aharonov-Bohm effect actually does involve trapping atoms (which play an analogous role to electrons in the experiment proposal) and holding them in a region where the gravitational field is zero using optical lattices. [ 2 ]
Just as there are many variants of the Aharonov-Bohm effect in electromagnetism, there are many variants of the gravitational effect. The simplest version of the gravitational effect is analogous to the electric effect above, with the electron replaced by a small test mass such as an atom, and the 2 charges that create an electric potential replaced by 2 masses that create a gravitational potential. [ 2 ]
In the above figure, an atom passes through an atomic " beamsplitter " that puts the atom in a superposition of taking the upper and lower paths. The atoms are then reflected by atomic "mirrors" that cause them to recombine at the detector on the right, where an interference pattern is detected.
When the atom is at a "mirror", it is paused and held there while a potential is introduced. The potential is created by moving 2 massive objects, each with mass M, to the left and right sides of the upper mirror, a distance r away from the mirror. The masses are brought towards the upper mirror in a symmetric manner such that the gravitational field caused by the masses is 0 at the upper mirror. We assume that the upper mirror is far enough away from the lower mirror such that the masses create zero potential (and zero field) at the lower mirror, which means they create a gravitational potential difference of ΔU = −2GM/r between the upper and lower mirrors. Despite this gravitational potential difference, the gravitational field at the upper and lower mirrors is 0, and the atom is never in any position with a nonzero gravitational field. Still, a time T spent at the mirrors with that potential difference causes a phase shift,
Δφ = ΔU m T/ħ = −2GMmT/(rħ)
where m is the mass of the atom. This phase shift is detected by observing the interference pattern where the atom paths recombine, which will be different depending on whether the potential difference was applied.
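The gravitational phase shift can be evaluated the same way; the source mass, separation and holding time below are illustrative assumptions (with a rubidium-87 atom as the test mass), not the parameters of the experiment discussed later.

```python
# Sketch: gravitational Aharonov-Bohm phase shift  dphi = -2*G*M*m*T / (r*hbar)
G = 6.67430e-11           # m^3 kg^-1 s^-2, gravitational constant
HBAR = 1.054571817e-34    # J*s
M_RB87 = 86.909 * 1.66053906660e-27   # kg, mass of one rubidium-87 atom

def grav_ab_phase(M, r, T, m=M_RB87):
    """Phase for a test mass m held in the potential -2*G*M/r for time T."""
    delta_U = -2.0 * G * M / r       # gravitational potential difference, J/kg
    return delta_U * m * T / HBAR    # radians

# Illustrative numbers: two 1 kg source masses at r = 0.1 m, held for T = 1 s
print(f"dphi = {grav_ab_phase(M=1.0, r=0.1, T=1.0):.2f} rad")  # roughly -1.8 rad
```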
Instead of these idealized paths for the atom that involve "mirrors" that pause the atom in its place while a potential is applied, the atom could be moved in those paths by an optical lattice. [ 2 ] This would allow precise control over the positions of the atom and the amount of time spent in the gravitational potential.
The various electromagnetic versions of the Aharonov-Bohm effect can be described in a way that does not suggest any physical reality to the electromagnetic potentials and does not require any nonlocality, by treating the sources of the electromagnetic field and the electromagnetic field itself quantum mechanically, instead of treating the test charge (electron) quantum mechanically and the electromagnetic field and its sources classically. [ 3 ] [ 4 ] Without a theory of quantum gravity , we cannot appeal to a fully quantum treatment of the test mass (atom), the sources of the gravitational field, and the gravitational field itself in order to explain the gravitational Aharonov-Bohm effect in a fully local, gauge-independent manner. However, this effect can be explained in a local, gauge-independent manner by considering the gravitational time dilation experienced by the atom in the path with the nonzero potential, and taking into account that matter waves pick up a phase at the Compton frequency of the matter. [ 2 ]
In January 2022, a team led by Mark Kasevich announced that they had experimentally observed a gravitational Aharonov-Bohm effect with an experiment broadly similar to the one outlined above. [ 5 ]
The source of the gravitational potential in their experiment was a single 1.25 kg tungsten mass. The test masses were rubidium -87 atoms. The tungsten mass was fixed, so the gravitational field caused by the tungsten mass was not zero everywhere along the paths of the 87 Rb atoms. This means that the phase shift of the rubidium atoms between the 2 paths was not caused by a gravitational potential energy difference alone, but also by a difference in the gravitational force felt by the atoms in the 2 paths. By detecting a difference in the phase shift between when the tungsten mass is present and when it is not present, they observed a phase shift consistent with that predicted by the Aharonov-Bohm effect.
The "beamsplitters" and "mirrors" used to make the 87 Rb atoms interfere are not solid-state components as would be the case with standard interferometers with light. Rather, they consisted of laser pulses that coherently transfer momentum between the atoms and photons. [ 6 ] | https://en.wikipedia.org/wiki/Gravitational_Aharonov-Bohm_effect |
Gravitational Wave High-energy Electromagnetic Counterpart All-sky Monitor ( GECAM ) ( Chinese : 引力波暴高能电磁对应体全天监测器 ) is a space observatory composed of a constellation of two X-ray and gamma-ray all-sky observing small satellites, called GECAM A (aka KX 08A or Xiaoji , COSPAR 2020-094A) and GECAM B (aka KX 08B or Xiaomu , COSPAR 2020-094B), for research in electromagnetic counterparts of gravitational waves (GWs). It was launched on 9 December 2020 from the Xichang Satellite Launch Center at 20:14 UTC by a Long March 11 rocket. GECAM will focus on detecting electromagnetic counterparts of gravitational waves. In addition to signals from GWs, the observatory studies Ultra-long GRBs, X-ray Flashes, X-ray-rich GRBs, Magnetars and Terrestrial Gamma-ray Flashes. [ 1 ] [ 2 ] [ 3 ]
This spacecraft or satellite related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gravitational_Wave_High-energy_Electromagnetic_Counterpart_All-sky_Monitor |
Gravitational biology is the study of the effects gravity has on living organisms . Throughout the history of the Earth life has evolved to survive changing conditions, such as changes in the climate and habitat . However, one constant factor in evolution since life first began on Earth is the force of gravity. As a consequence, all biological processes are accustomed to the ever-present force of gravity and even small variations in this force can have a significant impact on the health and function of organisms. [ 1 ]
The force of gravity on the surface of the Earth, normally denoted g , has remained constant in both direction and magnitude since the formation of the planet. [ citation needed ] As a result, both plant and animal life have evolved to rely upon and cope with it in various ways. For example, humans employ internal models in motor planning that account for the effects of gravity on gross and fine motor skills. [ 2 ]
Plant tropisms are directional movements of a plant with respect to a directional stimulus. One such tropism is gravitropism , or the growth or movement of a plant with respect to gravity. Plant roots grow towards the pull of gravity and away from sunlight, and shoots and stems grow against the pull of gravity and towards sunlight.
Gravity has had an effect on the development of animal life since the first single-celled organism .
The size of single biological cells is inversely proportional to the strength of the gravitational field exerted on the cell. That is, in stronger gravitational fields the size of cells decreases, and in weaker gravitational fields the size of cells increases. Gravity is thus a limiting factor in the growth of individual cells.
Cells which were naturally larger than the size that gravity alone would allow for had to develop means to protect against internal sedimentation. Several of these methods are based upon protoplasmic motion, thin and elongated shape of the cell body, increased cytoplasmic viscosity , and a reduced range of specific gravity of cell components relative to the ground-plasma. [ 3 ]
The effects of gravity on multicellular organisms are considerably more drastic. During the period when animals first evolved to survive on land, some method of directed locomotion, and thus a form of inner or outer skeleton, would have been required to cope with the increase in the apparent force of gravity due to the weakened upward force of buoyancy . Prior to this point, most lifeforms were small and had a worm- or jellyfish-like appearance, and without this evolutionary step they would not have been able to maintain their form or move on land.
In larger terrestrial vertebrates gravitational forces influence musculoskeletal systems , fluid distribution, and hydrodynamics of the circulation . | https://en.wikipedia.org/wiki/Gravitational_biology |
Gravitational capture occurs when one object enters a stable orbit around another (typically referring to natural capture rather than the orbit insertion of a spacecraft via orbital maneuvers ).
Asteroid capture turns a star-orbiting asteroid into an irregular moon if captured permanently, or a temporary satellite . Capture events explain how satellites can end up with retrograde orbits or rotation.
Planetary capture of a rogue planet by a star or other planet is also theoretically possible, but as of 2012, none had yet been directly observed. [ 1 ] Because the angle of encounter is somewhat random, such an event would likely leave the captured planet in an orbit outside the orbital plane of other planets in the Solar System , possibly in a retrograde orbit.
Planetary capture has been proposed as one mechanism that could explain the unusual orbit of the hypothesized Planet Nine in the Solar System. [ 2 ] ( Planetary migration is a competing explanation.) Planetary capture (possibly planet swapping with neighboring stars) has also been proposed as one explanation for why [ 3 ] an unusually high fraction of hot Jupiter exoplanets have orbits misaligned with their stars, and a few even orbit in the retrograde direction . [ 4 ]
The opposite process, ejection from orbit, can occur through orbital instability or one or more encounters with another passing object ( perturbations ), eventually putting the object on a hyperbolic trajectory . Rogue planets can theoretically be formed in this way, and planets could lose their moons in this way. Tidally detached exomoons have been proposed to explain some astronomical observations, but as of 2023, none had been observed. Severe stellar mass loss could also cause planets to escape orbit and go rogue. | https://en.wikipedia.org/wiki/Gravitational_capture
In astrophysics , gravitational compression is a phenomenon in which gravity , acting on the mass of an object, compresses it, reducing its size and increasing the object's density .
At the center of a planet or star , gravitational compression produces heat by the Kelvin–Helmholtz mechanism . This is the mechanism that explains how Jupiter continues to radiate heat produced by its gravitational compression. [ 1 ]
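A minimal sketch, assuming the order-of-magnitude estimate t_KH ≈ GM²/(RL) and rounded solar values, of how long gravitational contraction alone could power a body radiating at luminosity L:

```python
# Order-of-magnitude Kelvin-Helmholtz timescale, t_KH ~ G*M^2 / (R*L):
# how long gravitational contraction alone could sustain a luminosity L.
# Constants are rounded; the function name is illustrative.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
L_SUN = 3.828e26     # solar luminosity, W
YEAR = 3.156e7       # seconds per year

def kelvin_helmholtz_timescale(mass, radius, luminosity):
    """Rough contraction timescale in seconds."""
    return G * mass**2 / (radius * luminosity)

t_kh = kelvin_helmholtz_timescale(M_SUN, R_SUN, L_SUN)
print(f"Solar Kelvin-Helmholtz timescale ~ {t_kh / YEAR / 1e6:.0f} Myr")
# ~30 Myr: far shorter than the Sun's age, which is why contraction alone
# cannot power the Sun and hydrogen fusion is needed once the core is hot enough.
```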
The most common reference to gravitational compression is stellar evolution . The Sun and other main-sequence stars are produced by the initial gravitational collapse of a molecular cloud . Assuming the mass of the material is large enough, gravitational compression reduces the size of the core, increasing its temperature until hydrogen fusion can begin. This hydrogen -to- helium fusion reaction releases energy that balances the inward gravitational pressure and the star becomes stable for millions of years. No further gravitational compression occurs until the hydrogen is nearly used up, reducing the thermal pressure of the fusion reaction. [ 2 ] At the end of the Sun's life, gravitational compression will turn it into a white dwarf . [ 3 ]
At the other end of the scale are massive stars . These stars burn their fuel very quickly, ending their lives as supernovae , after which further gravitational compression will produce either a neutron star [ 4 ] or a black hole [ 5 ] from the remnants.
For planets and moons , equilibrium is reached when the gravitational compression is balanced by a pressure gradient acting in the opposite direction. This pressure gradient can be supplied by the strength of the material itself, at which point gravitational compression ceases. | https://en.wikipedia.org/wiki/Gravitational_compression
In quantum field theory , a contact term is a radiatively induced point-like interaction.
These typically occur when the vertex for the emission of a massless particle such as a photon , a graviton , or a gluon , is proportional to q 2 {\displaystyle q^{2}} (the invariant momentum of the radiated particle).
This factor cancels the 1 / q 2 {\displaystyle 1/q^{2}} of the Feynman propagator , and causes the exchange of the massless particle to produce a point-like δ {\displaystyle \delta } -function effective interaction, rather than the usual ∼ 1 / r {\displaystyle \sim 1/r} long-range potential. A notable example occurs in the weak interactions where a W-boson radiative correction to a gluon vertex produces a q 2 {\displaystyle q^{2}} term, leading to
what is known as a "penguin" interaction. [ 1 ] [ 2 ] The contact term then generates a correction to the full action of the theory.
Contact terms occur in gravity when there are non-minimal interactions, ( M P l a n c k 2 + α ϕ 2 ) R {\displaystyle (M_{Planck}^{2}+\alpha \phi ^{2})R} , or in Brans-Dicke Theory , ( M P l a n c k 2 + κ M P l a n c k Φ ) R {\displaystyle (M_{Planck}^{2}+\kappa M_{Planck}\Phi )R} .
The non-minimal couplings are quantum equivalent to an "Einstein frame," with a pure Einstein-Hilbert action , M P l a n c k 2 R {\displaystyle M_{Planck}^{2}R} ,
owing to gravitational contact terms. These arise classically from graviton exchange interactions. [ 3 ] The contact terms are an essential, yet hidden, part of the action and, if they are ignored, the Feynman diagram loops in different frames yield different results. At the leading order in 1 / M P l a n c k 2 {\displaystyle {1}/{M_{Planck}^{2}}} including the contact terms is equivalent to performing a Weyl Transformation to remove the non-minimal couplings and taking the theory to the Einstein-Hilbert form. In this sense, the Einstein-Hilbert form of the action is unique and "frame ambiguities" in loop calculations do not exist. | https://en.wikipedia.org/wiki/Gravitational_contact_terms |
Gravitational decoherence is a term for hypothetical mechanisms by which gravitation can act on quantum mechanical systems to produce decoherence . Advocates of gravitational decoherence include Frigyes Károlyházy , Roger Penrose and Lajos Diósi . [ 1 ] [ 2 ]
A number of experiments have been proposed to test the gravitational decoherence hypothesis. [ 1 ] [ 3 ] [ 4 ]
Dmitriy Podolskiy and Robert Lanza have argued that gravitational decoherence may explain the existence of the arrow of time . [ 5 ]
This quantum mechanics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gravitational_decoherence |
The key idea in explaining the way in which structures evolve in the universe is gravitational instability . [ 1 ] If material is to be brought together to form structures, then a long-range force is required, and gravity is the only known possibility. (Although electromagnetism is a long-range force, charge neutrality demands that its influence is unimportant on large scales.) The basic picture is as follows.
Suppose that at some initial time, say decoupling , there are small irregularities in the distribution of matter. Those regions with more matter will exert a greater gravitational force on their neighboring regions and hence tend to draw in the surrounding material. This extra material makes them even more dense than before, increasing their gravitational attraction and further enhancing their pull on their neighbors. An irregular distribution of matter is therefore unstable under the influence of gravity, becoming more and more irregular as time goes by.
This instability is exactly what is needed to explain the observation that the Universe is much more irregular now than at decoupling, and gravitational instability is almost universally accepted to be the primary influence leading to the formation of structures in the Universe. It is an appealingly simple picture, rather spoiled in real life by the fact that while gravity may have the lead role, numerous other processes also have a part to play and things become quite complicated. For example, we know that radiation has pressure proportional to its density , and during structure formation , the irregularities create pressure gradients which lead to forces opposing the gravitational collapse . We know that neutrinos move relativistically and do not interact with other material, so they are able to escape from structures as they form. Once structure formation begins, the complex astrophysics of stars, especially supernovae , can inject energy back into the intergalactic medium and influence regions yet to complete their gravitational collapse. [ 2 ]
This physical cosmology -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gravitational_instability |
A gravitational lens is matter, such as a cluster of galaxies or a point particle , that bends light from a distant source as it travels toward an observer. The amount of gravitational lensing is described by Albert Einstein 's general theory of relativity . [ 1 ] [ 2 ] If light is treated as corpuscles travelling at the speed of light , Newtonian physics also predicts the bending of light, but only half of that predicted by general relativity. [ 3 ] [ 4 ] [ 5 ] [ 6 ]
Orest Khvolson (1924) [ 7 ] and Frantisek Link (1936) [ 8 ] are generally credited with being the first to discuss the effect in print, but it is more commonly associated with Einstein, who made unpublished calculations on it in 1912 [ 9 ] and published an article on the subject in 1936. [ 10 ]
In 1937, Fritz Zwicky posited that galaxy clusters could act as gravitational lenses, a claim confirmed in 1979 by observation of the Twin QSO SBS 0957+561.
Unlike an optical lens , a point-like gravitational lens produces a maximum deflection of light that passes closest to its center, and a minimum deflection of light that travels furthest from its center. Consequently, a gravitational lens has no single focal point , but a focal line. The term "lens" in the context of gravitational light deflection was first used by O. J. Lodge, who remarked that it is "not permissible to say that the solar gravitational field acts like a lens, for it has no focal length". [ 11 ] If the (light) source, the massive lensing object, and the observer lie in a straight line, the original light source will appear as a ring around the massive lensing object (provided the lens has circular symmetry). If there is any misalignment, the observer will see an arc segment instead.
This phenomenon was first mentioned in 1924 by the St. Petersburg physicist Orest Khvolson , [ 12 ] and quantified by Albert Einstein in 1936. It is usually referred to in the literature as an Einstein ring , since Khvolson did not concern himself with the flux or radius of the ring image. More commonly, where the lensing mass is complex (such as a galaxy group or cluster ) and does not cause a spherical distortion of spacetime, the source will resemble partial arcs scattered around the lens. The observer may then see multiple distorted images of the same source; the number and shape of these depending upon the relative positions of the source, lens, and observer, and the shape of the gravitational well of the lensing object.
There are three classes of gravitational lensing: strong lensing, weak lensing, and microlensing. [ 13 ] : 399–401 [ 14 ]
Gravitational lenses act equally on all kinds of electromagnetic radiation , not just visible light, and also in non-electromagnetic radiation, like gravitational waves. Weak lensing effects are being studied for the cosmic microwave background as well as galaxy surveys . Strong lenses have been observed in radio and x-ray regimes as well. If a strong lens produces multiple images, there will be a relative time delay between two paths: that is, in one image the lensed object will be observed before the other image.
Henry Cavendish in 1784 (in an unpublished manuscript) and Johann Georg von Soldner in 1801 (published in 1804) had pointed out that Newtonian gravity predicts that starlight will bend around a massive object [ 15 ] as had already been supposed by Isaac Newton in 1704 in his Queries No.1 in his book Opticks . [ 16 ] The same value as Soldner's was calculated by Einstein in 1911 based on the equivalence principle alone. [ 13 ] : 3 However, Einstein noted in 1915, in the process of completing general relativity, that his (and thus Soldner's) 1911 result is only half of the correct value. Einstein became the first to calculate the correct value for light bending. [ 17 ]
The first observation of light deflection was performed by noting the change in position of stars as they passed near the Sun on the celestial sphere . The observations were performed in 1919 by Arthur Eddington , Frank Watson Dyson , and their collaborators during the total solar eclipse on May 29 . [ 18 ] The solar eclipse allowed the stars near the Sun to be observed. Observations were made simultaneously in the cities of Sobral, Ceará , Brazil and in São Tomé and Príncipe on the west coast of Africa. [ 19 ] The observations demonstrated that the light from stars passing close to the Sun was slightly bent, so that stars appeared slightly out of position. [ 20 ]
The result was considered spectacular news and made the front page of most major newspapers. It made Einstein and his theory of general relativity world-famous. When asked by his assistant what his reaction would have been if general relativity had not been confirmed by Eddington and Dyson in 1919, Einstein said "Then I would feel sorry for the dear Lord. The theory is correct anyway." [ 21 ] In 1912, Einstein had speculated that an observer could see multiple images of a single light source, if the light were deflected around a mass. This effect would make the mass act as a kind of gravitational lens. However, as he only considered the effect of deflection around a single star, he seemed to conclude that the phenomenon was unlikely to be observed for the foreseeable future since the necessary alignments between stars and observer would be highly improbable. Several other physicists speculated about gravitational lensing as well, but all reached the same conclusion that it would be nearly impossible to observe. [ 10 ]
Although Einstein made unpublished calculations on the subject, [ 9 ] the first discussion of the gravitational lens in print was by Khvolson, in a short article discussing the "halo effect" of gravitation when the source, lens, and observer are in near-perfect alignment, [ 7 ] now referred to as the Einstein ring .
In 1936, after some urging by Rudi W. Mandl, Einstein reluctantly published the short article "Lens-Like Action of a Star By the Deviation of Light In the Gravitational Field" in the journal Science . [ 10 ]
In 1937, Fritz Zwicky first considered the case where the newly discovered galaxies (which were called 'nebulae' at the time) could act as both source and lens, and that, because of the mass and sizes involved, the effect was much more likely to be observed. [ 22 ]
In 1963 Yu. G. Klimov, S. Liebes, and Sjur Refsdal recognized independently that quasars are an ideal light source for the gravitational lens effect. [ 23 ]
It was not until 1979 that the first gravitational lens would be discovered. It became known as the " Twin QSO " since it initially looked like two identical quasistellar objects. (It is officially named SBS 0957+561 .) This gravitational lens was discovered by Dennis Walsh , Bob Carswell, and Ray Weymann using the Kitt Peak National Observatory 2.1 meter telescope . [ 24 ]
In the 1980s, astronomers realized that the combination of CCD imagers and computers would allow the brightness of millions of stars to be measured each night. In a dense field, such as the galactic center or the Magellanic clouds, many microlensing events per year could potentially be found. This led to efforts such as Optical Gravitational Lensing Experiment , or OGLE, that have characterized hundreds of such events, including those of OGLE-2016-BLG-1190Lb and OGLE-2016-BLG-1195Lb .
Newton wondered whether light, in the form of corpuscles, would be bent due to gravity. The Newtonian prediction for light deflection refers to the amount of deflection a corpuscle would feel under the effect of gravity, and therefore one should read "Newtonian" in this context as referring to the following calculations, not to a belief Newton himself held in the validity of these calculations. [ 25 ]
For a gravitational point-mass lens of mass M {\displaystyle M} , a corpuscle of mass m {\displaystyle m} feels a force
{\displaystyle F={\frac {GMm}{r^{2}}},}
where r {\displaystyle r} is the lens-corpuscle separation. If we equate this force with Newton's second law , we can solve for the acceleration that the light undergoes:
{\displaystyle a={\frac {GM}{r^{2}}}.}
The light interacts with the lens from initial time t = 0 {\displaystyle t=0} to t {\displaystyle t} , and the velocity boost the corpuscle receives is
{\displaystyle \Delta v=\int _{0}^{t}a(t')\,dt'.}
If one assumes that initially the light is far enough from the lens to neglect gravity, the perpendicular distance between the light's initial trajectory and the lens is b (the impact parameter ), and the parallel distance is r ∥ {\displaystyle r_{\parallel }} , such that r 2 = b 2 + r ∥ 2 {\displaystyle r^{2}=b^{2}+r_{\parallel }^{2}} . We additionally assume a constant speed of light along the parallel direction, d r ∥ ≈ c d t {\displaystyle dr_{\parallel }\approx c\,dt} , and that the light is only being deflected a small amount. After plugging these assumptions into the above equation and further simplifying, one can solve for the velocity boost in the perpendicular direction. The angle of deflection between the corpuscle’s initial and final trajectories is therefore (see, e.g., M. Meneghetti 2021) [ 25 ]
{\displaystyle \theta \approx {\frac {\Delta v_{\perp }}{c}}={\frac {2GM}{c^{2}b}}.}
Although this result appears to be half the prediction from general relativity, classical physics predicts that the speed of light c {\displaystyle c} is observer-dependent (see, e.g., L. Susskind and A. Friedman 2018) [ 26 ] which was superseded by a universal speed of light in special relativity .
In general relativity, light follows the curvature of spacetime, hence when light passes around a massive object, it is bent. This means that the light from an object on the other side will be bent towards an observer's eye, just like an ordinary lens. In general relativity the path of light depends on the shape of space (i.e. the metric). The gravitational attraction can be viewed as the motion of undisturbed objects in a background curved geometry or alternatively as the response of objects to a force in a flat geometry. The angle of deflection is
{\displaystyle \theta ={\frac {4GM}{rc^{2}}}}
toward the mass M at a distance r from the affected radiation, where G is the universal constant of gravitation , and c is the speed of light in vacuum.
Since the Schwarzschild radius r s {\displaystyle r_{\text{s}}} is defined as r s = 2 G m / c 2 {\displaystyle r_{\text{s}}=2Gm/c^{2}} , and escape velocity v e {\displaystyle v_{\text{e}}} is defined as v e = 2 G m / r = β e c {\textstyle v_{\text{e}}={\sqrt {2Gm/r}}=\beta _{\text{e}}c} , this can also be expressed in simple form as
{\displaystyle \theta =2{\frac {r_{\text{s}}}{r}}=2\beta _{\text{e}}^{2}.}
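A small numerical check of the two formulas above, assuming a ray grazing the solar limb (impact parameter equal to the solar radius) and rounded physical constants:

```python
import math

# Deflection of light grazing the Sun, using the point-mass formulas above:
# the Newtonian corpuscle value 2GM/(c^2 b) and the general-relativistic
# value 4GM/(c^2 b). Constants are rounded.

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m, impact parameter for a grazing ray

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

alpha_newton = 2 * G * M_SUN / (c**2 * R_SUN) * RAD_TO_ARCSEC
alpha_gr = 4 * G * M_SUN / (c**2 * R_SUN) * RAD_TO_ARCSEC

print(f"Newtonian deflection: {alpha_newton:.2f} arcsec")
print(f"GR deflection:        {alpha_gr:.2f} arcsec")  # ~1.75 arcsec, the value tested in 1919
```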
Most of the gravitational lenses in the past have been discovered accidentally. A search for gravitational lenses in the northern hemisphere (Cosmic Lens All Sky Survey, CLASS), done in radio frequencies using the Very Large Array (VLA) in New Mexico, led to the discovery of 22 new lensing systems, a major milestone. This has opened a whole new avenue for research ranging from finding very distant objects to finding values for cosmological parameters so we can understand the universe better.
A similar search in the southern hemisphere would complement the northern hemisphere search and serve additional scientific objectives. If such a search is done with well-calibrated and well-parameterized instruments and data, results comparable to the northern survey can be expected. The Australia Telescope 20 GHz (AT20G) Survey, collected with the Australia Telescope Compact Array (ATCA), is such a data set. Because the data were collected with a single instrument under stringent quality control, good results can be expected from the search. The AT20G survey is a blind survey at 20 GHz in the radio domain of the electromagnetic spectrum. At this high frequency, the chance of finding gravitational lenses increases because the relative number of compact-core objects (e.g. quasars) is higher (Sadler et al. 2006). This is important, as lensing is easier to detect and identify in simple objects than in complex ones. The search uses interferometric methods to identify candidates, which are then followed up at higher resolution. Full details of the project are being prepared for publication.
Microlensing techniques have been used to search for planets outside the Solar System . A statistical analysis of specific cases of observed microlensing over the time period of 2002 to 2007 found that most stars in the Milky Way galaxy hosted at least one orbiting planet within 0.5 to 10 AU. [ 28 ]
In 2009, weak gravitational lensing was used to extend the mass-X-ray-luminosity relation to older and smaller structures than was previously possible to improve measurements of distant galaxies. [ 29 ]
As of 2013, the most distant gravitational lens galaxy, J1000+0221 , had been found using NASA 's Hubble Space Telescope . [ 30 ] [ 31 ] While it remains the most distant quad-image lensing galaxy known, an even more distant two-image lensing galaxy was subsequently discovered by an international team of astronomers using a combination of Hubble Space Telescope and Keck telescope imaging and spectroscopy. The discovery and analysis of the IRC 0218 lens was published in the Astrophysical Journal Letters on June 23, 2014. [ 32 ]
Research published on September 30, 2013 in the online edition of Physical Review Letters , led by McGill University in Montreal , Québec , Canada, reported the detection of B-modes formed by the gravitational lensing effect, using the National Science Foundation 's South Pole Telescope with help from the Herschel space observatory. This discovery opens up possibilities for testing theories of how our universe originated. [ 33 ] [ 34 ]
Albert Einstein predicted in 1936 that rays of light from the same direction that skirt the edges of the Sun would converge to a focal point approximately 542 AU from the Sun. [ 37 ] Thus, a probe positioned at this distance (or greater) from the Sun could use the Sun as a gravitational lens for magnifying distant objects on the opposite side of the Sun. [ 38 ] A probe's location could shift around as needed to select different targets relative to the Sun.
This distance is far beyond the reach of current space probes such as Voyager 1 , and beyond the known planets and dwarf planets, though over thousands of years 90377 Sedna will move farther away on its highly elliptical orbit. The high gain for potentially detecting signals through this lens, such as microwaves at the 21-cm hydrogen line , led to the suggestion by Frank Drake in the early days of SETI that a probe could be sent to this distance. A multipurpose probe, SETISAIL and later FOCAL, was proposed to the ESA in 1993, but such a mission is expected to be a difficult task. [ 39 ] If a probe does pass 542 AU, the magnification capabilities of the lens will continue to act at farther distances, as the rays that come to a focus at larger distances pass further from the distortions of the Sun's corona. [ 40 ] A critique of the concept was given by Landis, [ 41 ] who discussed issues including interference from the solar corona, the high magnification of the target, which will make the design of the mission focal plane difficult, and an analysis of the inherent spherical aberration of the lens.
In 2020, NASA physicist Slava Turyshev presented his idea of Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravitational Lens Mission. The lens could reconstruct the exoplanet image with ~25 km-scale surface resolution, enough to see surface features and signs of habitability. [ 42 ]
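A minimal sketch of where the 542 AU figure comes from, assuming the grazing-ray deflection 4GM/(c²b) quoted above and the small-angle relation that a ray with impact parameter b crosses the axis at roughly F = b²c²/(4GM); with rounded constants this gives a value close to, but not exactly, 542 AU:

```python
# Minimum focal distance of the Sun as a gravitational lens: a ray with
# impact parameter b is bent by alpha = 4GM/(c^2 b), so it crosses the
# optical axis at roughly F = b / alpha = b^2 c^2 / (4 G M).

G = 6.674e-11
c = 2.998e8
M_SUN = 1.989e30
R_SUN = 6.957e8      # grazing impact parameter, m
AU = 1.496e11        # m

F = R_SUN**2 * c**2 / (4 * G * M_SUN)
print(f"Focal distance for a grazing ray: {F / AU:.0f} AU")
# ~550 AU with these rounded constants; rays with larger impact parameters
# focus farther out, which is why the lens keeps working beyond this minimum.
```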
Kaiser, Squires and Broadhurst (1995), [ 44 ] Luppino & Kaiser (1997) [ 45 ] and Hoekstra et al. (1998) prescribed a method to invert the effects of the point spread function (PSF) smearing and shearing, recovering a shear estimator uncontaminated by the systematic distortion of the PSF. This method (KSB+) is the most widely used method in weak lensing shear measurements. [ 46 ] [ 47 ]
Galaxies have random rotations and inclinations. As a result, the shear effects in weak lensing need to be determined by statistically preferred orientations. The primary source of error in lensing measurement is due to the convolution of the PSF with the lensed image. The KSB method measures the ellipticity of a galaxy image. The shear is proportional to the ellipticity. The objects in lensed images are parameterized according to their weighted quadrupole moments. For a perfect ellipse, the weighted quadrupole moments are related to the weighted ellipticity. KSB calculate how a weighted ellipticity measure is related to the shear and use the same formalism to remove the effects of the PSF. [ 48 ]
KSB's primary advantages are its mathematical ease and relatively simple implementation. However, KSB is based on a key assumption that the PSF is circular with an anisotropic distortion. This is a reasonable assumption for cosmic shear surveys, but the next generation of surveys (e.g. LSST ) may need much better accuracy than KSB can provide.
| https://en.wikipedia.org/wiki/Gravitational_lens
In general relativity , a point mass deflects a light ray with impact parameter b {\displaystyle b~} by an angle approximately equal to
{\displaystyle {\hat {\alpha }}={\frac {4GM}{c^{2}b}},}
where G is the gravitational constant , M the mass of the deflecting object and c the speed of light . A naive application of Newtonian gravity can yield exactly half this value, where the light ray is treated as a massive particle scattered by the gravitational potential well. This approximation is good when 4 G M / c 2 b {\displaystyle 4GM/c^{2}b} is small.
In situations where general relativity can be approximated by linearized gravity , the deflection due to a spatially extended mass can be written simply as a vector sum over point masses. In the continuum limit , this becomes an integral over the density ρ {\displaystyle \rho ~} , and if the deflection is small we can approximate the gravitational potential along the deflected trajectory by the potential along the undeflected trajectory, as in the Born approximation in quantum mechanics. The deflection is then
where z {\displaystyle z} is the line-of-sight coordinate, and b → {\displaystyle {\vec {b}}} is the vector impact parameter of the actual ray path from the infinitesimal mass d 2 ξ ′ d z ρ ( ξ → ′ , z ) {\displaystyle d^{2}\xi ^{\prime }dz\rho ({\vec {\xi }}^{\prime },z)} located at the coordinates ( ξ → ′ , z ) {\displaystyle ({\vec {\xi }}^{\prime },z)} . [ 1 ]
In the limit of a "thin lens", where the distances between the source, lens, and observer are much larger than the size of the lens (this is almost always true for astronomical objects), we can define the projected mass density
where ξ → ′ {\displaystyle {\vec {\xi }}^{\prime }} is a vector in the plane of the sky. The deflection angle is then
As shown in the diagram on the right, the difference between the unlensed angular position β → {\displaystyle {\vec {\beta }}} and the observed position θ → {\displaystyle {\vec {\theta }}} is this deflection angle, reduced by a ratio of distances, described as the lens equation
where D d s {\displaystyle D_{ds}~} is the distance from the lens to the source, D s {\displaystyle D_{s}~} is the distance from the observer to the source, and D d {\displaystyle D_{d}~} is the distance from the observer to the lens. For extragalactic lenses, these must be angular diameter distances .
In strong gravitational lensing, this equation can have multiple solutions, because a single source at β → {\displaystyle {\vec {\beta }}} can be lensed into multiple images.
The reduced deflection angle α → ( θ → ) {\displaystyle {\vec {\alpha }}({\vec {\theta }})} can be written as
where we define the convergence
{\displaystyle \kappa ({\vec {\theta }})={\frac {\Sigma ({\vec {\theta }})}{\Sigma _{cr}}}}
and the critical surface density (not to be confused with the critical density of the universe)
{\displaystyle \Sigma _{cr}={\frac {c^{2}D_{s}}{4\pi GD_{d}D_{ds}}}.}
We can also define the deflection potential
such that the scaled deflection angle is just the gradient of the potential and the convergence is half the Laplacian of the potential:
The deflection potential can also be written as a scaled projection of the Newtonian gravitational potential Φ {\displaystyle \Phi ~} of the lens [ 2 ]
The Jacobian between the unlensed and lensed coordinate systems is
where δ i j {\displaystyle \delta _{ij}~} is the Kronecker delta . Because the matrix of second derivatives must be symmetric, the Jacobian can be decomposed into a diagonal term involving the convergence and a trace -free term involving the shear γ {\displaystyle \gamma ~}
where ϕ {\displaystyle \phi ~} is the angle between α → {\displaystyle {\vec {\alpha }}} and the x-axis. The term involving the convergence magnifies the image by increasing its size while conserving surface brightness. The term involving the shear stretches the image tangentially around the lens, as discussed in weak lensing observables .
The shear defined here is not equivalent to the shear traditionally defined in mathematics, though both stretch an image non-uniformly.
There is an alternative way of deriving the lens equation, starting from the photon arrival time (Fermat surface)
where d z / c {\displaystyle dz/c} is the time to travel an infinitesimal line element along the source-observer straight line in vacuum, which is
then corrected by the factor
to get the line element along the bended path d l = d z c cos α ( z ) {\displaystyle dl={dz \over c\cos \alpha (z)}} with a varying small pitch angle α ( z ) , {\displaystyle \alpha (z),} and the refraction index n for the "aether", i.e., the gravitational field. The last can be obtained from the fact that a photon travels on a null geodesic of a weakly perturbed static Minkowski universe
where the uneven gravitational potential Φ ≪ c 2 {\displaystyle \Phi \ll c^{2}} drives a change in the speed of light
So the refraction index
The refraction index is greater than unity because of the negative gravitational potential Φ {\displaystyle \Phi } .
Put these together and keep the leading terms we have the time arrival surface
The first term is the straight path travel time, the second term is the extra geometric path, and the third is the gravitational delay.
Make the triangle approximation that α ( z ) = θ − β {\displaystyle \alpha (z)=\theta -\beta } for the path between the observer and the lens,
and α ( z ) ≈ ( θ − β ) D d D d s {\displaystyle \alpha (z)\approx (\theta -\beta ){D_{d} \over D_{ds}}} for the path between the lens and the source.
The geometric delay term becomes
(How? There is no D s {\displaystyle D_{s}} on the left. Angular diameter distances don't add in a simple way, in general.)
So the Fermat surface becomes
where τ {\displaystyle \tau } is so-called dimensionless time delay, and the 2D lensing potential
The images lie at the extrema of this surface, so the variation of τ {\displaystyle \tau } with θ → {\displaystyle {\vec {\theta }}} is zero,
which is the lens equation. Take the Poisson's equation for 3D potential
and we find the 2D lensing potential
Here we assumed the lens is a collection of point masses M i {\displaystyle M_{i}} at angular coordinates θ → i {\displaystyle {\vec {\theta }}_{i}} and distances z = D i . {\displaystyle z=D_{i}.} Use sinh − 1 1 / x = ln ( 1 / x + 1 / x 2 + 1 ) ≈ − ln ( x / 2 ) {\displaystyle \sinh ^{-1}1/x=\ln(1/x+{\sqrt {1/x^{2}+1}})\approx -\ln(x/2)} for very small x we find
One can compute the convergence by applying the 2D Laplacian of the 2D lensing potential
in agreement with earlier definition κ ( θ → ) = Σ Σ c r {\displaystyle \kappa ({\vec {\theta }})={\Sigma \over \Sigma _{cr}}} as the ratio of projected density with the critical density.
Here we used ∇ 2 1 / r = − 4 π δ ( r ) {\displaystyle \nabla ^{2}1/r=-4\pi \delta (r)} and ∇ θ → = D d ∇ . {\displaystyle \nabla _{\vec {\theta }}=D_{d}\nabla .}
We can also confirm the previously defined reduced deflection angle
where θ E i {\displaystyle \theta _{Ei}} is the so-called Einstein angular radius of a point lens M i {\displaystyle M_{i}} . For a single point lens at the origin we recover the standard result
that there will be two images at the two solutions of the essentially quadratic equation
{\displaystyle \theta ^{2}-\beta \theta -\theta _{E}^{2}=0.}
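A minimal sketch solving this quadratic for the two image positions; the numbers and function name are illustrative only:

```python
import math

# For a single point lens the lens equation reduces to the quadratic above,
# theta^2 - beta*theta - theta_E^2 = 0 (angles measured along the
# source-lens axis), giving one image on each side of the lens.

def point_lens_images(beta, theta_E):
    """Return the two image positions, in the same angular units as the inputs."""
    disc = math.sqrt(beta**2 + 4.0 * theta_E**2)
    theta_plus = 0.5 * (beta + disc)    # outside the Einstein radius
    theta_minus = 0.5 * (beta - disc)   # inside it, on the opposite side
    return theta_plus, theta_minus

# Illustrative numbers: Einstein radius 1.0 arcsec, source offset 0.3 arcsec.
theta_E, beta = 1.0, 0.3
tp, tm = point_lens_images(beta, theta_E)
print(f"images at theta = {tp:.3f} and {tm:.3f} arcsec")
# As beta -> 0 the images approach +/- theta_E, i.e. the Einstein ring.
```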
The amplification matrix can be obtained by double derivatives of the dimensionless time delay
where we have defined the derivatives
which takes the meaning of convergence and shear. The amplification is the inverse of the Jacobian
where a positive A {\displaystyle A} means either a maximum or a minimum, and a negative A {\displaystyle A} means a saddle point in the arrival surface.
For a single point lens, one can show (albeit a lengthy calculation) that
So the amplification of a point lens is given by
Note A diverges for images at the Einstein radius θ E . {\displaystyle \theta _{E}.}
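A short sketch of the corresponding image magnifications, assuming the standard point-lens result det A = 1 − (θ_E/θ)⁴ with μ = 1/det A for each image (the intermediate algebra is not reproduced in the text above):

```python
import math

# Magnifications of the two point-lens images, assuming the standard result
# det A = 1 - (theta_E / theta)^4 and mu = 1 / det A for each image.

def point_lens_images(beta, theta_E):
    disc = math.sqrt(beta**2 + 4.0 * theta_E**2)
    return 0.5 * (beta + disc), 0.5 * (beta - disc)

def magnification(theta, theta_E):
    return 1.0 / (1.0 - (theta_E / theta) ** 4)

theta_E, beta = 1.0, 0.3
tp, tm = point_lens_images(beta, theta_E)
mu_p, mu_m = magnification(tp, theta_E), magnification(tm, theta_E)
print(f"mu(+) = {mu_p:.2f}, mu(-) = {mu_m:.2f}, total |mu| = {abs(mu_p) + abs(mu_m):.2f}")
# As beta -> 0 both images sit on the Einstein radius and the magnifications
# diverge, matching the note above; the negative sign marks a parity flip.
```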
In cases where there are multiple point lenses plus a smooth background of (dark) particles of surface density Σ c r κ s m o o t h , {\displaystyle \Sigma _{\rm {cr}}\kappa _{\rm {smooth}},} the time arrival surface is
To compute the amplification, e.g., at the origin (0,0), due to identical point masses distributed at ( θ x i , θ y i ) {\displaystyle (\theta _{xi},\theta _{yi})} we have to add up the total shear, and include a convergence of the smooth background,
This generally creates a network of critical curves, lines connecting image points of infinite amplification.
In weak lensing by large-scale structure , the thin-lens approximation may break down, and low-density extended structures may not be well approximated by multiple thin-lens planes. In this case, the deflection can be derived by instead assuming that the gravitational potential is slowly varying everywhere (for this reason, this approximation is not valid for strong lensing).
This approach assumes the universe is well described by a Newtonian-perturbed FRW metric , but it makes no other assumptions about the distribution of the lensing mass.
As in the thin-lens case, the effect can be written as a mapping from the unlensed angular position β → {\displaystyle {\vec {\beta }}} to the lensed position θ → {\displaystyle {\vec {\theta }}} . The Jacobian of the transform can be written as an integral over the gravitational potential Φ {\displaystyle \Phi ~} along the line of sight [ 3 ]
where r {\displaystyle r~} is the comoving distance , x i {\displaystyle x^{i}~} are the transverse distances, and
is the lensing kernel , which defines the efficiency of lensing for a distribution of sources W ( r ) {\displaystyle W(r)~} .
The Jacobian A i j {\displaystyle A_{ij}~} can be decomposed into convergence and shear terms just as with the thin-lens case, and in the limit of a lens that is both thin and weak, their physical interpretations are the same.
In weak gravitational lensing , the Jacobian is mapped out by observing the effect of the shear on the ellipticities of background galaxies. This effect is purely statistical; the shape of any galaxy will be dominated by its random, unlensed shape, but lensing will produce a spatially coherent distortion of these shapes.
In most fields of astronomy, the ellipticity is defined as 1 − q {\displaystyle 1-q~} , where q = b a {\displaystyle q={\frac {b}{a}}} is the axis ratio of the ellipse . In weak gravitational lensing , two different definitions are commonly used, and both are complex quantities which specify both the axis ratio and the position angle ϕ {\displaystyle \phi ~} :
{\displaystyle \chi ={\frac {1-q^{2}}{1+q^{2}}}e^{2i\phi },\qquad \epsilon ={\frac {1-q}{1+q}}e^{2i\phi }.}
Like the traditional ellipticity, the magnitudes of both of these quantities range from 0 (circular) to 1 (a line segment). The position angle is encoded in the complex phase, but because of the factor of 2 in the trigonometric arguments, ellipticity is invariant under a rotation of 180 degrees. This is to be expected; an ellipse is unchanged by a 180° rotation. Taken as imaginary and real parts, the real part of the complex ellipticity describes the elongation along the coordinate axes, while the imaginary part describes the elongation at 45° from the axes.
The ellipticity is often written as a two-component vector instead of a complex number, though it is not a true vector with regard to transforms:
Real astronomical background sources are not perfect ellipses. Their ellipticities can be measured by finding a best-fit elliptical model to the data, or by measuring the second moments of the image about some centroid ( x ¯ , y ¯ ) {\displaystyle ({\bar {x}},{\bar {y}})}
The complex ellipticities are then
This can be used to relate the second moments to traditional ellipse parameters:
and in reverse:
The unweighted second moments above are problematic in the presence of noise, neighboring objects, or extended galaxy profiles, so it is typical to use apodized moments instead:
Here w ( x , y ) {\displaystyle w(x,y)~} is a weight function that typically goes to zero or quickly approaches zero at some finite radius.
Image moments cannot generally be used to measure the ellipticity of galaxies without correcting for observational effects , particularly the point spread function . [ 4 ]
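A minimal sketch of measuring complex ellipticities from image moments, assuming the standard (unweighted) quadrupole-moment definitions; the blob generator and all numbers are illustrative, and no PSF correction is attempted:

```python
import numpy as np

# Second-moment ellipticity of an image, assuming the standard definitions
# chi = (Qxx - Qyy + 2i*Qxy) / (Qxx + Qyy) and
# eps = (Qxx - Qyy + 2i*Qxy) / (Qxx + Qyy + 2*sqrt(Qxx*Qyy - Qxy^2)).
# Unweighted moments, no PSF correction; purely illustrative.

def complex_ellipticities(image):
    """image: 2D array of surface brightness. Returns (chi, eps)."""
    y, x = np.indices(image.shape, dtype=float)
    flux = image.sum()
    xbar, ybar = (image * x).sum() / flux, (image * y).sum() / flux
    qxx = (image * (x - xbar) ** 2).sum() / flux
    qyy = (image * (y - ybar) ** 2).sum() / flux
    qxy = (image * (x - xbar) * (y - ybar)).sum() / flux
    chi = (qxx - qyy + 2j * qxy) / (qxx + qyy)
    eps = (qxx - qyy + 2j * qxy) / (qxx + qyy + 2.0 * np.sqrt(qxx * qyy - qxy ** 2))
    return chi, eps

def gaussian_blob(n=201, a=20.0, b=10.0, angle_deg=30.0):
    """Elliptical Gaussian test image with axis ratio b/a, rotated by angle_deg."""
    y, x = np.indices((n, n), dtype=float) - n // 2
    t = np.deg2rad(angle_deg)
    u = x * np.cos(t) + y * np.sin(t)
    v = -x * np.sin(t) + y * np.cos(t)
    return np.exp(-0.5 * ((u / a) ** 2 + (v / b) ** 2))

chi, eps = complex_ellipticities(gaussian_blob())
print(f"|chi| = {abs(chi):.3f}  (expect (1-q^2)/(1+q^2) = 0.600 for q = 0.5)")
print(f"|eps| = {abs(eps):.3f}  (expect (1-q)/(1+q)     = 0.333 for q = 0.5)")
```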
Recall that the lensing Jacobian can be decomposed into shear γ {\displaystyle \gamma ~} and convergence κ {\displaystyle \kappa ~} .
Acting on a circular background source with radius R {\displaystyle R~} , lensing generates an ellipse with major and minor axes
as long as the shear and convergence do not change appreciably over the size of the source (in that case, the lensed image is not an ellipse). Galaxies are not intrinsically circular, however, so it is necessary to quantify the effect of lensing on a non-zero ellipticity.
We can define the complex shear in analogy to the complex ellipticities defined above
as well as the reduced shear
{\displaystyle g\equiv {\frac {\gamma }{1-\kappa }}.}
The lensing Jacobian can now be written as
For a reduced shear g {\displaystyle g~} and unlensed complex ellipticities χ s {\displaystyle \chi _{s}~} and ϵ s {\displaystyle \epsilon _{s}~} , the lensed ellipticities are
In the weak lensing limit, γ ≪ 1 {\displaystyle \gamma \ll 1} and κ ≪ 1 {\displaystyle \kappa \ll 1} , so
If we can assume that the sources are randomly oriented, their complex ellipticities average to zero, so
This is the principal equation of weak lensing: the average ellipticity of background galaxies is a direct measure of the shear induced by foreground mass.
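A toy illustration of this principle, assuming the linearized weak-lensing relation that each observed ellipticity is approximately the intrinsic ellipticity plus the reduced shear; all numbers are illustrative:

```python
import numpy as np

# Toy check of the relation above: with randomly oriented sources, the mean
# observed ellipticity estimates the reduced shear. The linearized relation
# eps_obs ~ eps_intrinsic + g used below is an assumption consistent with
# the weak-lensing limit described in the text.

rng = np.random.default_rng(42)

g_true = 0.03 + 0.01j          # reduced shear applied to every galaxy
n_gal = 200_000
sigma_e = 0.25                 # intrinsic shape noise per component

eps_int = sigma_e * (rng.standard_normal(n_gal) + 1j * rng.standard_normal(n_gal))
eps_obs = eps_int + g_true     # linearized lensing distortion
g_est = eps_obs.mean()

print(f"input shear:     {g_true.real:+.4f} {g_true.imag:+.4f}i")
print(f"estimated shear: {g_est.real:+.4f} {g_est.imag:+.4f}i")
# The estimate converges to the input as the number of galaxies grows;
# the per-component scatter is roughly sigma_e / sqrt(n_gal).
```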
While gravitational lensing preserves surface brightness, as dictated by Liouville's theorem , lensing does change the apparent solid angle of a source. The amount of magnification is given by the ratio of the image area to the source area. For a circularly symmetric lens, the magnification factor μ is given by
{\displaystyle \mu ={\frac {\theta }{\beta }}{\frac {d\theta }{d\beta }}.}
In terms of convergence and shear
{\displaystyle \mu ={\frac {1}{\det A}}={\frac {1}{(1-\kappa )^{2}-\gamma ^{2}}}.}
For this reason, the Jacobian A {\displaystyle A~} is also known as the "inverse magnification matrix".
The reduced shear is invariant with the scaling of the Jacobian A {\displaystyle A~} by a scalar λ {\displaystyle \lambda ~} , which is equivalent to the transformations
and
Thus, κ {\displaystyle \kappa } can only be determined up to a transformation κ → λ κ + ( 1 − λ ) {\displaystyle \kappa \rightarrow \lambda \kappa +(1-\lambda )} , which is known as the "mass sheet degeneracy." In principle, this degeneracy can be broken if an independent measurement of the magnification is available because the magnification is not invariant under the aforementioned degeneracy transformation. Specifically, μ {\displaystyle \mu ~} scales with λ {\displaystyle \lambda ~} as μ ∝ λ − 2 {\displaystyle \mu \propto \lambda ^{-2}} . | https://en.wikipedia.org/wiki/Gravitational_lensing_formalism |
Gravitational memory effects , also known as gravitational-wave memory effects , are predicted persistent changes in the relative position of pairs of masses in space due to the passing of a gravitational wave . [ 2 ] Detection of gravitational memory effects has been suggested as a way of validating general relativity . [ 3 ] [ 4 ]
In 2014 Andrew Strominger and Alexander Zhiboedov showed that the formula related to the memory effect is the Fourier transform in time of Weinberg 's soft graviton theorem . [ 5 ]
There are two kinds of predicted gravitational memory effect: one based on a linear approximation of Einstein's equations , first proposed in 1974 by the Soviet scientists Yakov Zeldovich and A. G. Polnarev , [ 2 ] [ 6 ] developed also by Vladimir Braginsky and L. P. Grishchuk , [ 2 ] and a non-linear phenomenon known as the non-linear memory effect , which was first proposed in the 1990s by Demetrios Christodoulou . [ 7 ] [ 8 ] [ 9 ]
The non-linear memory effect could be exploited to determine the inclination, with respect to us observers, of the plane on which the two objects that merged and generated the gravitational waves were moving, making the calculation of their distance more precise, since the amplitude of the received wave (what is experimentally measured) depends on the distance of the source and the aforementioned inclination with respect to us. [ 10 ]
In 2016, a new type of memory effect, induced by gravitational waves incident on rays of light moving along circular trajectories perpendicular to the waves, was proposed by Sabrina Gonzalez Pasterski , Strominger and Zhiboedov. This is caused by the angular momentum of the waves themselves and therefore termed gravitational spin memory . As in the previous case, this memory also turns out to be a Fourier transform in time, but, in this case, of the graviton theorem expanded to the subleading term. [ 11 ] [ 12 ]
The effect should, in theory, be detectable by recording changes in the distance between pairs of free-falling objects in spacetime before and after the passage of gravitational waves. The proposed LISA detector is expected to detect the memory effect easily. [ 13 ] In contrast, detection with the existing LIGO is complicated by two factors. First, LIGO detection targets a higher frequency range than is desirable for detection of memory effects. Secondly, LIGO is not in free-fall, and its parts will drift back to their equilibrium position following the passage of the gravitational waves. However, as thousands of events from LIGO and similar earth-based detectors are recorded and statistically analyzed over the course of several years, the cumulative data may be sufficient to confirm the existence of the gravitational memory effect. [ 14 ] | https://en.wikipedia.org/wiki/Gravitational_memory_effect |
Gravitational plane waves are described as "non-flat solutions of Albert Einstein ’s empty spacetime field equation". [ 1 ] [ 2 ] [ 3 ] [ 4 ] They are a special class of a vacuum pp-wave spacetime .
In general relativity , [ 5 ] they may be defined in terms of Brinkmann coordinates by
d s 2 = [ a ( u ) ( x 2 − y 2 ) + 2 b ( u ) x y ] d u 2 + 2 d u d v + d x 2 + d y 2 {\displaystyle ds^{2}=[a(u)(x^{2}-y^{2})+2b(u)xy]du^{2}+2dudv+dx^{2}+dy^{2}}
Here, a ( u ) , b ( u ) {\displaystyle a(u),b(u)} can be any smooth functions ; they control the waveform of the two possible polarization modes of gravitational radiation . In this context, these two modes are usually called the plus mode and cross mode , respectively.
This relativity -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gravitational_plane_wave |
Gravitational scattering is the alteration of trajectories when two or more celestial objects exchange energy and momentum through close gravitational encounters . [ 1 ] This process underpins many dynamical phenomena in astrophysics , from the formation of binary star systems to the ejection of bodies from planetary systems . [ 1 ] When objects like stars , planets , or black holes pass close enough to influence each other’s motions, their paths can shift dramatically. [ 2 ] Close passages between massive objects—such as stars , planets , or black holes —can produce either bound pairs or unbound ejecta . [ 3 ] An example is Jupiter scattering Kuiper belt objects out of the Solar System . [ 4 ]
Researchers investigate gravitational-scattering events with N -body simulations and other numerical models of gravitational fields and gravitational field interactions . [ 1 ] [ 4 ] A key aspect is the exchange of energy and momentum between the bodies. [ 5 ] For example, a fast body can impart kinetic energy to a slower one, producing the slingshot effect exploited by spacecraft during gravitational-assist flybys. [ 6 ]
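A minimal sketch of the idealized head-on slingshot limit mentioned above: in the planet's frame an elastic flyby reverses the probe's relative velocity, so in the Sun's frame the probe leaves with roughly its incoming speed plus twice the planet's orbital speed. The numbers are illustrative, not a real mission calculation:

```python
# Idealized head-on gravitational assist ("slingshot") in one dimension:
# in the planet's frame an elastic 180-degree flyby reverses the probe's
# relative velocity, so in the Sun's frame it leaves with v + 2U.
# A textbook limiting case, not a full scattering calculation.

def slingshot_speed(v_probe, u_planet):
    """Probe and planet approach head-on; speeds in km/s, Sun's frame."""
    v_rel_in = v_probe + u_planet   # relative speed before the encounter
    v_rel_out = v_rel_in            # elastic: magnitude unchanged, direction reversed
    return v_rel_out + u_planet     # transform back to the Sun's frame

# Illustrative numbers only: probe at 10 km/s meets Jupiter (~13 km/s orbital speed).
print(f"outgoing speed ~ {slingshot_speed(10.0, 13.0):.0f} km/s")  # ~36 km/s
```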
Observational evidence of scattering clarifies several astrophysical problems, from stellar-cluster evolution to galaxy-core dynamics. [ 1 ] In dense regions such as star clusters , scattering influences star formation rates and the spatial distribution of stellar populations. [ 7 ] Hypervelocity stars are thought to originate when massive black holes scatter binary stars at galactic centers. [ 3 ] Close encounters between compact objects can emit gravitational waves , which have been detected by observatories such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). [ 8 ] Analyses employ both Newtonian mechanics and general relativity ; the relativistic framework is essential for high-mass or high-speed encounters. [ 9 ]
Gravitational scattering can alter orbits and in extreme cases can eject celestial bodies from their native planetary systems. [ 3 ] One mechanism for shifting planets to wider orbits is scattering by massive neighbours; within a protoplanetary disk , similar kicks can arise from dense gas clumps. [ 10 ] In the Solar System , Uranus and Neptune may have been pushed outward after close encounters with Jupiter or Saturn . [ 11 ] [ 4 ] After the protoplanetary gas dissipates, multi-planet systems can experience comparable instabilities: orbits shift, and some planets are eventually ejected or spiral into the host star. [ 11 ] [ 4 ]
Planets scattered gravitationally can end on highly eccentric orbits with perihelia close to the star, enabling their orbits to be altered by the gravitational tides they raise on the star. [ 12 ] The eccentricities and inclinations of these planets are also excited during these encounters, providing one possible explanation for the observed eccentricity distribution of the closely orbiting exoplanets . [ 12 ] The resulting systems are often near the limits of stability. [ 13 ] As in the Nice model , systems of exoplanets with an outer disk of planetesimals can also undergo dynamical instabilities following resonance crossings during planetesimal-driven migration. [ 4 ] [ 14 ] The eccentricities and inclinations of the planets on distant orbits can be damped by dynamical friction with the planetesimals with the final values depending on the relative masses of the disk and the planets that had gravitational encounters. [ 14 ]
This article incorporates public domain material from websites or documents of the United States government . | https://en.wikipedia.org/wiki/Gravitational_scattering |
A gravitational singularity , spacetime singularity , or simply singularity , is a theoretical condition in which gravity is predicted to be so intense that spacetime itself would break down catastrophically. As such, a singularity is by definition no longer part of the regular spacetime and cannot be determined by "where" or "when". Gravitational singularities exist at a junction between general relativity and quantum mechanics ; therefore, the properties of the singularity cannot be described without an established theory of quantum gravity . Trying to find a complete and precise definition of singularities in the theory of general relativity, the current best theory of gravity, remains a difficult problem. [ 1 ] [ 2 ] A singularity in general relativity can be defined by the scalar invariant curvature becoming infinite [ 3 ] or, better, by a geodesic being incomplete . [ 4 ]
General relativity predicts that any object collapsing beyond a certain point (for stars this is the Schwarzschild radius ) would form a black hole, inside which a singularity (covered by an event horizon ) [ 2 ] would appear (although observers outside the event horizon could never see it). [ 5 ] The density would become infinite at the singularity. General relativity also predicts that the initial state of the universe , at the beginning of the Big Bang , was a singularity of infinite density and temperature. [ 6 ] [ obsolete source ] However, classical gravitational theories are not expected to be accurate under these conditions, and a quantum description is likely needed. [ 7 ] For example, quantum mechanics does not permit particles to inhabit a space smaller than their Compton wavelengths . [ 8 ]
Many theories in physics have mathematical singularities of one kind or another. Equations for these physical theories predict that the ball of mass of some quantity becomes infinite or increases without limit. This is generally a sign for a missing piece in the theory, as in the ultraviolet catastrophe , re-normalization , and instability of a hydrogen atom predicted by the Larmor formula .
In classical field theories, including special relativity but not general relativity, one can say that a solution has a singularity at a particular point in spacetime where certain physical properties become ill-defined, with spacetime serving as a background field to locate the singularity. A singularity in general relativity, on the other hand, is more complex because spacetime itself becomes ill-defined, and the singularity is no longer part of the regular spacetime manifold. In general relativity, a singularity cannot be defined by "where" or "when". [ 9 ]
Some theories, such as the theory of loop quantum gravity , suggest that singularities may not exist. [ 10 ] This is also true for such classical unified field theories as the Einstein–Maxwell–Dirac equations. The idea can be stated in the form that, due to quantum gravity effects, there is a minimum distance beyond which the force of gravity no longer continues to increase as the distance between the masses becomes shorter, or alternatively that interpenetrating particle waves mask gravitational effects that would be felt at a distance.
Motivated by this philosophy of loop quantum gravity, it has recently been shown [ 11 ] that such conceptions can be realized through some elementary constructions based on a refinement of the first axiom of geometry, namely, the concept of a point, [ 12 ] by considering Klein's prescription of accounting for the extension of a small spot that represents or demonstrates a point, [ 13 ] a programmatic call that he described as a fusion of arithmetic and geometry. [ 14 ] Klein's program, according to Born, was actually a mathematical route to consider 'natural uncertainty in all observations' while describing 'a physical situation' by means of 'real numbers'. [ 15 ]
There are multiple types of singularities, each with different physical features that have characteristics relevant to the theories from which they originally emerged, such as the different shapes of the singularities, conical and curved . They have also been hypothesized to occur without event horizons, structures that delineate one spacetime section from another in which events cannot affect past the horizon; these are called naked.
A conical singularity occurs when there is a point where the limit of some diffeomorphism invariant quantity does not exist or is infinite, in which case spacetime is not smooth at the point of the limit itself. Thus, spacetime looks like a cone around this point, where the singularity is located at the tip of the cone. The metric can be finite everywhere the coordinate system is used.
Examples of such conical singularities are a cosmic string and a Schwarzschild black hole . [ 16 ]
Solutions to the equations of general relativity or another theory of gravity (such as supergravity ) often result in encountering points where the metric blows up to infinity. However, many of these points are completely regular , and the infinities are merely a result of using an inappropriate coordinate system at this point . To test whether there is a singularity at a certain point, one must check whether at this point diffeomorphism invariant quantities (i.e. scalars ) become infinite. Such quantities are the same in every coordinate system, so these infinities will not "go away" by a change of coordinates.
An example is the Schwarzschild solution that describes a non-rotating, uncharged black hole. In coordinate systems convenient for working in regions far away from the black hole, a part of the metric becomes infinite at the event horizon . However, spacetime at the event horizon is regular . The regularity becomes evident when changing to another coordinate system (such as the Kruskal coordinates ), where the metric is perfectly smooth . On the other hand, in the center of the black hole, where the metric becomes infinite as well, the solutions suggest a singularity exists. The existence of the singularity can be verified by noting that the Kretschmann scalar , being the square of the Riemann tensor i.e. R μ ν ρ σ R μ ν ρ σ {\displaystyle R_{\mu \nu \rho \sigma }R^{\mu \nu \rho \sigma }} , which is diffeomorphism invariant, is infinite.
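A small numerical illustration, assuming the known closed form of the Kretschmann scalar for the Schwarzschild solution, K = 48G²M²/(c⁴r⁶): it is finite at the horizon but grows without bound as r → 0.

```python
# Kretschmann scalar of the Schwarzschild solution, K = 48 G^2 M^2 / (c^4 r^6),
# quoted here as the standard closed-form result. It stays finite at the
# event horizon r = r_s but diverges as r -> 0, the coordinate-independent
# signal of the central singularity.

G = 6.674e-11
c = 2.998e8
M = 10 * 1.989e30                 # a 10-solar-mass black hole, kg
r_s = 2 * G * M / c**2            # Schwarzschild radius, ~30 km

def kretschmann(r):
    return 48.0 * G**2 * M**2 / (c**4 * r**6)

for r in (r_s, 0.1 * r_s, 0.001 * r_s):
    print(f"r = {r / r_s:>6.3f} r_s :  K = {kretschmann(r):.3e} m^-4")
# K grows as 1/r^6 without bound, whereas the apparent blow-up of the metric
# at r = r_s is removed by a better coordinate choice (e.g. Kruskal coordinates).
```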
While in a non-rotating black hole the singularity occurs at a single point in the model coordinates, called a "point singularity", in a rotating black hole, also known as a Kerr black hole , the singularity occurs on a ring (a circular line), known as a " ring singularity ". Such a singularity may also theoretically become a wormhole . [ 17 ]
More generally, a spacetime is considered singular if it is geodesically incomplete , meaning that there are freely-falling particles whose motion cannot be determined beyond a finite time, being after the point of reaching the singularity. For example, any observer inside the event horizon of a non-rotating black hole would fall into its center within a finite period of time. The classical version of the Big Bang cosmological model of the universe contains a causal singularity at the start of time ( t =0), where all time-like geodesics have no extensions into the past. Extrapolating backward to this hypothetical time 0 results in a universe with all spatial dimensions of size zero, infinite density, infinite temperature, and infinite spacetime curvature.
Until the early 1990s, it was widely believed that general relativity hides every singularity behind an event horizon , making naked singularities impossible. This is referred to as the cosmic censorship hypothesis . However, in 1991, physicists Stuart Shapiro and Saul Teukolsky performed computer simulations of a rotating plane of dust that indicated that general relativity might allow for "naked" singularities. What these objects would actually look like in such a model is unknown. Nor is it known whether singularities would still arise if the simplifying assumptions used to make the simulation were removed. However, it is hypothesized that light entering a singularity would similarly have its geodesics terminated, thus making the naked singularity look like a black hole. [ 18 ] [ 19 ] [ 20 ]
Disappearing event horizons exist in the Kerr metric , which is a spinning black hole in a vacuum, if the angular momentum ( J {\displaystyle J} ) is high enough. Transforming the Kerr metric to Boyer–Lindquist coordinates , it can be shown [ 21 ] that the coordinate (which is not the radius) of the event horizon is, r ± = μ ± ( μ 2 − a 2 ) 1 / 2 {\displaystyle r_{\pm }=\mu \pm \left(\mu ^{2}-a^{2}\right)^{1/2}} , where μ = G M / c 2 {\displaystyle \mu =GM/c^{2}} , and a = J / M c {\displaystyle a=J/Mc} . In this case, "event horizons disappear" means when the solutions are complex for r ± {\displaystyle r_{\pm }} , or μ 2 < a 2 {\displaystyle \mu ^{2}<a^{2}} . However, this corresponds to a case where J {\displaystyle J} exceeds G M 2 / c {\displaystyle GM^{2}/c} (or in Planck units , J > M 2 {\displaystyle J>M^{2}} ) ; i.e. the spin exceeds what is normally viewed as the upper limit of its physically possible values.
Similarly, disappearing event horizons can also be seen with the Reissner–Nordström geometry of a charged black hole if the charge ( Q {\displaystyle Q} ) is high enough. In this metric, it can be shown [ 22 ] that the singularities occur at r ± = μ ± ( μ 2 − q 2 ) 1 / 2 {\displaystyle r_{\pm }=\mu \pm \left(\mu ^{2}-q^{2}\right)^{1/2}} , where μ = G M / c 2 {\displaystyle \mu =GM/c^{2}} , and q 2 = G Q 2 / ( 4 π ϵ 0 c 4 ) {\displaystyle q^{2}=GQ^{2}/\left(4\pi \epsilon _{0}c^{4}\right)} . Of the three possible cases for the relative values of μ {\displaystyle \mu } and q {\displaystyle q} , the case where μ 2 < q 2 {\displaystyle \mu ^{2}<q^{2}} causes both r ± {\displaystyle r_{\pm }} to be complex. This means the metric is regular for all positive values of r {\displaystyle r} , or in other words, the singularity has no event horizon. However, this corresponds to a case where Q / 4 π ϵ 0 {\displaystyle Q/{\sqrt {4\pi \epsilon _{0}}}} exceeds M G {\displaystyle M{\sqrt {G}}} (or in Planck units, Q > M {\displaystyle Q>M} ) ; i.e. the charge exceeds what is normally viewed as the upper limit of its physically possible values. Also, actual astrophysical black holes are not expected to possess any appreciable charge.
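The horizon conditions above translate directly into a small numerical check. The sketch below is illustrative only; the physical constants and the test values of J and Q are my assumptions, not taken from the article. It computes r± for the Kerr and Reissner–Nordström cases and reports None when μ² falls below a² or q², i.e. when the event horizons would disappear.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
eps0 = 8.854e-12     # F/m

def kerr_horizons(M, J):
    """Return (r_plus, r_minus) in metres for a Kerr black hole,
    or None if mu^2 < a^2 so that the horizons disappear."""
    mu = G * M / c**2
    a = J / (M * c)
    disc = mu**2 - a**2
    if disc < 0:
        return None
    return mu + math.sqrt(disc), mu - math.sqrt(disc)

def reissner_nordstrom_horizons(M, Q):
    """Same check for a charged (Reissner-Nordstrom) black hole."""
    mu = G * M / c**2
    q2 = G * Q**2 / (4 * math.pi * eps0 * c**4)
    disc = mu**2 - q2
    if disc < 0:
        return None
    return mu + math.sqrt(disc), mu - math.sqrt(disc)

M_sun = 1.989e30
print(kerr_horizons(M_sun, 0.5 * G * M_sun**2 / c))   # sub-extremal spin: two horizons
print(kerr_horizons(M_sun, 2.0 * G * M_sun**2 / c))   # J > GM^2/c: None (no horizon)
print(reissner_nordstrom_horizons(M_sun, 0.0))        # Q = 0 reduces to Schwarzschild: r_+ = 2GM/c^2, r_- = 0
```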
A black hole possessing the lowest M {\displaystyle M} value consistent with its J {\displaystyle J} and Q {\displaystyle Q} values and the limits noted above (i.e., one just at the point of losing its event horizon) is termed extremal .
Before Stephen Hawking came up with the concept of Hawking radiation , the question of black holes having entropy had been avoided. However, this concept demonstrates that black holes radiate energy and possess entropy, which resolves the apparent incompatibility with the second law of thermodynamics . Entropy, in turn, implies heat and therefore temperature. The loss of energy also implies that black holes do not last forever, but rather evaporate or decay slowly. Black hole temperature is inversely related to mass . [ 23 ] All known black hole candidates are so large that their temperature is far below that of the cosmic background radiation, which means they will gain energy on net by absorbing this radiation. They cannot begin to lose energy on net until the background temperature falls below their own temperature. This will not happen until the background radiation has redshifted by a further factor of more than one million, compared with the factor of roughly one thousand by which it has redshifted since it was emitted. [ citation needed ] | https://en.wikipedia.org/wiki/Gravitational_singularity |
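A rough numerical illustration of the temperature–mass relation can be obtained from the standard Hawking temperature formula T = ħc³/(8πGMk_B), which the article does not quote explicitly and is assumed here; the sketch compares it with the present-day cosmic background temperature.

```python
import math

hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
k_B = 1.381e-23    # J/K
M_sun = 1.989e30   # kg
T_cmb = 2.725      # K, present cosmic background temperature

def hawking_temperature(M):
    """Standard Hawking temperature T = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

for M in (M_sun, 10 * M_sun, 1e6 * M_sun):
    T = hawking_temperature(M)
    print(f"M = {M/M_sun:>9.0f} M_sun  T = {T:.2e} K  "
          f"{'colder than CMB -> net absorber' if T < T_cmb else 'net emitter'}")
# A solar-mass black hole comes out near 6e-8 K, far below 2.725 K, so it
# currently absorbs more background radiation than it emits.
```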
Gravitational time dilation is a form of time dilation , an actual difference of elapsed time between two events , as measured by observers situated at varying distances from a gravitating mass . The lower the gravitational potential (the closer the clock is to the source of gravitation), the slower time passes, speeding up as the gravitational potential increases (the clock moving away from the source of gravitation). Albert Einstein originally predicted this in his theory of relativity , and it has since been confirmed by tests of general relativity . [ 1 ]
This effect has been demonstrated by noting that atomic clocks at differing altitudes (and thus different gravitational potential) will eventually show different times. The effects detected in such Earth-bound experiments are extremely small, with differences being measured in nanoseconds . Relative to Earth's age in billions of years, Earth's core is in effect 2.5 years younger than its surface. [ 2 ] Demonstrating larger effects would require measurements at greater distances from the Earth, or a larger gravitational source.
Gravitational time dilation was first described by Albert Einstein in 1907 [ 3 ] as a consequence of special relativity in accelerated frames of reference. In general relativity , it is considered to be a difference in the passage of proper time at different positions as described by a metric tensor of spacetime. The existence of gravitational time dilation was first confirmed directly by the Pound–Rebka experiment in 1959, and later refined by Gravity Probe A and other experiments.
Gravitational time dilation is closely related to gravitational redshift , [ 4 ] in which the closer a body emitting light of constant frequency is to a gravitating body, the more its time is slowed by gravitational time dilation, and the lower (more "redshifted") the frequency of the emitted light appears to be, as measured by a fixed, distant observer.
Clocks that are far from massive bodies (or at higher gravitational potentials) run more quickly, and clocks close to massive bodies (or at lower gravitational potentials) run more slowly. For example, considered over the total time-span of Earth (4.6 billion years), a clock set in a geostationary position at an altitude of 9,000 meters above sea level, such as perhaps at the top of Mount Everest ( prominence 8,848 m), would be about 39 hours ahead of a clock set at sea level. [ 5 ] [ 6 ] This is because gravitational time dilation is manifested in accelerated frames of reference or, by virtue of the equivalence principle , in the gravitational field of massive objects. [ 7 ]
According to general relativity, inertial mass and gravitational mass are the same, and all accelerated reference frames (such as a uniformly rotating reference frame with its proper time dilation) are physically equivalent to a gravitational field of the same strength. [ 8 ]
Consider a family of observers along a straight "vertical" line, each of whom experiences a distinct constant g-force directed along this line (e.g., a long accelerating spacecraft, [ 9 ] [ 10 ] a skyscraper, a shaft on a planet). Let g ( h ) {\displaystyle g(h)} be the dependence of g-force on "height", a coordinate along the aforementioned line. The equation with respect to a base observer at h = 0 {\displaystyle h=0} is
T d ( h ) = exp ⁡ [ 1 c 2 ∫ 0 h g ( h ′ ) d h ′ ] {\displaystyle T_{d}(h)=\exp \left[{\frac {1}{c^{2}}}\int _{0}^{h}g(h')\,dh'\right]}
where T d ( h ) {\displaystyle T_{d}(h)} is the total time dilation at a distant position h {\displaystyle h} , g ( h ) {\displaystyle g(h)} is the dependence of g-force on "height" h {\displaystyle h} , c {\displaystyle c} is the speed of light , and exp {\displaystyle \exp } denotes exponentiation by e .
For simplicity, in a Rindler family of observers in a flat spacetime , the dependence would be
g ( h ) = c 2 / ( H + h ) {\displaystyle g(h)=c^{2}/(H+h)}
with constant H {\displaystyle H} , which yields
T d ( h ) = e ∫ 0 h d h ′ / ( H + h ′ ) = H + h H = 1 + h H {\displaystyle T_{d}(h)=e^{\int _{0}^{h}dh'/(H+h')}={\frac {H+h}{H}}=1+{\frac {h}{H}}}
On the other hand, when g {\displaystyle g} is nearly constant and g h {\displaystyle gh} is much smaller than c 2 {\displaystyle c^{2}} , the linear "weak field" approximation T d = 1 + g h / c 2 {\displaystyle T_{d}=1+gh/c^{2}} can also be used.
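A minimal sketch of these two regimes, assuming a uniform g as measured by the base observer: the exact factor exp(gh/c²) follows from the integral formula above with constant g, and the weak-field form 1 + gh/c² is its first-order expansion. The chosen heights are illustrative values, not from the article.

```python
import math

c = 2.998e8   # m/s

def dilation_constant_g(g, h):
    """Exact factor exp(g h / c^2) for constant proper acceleration g,
    compared with the linear weak-field approximation 1 + g h / c^2."""
    exact = math.exp(g * h / c**2)
    approx = 1 + g * h / c**2
    return exact, approx

g = 9.81                           # m/s^2, roughly Earth's surface gravity
for h in (1e4, 1e12, 1e15):        # 10 km, and two enormous illustrative "heights"
    exact, approx = dilation_constant_g(g, h)
    print(f"h = {h:.0e} m  exact = {exact:.15f}  weak-field = {approx:.15f}")
# For everyday heights the two expressions agree to many decimal places; only
# when g*h becomes comparable to c^2 does the exponential form matter.
```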
See Ehrenfest paradox for application of the same formula to a rotating reference frame in flat spacetime.
A common equation used to determine gravitational time dilation is derived from the Schwarzschild metric , which describes spacetime in the vicinity of a non-rotating massive spherically symmetric object. The equation is
t 0 = t f 1 − r s r = t f 1 − 2 G M r c 2 {\displaystyle t_{0}=t_{f}{\sqrt {1-{\frac {r_{s}}{r}}}}=t_{f}{\sqrt {1-{\frac {2GM}{rc^{2}}}}}}
where
t 0 {\displaystyle t_{0}} is the proper time between two events for an observer at rest at radial coordinate r {\displaystyle r} within the gravitational field;
t f {\displaystyle t_{f}} is the coordinate time between the events for an observer at an arbitrarily large distance from the massive object;
G {\displaystyle G} is the gravitational constant ;
M {\displaystyle M} is the mass of the object creating the gravitational field;
r {\displaystyle r} is the radial coordinate of the observer;
c {\displaystyle c} is the speed of light ; and
r s = 2 G M / c 2 {\displaystyle r_{s}=2GM/c^{2}} is the Schwarzschild radius of M {\displaystyle M} .
To illustrate then, without accounting for the effects of rotation, proximity to Earth's gravitational well will cause a clock on the planet's surface to accumulate around 0.0219 fewer seconds over a period of one year than would a distant observer's clock. In comparison, a clock on the surface of the Sun will accumulate around 66.4 fewer seconds in one year.
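The two figures quoted above can be reproduced from the Schwarzschild formula; the masses and radii of the Earth and the Sun used below are standard values assumed by me rather than taken from the article.

```python
import math

G = 6.674e-11
c = 2.998e8
YEAR = 3.156e7          # seconds in one year

def seconds_lost_per_year(M, r):
    """Proper-time deficit of a clock at rest at radius r relative to a
    distant observer, using t_0 = t_f * sqrt(1 - 2GM/(r c^2))."""
    factor = math.sqrt(1 - 2 * G * M / (r * c**2))
    return (1 - factor) * YEAR

print(f"Earth surface: {seconds_lost_per_year(5.972e24, 6.371e6):.4f} s/yr")  # ~0.022 s
print(f"Sun surface:   {seconds_lost_per_year(1.989e30, 6.957e8):.1f} s/yr")  # roughly 66-67 s, matching the article's figure
```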
In the Schwarzschild metric, free-falling objects can be in circular orbits if the orbital radius is larger than 3 2 r s {\displaystyle {\tfrac {3}{2}}r_{s}} (the radius of the photon sphere ). The formula for a clock at rest is given above; the formula below gives the general relativistic time dilation for a clock in a circular orbit: [ 11 ] [ 12 ]
t 0 = t f 1 − 3 2 ⋅ r s r {\displaystyle t_{0}=t_{f}{\sqrt {1-{\frac {3}{2}}\cdot {\frac {r_{s}}{r}}}}}
Both dilations are shown in the figure below.
Gravitational time dilation has been experimentally measured using atomic clocks on airplanes, such as the Hafele–Keating experiment . The clocks aboard the airplanes were slightly faster than clocks on the ground. The effect is significant enough that the Global Positioning System's artificial satellites had their atomic clocks permanently corrected. [ 13 ]
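As a worked illustration of why GPS clocks need a permanent correction, the sketch below combines the gravitational term from the Schwarzschild formula with the orbital-velocity term of special relativity for a GPS-like orbit. The orbital radius and the neglect of Earth's rotation and orbital eccentricity are simplifying assumptions of mine, not statements from the article.

```python
import math

G = 6.674e-11
c = 2.998e8
M_E = 5.972e24          # kg, Earth mass
R_E = 6.371e6           # m, Earth radius
r_gps = 2.656e7         # m, approximate GPS orbital radius (assumed value)
DAY = 86400.0

# Gravitational term: the satellite clock runs fast relative to the ground clock.
grav = G * M_E / c**2 * (1 / R_E - 1 / r_gps)

# Special-relativistic term: the orbital speed slows the satellite clock.
v = math.sqrt(G * M_E / r_gps)
kinematic = v**2 / (2 * c**2)

net = (grav - kinematic) * DAY
print(f"gravitational: +{grav * DAY * 1e6:.1f} us/day")
print(f"velocity:      -{kinematic * DAY * 1e6:.1f} us/day")
print(f"net offset:    +{net * 1e6:.1f} us/day")   # roughly +38 microseconds per day
```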
Additionally, time dilations due to height differences of less than one metre have been experimentally verified in the laboratory. [ 14 ]
Gravitational time dilation in the form of gravitational redshift has also been confirmed by the Pound–Rebka experiment and observations of the spectra of the white dwarf Sirius B .
Gravitational time dilation has been measured in experiments with time signals sent to and from the Viking 1 Mars lander. [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Gravitational_time_dilation |
Gravitational waves are oscillations of the gravitational field that travel through space at the speed of light ; they are generated by the relative motion of gravitating masses. [ 1 ] They were proposed by Oliver Heaviside in 1893 and then later by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves . [ 2 ] In 1916, [ 3 ] [ 4 ] Albert Einstein demonstrated that gravitational waves result from his general theory of relativity as ripples in spacetime . [ 5 ] [ 6 ]
Gravitational waves transport energy as gravitational radiation , a form of radiant energy similar to electromagnetic radiation . [ 7 ] Newton's law of universal gravitation , part of classical mechanics , does not provide for their existence, instead asserting that gravity has instantaneous effect everywhere. Gravitational waves therefore stand as an important relativistic phenomenon that is absent from Newtonian physics.
Gravitational-wave astronomy has the advantage that, unlike electromagnetic radiation, gravitational waves are not affected by intervening matter. Sources that can be studied this way include binary star systems composed of white dwarfs , neutron stars , [ 8 ] [ 9 ] and black holes ; events such as supernovae ; and the formation of the early universe shortly after the Big Bang .
The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar , which matched the decay predicted by general relativity for energy lost to gravitational radiation. In 1993, Russell Alan Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery.
The first direct observation of gravitational waves was made in September 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss , Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves.
In Albert Einstein 's general theory of relativity , gravity is treated as a phenomenon resulting from the curvature of spacetime . This curvature is caused by the presence of mass. (See: Stress–energy tensor ) If the masses move, the curvature of spacetime changes. If the motion is not spherically symmetric, the motion can cause gravitational waves which propagate away at the speed of light . [ 10 ]
As a gravitational wave passes an observer, that observer will find spacetime distorted by the effects of strain . Distances between objects increase and decrease rhythmically as the wave passes, at a frequency equal to that of the wave. The magnitude of this effect is inversely proportional to the distance (not distance squared) from the source. [ 11 ] : 227
Inspiraling binary neutron stars are predicted to be a powerful source of gravitational waves as they coalesce , due to the very large acceleration of their masses as they orbit close to one another. However, due to the astronomical distances to these sources, the effects when measured on Earth are predicted to be very small, having strains of less than 1 part in 10²⁰.
Scientists demonstrate the existence of these waves with highly-sensitive detectors at multiple observation sites. As of 2012 [update] , the LIGO and Virgo observatories were the most sensitive detectors, operating at resolutions of about one part in 5 × 10²². [ 12 ] The Japanese detector KAGRA was completed in 2019; its first joint detection with LIGO and VIRGO was reported in 2021. [ 13 ] Another European ground-based detector, the Einstein Telescope , is under development. A space-based observatory, the Laser Interferometer Space Antenna (LISA), is also being developed by the European Space Agency .
Gravitational waves do not strongly interact with matter in the way that electromagnetic radiation does. [ 1 ] : 33–34 This allows for the observation of events involving exotic objects in the distant universe that cannot be observed with more traditional means such as optical telescopes or radio telescopes ; accordingly, gravitational wave astronomy gives new insights into the workings of the universe. [ 1 ] : 36–40
In particular, gravitational waves could be of interest to cosmologists as they offer a possible way of observing the very early universe. This is not possible with conventional astronomy, since before recombination the universe was opaque to electromagnetic radiation. [ 14 ] Precise measurements of gravitational waves will also allow scientists to test more thoroughly the general theory of relativity.
In principle, gravitational waves can exist at any frequency. Very low frequency waves can be detected using pulsar timing arrays. In this technique, the timing of approximately 100 pulsars spread widely across our galaxy is monitored over the course of years. Detectable changes in the arrival time of their signals can result from passing gravitational waves generated by merging supermassive black holes (SMBH) with wavelengths measured in lightyears. These timing changes can be used to locate the source of the waves. [ 15 ]
Using this technique, astronomers have discovered the 'hum' of various SMBH mergers occurring in the universe. Stephen Hawking and Werner Israel list different frequency bands for gravitational waves that could plausibly be detected, ranging from 10⁻⁷ Hz up to 10¹¹ Hz. [ 16 ]
The speed of gravitational waves in the general theory of relativity is equal to the speed of light in vacuum, c . [ 17 ] Within the theory of special relativity , the constant c is not only about light; instead it is the highest possible speed for any interaction in nature. Formally, c is a conversion factor for changing the unit of time to the unit of space. [ 18 ] This makes it the only speed which does not depend either on the motion of an observer or a source of light and/or gravity.
Thus, the speed of "light" is also the speed of gravitational waves, and, further, the speed of any massless particle. Such particles include the gluon (carrier of the strong force), the photons that make up light (hence carrier of electromagnetic force), and the hypothetical gravitons (which are the presumptive field particles associated with gravity; however, an understanding of the graviton, if any exist, requires an as-yet unavailable theory of quantum gravity ).
In August 2017, the LIGO and Virgo detectors received a gravitational wave signal, GW170817 , at nearly the same time as gamma ray satellites and optical telescopes received signals from its source in galaxy NGC 4993 , about 130 million light years away. [ 19 ] This measurement constrained the fractional difference between the speed of gravitational waves and the speed of light to be smaller than about one part in 10¹⁵. [ 20 ]
The possibility of gravitational waves and that those might travel at the speed of light was discussed in 1893 by Oliver Heaviside , using the analogy between the inverse-square law of gravitation and the electrostatic force . [ 24 ] In 1905, Henri Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations [ 25 ] and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves , accelerated masses in a relativistic field theory of gravity should produce gravitational waves. [ 26 ] [ 27 ]
In 1915 Einstein published his general theory of relativity , a complete relativistic theory of gravitation. He conjectured, like Poincaré, that the equation would produce gravitational waves, but, as he mentions in a letter to Schwarzschild in February 1916, [ 27 ] these could not be similar to electromagnetic waves. Electromagnetic waves can be produced by dipole motion, requiring both a positive and a negative charge. Gravitation has no equivalent to negative charge. Einstein continued to work through the complexity of the equations of general relativity to find an alternative wave model. The result was published in June 1916, [ 4 ] and there he came to the conclusion that the gravitational wave must propagate with the speed of light, and there must, in fact, be three types of gravitational waves dubbed longitudinal–longitudinal, transverse–longitudinal, and transverse–transverse by Hermann Weyl . [ 27 ]
However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they "propagate at the speed of thought". [ 28 ] : 72 This also cast doubt on the physicality of the third (transverse–transverse) type that Eddington showed always propagates at the speed of light regardless of coordinate system. In 1936, Einstein and Nathan Rosen submitted a paper to Physical Review in which they claimed gravitational waves could not exist in the full general theory of relativity because any such solution of the field equations would have a singularity. The journal sent their manuscript to be reviewed by Howard P. Robertson , who anonymously reported that the singularities in question were simply the harmless coordinate singularities of the employed cylindrical coordinates. Einstein, who was unfamiliar with the concept of peer review, angrily withdrew the manuscript, never to publish in Physical Review again. Nonetheless, his assistant Leopold Infeld , who had been in contact with Robertson, convinced Einstein that the criticism was correct, and the paper was rewritten with the opposite conclusion and published elsewhere. [ 27 ] [ 28 ] : 79ff In 1956, Felix Pirani remedied the confusion caused by the use of various coordinate systems by rephrasing the gravitational waves in terms of the manifestly observable Riemann curvature tensor . [ 29 ]
At the time, Pirani's work was overshadowed by the community's focus on a different question: whether gravitational waves could transmit energy . This matter was settled by a thought experiment proposed by Richard Feynman during the first "GR" conference at Chapel Hill in 1957. In short, his argument known as the " sticky bead argument " notes that if one takes a rod with beads then the effect of a passing gravitational wave would be to move the beads along the rod; friction would then produce heat, implying that the passing wave had done work . Shortly after, Hermann Bondi published a detailed version of the "sticky bead argument". [ 27 ] This later led to a series of articles (1959 to 1989) by Bondi and Pirani that established the existence of plane wave solutions for gravitational waves. [ 30 ]
Paul Dirac further postulated the existence of gravitational waves, declaring them to have "physical significance" in his 1959 lecture at the Lindau Meetings . [ 31 ] Further, it was Dirac who predicted gravitational waves with a well-defined energy density in 1964. [ 32 ]
After the Chapel Hill conference, Joseph Weber started designing and building the first gravitational wave detectors now known as Weber bars . In 1969, Weber claimed to have detected the first gravitational waves, and by 1970 he was "detecting" signals regularly from the Galactic Center ; however, the frequency of detection soon raised doubts on the validity of his observations as the implied rate of energy loss of the Milky Way would drain our galaxy of energy on a timescale much shorter than its inferred age. These doubts were strengthened when, by the mid-1970s, repeated experiments from other groups building their own Weber bars across the globe failed to find any signals, and by the late 1970s consensus was that Weber's results were spurious. [ 27 ]
In the same period, the first indirect evidence of gravitational waves was discovered. In 1974, Russell Alan Hulse and Joseph Hooton Taylor, Jr. discovered the first binary pulsar , which earned them the 1993 Nobel Prize in Physics . [ 33 ] Pulsar timing observations over the next decade showed a gradual decay of the orbital period of the Hulse–Taylor pulsar that matched the loss of energy and angular momentum in gravitational radiation predicted by general relativity. [ 34 ] [ 35 ] [ 27 ]
This indirect detection of gravitational waves motivated further searches, despite Weber's discredited result. Some groups continued to improve Weber's original concept, while others pursued the detection of gravitational waves using laser interferometers. The idea of using a laser interferometer for this seems to have been floated independently by various people, including M.E. Gertsenshtein and V. I. Pustovoit in 1962, [ 36 ] and Vladimir B. Braginskiĭ in 1966. The first prototypes were developed in the 1970s by Robert L. Forward and Rainer Weiss. [ 37 ] [ 38 ] In the decades that followed, ever more sensitive instruments were constructed, culminating in the construction of GEO600 , LIGO , and Virgo . [ 27 ]
After years of producing null results, improved detectors became operational in 2015. On 11 February 2016, the LIGO-Virgo collaborations announced the first observation of gravitational waves , [ 39 ] [ 40 ] [ 41 ] [ 42 ] from a signal (dubbed GW150914 ) detected at 09:50:45 GMT on 14 September 2015 of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. [ 43 ] The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. [ 40 ] The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. [ 44 ] The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere , in the rough direction of (but much farther away than) the Magellanic Clouds . [ 42 ] The confidence level of this being an observation of gravitational waves was 99.99994%. [ 44 ]
A year earlier, the BICEP2 collaboration claimed that they had detected the imprint of gravitational waves in the cosmic microwave background . However, they were later forced to retract this result. [ 21 ] [ 22 ] [ 45 ] [ 46 ]
In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss , Kip Thorne and Barry Barish for their role in the detection of gravitational waves. [ 47 ] [ 48 ] [ 49 ]
In 2023, NANOGrav, EPTA, PPTA, and IPTA announced that they had found evidence of a universal gravitational wave background . [ 50 ] The North American Nanohertz Observatory for Gravitational Waves states that the background was created over cosmological time scales by supermassive black holes, identifying the distinctive Hellings-Downs curve in 15 years of radio observations of 25 pulsars. [ 51 ] Similar results were published by the European Pulsar Timing Array, which claimed a 3 σ {\displaystyle 3\sigma } significance . They expect that a 5 σ {\displaystyle 5\sigma } significance will be achieved by 2025 by combining the measurements of several collaborations. [ 52 ] [ 53 ]
Gravitational waves are constantly passing Earth ; however, even the strongest have a minuscule effect since their sources are generally at a great distance. For example, the waves given off by the cataclysmic final merger of GW150914 reached Earth after travelling over a billion light-years , as a ripple in spacetime that changed the length of a 4 km LIGO arm by a thousandth of the width of a proton , proportionally equivalent to changing the distance to the nearest star outside the Solar System by one hair's width. [ 54 ] This tiny effect from even extreme gravitational waves makes them observable on Earth only with the most sophisticated detectors.
The effects of a passing gravitational wave, in an extremely exaggerated form, can be visualized by imagining a perfectly flat region of spacetime with a group of motionless test particles lying in a plane, e.g., the surface of a computer screen. As a gravitational wave passes through the particles along a line perpendicular to the plane of the particles, i.e., following the observer's line of vision into the screen, the particles will follow the distortion in spacetime, oscillating in a " cruciform " manner, as shown in the animations. The area enclosed by the test particles does not change and there is no motion along the direction of propagation. [ citation needed ]
The oscillations depicted in the animation are exaggerated for the purpose of discussion – in reality a gravitational wave has a very small amplitude (as formulated in linearized gravity ). However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit . In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. [ 55 ] If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula . [ 4 ]
As with other waves , there are a number of characteristics used to describe a gravitational wave: its amplitude (a dimensionless strain, usually denoted h ), its frequency f , its wavelength λ , and its speed of propagation, which in general relativity equals the speed of light c .
The speed, wavelength, and frequency of a gravitational wave are related by the equation c = λf , just like the equation for a light wave . For example, the animations shown here oscillate roughly once every two seconds. This would correspond to a frequency of 0.5 Hz, and a wavelength of about 600 000 km, or 47 times the diameter of the Earth.
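A one-line check of the numbers quoted above, using c = λf; the Earth-diameter value below is an assumed standard constant.

```python
c = 2.998e8                 # m/s, speed of light (and of gravitational waves)
earth_diameter = 1.2742e7   # m, assumed mean Earth diameter

f = 0.5                     # Hz, one oscillation every two seconds
wavelength = c / f          # from c = lambda * f
print(f"wavelength = {wavelength / 1e3:,.0f} km "
      f"= {wavelength / earth_diameter:.0f} Earth diameters")
# About 600,000 km, roughly 47 Earth diameters, matching the figures in the text.
```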
In the above example, it is assumed that the wave is linearly polarized with a "plus" polarization, written h + . Polarization of a gravitational wave is just like polarization of a light wave except that the polarizations of a gravitational wave are 45 degrees apart, as opposed to 90 degrees. [ 56 ] In particular, in a "cross"-polarized gravitational wave, h × , the effect on the test particles would be basically the same, but rotated by 45 degrees, as shown in the second animation. Just as with light polarization, the polarizations of gravitational waves may also be expressed in terms of circularly polarized waves. Gravitational waves are polarized because of the nature of their source.
In general terms, gravitational waves are radiated by large, coherent motions of immense mass, especially in regions where gravity is so strong that Newtonian gravity begins to fail. [ 58 ] : 380
The effect does not occur in a purely spherically symmetric system. [ 10 ] A simple example of this principle is a spinning dumbbell . If the dumbbell spins around its axis of symmetry, it will not radiate gravitational waves; if it tumbles end over end, as in the case of two planets orbiting each other, it will radiate gravitational waves. The heavier the dumbbell, and the faster it tumbles, the greater is the gravitational radiation it will give off. In an extreme case, such as when the two weights of the dumbbell are massive stars like neutron stars or black holes, orbiting each other quickly, then significant amounts of gravitational radiation would be given off.
Some more detailed examples: two objects orbiting each other, as a planet would orbit the Sun, will radiate; a spinning non-axisymmetric planetoid (say, with a large bump or dimple on the equator) will radiate; a supernova will radiate, except in the unlikely event that the explosion is perfectly symmetric; an isolated non-spinning solid object moving at constant velocity will not radiate; a spinning disk will not radiate, because its quadrupole moment does not change; and a spherically pulsating star (with a non-zero monopole moment but zero quadrupole moment) will not radiate, in agreement with Birkhoff's theorem .
More technically, the second time derivative of the quadrupole moment (or the l -th time derivative of the l -th multipole moment ) of an isolated system's stress–energy tensor must be non-zero in order for it to emit gravitational radiation. This is analogous to the changing dipole moment of charge or current that is necessary for the emission of electromagnetic radiation .
Gravitational waves carry energy away from their sources and, in the case of orbiting bodies, this is associated with an in-spiral or decrease in orbit. [ 59 ] [ 60 ] Imagine for example a simple system of two masses – such as the Earth–Sun system – moving slowly compared to the speed of light in circular orbits. Assume that these two masses orbit each other in a circular orbit in the x – y plane. To a good approximation, the masses follow simple Keplerian orbits . However, such an orbit represents a changing quadrupole moment . That is, the system will give off gravitational waves.
In theory, the loss of energy through gravitational radiation could eventually drop the Earth into the Sun . However, the total energy of the Earth orbiting the Sun ( kinetic energy + gravitational potential energy ) is about 1.14 × 10³⁶ joules of which only 200 watts (joules per second) is lost through gravitational radiation, leading to a decay in the orbit by about 1 × 10⁻¹⁵ meters per day or roughly the diameter of a proton . At this rate, it would take the Earth approximately 3 × 10¹³ times more than the current age of the universe to spiral onto the Sun. This estimate overlooks the decrease in r over time, but the radius varies only slowly for most of the time and plunges at later stages, as r ( t ) = r 0 ( 1 − t t coalesce ) 1 / 4 , {\displaystyle r(t)=r_{0}\left(1-{\frac {t}{t_{\text{coalesce}}}}\right)^{1/4},} with r 0 {\displaystyle r_{0}} the initial radius and t coalesce {\displaystyle t_{\text{coalesce}}} the total time needed to fully coalesce. [ 61 ]
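The 200-watt figure can be recovered from the standard quadrupole-formula luminosity for a circular binary, P = (32/5)G⁴(m₁m₂)²(m₁+m₂)/(c⁵r⁵), which is a textbook result assumed here rather than quoted in the article; the masses and orbital radius below are standard values.

```python
G = 6.674e-11
c = 2.998e8

def circular_binary_gw_power(m1, m2, r):
    """Quadrupole-formula luminosity of two point masses in a circular orbit:
    P = (32/5) * G^4 * (m1*m2)^2 * (m1+m2) / (c^5 * r^5)."""
    return (32.0 / 5.0) * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * r**5)

M_sun, M_earth, AU = 1.989e30, 5.972e24, 1.496e11
print(f"Earth-Sun system: {circular_binary_gw_power(M_sun, M_earth, AU):.0f} W")  # ~200 W
```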
More generally, the rate of orbital decay can be approximated by [ 62 ]
d r d t = − 64 5 G 3 c 5 ( m 1 m 2 ) ( m 1 + m 2 ) r 3 {\displaystyle {\frac {\mathrm {d} r}{\mathrm {d} t}}=-{\frac {64}{5}}\,{\frac {G^{3}}{c^{5}}}\,{\frac {(m_{1}m_{2})(m_{1}+m_{2})}{r^{3}}}}
where r is the separation between the bodies, t the time, G the gravitational constant , c the speed of light , and m 1 and m 2 the masses of the bodies. This leads to an expected time to merger of [ 62 ]
t = 5 256 c 5 G 3 r 4 ( m 1 m 2 ) ( m 1 + m 2 ) {\displaystyle t={\frac {5}{256}}\,{\frac {c^{5}}{G^{3}}}\,{\frac {r^{4}}{(m_{1}m_{2})(m_{1}+m_{2})}}}
Compact stars like white dwarfs and neutron stars can be constituents of binaries. For example, a pair of solar mass neutron stars in a circular orbit at a separation of 1.89 × 10⁸ m (189,000 km) has an orbital period of 1,000 seconds, and an expected lifetime of 1.30 × 10¹³ seconds or about 414,000 years. Such a system could be observed by LISA if it were not too far away. A far greater number of white dwarf binaries exist with orbital periods in this range. White dwarf binaries have masses in the order of the Sun , and diameters in the order of the Earth. They cannot get much closer together than 10,000 km before they will merge and explode in a supernova which would also end the emission of gravitational waves. Until then, their gravitational radiation would be comparable to that of a neutron star binary.
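Using the merger-time expression above, a short sketch (with standard constants assumed by me) reproduces the quoted lifetime of the example neutron-star binary.

```python
G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30

def time_to_merger(m1, m2, r):
    """t = (5/256) * c^5 * r^4 / (G^3 * m1 * m2 * (m1 + m2)) for a circular orbit."""
    return (5.0 / 256.0) * c**5 * r**4 / (G**3 * m1 * m2 * (m1 + m2))

t = time_to_merger(M_sun, M_sun, 1.89e8)        # two solar-mass neutron stars, 189,000 km apart
print(f"{t:.2e} s  ~ {t / 3.156e7:,.0f} years")  # ~1.3e13 s, roughly 410,000 years
```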
When the orbit of a neutron star binary has decayed to 1.89 × 10⁶ m (1890 km), its remaining lifetime is about 130,000 seconds or 36 hours. The orbital frequency will vary from 1 orbit per second at the start, to 918 orbits per second when the orbit has shrunk to 20 km at merger. The majority of gravitational radiation emitted will be at twice the orbital frequency. Just before merger, the inspiral could be observed by LIGO if such a binary were close enough. LIGO has only a few minutes to observe this merger out of a total orbital lifetime that may have been billions of years. In August 2017, LIGO and Virgo observed the first binary neutron star inspiral in GW170817 , and 70 observatories collaborated to detect the electromagnetic counterpart, a kilonova in the galaxy NGC 4993 , 40 megaparsecs away, emitting a short gamma ray burst ( GRB 170817A ) seconds after the merger, followed by a longer optical transient ( AT 2017gfo ) powered by r-process nuclei. Advanced LIGO detectors should be able to detect such events up to 200 megaparsecs away; at this range, around 40 detections per year would be expected. [ 64 ]
Black hole binaries emit gravitational waves during their in-spiral, merger , and ring-down phases. Hence, in the early 1990s the physics community rallied around a concerted effort to predict the waveforms of gravitational waves from these systems with the Binary Black Hole Grand Challenge Alliance . [ 65 ] The largest amplitude of emission occurs during the merger phase, which can be modeled with the techniques of numerical relativity. [ 66 ] [ 67 ] [ 68 ] The first direct detection of gravitational waves, GW150914 , came from the merger of two black holes.
A supernova is a transient astronomical event that occurs during the last stellar evolutionary stages of a massive star's life, whose dramatic and catastrophic destruction is marked by one final titanic explosion. This explosion can happen in one of many ways, but in all of them a significant proportion of the matter in the star is blown away into the surrounding space at extremely high velocities (up to 10% of the speed of light). Unless there is perfect spherical symmetry in these explosions (i.e., unless matter is spewed out evenly in all directions), there will be gravitational radiation from the explosion. This is because gravitational waves are generated by a changing quadrupole moment , which can happen only when there is asymmetrical movement of masses. Since the exact mechanism by which supernovae take place is not fully understood, it is not easy to model the gravitational radiation emitted by them.
As noted above, a mass distribution will emit gravitational radiation only when there is spherically asymmetric motion among the masses. A spinning neutron star will generally emit no gravitational radiation because neutron stars are highly dense objects with a strong gravitational field that keeps them almost perfectly spherical. In some cases, however, there might be slight deformities on the surface called "mountains", which are bumps extending no more than 10 centimeters (4 inches) above the surface, [ 69 ] that make the spinning spherically asymmetric. This gives the star a quadrupole moment that changes with time, and it will emit gravitational waves until the deformities are smoothed out.
Gravitational waves from the early universe could provide a unique probe for cosmology. Because these waves interact very weakly with matter, they would propagate freely from very early times, when other signals were trapped by the high energy density. If this gravitational radiation could be detected today, it would form a gravitational-wave background complementary to the cosmic microwave background data. [ 20 ]
Water waves, sound waves, and electromagnetic waves are able to carry energy , momentum , and angular momentum and by doing so they carry those away from the source. [ 1 ] Gravitational waves perform the same function. Thus, for example, a binary system loses angular momentum as the two orbiting objects spiral towards each other – the angular momentum is radiated away by gravitational waves.
The waves can also carry off linear momentum, a possibility that has some interesting implications for astrophysics . [ 70 ] After two supermassive black holes coalesce, emission of linear momentum can produce a "kick" with amplitude as large as 4000 km/s. This is fast enough to eject the coalesced black hole completely from its host galaxy. Even if the kick is too small to eject the black hole completely, it can remove it temporarily from the nucleus of the galaxy, after which it will oscillate about the center, eventually coming to rest. [ 71 ] A kicked black hole can also carry a star cluster with it, forming a hyper-compact stellar system . [ 72 ] Or it may carry gas, allowing the recoiling black hole to appear temporarily as a " naked quasar ".
The quasar SDSS J092712.65+294344.0 is thought to contain a recoiling supermassive black hole. [ 73 ]
Like electromagnetic waves , gravitational waves should exhibit shifting of wavelength and frequency due to the relative velocities of the source and observer (the Doppler effect ), but also due to distortions of spacetime , such as cosmic expansion . [ 1 ] [ 74 ] Redshifting of gravitational waves is different from redshifting due to gravity ( gravitational redshift ).
In the framework of quantum field theory , the graviton is the name given to a hypothetical elementary particle speculated to be the force carrier that mediates gravity . However the graviton is not yet proven to exist, and no scientific model yet exists that successfully reconciles general relativity , which describes gravity, and the Standard Model , which describes all other fundamental forces . Attempts, such as quantum gravity , have been made, but are not yet accepted.
If such a particle exists, it is expected to be massless (because the gravitational force appears to have unlimited range) and must be a spin -2 boson . It can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field must couple to (interact with) the stress-energy tensor in the same way that the gravitational field does; therefore if a massless spin-2 particle were ever discovered, it would be likely to be the graviton without further distinction from other massless spin-2 particles. [ 75 ] Such a discovery would unite quantum theory with gravity. [ 76 ]
Due to the weakness of the coupling of gravity to matter, gravitational waves experience very little absorption or scattering, even as they travel over astronomical distances. In particular, gravitational waves are expected to be unaffected by the opacity of the very early universe. In these early phases, space had not yet become "transparent", so observations based upon light, radio waves, and other electromagnetic radiation that far back into time are limited or unavailable. Therefore, gravitational waves are expected in principle to have the potential to provide a wealth of observational data about the very early universe. [ 77 ]
The difficulty in directly detecting gravitational waves means it is also difficult for a single detector to identify by itself the direction of a source. Therefore, multiple detectors are used, both to distinguish signals from other "noise" by confirming the signal is not of earthly origin, and also to determine direction by means of triangulation . This technique uses the fact that the waves travel at the speed of light and will reach different detectors at different times depending on their source direction. Although the differences in arrival time may be just a few milliseconds , this is sufficient to identify the direction of the origin of the wave with considerable precision.
Only in the case of GW170814 were three detectors operating at the time of the event; therefore, the direction is precisely defined. The detection by all three instruments led to a very accurate estimate of the position of the source, with a 90% credible region of just 60 deg², a factor of 20 more accurate than before. [ 78 ]
During the past century, astronomy has been revolutionized by the use of new methods for observing the universe. Astronomical observations were initially made using visible light . Galileo Galilei pioneered the use of telescopes to enhance these observations. However, visible light is only a small portion of the electromagnetic spectrum , and not all objects in the distant universe shine strongly in this particular band. More information may be found, for example, in radio wavelengths. Using radio telescopes , astronomers have discovered pulsars and quasars , for example. Observations in the microwave band led to the detection of faint imprints of the Big Bang , a discovery Stephen Hawking called the "greatest discovery of the century, if not all time". Similar advances in observations using gamma rays , x-rays , ultraviolet light , and infrared light have also brought new insights to astronomy. As each of these regions of the spectrum has opened, new discoveries have been made that could not have been made otherwise. The astronomy community hopes that the same holds true of gravitational waves. [ 79 ] [ 80 ]
Gravitational waves have two important and unique properties. First, there is no need for any type of matter to be present nearby in order for the waves to be generated by a binary system of uncharged black holes, which would emit no electromagnetic radiation. Second, gravitational waves can pass through any intervening matter without being scattered significantly. Whereas light from distant stars may be blocked out by interstellar dust , for example, gravitational waves will pass through essentially unimpeded. These two features allow gravitational waves to carry information about astronomical phenomena heretofore never observed by humans. [ 77 ]
The sources of gravitational waves described above are in the low-frequency end of the gravitational-wave spectrum (10⁻⁷ to 10⁵ Hz). An astrophysical source at the high-frequency end of the gravitational-wave spectrum (above 10⁵ Hz and probably 10¹⁰ Hz) generates [ clarification needed ] relic gravitational waves that are theorized to be faint imprints of the Big Bang like the cosmic microwave background. [ 81 ] At these high frequencies it is potentially possible that the sources may be "man made" [ 16 ] that is, gravitational waves generated and detected in the laboratory. [ 82 ] [ 83 ]
A supermassive black hole , created from the merger of the black holes at the center of two merging galaxies detected by the Hubble Space Telescope , is theorized to have been ejected from the merger center by gravitational waves. [ 84 ] [ 85 ]
Although the waves from the Earth–Sun system are minuscule, astronomers can point to other sources for which the radiation should be substantial. One important example is the Hulse–Taylor binary – a pair of stars, one of which is a pulsar . [ 87 ] The characteristics of their orbit can be deduced from the Doppler shifting of radio signals given off by the pulsar. Each of the stars is about 1.4 M ☉ and the size of their orbits is about 1/75 of the Earth–Sun orbit , just a few times larger than the diameter of our own Sun. The combination of greater masses and smaller separation means that the energy given off by the Hulse–Taylor binary will be far greater than the energy given off by the Earth–Sun system – roughly 10²² times as much.
The information about the orbit can be used to predict how much energy (and angular momentum) would be radiated in the form of gravitational waves. As the binary system loses energy, the stars gradually draw closer to each other, and the orbital period decreases. The resulting trajectory of each star is an inspiral, a spiral with decreasing radius. General relativity precisely describes these trajectories; in particular, the energy radiated in gravitational waves determines the rate of decrease in the period, defined as the time interval between successive periastrons (points of closest approach of the two stars). For the Hulse–Taylor pulsar, the predicted current change in radius is about 3 mm per orbit, and the change in the 7.75 hr period is about 2 seconds per year. Following a preliminary observation showing an orbital energy loss consistent with gravitational waves, [ 34 ] careful timing observations by Taylor and Joel Weisberg dramatically confirmed the predicted period decrease to within 10%. [ 34 ] With the improved statistics of more than 30 years of timing data since the pulsar's discovery, the observed change in the orbital period currently matches the prediction from gravitational radiation assumed by general relativity to within 0.2 percent. [ 88 ] In 1993, spurred in part by this indirect detection of gravitational waves, the Nobel Committee awarded the Nobel Prize in Physics to Hulse and Taylor for "the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation." [ 89 ] The lifetime of this binary system, from the present to merger is estimated to be a few hundred million years. [ 90 ]
Inspirals are very important sources of gravitational waves. Any time two compact objects (white dwarfs, neutron stars, or black holes ) are in close orbits, they send out intense gravitational waves. As they spiral closer to each other, these waves become more intense. At some point they should become so intense that direct detection by their effect on objects on Earth or in space is possible. This direct detection is the goal of several large-scale experiments. [ 91 ]
The only difficulty is that most systems like the Hulse–Taylor binary are so far away. The amplitude of waves given off by the Hulse–Taylor binary at Earth would be roughly h ≈ 10⁻²⁶. There are some sources, however, that astrophysicists expect to find that produce much greater amplitudes of h ≈ 10⁻²⁰. At least eight other binary pulsars have been discovered. [ 92 ]
Gravitational waves are not easily detectable. When they reach the Earth, they have a small amplitude with strain approximately 10⁻²¹, meaning that an extremely sensitive detector is needed, and that other sources of noise can overwhelm the signal. [ 93 ] Gravitational waves are expected to have frequencies 10⁻¹⁶ Hz < f < 10⁴ Hz. [ 94 ]
Though the Hulse–Taylor observations were very important, they give only indirect evidence for gravitational waves. A more conclusive observation would be a direct measurement of the effect of a passing gravitational wave, which could also provide more information about the system that generated it. Any such direct detection is complicated by the extraordinarily small effect the waves would produce on a detector. The amplitude of a spherical wave will fall off as the inverse of the distance from the source (the 1/ R term in the formulas for h above). Thus, even waves from extreme systems like merging binary black holes die out to very small amplitudes by the time they reach the Earth. Astrophysicists expect that some gravitational waves passing the Earth may be as large as h ≈ 10⁻²⁰, but generally no bigger. [ 95 ]
A simple device theorised to detect the expected wave motion is called a Weber bar – a large, solid bar of metal isolated from outside vibrations. This type of instrument was the first type of gravitational wave detector. Strains in space due to an incident gravitational wave excite the bar's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. With this instrument, Joseph Weber claimed to have detected daily signals of gravitational waves. His results, however, were contested in 1974 by physicists Richard Garwin and David Douglass . Modern forms of the Weber bar are still operated, cryogenically cooled, with superconducting quantum interference devices to detect vibration. Weber bars are not sensitive enough to detect anything but extremely powerful gravitational waves. [ 96 ]
MiniGRAIL is a spherical gravitational wave antenna using this principle. It is based at Leiden University , consisting of an exactingly machined 1,150 kg sphere cryogenically cooled to 20 millikelvins. [ 97 ] The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere . MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers. [ 98 ]
There are currently two detectors focused on the higher end of the gravitational wave spectrum (10⁻⁷ to 10⁵ Hz): one at University of Birmingham , England, [ 99 ] and the other at INFN Genoa, Italy. A third is under development at Chongqing University , China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Both detectors are expected to be sensitive to periodic spacetime strains of h ~ 2 × 10⁻¹³ / √ Hz , given as an amplitude spectral density . The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of h ~ 2 × 10⁻¹⁷ / √ Hz , with an expectation to reach a sensitivity of h ~ 2 × 10⁻²⁰ / √ Hz . The Chongqing University detector is planned to detect relic high-frequency gravitational waves with the predicted typical parameters ≈10¹¹ Hz (100 GHz) and h ≈ 10⁻³⁰ to 10⁻³². [ 100 ]
A more sensitive class of detector uses a laser Michelson interferometer to measure gravitational-wave induced motion between separated 'free' masses. [ 101 ] This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). After years of development ground-based interferometers made the first detection of gravitational waves in 2015.
Currently, the most sensitive is LIGO – the Laser Interferometer Gravitational Wave Observatory. LIGO has three detectors: one in Livingston, Louisiana , one at the Hanford site in Richland, Washington and a third (formerly installed as a second detector at Hanford) that is planned to be moved to India . Each observatory has two light storage arms that are 4 kilometers in length. These are at 90 degree angles to each other, with the light passing through 1 m diameter vacuum tubes running the entire 4 kilometers. A passing gravitational wave will slightly stretch one arm as it shortens the other. This is the motion to which an interferometer is most sensitive.
Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10⁻¹⁸ m. LIGO should be able to detect gravitational waves as small as h ~ 5 × 10⁻²². Upgrades to LIGO and Virgo should increase the sensitivity still further. Another highly sensitive interferometer, KAGRA , which is located in the Kamioka Observatory in Japan, has been in operation since February 2020. A key point is that a tenfold increase in sensitivity (radius of 'reach') increases the volume of space accessible to the instrument by one thousand times. This increases the rate at which detectable signals might be seen from one per tens of years of observation, to tens per year. [ 102 ]
Interferometric detectors are limited at high frequencies by shot noise , which occurs because the lasers produce photons randomly; one analogy is to rainfall – the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals of low frequencies. Thermal noise (e.g., Brownian motion ) is another limit to sensitivity. In addition to these 'stationary' (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other 'non-stationary' noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All of these must be taken into account and excluded by analysis before detection may be considered a true gravitational wave event.
The simplest gravitational waves are those with constant frequency. The waves given off by a spinning, non-axisymmetric neutron star would be approximately monochromatic : a pure tone in acoustics . Unlike signals from supernovae or binary black holes, these signals evolve little in amplitude or frequency over the period during which they would be observed by ground-based detectors. However, there would be some change in the measured signal, because of Doppler shifting caused by the motion of the Earth. Despite the signals being simple, detection is extremely computationally expensive, because of the long stretches of data that must be analysed.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise. [ 103 ]
Space-based interferometers, such as LISA and DECIGO , are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being 2.5 million kilometers. [ 104 ] This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to heat, shot noise , and artifacts caused by cosmic rays and solar wind .
Pulsars are rapidly rotating stars. A pulsar emits beams of radio waves that, like lighthouse beams, sweep through the sky as the pulsar rotates. The signal from a pulsar can be detected by radio telescopes as a series of regularly spaced pulses, essentially like the ticks of a clock. GWs affect the time it takes the pulses to travel from the pulsar to a telescope on Earth. A pulsar timing array uses millisecond pulsars to seek out perturbations due to GWs in measurements of the time of arrival of pulses to a telescope, in other words, to look for deviations in the clock ticks. To detect GWs, pulsar timing arrays search for a distinct quadrupolar pattern of correlation and anti-correlation between the time of arrival of pulses from different pulsar pairs as a function of their angular separation in the sky. [ 107 ] Although pulsar pulses travel through space for hundreds or thousands of years to reach us, pulsar timing arrays are sensitive to perturbations in their travel time of much less than a millionth of a second.
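For illustration, the sketch below evaluates one common normalization of the Hellings–Downs correlation curve, the quadrupolar pattern referred to here and later in the article; the exact normalization convention and the sampled angles are assumptions of mine, not taken from the text.

```python
import math

def hellings_downs(theta_rad):
    """Expected correlation between the timing residuals of two pulsars separated
    by angle theta on the sky, in one common normalization:
    Gamma(theta) = (3/2) * x * ln(x) - x/4 + 1/2, with x = (1 - cos(theta)) / 2."""
    x = (1.0 - math.cos(theta_rad)) / 2.0
    if x == 0.0:
        return 0.5
    return 1.5 * x * math.log(x) - 0.25 * x + 0.5

for deg in (0, 30, 60, 90, 120, 180):
    print(f"{deg:3d} deg  correlation = {hellings_downs(math.radians(deg)):+.3f}")
# The correlation is strongest for nearby pulsar pairs, dips negative near
# 90 degrees, and partially recovers toward 180 degrees, which is the
# quadrupolar signature expected from a gravitational-wave background.
```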
The most likely source of GWs to which pulsar timing arrays are sensitive are supermassive black hole binaries, which form from the collision of galaxies. [ 108 ] In addition to individual binary systems, pulsar timing arrays are sensitive to a stochastic background of GWs made from the sum of GWs from many galaxy mergers. Other potential signal sources include cosmic strings and the primordial background of GWs from cosmic inflation .
Globally there are three active pulsar timing array projects. The North American Nanohertz Observatory for Gravitational Waves uses data collected by the Arecibo Radio Telescope and Green Bank Telescope . The Australian Parkes Pulsar Timing Array uses data from the Parkes radio-telescope . The European Pulsar Timing Array uses data from the four largest telescopes in Europe: the Lovell Telescope , the Westerbork Synthesis Radio Telescope , the Effelsberg Telescope and the Nancay Radio Telescope . These three groups also collaborate under the title of the International Pulsar Timing Array project. [ 109 ]
In June 2023, NANOGrav published the 15-year data release, which contained the first evidence for a stochastic gravitational wave background. In particular, it included the first measurement of the Hellings-Downs curve, the tell-tale sign of the gravitational wave origin of the observed background. [ 110 ] [ 105 ]
Primordial gravitational waves are gravitational waves from the early universe whose imprint would be observed in the cosmic microwave background . Their detection was claimed by the BICEP2 team in an announcement made on 17 March 2014, which was withdrawn on 30 January 2015 ("the signal can be entirely attributed to dust in the Milky Way" [ 86 ] ).
On 11 February 2016, the LIGO collaboration announced the first observation of gravitational waves , from a signal detected at 09:50:45 GMT on 14 September 2015 [ 39 ] of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. [ 111 ] The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. [ 40 ] The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. [ 44 ] The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere , in the rough direction of (but much farther away than) the Magellanic Clouds . [ 42 ] The gravitational waves were observed with a statistical significance greater than 5 sigma, [ 40 ] corresponding to a confidence level of about 99.99997% (a chance of roughly 3 in 10 million that the signal was a random fluctuation), the level conventionally required to count as evidence of a discovery in statistical physics . [ 112 ]
Since then LIGO and Virgo have reported more gravitational wave observations from merging black hole binaries.
On 16 October 2017, the LIGO and Virgo collaborations announced the first-ever detection of gravitational waves originating from the coalescence of a binary neutron star system. The observation of the GW170817 transient, which occurred on 17 August 2017, allowed the masses of the neutron stars involved to be constrained to between 0.86 and 2.26 solar masses. Further analysis allowed the mass values to be restricted to the interval 1.17–1.60 solar masses, with the total system mass measured to be 2.73–2.78 solar masses. The inclusion of the Virgo detector in the observation effort allowed for an improvement of the localization of the source by a factor of 10. This in turn facilitated the electromagnetic follow-up of the event. The signal lasted about 100 seconds, much longer than the few seconds measured from binary black holes. [ 113 ] Also in contrast to the case of binary black hole mergers, binary neutron star mergers were expected to yield an electromagnetic counterpart, that is, a light signal associated with the event. A gamma-ray burst ( GRB 170817A ) was detected by the Fermi Gamma-ray Space Telescope , occurring 1.7 seconds after the gravitational wave transient. The signal, originating near the galaxy NGC 4993 , was associated with the neutron star merger. This was corroborated by the electromagnetic follow-up of the event ( AT 2017gfo ), involving 70 telescopes and observatories and yielding observations over a large region of the electromagnetic spectrum which further confirmed the neutron star nature of the merged objects and the associated kilonova . [ 114 ] [ 115 ]
In 2021, the detection of the first two neutron star-black hole binaries by the LIGO and Virgo detectors was published in the Astrophysical Journal Letters, allowing the first bounds to be set on the abundance of such systems. No neutron star-black hole binary had ever been observed by conventional means before the gravitational-wave observations. [ 9 ]
In 1964, L. Halpern and B. Laurent theoretically proved that gravitational spin-2 electron transitions are possible in atoms. Compared to electric and magnetic transitions the emission probability is extremely low. Stimulated emission was discussed for increasing the efficiency of the process. Due to the lack of mirrors or resonators for gravitational waves, they determined that a single pass GASER (a kind of laser emitting gravitational waves) is practically unfeasible. [ 116 ]
In 1998, the possibility of a different implementation of the above theoretical analysis was proposed by Giorgio Fontana. The required coherence for a practical GASER could be obtained by Cooper pairs in superconductors that are characterized by a macroscopic collective wave-function. Cuprate high temperature superconductors are characterized by the presence of s-wave and d-wave [ 117 ] Cooper pairs. Transitions between s-wave and d-wave are gravitational spin-2. Out-of-equilibrium conditions can be induced by injecting s-wave Cooper pairs from a low temperature superconductor, for instance lead or niobium , which is pure s-wave, by means of a Josephson junction with high critical current. The amplification mechanism can be described as the effect of superradiance , and 10 cubic centimeters of cuprate high temperature superconductor appear sufficient for the mechanism to work properly. A detailed description of the approach can be found in "High Temperature Superconductors as Quantum Sources of Gravitational Waves: The HTSC GASER", chapter 3 of the cited book. [ 118 ]
An episode of the 1962 Russian science-fiction novel Space Apprentice by Arkady and Boris Strugatsky shows an experiment monitoring the propagation of gravitational waves at the expense of annihilating a chunk of asteroid 15 Eunomia the size of Mount Everest . [ 119 ]
In Stanislaw Lem 's 1986 novel Fiasco , a "gravity gun" or "gracer" (gravity amplification by collimated emission of resonance) is used to reshape a collapsar, so that the protagonists can exploit the extreme relativistic effects and make an interstellar journey.
In Greg Egan 's 1997 novel Diaspora , the analysis of a gravitational wave signal from the inspiral of a nearby binary neutron star reveals that its collision and merger is imminent, implying a large gamma-ray burst is going to impact the Earth.
In Liu Cixin 's 2006 Remembrance of Earth's Past series, gravitational waves are used as an interstellar broadcast signal, which serves as a central plot point in the conflict between civilizations within the galaxy. | https://en.wikipedia.org/wiki/Gravitational_wave |
Gravitaxis (or geotaxis [ 1 ] ) is a form of taxis characterized by the directional movement of an organism in response to gravity . [ 2 ]
There are a few different causes for gravitaxis. Many microorganisms have receptors like statocysts that allow them to sense the direction of gravity and to adjust their orientation accordingly. However, gravitaxis can also result from a purely physical mechanism, in which case organs for sensing the direction of gravity are not necessary. An example is given by microorganisms with a center of mass that is shifted to one end of the organism. Similar to a buoy, such mass-anisotropic microorganisms orient upwards under gravity. It has been shown that even an asymmetry in the shape of microorganisms can be sufficient to cause gravitaxis. [ 3 ]
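To make the buoy analogy concrete, here is a toy numerical sketch (Python); the reorientation timescale B, the time step and the starting angle are arbitrary illustrative values, and the equation is the simple bottom-heavy reorientation model rather than anything taken from the cited studies.

```python
import numpy as np

# Bottom-heavy ("buoy-like") cell: the offset between the centre of mass and the
# centre of buoyancy produces a gravitational torque that relaxes the tilt angle
# theta (measured from the upward vertical) roughly as d(theta)/dt = -sin(theta) / (2B).
B = 3.0                       # assumed reorientation timescale, seconds
dt = 0.01                     # integration step, seconds
theta = np.radians(150.0)     # start pointing mostly downwards

for _ in range(int(60.0 / dt)):          # integrate for 60 seconds
    theta += dt * (-np.sin(theta) / (2.0 * B))

print(f"tilt from vertical after 60 s: {np.degrees(theta):.2f} degrees")
```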
Gravitaxis differs from gravitropism in that the latter refers to the growth response of an organism to gravity rather than its movement.
Taxis is a behavioral response of a cell or an organism to an external stimulus. The movement is characteristically directional. The movement may be positive or negative. A positive taxis is one in which the organism or a cell gravitates towards the source of stimulation (attraction). A negative taxis is when the organism or a cell moves away from the source of stimulation (repulsion).
It can be seen in many microorganisms including Euglena . [ 4 ] The response of planktonic larvae of Lithodes aequispinus (king crab) to gravity is another example of gravitaxis. [ 1 ] They show both positive and negative gravitaxis responses in a way that they move either upward (negative) or downward (positive). Gravitaxis can also be observed in Drosophila . [ 5 ]
The term is coined from gravi- meaning gravity, and taxis or the movement of an organism in response to a stimulus .
The dictionary definition of gravitaxis at Wiktionary | https://en.wikipedia.org/wiki/Gravitaxis |
According to general relativity , a massive spinning body endowed with angular momentum S will alter the space-time fabric around it in such a way that several effects on moving test particles and propagating electromagnetic waves occur. [ 1 ]
In particular, the direction of motion with respect to the sense of rotation of the central body is relevant because co- and counter-propagating waves carry a "gravitomagnetic" time delay Δ t GM which could, in principle, be measured [ 2 ] [ 3 ] if S is known.
Conversely, if the validity of general relativity is assumed, it is possible to use Δ t GM to measure S . Such an effect must not be confused with the much larger Shapiro time delay [ 4 ] Δ t GE induced by the "gravitoelectric" Schwarzschild -like component of the gravitational field of a planet of mass M considered non-rotating. Unlike the small Δ t GM , the Shapiro time delay has been accurately measured in several radar-ranging experiments with Solar System interplanetary spacecraft .
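For orientation, the gravitoelectric (Shapiro) delay mentioned above has the standard textbook form for a signal passing near a body of mass M, with r_1 and r_2 the distances of emitter and receiver from the body and R their mutual separation (this is the generic expression, not necessarily the exact form used in the cited analyses):

$$\Delta t_{\mathrm{GE}} \simeq \frac{2GM}{c^{3}}\,\ln\!\left(\frac{r_{1}+r_{2}+R}{r_{1}+r_{2}-R}\right)$$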
This relativity -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gravitomagnetic_time_delay |
Gravity-assisted microdissection (GAM) is one of the laser microdissection methods. The dissected material is allowed to fall by gravity into a cap and may thereafter be used for isolating proteins or genetic material. [ 1 ] Two manufacturers worldwide have developed devices based on the GAM method. [ citation needed ]
In the case of the ION LMD system, the sample is prepared and stained, and the tissue is then transferred onto a window slide. The slide is mounted upside down. A motorized stage moves to a pre-selected drawing line and the laser beam cuts the cells of interest by laser ablation. The selected cells fall by gravity into the tube cap located beneath the slide. [ 2 ]
Dissected materials, such as single cells or cell populations of interest, are then used for further research. | https://en.wikipedia.org/wiki/Gravity-assisted_microdissection |
A gravity-based structure ( GBS ) is a support structure held in place by gravity , most notably offshore oil platforms . These structures are often constructed in fjords due to their protected area and sufficient depth.
Prior to deployment, a study of the seabed must be done to ensure it can withstand the vertical load from the structure. [ 1 ] It is then constructed with steel reinforced concrete into tanks or cells, some of which are used to control the buoyancy. When construction is complete, the structure is towed to its intended location.
Notable GBSes include the 1997 Hibernia Gravity Base Structure off Newfoundland . Around 2020, GBSes were adopted by Novatek for exploiting petroleum resources in the Gulf of Ob . [ 2 ]
Early deployments of offshore wind power turbines used these structures. As of 2010, 14 of the world's offshore wind farms had some of their turbines supported by gravity-based structures. The deepest registered offshore wind farm with gravity-based structures is the Blyth Offshore Wind Farm, UK, with a depth of approx. 40 m. [ 3 ]
This engineering-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gravity-based_structure |
gravitySimulator is a novel supercomputer that incorporates special-purpose GRAPE hardware to solve the gravitational n -body problem . It is housed in the Center for Computational Relativity and Gravitation (CCRG) at the Rochester Institute of Technology . It became operational in 2005.
The computer consists of 32 nodes, each of which contains a GRAPE-6A board ("mini-GRAPE") in a Peripheral Component Interconnect (PCI) slot. [ 1 ] The GRAPE boards use pipelines to compute pairwise forces between particles at a speed of 130 Gflops .
The on-board memory of each GRAPE board can hold data for 128,000 particles, and by combining 32 of them in a cluster, a total of four million particles can be integrated, at sustained speeds of 4 Tflops . [ 2 ]
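For context, the computational kernel that the GRAPE pipelines accelerate is the direct, all-pairs force summation of the gravitational n-body problem. A minimal software version is sketched below (Python with NumPy; the softening length, particle count and unit system are arbitrary choices for illustration):

```python
import numpy as np

def pairwise_accelerations(pos, mass, eps=1.0e-4):
    """Direct-summation gravitational accelerations, O(N^2): the kernel that
    GRAPE-style hardware pipelines accelerate. Units with G = 1; `eps` is a
    softening length that tames close encounters."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                               # vectors from particle i to every particle
        r2 = np.einsum("ij,ij->i", dr, dr) + eps ** 2   # softened squared distances
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                                 # exclude self-interaction
        acc[i] = np.sum((mass * inv_r3)[:, None] * dr, axis=0)
    return acc

rng = np.random.default_rng(0)
n_particles = 1000
positions = rng.normal(size=(n_particles, 3))
masses = np.full(n_particles, 1.0 / n_particles)
print(pairwise_accelerations(positions, masses).shape)   # (1000, 3)
```

Special-purpose pipelines evaluate exactly this inner loop in hardware, which is why the per-board performance is quoted in floating-point operations per second.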
gravitySimulator is used to study the dynamical evolution of galaxies and galactic nuclei . [ 3 ] [ 4 ] [ 5 ]
This supercomputer-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/GravitySimulator |
Gravity , in the context of fermenting alcoholic beverages , refers to the specific gravity (abbreviated SG), or relative density compared to water, of the wort or must at various stages in the fermentation. The concept is used in the brewing and wine-making industries. Specific gravity is measured by a hydrometer , refractometer , pycnometer or oscillating U-tube electronic meter.
The density of a wort is largely dependent on the sugar content of the wort. During alcohol fermentation , yeast converts sugars into carbon dioxide and alcohol. By monitoring the decline in SG over time the brewer obtains information about the health and progress of the fermentation and determines that it is complete when gravity stops declining. If the fermentation is finished, the specific gravity is called the final gravity (abbreviated FG). For example, for a typical strength beer, original gravity (abbreviated OG) could be 1.050 and FG could be 1.010.
Several different scales have been used for measuring the original gravity. For historical reasons, the brewing industry largely uses the Plato scale (°P), which is essentially the same as the Brix scale used by the wine industry. For example, OG 1.050 is roughly equivalent to 12 °P.
By considering the original gravity, the brewer or vintner obtains an indication as to the probable ultimate alcoholic content of their product. The OE (Original Extract) is often referred to as the "size" of the beer and is, in Europe, often printed on the label as Stammwürze or sometimes just as a percent. In the Czech Republic, for example, common descriptions are "10 degree beers" and "12 degree beers", which refer to the gravity of the wort, in degrees Plato, before fermentation.
The difference between the original gravity of the wort and the final gravity of the beer is an indication of how much sugar has been turned into alcohol. The bigger the difference, the greater the amount of alcohol present and hence the stronger the beer. This is why strong beers are sometimes referred to as high gravity beers, and "session" or "small" beers are called low gravity beers, even though in theory the final gravity of a strong beer might be lower than that of a session beer because of the greater amount of alcohol present.
Specific gravity is the ratio of the density of a sample (of any substance) to the density of water. The ratio depends on the temperature and pressure of both the sample and water. The pressure is always considered (in brewing) to be 1 standard atmosphere (1,013.25 hPa) and the temperature is usually 20 °C (68 °F) for both sample and water but in some parts of the world different temperatures may be used and there are hydrometers sold calibrated to, for example, 16 °C (60 °F). It is important, where any conversion to °P is involved, that the proper pair of temperatures be used for the conversion table or formula being employed. The current ASBC table is (20 °C/20 °C) meaning that the density is measured at 20 °C (68 °F) and referenced to the density of water at 20 °C (68 °F) (i.e. 0.998203 g/cm 3 or 0.0360624 lb/cu in). Mathematically
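In symbols, with both densities taken at 20 °C as described above, the definition reads:

$$\mathrm{SG}_{\text{true}} = \frac{\rho_{\text{sample}}}{\rho_{\text{water}}}$$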
This formula gives the true specific gravity i.e. based on densities. Brewers cannot (unless using a U-tube meter ) measure density directly and so must use a hydrometer, whose stem is bathed in air, or pycnometer weighings which are also done in air. Hydrometer readings and the ratio of pycnometer weights are influenced by air (see article Specific Gravity for details) and are called "apparent" readings. True readings are easily obtained from apparent readings by
However, the ASBC table uses apparent specific gravities, so many electronic density meters will produce the correct °P numbers automatically.
The original gravity is the specific gravity measured before fermentation. From it the analyst can compute the original extract which is the mass (grams) of sugar in 100 grams (3.5 oz) of wort (°P) by use of the Plato scale . The symbol $p$ will denote OE in the formulas which follow.
The final gravity is the specific gravity measured at the completion of fermentation. The apparent extract, denoted $m$, is the °P obtained by inserting the FG into the formulas or tables in the Plato scale article. The use of "apparent" here is not to be confused with the use of that term to describe specific gravity readings which have not been corrected for the effects of air.
The amount of extract which was not converted to yeast biomass, carbon dioxide or ethanol can be estimated by removing the alcohol from beer which has been degassed and clarified by filtration or other means. This is often done as part of a distillation in which the alcohol is collected for quantitative analysis but can also be done by evaporation in a water bath. If the residue is made back up to the original volume of beer which was subject to the evaporation process, and the specific gravity of that reconstituted beer is measured and converted to Plato using the tables and formulas in the Plato article, then the TE is the resulting °P value.
See the Plato article for details. TE is denoted by the symbol $n$. This is the number of grams of extract remaining in 100 grams (3.5 oz) of beer at the completion of fermentation.
Knowing the amount of extract in 100 grams (3.5 oz) of wort before fermentation and the number of grams of extract in 100 grams (3.5 oz) of beer at its completion, the amount of alcohol (in grams) formed during the fermentation can be determined. The formula, attributed to Balling, [ 1 ] : 427 is
$$A_w = f_{pn}\,(p - n)$$
where $f_{pn} = \dfrac{1}{2.0665 - 1.0665\,p/100}$, and $A_w$ gives the number of grams of alcohol per 100 grams (3.5 oz) of beer, i.e. the ABW. Note that the alcohol content depends not only on the diminution of extract $(p - n)$ but also on the multiplicative factor $f_{pn}$ which depends on the OE. De Clerck [ 1 ] : 428 tabulated Balling's values for $f_{pn}$ but they can be calculated simply from $p$ using the expression above.
This formula is fine for those, only a small fraction of brewers, who wish to go to the trouble of computing TE (whose real value lies in determining attenuation). Others want a simpler, quicker route to determining alcoholic strength. This lies in Tabarie's Principle [ 1 ] : 428 which states that the depression of specific gravity in beer to which ethanol is added is the same as the depression of water to which an equal amount of alcohol (on a w/w basis) has been added. Use of Tabarie's principle lets us calculate the true extract of a beer with apparent extract $m$ as
where $P$ is a function that converts SG to °P (see Plato ) and $P^{-1}$ its inverse, and $\rho_{\text{EtOH}}(A_w)$ is the density of an aqueous ethanol solution of strength $A_w$ by weight at 20 °C. Inserting this into the alcohol formula, the result, after rearrangement, is
This can be solved, albeit iteratively, for $A_w$ as a function of OE and AE. It is again possible to come up with a relationship of the form $A_w = f_{pm}\,(p - m)$.
De Clerck also tabulates values for $f_{pm} = 0.39661 + 0.001709\,p + 0.000010788\,p^{2}$.
Most brewers and consumers are used to having alcohol content reported by volume (ABV) rather than weight. Interconversion is simple but the specific gravity of the beer must be known:
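A standard way to write the conversion (assuming the density of ethanol at 20 °C is about 0.789 g/cm³ and that of water 0.998 g/cm³; the constants actually used in the source may differ slightly) is:

$$A_v = A_w\,\frac{\rho_{\text{beer}}}{\rho_{\text{EtOH}}} = A_w\,\mathrm{SG}_{\text{beer}}\times\frac{0.998}{0.789} \approx 1.265\,A_w\,\mathrm{SG}_{\text{beer}}$$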
This is the number of cubic centimetres of ethanol in 100 cc (6 cu in) of beer.
Because ABV depends on multiplicative factors (one of which depends on the original extract and one on the final) as well as on the difference between OE and AE, it is impossible to come up with a formula of the form $\text{ABV} = k\,(\text{OG} - \text{FG})$
where $k$ is a simple constant. Because of the near-linear relationship between extract and (SG − 1) (see specific gravity ), in particular because $p \approx 1000\,(\text{SG} - 1)/4$, the ABV formula is written as
If the value given above for $f_{pm}$ is evaluated at an OE of 12 °P, giving 0.4187, and 1.010 is taken as a typical FG, then this simplifies to
With typical values of 1.050 and 1.010 for, respectively, OG and FG this simplified formula gives an ABV of 5.31% as opposed to 5.23% for the more accurate formula. Formulas for alcohol similar to this last simple one abound in the brewing literature and are very popular among home brewers. Formulas such as this one make it possible to mark hydrometers with "potential alcohol" scales based on the assumption that the FG will be close to 1 which is more likely to be the case in wine making than in brewing and it is to vintners that these are usually sold.
The drop in extract during the fermentation divided by the OE represents the percentage of sugar which has been consumed. The real degree of attenuation (RDF) is based on TE: $\mathrm{RDF} = 100\,(p - n)/p$,
and the apparent degree of fermentation (ADF) is based on AE: $\mathrm{ADF} = 100\,(p - m)/p \approx 100\,(\mathrm{OG} - \mathrm{FG})/(\mathrm{OG} - 1)$.
Because of the near linear relationship between (SG − 1) and °P specific gravities can be used in the ADF formula as shown.
The relationship between SG and °P can be roughly approximated using the rule-of-thumb conversion equation "brewer's points divided by four", where the "Brewing" or "Gravity points" are the thousandths of SG above 1: $\text{points} = 1000\,(\text{SG} - 1)$.
The amount of extract in degrees Plato is thus approximately given by the points divided by 4: $p \approx \text{points}/4 = 250\,(\text{SG} - 1)$.
As an example, a wort of SG 1.050 would be said to have 1000(1.050 − 1) = 50 points, and contain 50/4 = 12.5 °P of extract. This is simply the linear approximation to the true relationship between SG and °P.
However, the above approximation has increasingly larger error for increasing values of specific gravity and deviates e.g. by 0.67°P when SG = 1.080. A much more accurate (mean average error less than 0.02°P) conversion can be made using the following formula: [ 2 ]
where the specific gravity is to be measured at a temperature of T = 20 °C. The equivalent relation giving SG at 20 °C for a given °P is:
Points can be used in the ADF and RDF formulas. Thus a beer with OG 1.050 which fermented to 1.010 would be said to have attenuated 100 × (50 − 10)/50 = 80%. Points can also be used in the SG versions of the alcohol formulas. It is simply necessary to multiply by 1000 as points are 1000 times (SG − 1).
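Tying the approximations above together, the small illustrative calculation below (Python) uses the linear points/4 conversion, the example multiplier 0.4187 quoted above (valid near an OE of 12 °P), and assumed densities of water and ethanol at 20 °C; its output is therefore an estimate and will not match the more exact formulas to the last decimal.

```python
def brewing_summary(og, fg):
    """Rough brewing numbers from original and final specific gravity, using the
    linear 'points divided by four' Plato approximation and a Balling-style
    alcohol factor of 0.4187 (an example value, appropriate near OE = 12 deg P)."""
    og_points = 1000.0 * (og - 1.0)
    fg_points = 1000.0 * (fg - 1.0)
    oe = og_points / 4.0                                 # approximate original extract, deg P
    ae = fg_points / 4.0                                 # approximate apparent extract, deg P
    adf = 100.0 * (og_points - fg_points) / og_points    # apparent degree of fermentation, %
    abw = 0.4187 * (oe - ae)                             # grams of ethanol per 100 g of beer
    abv = abw * fg * 0.998203 / 0.78924                  # w/w -> v/v, densities at 20 C assumed
    return oe, ae, adf, abw, abv

oe, ae, adf, abw, abv = brewing_summary(1.050, 1.010)
print(f"OE ~ {oe:.1f} P, AE ~ {ae:.1f} P, ADF ~ {adf:.0f}%, ABW ~ {abw:.2f}%, ABV ~ {abv:.2f}%")
```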
Software tools are available to brewers to convert between the various units of measurement and to adjust mash ingredients and schedules to meet target values. The resulting data can be exchanged via BeerXML to other brewers to facilitate accurate replication. | https://en.wikipedia.org/wiki/Gravity_(alcoholic_beverage) |
Gravity is a software program designed by Steve Safarik [ 1 ] to simulate the motions of planetary bodies in space. Users can create solar systems of up to 16 bodies. Mass, density, initial position, and initial velocity can be varied by user input. The bodies are then plotted as they move according to the Newtonian law of gravitation . Simulation settings may be saved as files with the extension " .GRV ".
This simulation software article is a stub . You can help Wikipedia by expanding it .
This astronomy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gravity_(software) |
Gravity and Extreme Magnetism Small Explorer ( GEMS or SMEX-13 ) mission was a NASA space observatory mission. [ 1 ] The main scientific goal of GEMS was to be the first mission to systematically measure the polarization of X-ray sources . GEMS would have provided data to help scientists study the shape of spacetime that has been distorted by a spinning black hole 's gravity and the structure and effects of the magnetic fields around neutron stars . It was cancelled by NASA in June 2012 for potential cost overruns due to delays in developing the technology and never moved into the development phase. [ 1 ]
GEMS was managed by the NASA Goddard Space Flight Center (GSFC). The project was an astrophysics program reporting to NASA's Science Mission Directorate (SMD) in Washington, D.C. [ 1 ]
Cancelled missions can be reinstated - for example, NuSTAR was cancelled in 2006, but reinstated a year later and launched in June 2012. [ 2 ] However, NuSTAR was not cancelled due to project overruns, but rather due to changes in the overall NASA budget, so the circumstances for cancellation were very different. Small missions of the Explorer program offer much flexibility and launch opportunities, and the lessons learned can be applied to the same mission goals, but on a different mission (compare, for instance, Vanguard 1 to Explorer 1 ). Several years later, two proposed X-ray polarimetry missions won NASA awards for development. [ 3 ] NASA's IXPE X-ray polarimetry telescope was launched in 2021; its X-ray observational capabilities and mission objectives are very similar to those proposed for GEMS.
The spacecraft would have been launched in July 2014 on a nine-month mission with a possible 15-month extension for a guest observer phase; [ 4 ] but the mission was terminated at the Confirmation Review stage on 10 May 2012 due to expected cost overruns.
The GEMS X-ray telescope was designed to indirectly measure the regions of distorted space around spinning black holes through a measurement of the polarization of X-rays emitted. It would have also probed the structure and effects of the magnetic fields around magnetars and other star remnants with magnetic fields trillions of times stronger than Earth's.
GEMS could reveal:
Current missions cannot do this because the required angular resolution is limited and magnetic fields are invisible.
The detector in GEMS would have been a small chamber filled with gas. When an X-ray is absorbed in the gas, an electron carries off most of the energy, and starts out in a direction related to the polarization direction of the X-ray. This electron loses energy by ionizing the gas; the instrument measures the direction of the ionization track, and thereby the polarization of the X-ray. The GEMS detector readout was to employ a time projection chamber to image the track. The GEMS instrument was planned to be about 100 times more sensitive than previous X-ray polarization experiments.
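To illustrate how such a track-imaging polarimeter recovers polarization, the toy simulation below (Python; the modulation factor, polarization angle and estimator are assumptions chosen for illustration, not GEMS instrument values) draws photoelectron track angles from the standard 1 + μ·cos 2(φ − φ0) distribution and then recovers μ and φ0 from the events:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy photoelectric polarimeter: track emission angles follow
#   N(phi) ~ 1 + mu * cos(2 * (phi - phi0)),
# where mu is the instrument's modulation factor and phi0 the polarization angle.
# Both values below are assumptions for the demonstration.
mu_true, phi0_true, n_events = 0.4, np.radians(30.0), 100_000

# Rejection-sample track angles from the modulated distribution on [0, pi).
phi = rng.uniform(0.0, np.pi, 4 * n_events)
accept = rng.uniform(0.0, 1.0 + mu_true, phi.size) < 1.0 + mu_true * np.cos(2.0 * (phi - phi0_true))
phi = phi[accept][:n_events]

# Recover modulation amplitude and angle with simple Stokes-like estimators.
q = 2.0 * np.mean(np.cos(2.0 * phi))
u = 2.0 * np.mean(np.sin(2.0 * phi))
mu_hat = np.hypot(q, u)
phi0_hat = 0.5 * np.arctan2(u, q)
print(f"recovered modulation {mu_hat:.3f} (true {mu_true}), angle {np.degrees(phi0_hat):.1f} deg (true 30.0)")
```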
Mission costs were capped at US$105 million (in Fiscal Year 2008 dollars), excluding the launch vehicle, [ 6 ] but an independent confirmation review board at NASA claimed it would grow to an estimated US$150 million, leading to cancellation of the mission. The cancellation of GEMS marked the end of a multi-year-long binge of cancellations and attempted cancellations of current and future missions: it was at the time the last funded future U.S. space telescope besides James Webb Space Telescope (JWST). The cancellation of GEMS may have jeopardized the Pegasus XL launcher. [ 7 ] (The Pegasus XL has successfully launched other small explorer missions)
GEMS was one of six Small Explorer missions selected in May 2008 for the NASA Small Explorer (SMEX) Program Phase A study. [ 8 ] In June 2009, GEMS was chosen to be the second of these missions to go forward into Phase B, starting in October 2010 for a launch in April 2014. [ 6 ]
The project completed and successfully passed the Systems Requirements Review (SRR) in December 2010. [ 9 ]
GEMS did not pass a confirmation review conducted on 10 May 2012, which effectively cancelled the project. The project team intended to appeal the cancellation. [ 10 ]
On 7 June 2012, NASA officially announced the cancellation of the GEMS project. The mission was supposed to launch in July 2014 to study black holes and neutron stars, but external reviews found the project would likely exceed its budget. GEMS was supposed to hold at US$119 million, not counting the launch vehicle. NASA's astrophysics director, Paul Hertz , says the technology needed for the instrument took longer to develop than expected, and that drove up the price. [ 11 ]
NASA continued studying X-ray polarimetry missions in 2015 for future Explorer program observatories. [ 3 ]
The GEMS principal investigator was Dr Jean H. Swank , of NASA's Goddard Space Flight Center , Greenbelt, Maryland .
Other GEMS collaborators from universities include: [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Gravity_and_Extreme_Magnetism_Small_Explorer |
In fluid dynamics , a gravity current or density current is a primarily horizontal flow in a gravitational field that is driven by a density difference in a fluid or fluids and is constrained to flow horizontally by, for instance, a ceiling. Typically, the density difference is small enough for the Boussinesq approximation to be valid. Gravity currents can be thought of as either finite in volume, such as the pyroclastic flow from a volcano eruption , or continuously supplied from a source, such as warm air leaving the open doorway of a house in winter. [ 1 ] Other examples include dust storms , turbidity currents , avalanches , discharge from wastewater or industrial processes into rivers, or river discharge into the ocean. [ 2 ] [ 3 ]
Gravity currents are typically much longer than they are tall. Flows that are primarily vertical are known as plumes . As a result, it can be shown (using dimensional analysis ) that vertical velocities are generally much smaller than horizontal velocities in the current; the pressure distribution is thus approximately hydrostatic , apart from near the leading edge. Gravity currents may be simulated by the shallow water equations , with special dispensation for the leading edge which behaves as a discontinuity. [ 1 ] When a gravity current propagates along a plane of neutral buoyancy within a stratified ambient fluid, it is known as a gravity current intrusion .
Although gravity currents represent the flow of fluid of one density over/under another, discussion is usually focused on the fluid that is propagating. Gravity currents can originate either from finite volume flows or from continuous flows. In the latter case, the fluid in the head is constantly replaced and the gravity current can therefore propagate, in theory, forever. Propagation of a continuous flow can be thought of as the same as that of the tail (or body) of a very long finite volume. Gravity flows are described as consisting of two parts, a head and a tail. The head, which is the leading edge of the gravity current, is a region in which relatively large volumes of ambient fluid are displaced. The tail is the bulk of flow that follows the head. Flow characteristics can be characterized by the Froude and Reynolds numbers, which represent the ratio of flow speed to gravity (buoyancy) and viscosity, respectively. [ 3 ]
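In the notation commonly used for such flows (with U a characteristic front speed, h the current depth, ν the kinematic viscosity, and g' the reduced gravity built from the density contrast Δρ; these are the conventional definitions rather than a quotation from the cited sources):

$$g' = g\,\frac{\Delta\rho}{\rho_{0}}, \qquad \mathrm{Fr} = \frac{U}{\sqrt{g'h}}, \qquad \mathrm{Re} = \frac{Uh}{\nu}$$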
Propagation of the head usually occurs in three phases. In the first phase, the gravity current propagation is turbulent. The flow displays billowing patterns known as Kelvin-Helmholtz instabilities , which form in the wake of the head and engulf ambient fluid into the tail: a process referred to as "entrainment". Direct mixing also occurs at the front of the head through lobes and cleft structures which form on the surface of the head. According to one paradigm, the leading edge of a gravity current 'controls' the flow behind it: it provides a boundary condition for the flow. In this phase the propagation rate of the current is approximately constant with time. For many flows of interest, the leading edge moves at a Froude number of about 1; estimates of the exact value vary between about 0.7 and 1.4. [ 4 ] As the driving fluid depletes as a result of the current spreading into the environment, the driving head decreases until the flow becomes laminar. In this phase, there is only very little mixing and the billowing structure of the flow disappears. From this phase onward the propagation rate decreases with time and the current gradually slows down. Finally, as the current spreads even further, it becomes so thin that viscous forces between the intruding fluid and the ambient and boundaries govern the flow. In this phase no more mixing occurs and the propagation rate slows down even more. [ 4 ] [ 5 ]
The spread of a gravity current depends on the boundary conditions, and two cases are usually distinguished depending on whether the initial release is of the same width as the environment or not. In the case where the widths are the same, one obtains what is usually referred to as a "lock-exchange" or a "corridor" flow. This refers to the flow spreading along walls on both sides and effectively keeping a constant width whilst it propagates. In this case the flow is effectively two-dimensional. Experiments on variations of this flow have been made with lock-exchange flows propagating in narrowing/expanding environments. Effectively, a narrowing environment will result in the depth of the head increasing as the current advances and thereby its rate of propagation increasing with time, whilst in an expanding environment the opposite will occur. In the other case, the flow spreads radially from the source forming an "axisymmetric" flow. The angle of spread depends on the release conditions. In the case of a point release, an extremely rare event in nature, the spread is perfectly axisymmetric, in all other cases the current will form a sector.
When a gravity current encounters a solid boundary, it can either overcome the boundary, by flowing around or over it, or be reflected by it. The actual outcome of the collision depends primarily on the height and width of the obstacle. If the obstacle is shallow, part of the gravity current will overcome the obstacle by flowing over it. Similarly, if the width of the obstacle is small, the gravity current will flow around it, just like a river flows around a boulder. If the obstacle cannot be overcome, provided propagation is in the turbulent phase, the gravity current will first surge vertically up (or down depending on the density contrast) along the obstacle, a process known as "sloshing". Sloshing induces a lot of mixing between the ambient and the current and this forms an accumulation of lighter fluid against the obstacle. As more and more fluid accumulates against the obstacle, this starts to propagate in the opposite direction to the initial current, effectively resulting in a second gravity current flowing on top of the original gravity current. This reflection process is a common feature of doorway flows (see below), where a gravity current flows into a finite-size space. In this case the flow repeatedly collides with the end walls of the space, causing a series of currents travelling back and forth between opposite walls. This process has been described in detail by Lane-Serff. [ 6 ]
The first mathematical study of the propagation of gravity currents can be attributed to T. B. Benjamin. [ 7 ] Observations of intrusions and collisions between fluids of differing density were made well before T. B. Benjamin's study, see for example those by Ellison and Turner, [ 8 ] by M. B. Abbot [ 9 ] or D. I. H. Barr. [ 10 ] J. E. Simpson from the Department of Applied Mathematics and Theoretical Physics of Cambridge University in the UK carried out longstanding research on gravity currents and issued a multitude of papers on the subject. He published an article [ 11 ] in 1982 for Annual Review of Fluid Mechanics which summarizes the state of research in the domain of gravity currents at the time. Simpson also published a more detailed book on the topic. [ 12 ]
Gravity currents are capable of transporting material across large horizontal distances. For example, turbidity currents on the seafloor may carry material thousands of kilometers. Gravity currents occur at a variety of scales throughout nature. Examples include avalanches , haboobs , seafloor turbidity currents , [ 13 ] lahars , pyroclastic flows , and lava flows. There are also gravity currents with large density variations - the so-called low Mach number compressible flows. An example of such a gravity current is the heavy gas dispersion in the atmosphere with initial ratio of gas density to density of atmosphere between about 1.5 and 5.
Gravity currents are frequently encountered in the built environment in the form of doorway flows. These occur when a door (or window) separates two rooms of different temperature and air exchanges are allowed to occur. This can for example be experienced when sitting in a heated lobby during winter and the entrance door is suddenly opened. In this case the cold air will first be felt by ones feet as a result of the outside air propagating as a gravity current along the floor of the room.
Doorway flows are of interest in the domain of natural ventilation and air conditioning/refrigeration and have been extensively investigated. [ 14 ] [ 15 ] [ 16 ]
For a finite volume gravity current, perhaps the simplest modelling approach is via a box model where a "box" (rectangle for 2D problems, cylinder for 3D) is used to represent the current. The box does not rotate or shear, but changes in aspect ratio (i.e. stretches out) as the flow progresses. Here, the dynamics of the problem are greatly simplified (i.e. the forces controlling the flow are not directly considered, only their effects) and typically reduce to a condition dictating the motion of the front via a Froude number and an equation stating the global conservation of mass, i.e. for a 2D problem
$$u_f = \mathrm{Fr}\,\sqrt{g'h}, \qquad h\,l = Q$$
where Fr is the Froude number, u f is the speed at the front, g ′ is the reduced gravity , h is the height of the box, l is the length of the box and Q is the volume per unit width. The model is not a good approximation in the early slumping stage of a gravity current, where h along the current is not at all constant, or the final viscous stage of a gravity current, where friction becomes important and changes Fr . The model is a good approximation in the stage between these, where the Froude number at the front is constant and the shape of the current has a nearly constant height.
Additional equations can be specified for processes that would alter the density of the intruding fluid such as through sedimentation. The front condition (Froude number) generally cannot be determined analytically but can instead be found from experiment or observation of natural phenomena. The Froude number is not necessarily a constant, and may depend on the height of the flow when this is comparable to the depth of the overlying fluid.
The solution to this problem is found by noting that $u_f = \mathrm{d}l/\mathrm{d}t$ and integrating for an initial length, $l_0$. In the case of a constant volume Q and Froude number Fr , this leads to
$$l(t) = \left(\tfrac{3}{2}\,\mathrm{Fr}\,\sqrt{g'Q}\;t + l_0^{3/2}\right)^{2/3}.$$ | https://en.wikipedia.org/wiki/Gravity_current |
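A minimal numerical illustration of this box model is sketched below (Python; the volume per unit width, reduced gravity, Froude number and initial length are arbitrary example values, and the constant front condition is assumed as in the derivation above). The t^(2/3) growth of the front position is the familiar constant-volume, two-dimensional scaling.

```python
import numpy as np

def box_model_front(t, q=1.0, g_prime=0.1, fr=1.2, l0=1.0):
    """Front position l(t) for a constant-volume 2D box model: u_f = Fr*sqrt(g'*h)
    with h = Q / l, integrated from l(0) = l0. All parameter values are illustrative."""
    return (1.5 * fr * np.sqrt(g_prime * q) * t + l0 ** 1.5) ** (2.0 / 3.0)

for t in (0.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:7.1f}  ->  front position l = {box_model_front(t):8.2f}")
```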
The term gravity current intrusion denotes the fluid mechanics phenomenon within which a fluid intrudes with a predominantly horizontal motion into a separate stratified fluid, typically along a plane of neutral buoyancy. This behaviour distinguishes the difference between gravity current intrusions and gravity currents , as intrusions are not restrained by a well-defined boundary surface. [ 1 ] As with gravity currents , intrusion flow is driven within a gravity field by density differences typically small enough to allow for the Boussinesq approximation .
The driving density difference between fluids that produces intrusion motion could simply be due to chemical composition. However, variations can also be caused by differences in fluid temperature, in dissolved matter concentration, and by particulate matter suspended in the flows. [ 2 ] Examples of particulate suspension intrusions include sediment-laden river outflows within oceans, 'short-circuit' sewage sedimentation tank intrusions [ 3 ] and turbidity current flows over hypersaline Mediterranean pools. [ 4 ] Examples also exist of particulate intrusions caused by the lateral spread of thermals or plumes along planes of neutral buoyancy, such as intrusions containing metalliferous sediments formed from deep ocean hydrothermal vents, [ 5 ] or equally crystal-laden intrusions formed by plumes within volcanic magma chambers. [ 6 ] Arguably the most striking of all gravitational intrusions is the atmospheric gravity current generated by a large, 'Plinian' volcanic eruption, in which case the volcano 's overhanging 'umbrella' is an example of an intrusion spreading laterally into the stratified Troposphere .
Work analysing gravity currents propagating within a single fluid host was broadened to consider intrusions within sharply stratified fluids by Hoyler & Huppert in 1980. [ 7 ] Since then there have been further significant analytical and experimental advances in the understanding of particle-laden intrusions in particular, by researchers including Bonnecaze et al. (1993, 1995, 1996), Rimoldi et al. (1996), and Rooij et al. (1999). As of 2012 the most recent rigorous analytical treatment, designed to determine the propagation speed of a classically extending intrusion, was performed by Flynn and Linden. [ 8 ] Practical experimentation into intrusions has typically employed a lock exchange to study intrusion dynamics.
The basic structure of a gravity intrusion approximates that of a classic gravity current, with a roughly elliptical 'head' followed by a tail which stretches as the current lengthens; it is within the rear half of the intrusion head that the majority of mixing with ambient fluids takes place. [ 9 ] During propagation, intrusions display the same 'slumping', 'self-similar' and 'viscous' phases as gravity currents. [ 3 ] | https://en.wikipedia.org/wiki/Gravity_current_intrusion |
Gravity feed is the use of earth's gravity to move something (usually a liquid ) from one place to another. It is a simple means of moving a liquid without the use of a pump . A common application is the supply of fuel to an internal combustion engine by placing the fuel tank above the engine, e.g. in motorcycles , lawn mowers , etc. A non-liquid application is the carton flow shelving system.
Ancient Roman aqueducts were gravity-fed, as water supply systems to remote villages in developing countries often are. In this case the flow of water to the village is provided by the hydraulic head , the vertical distance from the intake at the source to the outflow in the village, on which gravity acts; while it is opposed by the friction in the pipe which is determined primarily by the length and diameter of the pipe as well as by its age and the material of which it is made.
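As a rough illustration of how the available head and the pipe friction trade off, the sketch below (Python) estimates the steady flow in a gravity-fed pipe from the Darcy-Weisbach relation; the friction factor, pipe length and diameter are assumed example values, and minor losses are ignored.

```python
import math

def gravity_fed_velocity(head_m, length_m, diameter_m, friction_factor=0.02):
    """Steady mean velocity in a gravity-fed pipe from the Darcy-Weisbach
    head-loss relation h = f * (L / D) * v^2 / (2 g), assuming the whole head
    is consumed by pipe friction (minor losses and velocity head neglected)."""
    g = 9.81
    return math.sqrt(2.0 * g * head_m * diameter_m / (friction_factor * length_m))

v = gravity_fed_velocity(head_m=30.0, length_m=2000.0, diameter_m=0.05)
flow_l_per_s = v * math.pi * (0.05 / 2.0) ** 2 * 1000.0
print(f"velocity ~ {v:.2f} m/s, flow ~ {flow_l_per_s:.2f} L/s")
```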
This fluid dynamics –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gravity_feed |
Gravity filtration is a method of filtering impurities from solutions by using gravity to pull liquid through a filter. The two main kinds of filtration used in laboratories are gravity and vacuum/suction. Gravity filtration is often used in chemical laboratories to filter precipitates from precipitation reactions as well as drying agents, unwanted side products, or remaining reactants. While it can also be used to separate out solid products, vacuum filtration is more commonly used for this purpose. [ 1 ]
The process of removing suspended matter contains two steps: transport and attachment. [ 2 ] This mode occurs when particles move to another place through the filter paper.
Gravity filtration is an easy way to remove solid impurities or the precipitation from an organic liquid. The impurity is trapped in the filter. Gravity filtration can collect any insoluble solid. [ 3 ]
Early in human history, people obtained clear water from muddy rivers or lakes by digging holes in sandy banks to a depth below the waterline of the river or lake. The sand filtered the water, and clear water filled the hole; this method was later used in cities to purify urban water supplies. [ 4 ]
In farming, people used gravity filtration to let water from higher areas flow to lower areas through filters. In this way, sand and small stones filter impurities producing clear water.
In Asia, people pump water from wells and put it into a jar with a small hole at the bottom. The jar is filled with small stones and the hole is covered with layers of gauze.
Filtration is commonly used to remove solids or insoluble precipitates from solutions. [ 5 ]
The solution is poured through a piece of filter paper folded into a cone in a glass funnel. Solids (or flocs ) remain on the filter paper while the filtered solution is caught by a flask under the funnel. [ 6 ] If a large volume of solution is filtered, the filter paper will need to be changed in order to prevent clogging. [ 7 ]
In many laboratories, gravity filtration is used to filter out solids to determine reaction yield. Several experimental errors need to be taken into account.
Some precipitated solid remains on the filter paper or in the funnel. In this case, a gap appears between the product yield and the measured yield.
If the precipitated solid is not dried thoroughly, excess fluid influences the experimental results. The actual yield of precipitation may then appear larger than the theoretical yield.
Incorrect use of filter paper may influence the filtration. Additionally, damaging the filter paper can allow small bits of precipitated solids to pass through the filter. [ 8 ]
A variety of filtration operations have been tested with seawater containing high concentrations of dissolved dimethylsulfoniopropionate. [ 9 ]
These filters contain three stages: flocculation , clarification and filtration.
Typical rapid gravity filters contain filter tanks made of coated or stainless steel or aluminum. Influent flows fall through the filter and are captured by the underdrain. The filter media removes particles from the water. It usually has 3 layers: anthracite coal, silica sand and gravel . [ 10 ]
This approach is effective for removing impurities and uses less cleaning time, lowering cost.
The project was to remove parasites and other contaminants such as lead . The project used multi-hole filters with diameters that allow water to flow by gravity. [ 11 ]
These filters are used for industry applications. The filter lets the fluid stream pass through the media to remain or filter out impurities. It can support large volumes.
Some gravity filter systems in the chemistry industry can remove chlorine and other organics or remove iron and heavy sediments or sand. [ 12 ]
Liquid is removed from a gas stream by coalescers in a single stage. The elements enter with the flow and then pass through the distributor. This is a primary separation device that can remove particles and then coalesce the cartridges in an inside-to-out direction. In this case, the liquids pass through the structure of the filter and then drain from the vessel. [ 13 ]
This filter is an open sand filter system used for water treatment in low budget environments. It can suit a variety of pressure-controlled backwashing. This filter is an automatic gravity filter that uses different pressures and backwashes the system with an injector. The system has no controls. The filters have no moving parts and no pumps. The backwashing water is held in a tank below the filter. [ 14 ] | https://en.wikipedia.org/wiki/Gravity_filtration |
A gravity laser , also sometimes referred to as a gaser , graser , or glaser , is a hypothetical device for stimulated emission of coherent gravitational radiation or gravitons , much in the same way that a standard laser produces coherent electromagnetic radiation .
While photons exist as excitations of a vector potential and so contain an oscillating dipole term, gravitons are a spin -2 field and so have an oscillating quadrupole term. For efficient lasing to occur, there are several conditions that must be met: [ 1 ]
Alternate design proposals involve free undulators akin to a free-electron laser . [ 2 ] [ 3 ] Several proposals involve exploiting the momentum transport properties of superconductors , where s-waves and d-waves couple distinctly to gravitational radiation. [ 4 ] [ 5 ]
As of 2024, gravity lasers have begun to attract research interest. [ 6 ]
The idea of gravity lasers has been popularized by science fiction works such as David Brin's Earth (1990). While attempting to remove micro singularities inadvertently introduced into the planetary mantle, it is found they can serve as mirrors. With the necessary energy levels found in gravitational potentials of the planet's core and mantle, the resulting 'graser' beams are initially employed to nudge the singularities somewhere safer. Other uses are soon found, such as propelling objects into space and for weaponry of various levels of sophistication.
Other works, such as the RPG Star Ocean (1996) use them as a hypothetical weapon. [ 7 ] They are also commonly employed as a proposed mechanism for tractor beams , antigravity , and space propulsion .
In Alastair Reynolds ' novel Redemption Ark (2002), a graser is utilised by the Inhibitors to bore into, and puncture, Resurgam's sun.
In the television series Justice League (2001–2004), the Thanagarian military use a type of gravity laser to weigh down and paralyze the Flash .
The novel Earth Unaware (2012) uses 'glasers' as a plot device to enable planetary-scale manipulation of matter, akin to gravity guns . | https://en.wikipedia.org/wiki/Gravity_laser |